C++ Programming/Chapter Object Oriented Programming Print version:
Object Oriented Programming
Structures
A structure is a compound data type that contains members of different types, accessed by name. A value of a structure object is a tuple of the values of its members. A structure can also be seen as a simple implementation of the object paradigm from OOP. The general form of a structure definition is:
struct myStructType
{
public:
    // declare public members here
protected:
    // declare protected members here
private:
    // declare private members here
};
The optional keywords public:, protected:, and private: declare the protection status of the members that follow. They can appear in any order, and each may occur more than once. If no protection status is given, the members are public. (Private or protected members can be accessed only by methods or friends of the structure, as explained in a later chapter.)
Objects of type myStructType are declared using:
/* struct */ myStructType obj1 /* , obj2, ... */;
Repeating the keyword struct at the beginning is optional.
It is possible to define objects directly in the struct definition instead of using a name for the struct-type:
struct { /* members */ } obj1 /* , obj2, ... */;
Structures as function arguments and return types
You can write functions that take or return structures. For example, findCenter takes a Rectangle as an argument and returns a Point that contains the coordinates of the center of the Rectangle:
struct Point
{
    double x, y;
};

struct Rectangle
{
    Point corner;
    double width, height;
};

Point findCenter(const Rectangle& box)
{
    double x = box.corner.x + box.width / 2;
    double y = box.corner.y + box.height / 2;
    Point result = {x, y};
    return result;
}
To call this function, we have to pass a Rectangle as an argument, and assign the return value to a Point variable:
Rectangle mybox = { {10.0, 0.0}, 100, 200 };
Point center = findCenter(mybox);
printPoint(center);
The output of this program is (60, 100).
Notice that the Rectangle is passed to findCenter by reference (explained in the chapter on Functions), because this is more efficient than copying the whole structure, which is what passing by value would do. The reference is declared const, meaning that findCenter will not modify the argument box; in particular, the caller's mybox remains unchanged.
Unions
Classes
The public label indicates that any members within the 'public' section can be accessed freely anywhere a declared object is in scope.
Members in a 'private' section can be accessed only by member functions and friends of the class that declares them.
The protected label has a special meaning for inheritance: protected members are accessible in the class that defines them, in classes that inherit from that base class, and in friends of it. The section on inheritance covers this in more detail.
Inheritance (Derivation)
Data members
this pointer
The this keyword acts as a pointer to the object on which a member function is called. The this pointer behaves like any other pointer, although you can't change the pointer itself. Read the section concerning pointers and references to understand more about pointers in general.
The this pointer is only accessible within nonstatic member functions of a class, union or struct, and is not available in static member functions. It is not necessary to write code for the this pointer as the compiler does this implicitly. When using a debugger, you can see the this pointer in some variable list when the program steps into nonstatic class functions.
In the following example, the compiler inserts an implicit parameter this in the nonstatic member function int getData(). Additionally, the code initiating the call passes an implicit parameter (provided by the compiler).
class Foo
{
private:
    int iX;
public:
    Foo() { iX = 5; }
    int getData()
    {
        return this->iX; // "this" is provided by the compiler at compile time
    }
};

int main()
{
    Foo Example;
    int iTemp;
    iTemp = Example.getData(); // the compiler passes &Example as the hidden "this" argument
    return 0;
}
There are certain times when a programmer should know about and use the this pointer. The this pointer should be used when overloading the assignment operator to prevent a catastrophe. For example, add in an assignment operator to the code above.
class Foo
{
private:
    int iX;
public:
    Foo() { iX = 5; }
    int getData() { return iX; }
    Foo& operator=(const Foo& RHS);
};

Foo& Foo::operator=(const Foo& RHS)
{
    if (this != &RHS) // this test prevents an object from copying to itself (i.e. RHS = RHS;)
    {
        this->iX = RHS.iX; // suitable for this class, but copying can be more complex in a larger class
    }
    return *this; // returning the object allows chaining, as in a = b = c; statements
}
Even though the compiler handles this implicitly most of the time, understanding it is important when implementing any class.
static data member
Member Functions
The static keyword can be used in four different ways:
- to create permanent storage for local variables in a function.
- to specify internal linkage.
- to declare member functions that act like non-member functions.
- to create a single copy of a data member.
static member function
By the subsumption property, if any pure virtual member function of the base class is left undefined in a derived class, the derived class is itself a new abstract class (this can be useful sometimes).
Sometimes we use the phrase "pure abstract class," meaning a class that exclusively has pure virtual functions (and no data). The concept of an interface is mapped to pure abstract classes in C++, as there is no "interface" construct in the language.
Bitwise and logical operators
- ^ (bitwise XOR)
- ! (logical NOT)
- && (logical AND)
- || (logical OR)
The logical operators are used when evaluating two expressions to obtain a single relational result; each corresponds to a boolean logical operation.
Reference
The string class is a part of the C++ standard library, used for convenient manipulation of sequences of characters, to replace the static, unsafe C method of handling strings. To use the string class in a program, the <string> header must be included. The standard library string class can be accessed through the std namespace.
The basic template class is basic_string<> and its standard specializations are string and wstring.
Basic usage
Declaring a std string is done by using one of these two methods:
using namespace std;
string std_string;

or

std::string std_string;
Text I/O is straightforward, as in the following program:
#include <iostream>
#include <string>

int main()
{
    std::string name;
    std::cout << "Please enter your first name: ";
    std::cin >> name;
    std::cout << "Welcome " << name << "!" << std::endl;
    return 0;
}
Although a string may hold a sequence containing any character—including spaces and nulls—when reading into a string using cin and the extraction operator (>>) only the characters before the first space will be stored. Alternatively, if an entire line of text is desired, the getline function may be used:
std::getline(std::cin, name);
Constructors
We will be using this dummy string for some of our examples.
string str("Hello World!");
This invokes the constructor that takes a const char* argument. The default constructor (taking no arguments) creates a string which contains nothing, i.e. no characters, not even a '\0' (std::string is not null-terminated internally).
string str2(str);
This will trigger the copy constructor; std::string knows enough to make a deep copy of the characters it stores.
string str2 = str;
Despite the = sign, this is initialization, so it also invokes the copy constructor; the effect of this code is the same as in the example above.
Size
string::size_type string::size() const;
string::size_type string::length() const;
So for example one might do:
string::size_type strSize = str.size();
string::size_type strSize2 = str2.length();
The methods size() and length() both return the size of the string object; there is no difference between them. Remember that the index of the last character is size() - 1, not size(): like C-style strings, and arrays in general, std::string counts from 0.
I/O
ostream& operator<<(ostream& out, const string& str);
istream& operator>>(istream& in, string& str);
The shift operators (>> and <<) have been overloaded so you can perform I/O operations on istream and ostream objects, most notably cout, cin, and file streams. Thus you could do console I/O like this:

std::cout << str << std::endl;
std::cin >> str;

istream& getline(istream& in, string& str, char delim = '\n');
Alternatively, if you want to read entire lines at a time, use getline(). Note that this is not a member function. getline() will retrieve characters from the input stream in and assign them to str until EOF is reached or delim is encountered. getline() will reset the input string before appending data to it. delim can be set to any char value and acts as a general delimiter. Here is some example usage:
#include <fstream>
#include <iostream>
#include <string>

int main()
{
    // open a file
    std::ifstream file("somefile.cpp");
    std::string data, temp;
    while (std::getline(file, temp, '#')) // while data is left in the file
    {
        // append data
        data += temp;
    }
    std::cout << data;
    return 0;
}
Because of the way getline() works (i.e. it returns the input stream), you can nest multiple getline() calls to get multiple strings; however, this may significantly reduce readability.
Operators
char& string::operator[](string::size_type pos);
Chars in strings can be accessed directly using the overloaded subscript ([]) operator, as in char arrays:
std::cout << str[0] << str[2];
prints "Hl".
std::string supports construction and assignment from the older C string type const char*. You can also assign or append a single char to a string. Assigning a const char* to a string is as simple as

str = "Hello World!";

If you want to do it character by character, you can also use

str = 'H';
Not surprisingly, operator+ and operator+= are also defined! You can append another string, a const char*, or a char to any string.
The comparison operators >, <, ==, >=, <=, and != all perform comparison operations on strings, similar to the C strcmp() function, and return a true/false value.
if (str == "Hello World!")
{
    std::cout << "Strings are equal!";
}
Searching strings
string::size_type string::find(string needle, string::size_type pos = 0) const;
You can use the find() member function to find the first occurrence of a string inside another. find() will look for needle inside this string, starting from position pos, and return the position of the first occurrence of needle. For example:
std::string haystack = "Hello World!";
std::string needle = "o";
std::cout << haystack.find(needle);
This will print "4", which is the index of the first occurrence of "o" in haystack. If we want the "o" in "World", we need to modify pos to point past the first occurrence: haystack.find(needle, 4) would still return 4, while haystack.find(needle, 5) would give 7. If the substring isn't found, find() returns std::string::npos. This simple code searches a string for all occurrences of "wiki" and prints their positions:
std::string wikistr = "wikipedia is full of wikis (wiki-wiki means fast)";
for (std::string::size_type i = 0, tfind;
     (tfind = wikistr.find("wiki", i)) != std::string::npos;
     i = tfind + 1)
{
    std::cout << "Found occurrence of 'wiki' at position " << tfind << std::endl;
}

string::size_type string::rfind(string needle, string::size_type pos = string::npos) const;
The function rfind() works similarly, except that it returns the last occurrence of the passed string.
Inserting/erasing
string& string::insert(size_type pos, const string& str);
You can use the insert() member function to insert another string into a string. For example:

string newstr = " Human";
str.insert(5, newstr);
str would then contain "Hello Human World!".
string& string::erase(size_type pos, size_type n);
You can use erase() to remove a substring from a string. For example:

str.erase(6, 11);
str would then contain "Hello !" (erasing 11 characters from position 6 removes "Human World"; the space at position 5 remains).
string string::substr(size_type pos = 0, size_type n = npos) const;
You can use substr() to extract a substring from a string. For example:

string str = "Hello World!";
string part = str.substr(6, 5);

part would then contain "World".
Backwards compatibility
const char* string::c_str() const;
const char* string::data() const;
For backwards compatibility with C/C++ functions which only accept char* parameters, you can use the member functions string::c_str() and string::data() to return a temporary const char* you can pass to a function. The difference between these two functions is that c_str() returns a null-terminated string, while data() does not necessarily do so. So, if your legacy function requires a null-terminated string, use c_str(); otherwise use data() (and presumably pass the length of the string in as well).
String Formatting
Strings can only be appended to other strings, not to numbers or other data types, so something like std::string("Foo") + 5 would not result in a string with the content "Foo5". To convert other data types into strings there exists the class std::ostringstream, found in the header <sstream>. std::ostringstream acts exactly like std::cout; the only difference is that the output doesn't go to the standard output provided by the operating system, but into an internal buffer, which can be converted into a std::string via the std::ostringstream::str() method.
Example
#include <iostream>
#include <sstream>
#include <string>

int main()
{
    std::ostringstream buffer;

    // Use the std::ostringstream just like std::cout or other iostreams
    buffer << "You have: " << 5 << " Helloworlds in your inbox";

    // Convert the std::ostringstream to a normal string
    std::string text = buffer.str();

    std::cout << text << std::endl;
    return 0;
}
Advanced use
Chapter Summary
- Structures
- Unions
- Classes (Inheritance, Member Functions, Polymorphism and this pointer)
- Operator overloading
- Standard Input/Output streams Library
This C++ program demonstrates the use of set_intersection in the STL.
Here is the source code of the C++ program to demonstrate set_intersection in the STL. The C++ program is successfully compiled and run on a Linux system. The program output is also shown below.
/*
* C++ Program to Implement Set_Intersection in Stl
*/
#include <iostream>
#include <algorithm>
#include <vector>
using namespace std;
int main ()
{
int first[] = {5,10,15,20,25};
int second[] = {50,40,30,20,10};
vector<int> v(10);
vector<int>::iterator it;
sort (first, first + 5);
sort (second, second + 5);
it = set_intersection (first, first + 5, second, second + 5, v.begin());
v.resize(it - v.begin());
cout << "The intersection has " << (v.size()) << " elements: "<<endl;
for (it = v.begin(); it != v.end(); ++it)
cout<< *it<<" ";
cout <<endl;
return 0;
}
$ g++ set_intersection.cpp
$ a.out
The intersection has 2 elements:
10 20
------------------
(program exited with code: 0)
Press return to continue
Sanfoundry Global Education & Learning Series – 1000 C++ Programs.
If you wish to look at all C++ Programming examples, go to C++ Programs. | http://www.sanfoundry.com/cpp-program-implement-set-intersection-stl/ | CC-MAIN-2017-09 | refinedweb | 169 | 56.45 |
IBM releases JFS to GPL 202
PinAngel writes "IBM has released its JFS source code for Linux to the GPL. You can read more at the IBM website. " JFS is their Journaling File System - you can grab the latest tarball from their Web site.
Will the various distributions integrate this? (Score:3)
IBM gets it. (Score:2)
To those that say "show me the code"... IBM is showing us the code. I think they should be commended for their obvious support of Open Source (free) software.
Anyonw know how good the JFS is? Should we use it?
If you can't figure out how to mail me, don't.
The JFS is pretty good (Score:2)
Looks like JFS can be ported over to the client, too, now...
Journalist file system (Score:5)
So... what? Does this mean that the JFS will now misreport your disk usage, burrow through your hard drive for nasties and send them to the editor for publication?
What's that? It's not the Journalist File System? Never mind then.
If you can't figure out how to mail me, don't.
Nice to see/have (Score:1)
LVM would be nice too... (Score:2)
Journaling FS's and Window managers (Score:2)
I begin to wonder how many journaling file systems we will have in the end, ext3, ReiserFS, XFS, JFS now.
Hey we soon will have more journaling file systems than window managers!
I think this is a good thing (Score:3)
I wonder which UNIX vendor-contributed FS will make it into distributions first: AIX or XFS.
Can anyone explain differences/advantages/disadvantages of the two filesystems, and perhaps how they compare to some of the other solutions (ReiserFS, ext3)?
New XFMail home page [slappy.org]
YAJFS (Score:2)
Those bastards. (Score:3)
I'll bet that the file system it frickin complicated now, and I won't be able to edit the inodes by hand with magnets anymore.
Re:IBM gets it. (Score:2)
I'm not 100% sure where the jfs stops and lvm starts, but you can do some really incredible things with aix boxes.
For example I once moved actively paging paging space from a drive that was logging errors to a new drive without shutting down, having other people log off, or anything, it just worked.
Don't get too excited (Score:2)
--GnrcMan--
Whoa! (Score:3)
At lease it is good to see IBM is keeping their promises, and following the credo 'Release early and often'. (In this case, VERY EARLY).
Details of IBM JFS (Score:3) are/developer/library/jfs.html [ibm.com]
Re:IBM gets it. (Score:4)
This is a Good Day for Linux. As soon as Big Blue gets things stable for i386, I *will* be changing file system types.
Glenn Stone, RHCE
Unix professional since 1986
(gee, I'm glad I bought that extra disk now!)
Re:IBM gets it. (Score:2)
I've done the same thing with Digital Unix (now Tru64 :P) and their LVM. It's quite impressive to watch. I hope that part is being ported or will be ported soon. That would be a major improvement for fault-tolerance.
Swapping disk drives without the user even knowing. Now THAT'S impressive.
If you can't figure out how to mail me, don't.
Timeframe (Score:3)
Of course (and despite media rumours) Linux isn't the center of the universe. How does this release under GPL affect the chances of the other open OS's such as FreeBSD adopting this? Is it possible to include something this low level which is GPLed into the core of something licensed under the BSD licence? (I have a nasty feeling I may provoke a license flamewar here...)
Colin Scott
Choices? (Score:5)
Well, quite frankly, I LIKE having a choice? Why doesn't everyone that works for RedHat work on making the Debian project better? Hell, why do we have so many editors, vi, vim, emacs, joe, nedit, gnotepad, ed, pico... why do all of those people have to make their own editor? Why can't they just contribute to an editor that already is there?
Er... maybe because it fits a slightly different niche and philosophy? Maybe because IBM's journaling file system handles things a little bit different then ReiserFS, and for certain applications one or the other might be better? I like choices! I like competition! This much diversity is a sign of a healthy enviroment... I say, let them write their own journaling file systems, let's get 10 or 20 more, each a little bit different, each a slight bit more focused to a certain area. Diversity is wonderful, let's nurture it.
Re:IBM gets it. (Score:2)
Wow. Count me as a potential convert too. Ext2 is cool but it's not quite a workhorse.
Does JFS support file attributes (such as immutable or append-only) like ext2 does?
If you can't figure out how to mail me, don't.
Re:I think this is a good thing (Score:3)
As I understand it, there are 2 or 3 varieties of JFS. There are at least two, but there might be three. The big difference between the two JFSs is max file size. Standard JFS on a 32-bit PowerPC or POWERx chip supports a 2G max file size. There is a patch that will kick it up to the full 32-bit int 4G limit.
I believe there is a big file version, in fact I'm positive there is because we routinely have to deal with cusomters who transfer 8G files from MVS to AIX. I'm not sure if that requires a 64bit PowerPC or POWERx chip to run the big file version or not.
That being said, a possible difference is the file size limitations between XFS and JFS. I think they are both very similar in a lot of other respects. XFS provides a promised performance level, JFS probably promises a critical level of data integrity (at IBM you absolutely cannot lose a single bit of a customers data, under any circumstances other than castastrophic hardware failure and even then every attempt is made to save it) I think they are both variants of the Veritas filesystem, but I could be wrong. Anyways, both xfs and jfs are top notch filesystems.
There may be some differences in what is put in the log, logging of only meta data isn't that unusual.
As for Reiser and Ext3. Ext3 suffers from Ext2's native int size as a limit of filesize. I'm not sure about Reiserfs, I understand they are working on 64bit support on 32bit machines. As I understand it, both ReiserFS and Ext2/3 are "light weight" compared to JFS and XFS, not in a bad way but they are simple lean and mean but I believe that XFS and JFS go to great pains to provide extra services that are outside the realm of what ext2 and reiserfs intend to provide. JFS uses a btree, reiserfs does also. I'm pretty sure xfs does, ext2/3 doesn't.
Journalling File Systems for Linux (Score:5)
JFS is the =ONLY= working journalling filing system from a commercial company. XFS would have been the first, but there needs to be more released (and soon) if anyone is to have much confidence in it.
Working does =NOT= mean functional or usable, though, but the development is in TRUE Open Source style, with bug-reporting and a read/write CVS repository for developers.
As for when distributions will use this - I don't expect to see any distribution use ANY of the journalling filing systems this quarter. Next quarter, we MIGHT see ReiserFS. This year, I'd expect to see ReiserFS and Ext3fs.
I'd expect to see JFS added to the next development tree, and therefore introduced into distributions in the next cycle of releases.
XFS might (or might not) come out before the year 3000. As far as kernel patches go, SGI are brilliant. As far as graphics, especially OpenGL, go, SGI is untouchable. As far as filing systems go, a concussed doormouse in a tarpit would move faster.
IBM releases JFS to GPL (Score:1)
Just had to keep the TLA's going...
---------------------
not sooo great (Score:1)
Business entities have little incentive to use GPL products while their in-house IT staff are not Linux experts (and this is generally the case). Also, IT system maintenance folks in NT environments "like to get on the phone with the vendor" for most problems and are generally not code-oriented people.
Can your sysadmin code?
IBM and GNU (Score:2)
Note to Bob Metcalfe and the likes: should the largest computer company in the world be treated as a communist symphatizer now?
Coincidence? (Score:2)
JFS, IBM, alright! (Score:1)
Re:Choices? (Score:1)
What's happening, I think, is that every subsystem of Linux is going through a brutal natural selection process. In some cases, one technology emerges as the de facto standard. In other cases, similar technologies serve different markets.
--
huh? (Score:1)
Plus, I'm sure whatever additions/improvements that end up in JFS from open source developers will end up back in AIX's code base. There's no motivation for IBM to take developers off of a solid, reliable, proven project and move them to an experimental open source project where they'll have to reinvent the wheel. Sounds like a waste of everyone's time.
Re:IBM gets it. (Score:1)
Why is it some pompous little cretins never get the 'pick the platform right for your application' credo? For crushing twits, a 30,000 pound 9x2 sounds about right. Imagine the squish!
True Blue Big Blue (Score:5)
I work in close affiliation with IBM and every indication that I am receiving is that they are totally genuine about their open-source actions. IBM seems to be falling into a model that allows for the greatest customer satisfaction: supporting many diversified products, listening closely to customer demands, and opening up their products to the community. I would like to see more companies follow their example. In the end users will benefit the most!
Pretty Cool; Hopefully some useful ideas (Score:4)
Which means that if it has (say) a useful B-tree implementation, that may be usable with things completely unrelated to filesystems.
The question, at this point, is to what degree it is actually usable with Linux.
People may recall that the Mozilla source code "dump" had to take out big chunks, notably including bits of Rogue Wave libraries, RSA crypto code, and some ORB whose name escapes me. As well as (for the UNIX edition) Motif.
Is IBM JFS based on Veritas? If so, then the source code that IBM is free to release doesn't include things at the low level that will be needed. That would parallel the notion of NCC having to strip out Motif support from Mozilla, with the further issue that you can't presently get anything that is quite equivalent to Veritas on Linux.
Re:not sooo great (Score:2)
So, the companies that are using Linux, and have iron-clad support contracts with third party support providers, like, say, Linux Care, are going to scoff at the notion when LinuxCare also say 'BTW, we've stamped version 1.2.1 of JFS to be solid, and we're willing to write that into our support contract'?
Linux is slowly coming to equal Solaris in terms of honest-to-Gods enterprise features. And I'm talking the simple stuff, like being able to fsck a mounted drive, or change shm_max type params without recompiling your kernel. When things like that are in place, you'll see a lot of shops moving over to Linux.
The new currency in an Open Source World (Score:5)
A journalling file system is a really critical need for our favorite Open Source OS to be taken seriously in an enterprise setting. IBM, SGI et. al. want to be able to say "See, we initially wrote your FS, so we can suppport it best!", and get more business that way. I think that's why there are so many competing projects.
This, friends, is where a new market paradigm begins - we will decide which new FS becomes the standard on our machines, based on it's merits, not marketing. Then we end up supporting it, and by default, the company that created it. It's the new currency - knowledge, the ability to use that knowledge, and our collective mind set because of that knowledge. Welcome to the new world.
Re:IBM gets it. (Score:1)
I'm really hoping they'll port the whole LVM at some point. One of the things I like most about AIX is the LVM -- being able to do stuff like increase the size of the live filesystem is wonderful (that's sort of a LVM/JFS tag team).
Re:Journalling File Systems for Linux (Score:3)
It's faster than ext2 too. It uses a b*tree and small files are packed together (less wasted space).
For my purposes, I'm very happy using the ReiserFS devel over the ext2 release. Especially since I have lost several entire partitions with ext2.
Re:IBM gets it. (Score:3)
I'd say it should certainly be an option. It will be interesting to see how it compares in a Linux implementation as compared to SGI's XFS from Irix, and also with ext3. There is also at least one other independant journaling file system being developed for Linux, but I can't remember what it is called off the top of my head. I think the next generation of Linux file systems beyond those will really be impressive if it can combine the best attributes of those.
What is a JFS? How is it different? (Score:1)
Re:huh? (Score:1)
Re:LVM would be nice too... (Score:2)
:-(
Re:Choices? (Score:1)
However, i would also like to add that 3 or 4 solid options are better than 100 crappy options. KDe and GNOME, for instance: i have no problem with 2, or maybe a third if it is obviously technically superior. But to have a million choices, none of which stack up sucks.
Re:I think this is a good thing (Score:2)
Re:The new currency in an Open Source World (Score:2)
Of course, I'm not sure it actually belongs to Compaq...I seem to recall that Digital licensed it from another company that initially developed it, and this was clearly stated in the DU4.0 man pages... but perusing the Tru64 5.0 man page, I don't see any references, so it may be they actually own it now...((drool))...
Re:Pretty Cool; Hopefully some useful ideas (Score:5)
I would say that the only thing missing is something like the README on the web page in Documentation/fs/jfs.txt (hint, hint)
Joy, JFS is a good fs, during the two years now that I have been working with AIX I have had one fs corruption, and that was fixed when we fastened the SCSI-1 cable
AFS (Do you mean the Andrew File System?) (Score:4)
Are you talking about the Andrew File System?
If you are it's not from Compaq it's from Transarc (now owned by IBM) and was originally developed by Carngie Mellon University [cmu.edu]. There's already a beta of AFS for Linux and there should be an official version "real soon now" [transarc.com]. There's also a free AFS client implementation called Arla [snoopy.net].
Also, AFS is a different sort of beast. It's a distributed filesystem (dfs). CMU's latest dfs -- CODA -- is based on AFS2 and a linux port is available [cmu.edu]. Sean
XFS Update! (Score:2)
Re:IBM gets it. (Score:1)
- Bruce Byfield, Product Development, Stormix Technologies
Absolutely typical... (Score:1)
Re:What is a JFS? How is it different? (Score:4)
Essentially, each journaled device has an area on disk that acts as a transaction log (or Journal) which keeps track of the FS's state during normal use (basically, what inodes aren't synced). When a JFS system is hard-booted, you only need to check the inodes that weren't synced, rather than scan the entire slice. This results in much faster fsck times.
Also, IBM's JFS (from what I've read on the IBM site) will have LVM features (though apparently not the entire LVM system) which depend on the JFS to ensure data integrity when you start throwing exotic filesystem mangling routines (mirroring, Logical Devices (more interesting than concatenation), etc) into the mix..
In other words, JFS is a good thing. We like it. In fact, I'd like to be able to boot off it.
Cheers,
Your Working Boy,
This is great (Score:3)
In short, it is very cool. It is much better that the crap Sun gives us by default, and while I don't know much about SGI's XFS, my impression of SGI's has generally been that they suck and are slow.
Time to buy some IBM stock (anyone taking bets on whether IBM swallows redhat?)
Re:LVM would be nice too... (Score:1)
Not complete yet! (Score:5)
Note that JFS isn't complete yet. The README [ibm.com] says that hard and soft links do not work, you can't *write* to a JFS filesystem, reading is still in progress, and it will only work on the Intel architecture due to endian problems. If you want to use a journaling file system now, you should probably try ext3 [linux.org.uk]
Re:Pretty Cool; Hopefully some useful ideas (Score:2)
In fact, no Veritas products are available on AIX, IIRC.
Doh.. JFS breaks NFS (Score:1)
Since I'm a perl hacker, and not a c hacker... this is not quite an easy fix for me..
anyone out there got a fix... (besides turning off NFS)
ChiefArcher
Re:IBM gets it. (Score:2)
Yeah, they do seem to get the idea.
I remember, about a decade ago, when IBM were the corporation for geeks to hate. Then Apple pulled their stupid patent case concerning the mouse, and took over that posistion breifly. Finally, in the early nineties, everybody suddenly noticed Microsoft, and just how scummy they were.
IBM, after their many year long anti-trust case, seem to have reformed. They are giving the code away, not under their own license, but under the one and only GPL. They can't claim it back. That shows a lot of understanding, and the will to play this game on our terms.
The hope here, is that now Chairman Bill is out of the hotseat at Microsoft, the other people with power there will follow the example of IBM, and clean up their act.
OK, a lot off topic, but I don't know anything at all about journaling file systems, except from the phrase "they're cool and we want one" being bandied about the office...
Good OS/filesystems web page (Score:1)
It talks about different OS interfaces, and has a very thorough section on filesystems available under linux, including several other JFS's...
-Chris
Re:Choices? (Score:1)
maybe possible maintenance problems - of course this is alleviated by having common interfaces.
The death of your son??? (OT) (Score:1)
Too many choices are bad (Score:5)
Think about the situation before Qt/KDE and Gtk/Gnome, where we had a dozen different GUI toolkits, all of which sucked badly, and none of which had a momentum significantly larger than the other. An application writer would have to choose one of them, and send fixes and enhancements to that one alone, helping perhaps 5% of the other application writers in the process. Today, he can choose one of the two main toolkits/environments, and his fixes and enhancements will help maybe 45% of the other application writers.
Of course, some choices can be justified because they provide compatibility, for example LessTif, GNUstep and winelib, and there should always be room for research-like projects. What is needed is one or two choices that are clearly "mainstream", and thus can be used for focusing developer energy.
For journaling file systems, the situation isn't all bad. XFS, JFS and Ext3 are all clearly needed in order to support interoperability with SGI, IBM and Ext2 systems. And ReiserFS has some very interesting applications for file system based databases, which I'm really hoping will turn out good.
Re:Pretty Cool; Hopefully some useful ideas (Score:1)
linux-2.2.12/Documentation/
linux-2.2.12/Documentation/filesystems/
linux-2.2.12/Documentation/filesystems/00-INDEX
linux-2.2.12/Documentation/filesystems/jfs.txt
linux-2.2.12/Documentation/Configure.help
linux-2.2.12/arch/
linux-2.2.12/arch/i386/
linux-2.2.12/arch/i386/defconfig
linux-2.2.12/fs/
linux-2.2.12/fs/jfs/
linux-2.2.12/fs/jfs/ref/
linux-2.2.12/fs/jfs/ref/dprintf.c
linux-2.2.12/fs/jfs/ref/fs_ioctl.c
linux-2.2.12/fs/jfs/ref/jfs_acl.c
linux-2.2.12/fs/jfs/ref/jfs_bufmgr.c
linux-2.2.12/fs/jfs/ref/jfs_cachemgr.c
linux-2.2.12/fs/jfs/ref/jfs_chkdsk.c
linux-2.2.12/fs/jfs/ref/jfs_close.c
linux-2.2.12/fs/jfs/ref/jfs_clrbblks.c
linux-2.2.12/fs/jfs/ref/jfs_create.c
linux-2.2.12/fs/jfs/ref/jfs_dasdlim.c
linux-2.2.12/fs/jfs/ref/jfs_debug.c
linux-2.2.12/fs/jfs/ref/jfs_defragfs.c
etc, etc, etc
Looks like a "patch" of some kind to me.
Re:IBM gets it. (Score:1)
Hey, if it doesn't -- add 'em!
What IBM gets. (Score:3)
What *is* interesting though, and very promising is that they've chosen to release it under the GPL. Of course under any other licence it would have been useless since it's kernel-level and the kernel *is* GPL. But it's a nice move away from the YAL (Yet Another Licence) syndrome that's been plaguing the first careful steps towards Open Source... wanting to reap benefit of the new paradigm, but not really daring to let go.
Hopefully more will follow in this direction.
-- Eythain
Re:Pretty Cool; Hopefully some useful ideas-HEY (Score:1)
BSD (Score:1)
Re:Too many choices are bad (Score:1)
The real reason why IBM is doing this... (Score:1)
I know I'd be using JFS and their LVM on any mission critical system I had...
JFS == Good candidate for "main" Linux FS? (Score:1)
To me, these sound like the ingredients needed for a main Linux FS (good on high and low end). Of course, JFS is NOT designed for small file systems (a la floppy disks, misc. removables), so it couldn't replace everything.
This is great news.
Is anyone tracking the state of the JFS projects? (Score:2)
But, with several alternatives it would be nice to see a full analysis done of each of them, and an ongoing tracker of the current state of each. This would be a great article for Linux World, or one of the other Linux periodicals.
OT: NFS opened? [was Re:JFS, IBM, alright!] (Score:2)
Apparently they are. Check this article, Sun releases NFS as open source [vnunet.com], or this one, Sun loosens its grip on NFS [infoworld.com]. Alas, it looks like it's going to be released under YAWSL (Yet Another Wacky Sun License), but it's apparently only for the Transport Independent Remote Procedure Call (TI-RPC) protocol.
JimD
Re:IBM gets it. (Score:1)
mostly working, except for file management (Score:4)
How's that again?
The JFS README file [ibm.com] lists the following TODO items left to go:
JFS TODO list:
- JFS:
- make READ fully operational
- READ file
- get write capabilities operational
- MKDIR
- CREATE file
- WRITE file
- RMDIR
- RM
- add support for hard and soft links, special files
That's a pretty broad definition of "mostly working." It does sound exciting, but I'm going to have to withhold judgement until file reading, writing, creation and removal have been made operational.
tradeoffs (Score:4)
You want speed, you dump journalling or file systems altogether and do raw, direct disk access. That is the fastest way to get data onto and off of the disk. It has the highest bandwidth both sustained and burst. It has the highest data density. It also is the least flexible and most prone to error.
You want reliability, and/or flexibility, you start taking care how, when, where you put your data, whether or not you do copies, add error correction codes, etc. All of this takes time, which negates speed.
Some people want speed at all cost.
Some people want reliability at all cost.
Some people are somewhere in between.
No one system is going to satisfy all of them.
Re:Too many choices are good (Score:1)
Journaling file systems are in their infancy for the Linux kernel. So we need to explore as many possibilities as possible. And because there are only finite developer resources, only the best ones will survive. It's called survival of the fittest and that's how the open source movement works.
After all just think if there were only one journaling filesystem. Ever heard of the saying "too many cooks spoil the broth"? So you have lots of people with different philosophies working on this single choice that you have. The results can only be horrifying. Just look at the BSD projects for an example. They have a great product, and they started with a single code base, but philosophical goals differed and eventually they evolved into many different variants, amid much bad feeling, from what I gather from various conversations I have followed on mailing lists and newsgroups.
This is the reason that you have very few people working on the Linux kernel itself, and only one who decides what goes in. His philosophy decides where the Linux kernel goes. If you don't like this, work on something else. Like the Hurd. Eventually the better/more practical philosophy will win. And there doesn't have to be a single winner even. Many can win. And although people will flame me for this, the open source philosophy has already proven to be the better and more practical way of developing the support infrastructure software. And it's winning. And there are many winners. Like GPL, LGPL, BSD, artistic, NPL etc. Even in these, the better ones will survive, and others will evolve to take on the good things from their betters.
Besides, with all these codebases released under GPL/compatible licenses, you have the option of borrowing from each other to make a better product. And you have something to compare your product with. After all how would you know that there isn't a better way of doing things, if they were always done only one way and you've never even seen/thought about another?
So you see, variety is the mother of evolution. Choice is good, and let the best man win.
Re:Too many choices are bad (Score:2)
When diversity starts to cause problems (e.g. GUI toolkits) then it creates an automatic need to improve interoperability. Hence there is a movement towards standardisation, drawing from the best features of the existing options.
I see the development of open source technology following something of a four stage process:
Pioneer -> Diversity -> Consolidation -> Maturity
It's an evolutionary model where the problems of the chaotic period eventually pay off by contributing to the base of code and experience that is needed for a mature open standard.
Whoa.
I've just written a pile of pseudo-scientific bullshit. I think I better stop now......
Why buy the cow when the milk is free? (Score:1)
Sure, I'll take odds against you on that one. Red Hat does not own much in the way of proprietary technology. It does have a lot of developers, but IBM has more. Sure it would generate a lot of buzz, but most of it would probably be negative in the Linux community, and Wall Street would ask whether Red Hat's really worth the money.
Buying Red Hat is the way Microsoft deals with competition in a closed-source world. IBM can do everything Red Hat does in-house, and a whole lot more.
-cwk.
Re:The real reason why IBM is doing this... (Score:3)
Enlightened self interest and the GPL (Score:4)
I think this is a smart and encouraging long-term move by IBM. The real money gets spent not on hardware or software but on support. IBM (and SGI) must reckon that Free Software is here to stay and if they are to make money they must be leaders in it.
Individuals and Universities are likely to use Free Software without commercial support. Companies will use it some of the time but not for critical systems. By being leaders in Linux IBM will do little to harm their core sales to people who wouldn't use it anyway but will make their products the logical progression for people moving away from Linux. And maybe open up a profitable Linux support division too.
In this area GPL scores over BSD licensing because companies can release their source code without the fear that a competitor will use it in their proprietary closed OS.
The good news is that all this appeals to one of the most powerful forces on earth, that dubious thing called enlightened self-interest. Whilst pure altruism, from Stallman and Torvalds all the way down to any of us who have ever submitted a bug-fix to Free Software, is essential, it will not change the world on its own. The combination of the two just might.
John
Re:IBM gets it. (Score:1)
Re:Too many choices are bad (Score:2)
I think your argument is not exactly sound. If not enough developer mass is gained within this crucial first stage, then the project stagnates. To extrapolate, if many similar projects are started and concurrently compete with each other for developers at an early stage (in which none of them are well-defined) they will stunt each other and not be able to attract any substantive amount of developers, and it will be very hard to escape the stagnation. Eventually developers will get tired and bored and go away (not necessarily to other projects either).
Jazilla.org - the Java Mozilla [sourceforge.net]
Re:Too many choices are bad (Score:2)
And that is how you develop excellently implemented but outdated technology. Don't flame, I'm not being a troll...but the path you cite takes
It's obvious that
We should be more conscious of splintering and further fractioning developer resources, and stop being so arrogant as to think that spawning hundreds of identical projects isn't really going to hurt us. According to the path you describe, it will, because the consolidation period will be very long, at which point the Cathedral has just got a head start on us.
Jazilla.org - the Java Mozilla [sourceforge.net]
JFS logging level (Score:2)
Gleaned from are/developer/library/jfs.html [ibm.com]: JFS only logs operations on meta-data; for data consistency consider using synchronous I/O.
That having been said, JFS is about as bullet-proof of a filesystem as I have ever used. This is a good thing.
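For the synchronous-I/O suggestion, here is a minimal POSIX sketch (nothing JFS-specific, and the filename in the test is made up): opening with O_SYNC makes each write() return only once the data has reached stable storage, and fsync() forces the same guarantee at chosen points.

```c
/* Sketch: durable append using O_SYNC (POSIX). */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Returns 0 on success, -1 on failure. */
int write_record_sync(const char *path, const char *msg)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_APPEND | O_SYNC, 0644);
    if (fd < 0)
        return -1;
    /* With O_SYNC, write() returns only after the data is on disk. */
    ssize_t n = write(fd, msg, strlen(msg));
    /* fsync() is the alternative when O_SYNC everywhere is too slow. */
    if (fsync(fd) != 0 || n != (ssize_t)strlen(msg)) {
        close(fd);
        return -1;
    }
    return close(fd);
}
```

The price, of course, is exactly the speed-for-reliability trade-off discussed elsewhere in this thread.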
Linus only has 24 hours in the day... (Score:5)
These filesystems are not as simple to interface in as the "Amiga filesystem" or other such stuff, as these FSes have expectations to be able to control somewhat how the kernel manages caches. They're not merely "drop in a patch and all will be well."
As a result, while I agree that it's good to have some diversity now to allow some experimentation, I am far less sure that it will be wise to have four (or more, if rumors of Compaq contribution of AdvFS code turn out to be true...) filesystems integrated in to the "official" kernel stream. There may be merit to having a couple of them, but not likely all of them.
So while I agree that it's quite OK for there to be 5 of them (and that ignores GFS, NTFS, and other stranger options that may be of less direct relevance), I think that there will be, ultimately, a need for several of the "integration projects" to fail.
Otherwise, Linus and others won't have time to fix up NFS3, improve memory management, implement ACLs, implement capabilities, implement IA-64 support, and all the other sorts of things that need to occupy some of their time.
The GUI comparison was pretty good; I agree with Per that it is a Good Thing that we have GNOME [gnome.org] and KDE, [kde.org] as this is sufficient diversity to ensure that there is some competition whilst not being so much as to be completely fragmenting. It is unfortunate that this leaves some potentially good toolkits like FLTK [fltk.org] or Tk or Amulet or Garnet or InterViews "out in the cold."
The point is that variety is useful at the point in time at which you're not sure what the results should look like.
But after that point, variety comes at the cost of having to support additional "development streams," and while there is logic to "letting the best man win," this has the side effect that if you agree with this, you have to also agree with the notion that the "not quite best men" need to be able to lose.
Re:Journaling FS's and Window managers (Score:3)
So, while we "may" have lots of journaling file systems someday, there's only one contender for 2.4.x, and only 3 actual public code bases. There are, at last count, triple digits of window managers.
---
Re:LVM would be nice too... (Score:2)
--Daniel
Reboots take minutes instead of hours. -- NOT! (Score:2)
I administer many RS/6000s (S70s, H70s, S7a, etc.), several of them clustered. As a matter of fact, I've never been able to boot one to just the operating system in under 20 minutes.
Most of the time, the IPL takes 35 minutes to an hour for the S7*s. Even with a fast IPL (which can no longer be done by software but requires touching the box (are you listening, IBM?)), I'm gazing upon at least 25 minutes.
My Sun Enterprise servers can be booted in under six minutes. Every time.
That said, I'd take AIX over any other operating system on the planet for a high-end server. Linux is great for unclustered single-service-per-box applications or many light services on a single box but, for 'real' work, AIX on RS/6000 is the way to go.
As for AIX's JFS, it is amazing. Seven years, several disasters and not a single bit lost. Coupled with AIX's logical volume manager (LVM) and SMIT, well, there just isn't a better place to be.
Init 'I Ain't Paid By IBM But I Would Carry Their Child' Zero
Re:ext3 (Score:2)
Supreme Lord High Commander of the Interstellar Task Force for the Eradication of Stupidity
IBM's JFS & ReiserFS (Score:2)
AdvFS (Score:2)
how hard is it to trash... don't really know. never intended to try it.
Re:Journalling File Systems for Linux (Score:2)
Re:JFS == Good candidate for "main" Linux FS? (Score:2)
You need to have at least a minimal HPFS partition because JFS is still not bootable.
Re:IBM's JFS & ReiserFS (Score:2)
Veritas *filesystem* support is what's relevant (Score:2)
Reportedly there are other UNIX vendors integrating it, likely including SCO and HP.
I was apparently wrong about there being a JFS dependency on Veritas FS; there is, in any case, zero relevance in this thread to their backup software.
Re:AdvFS (Score:2)
Another issue is that if we fill the file system we could get empty files and unbootable systems. That is no big deal if you can boot from other media and run a quick fdisk to mount a disk. Unfortunately, the AdvFS domains require a little more work to get mounted.
I think there are some nice ideas in AdvFS, but I also think anyone who has administered it a lot will think that it is more trouble than an unjournaled FS.
Another point about journaled file systems and linux. AFAIK, ext3 is the ONLY file system that works on something other than x86 systems.
SMIT kicks the shit out of any Linux tool. (Score:2)
- A.P.
--
"One World, one Web, one Program" - Microsoft promotional ad
File System Support and the Kernel (Score:3)
File system support is a very touchy area for most, but few see the real potential, and why we need so many file systems supported in Linux.
Just think if you had a box in the office, that if any one of your big iron machines (IBM, SGI, Compaq, etc) decided to up and fail, you could just plug the drive into, and get at your data immediately to get things done. Granted it might not be as fast as the traditional system that you use for every day operations, but this is an "emergency backup". You live with reduced performance instead of no performance at all.
I can see Linux becoming that box. That all purpose box of tricks that a System Administrator can use at his disposal. It's already there in the network doing just that job, and gaining ground. There is a lot more this little system can do that even the big irons can't compete with. And if we want Linux to be the best... *grin*
As for kernel support, all that the many systems will do is provide a very decent API system for passing data to/from the kernel for these Journalling/High Performance systems. Sure everyone does the final product differently, but if the kernel can output a generic, yet fast method for all the file systems to use, then we gain some instant advantages. Firstly we can run all these systems, which is a must, but secondly it opens up an interface that can be exploited by a newly developed system to the max, giving us the best performance possible.
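The "generic interface" idea can be sketched in a few lines (an illustration only; the struct and function names here are invented, not the real Linux VFS): the kernel holds a table of operations, each filesystem supplies its own implementations, and everything above dispatches through the table without caring which filesystem it is talking to.

```c
/* Illustrative sketch of a generic filesystem interface.
 * These names are made up, not the real kernel API. */
#include <stdio.h>
#include <string.h>

struct fs_ops {
    const char *name;
    int  (*mount)(const char *dev);
    long (*read_block)(long blockno, char *buf);
};

/* Two toy "filesystems" filling in the same table. */
static int  jfs_mount(const char *dev)  { (void)dev; return 0; }
static long jfs_read(long b, char *buf) { sprintf(buf, "jfs:%ld", b); return 0; }

static int  ext2_mount(const char *dev)  { (void)dev; return 0; }
static long ext2_read(long b, char *buf) { sprintf(buf, "ext2:%ld", b); return 0; }

static const struct fs_ops fs_table[] = {
    { "jfs",  jfs_mount,  jfs_read  },
    { "ext2", ext2_mount, ext2_read },
};

/* Kernel-side code only ever talks to fs_ops, never to a
 * particular filesystem directly. */
static const struct fs_ops *lookup_fs(const char *name)
{
    for (unsigned i = 0; i < sizeof fs_table / sizeof fs_table[0]; i++)
        if (strcmp(fs_table[i].name, name) == 0)
            return &fs_table[i];
    return 0;
}
```

A new filesystem then plugs in by adding one table entry, which is the "instant advantage" the comment above is after.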
This is not going to be easy, and as people improve their programming techniques and new people get into the kernel code, there are bound to be new revisions, and mebbe even total rewrites. Just look at the networking code. Major revamps by dedicated people have now produced a significantly faster network layer. True a lot of it got re-written, but that is the price you pay for progress.
So instead of bitching about it, let's just let them get on with the job of doing it, and where possible help out. When they make mistakes, don't abuse, just give them a prod in the right direction.
--- Every decision is right, it's just a matter of whose right we are referring to.
JFS -> Linux = good (Score:2)
Linux has become a powerful and economical choice for an entry- to mid-level server. However, you will find very few interested in using Linux for large scale, mission critical file serving when there are so many proprietary, Sun Servers, HP, IBM, Compaq Tru64... high end unix-based servers that have tried and true journaling file systems. With a GPLed journaling file system, Linux can begin to take notice from those who might have used proprietary systems. Which also hopefully will encourage other developments previously found only on such high end and proprietary servers (hot swappable NICs comes to mind, tho I think this may be more of a hardware feature than anything, I dunno, I've never tried putting linux on the compaq proliant at work, if only they'd let me
Anyway, for all of those who say, oh wow, journaling filesystem, I want that on my slashdot-viewing box. You don't really need it. A journaling file system is a complex and processor demanding file system. Linux runs faster with plain old EXT2, despite its shortcomings. But for server applications, transaction journaling is the only way to go.
On a side note, does anyone know the status of XFS (another journaling file system) taking EXT2's place? I heard that that was a possibility. However, to me it's unnecessary to implement a full fledged journaling system (IBM or SGI) unless you really need it. But that's just my take on it.
At any rate, thanks to IBM for supporting open source.
Spyky
I've used JFS and ext2 side-by-side... (Score:2)
In fact, more generally, I'd be really hard pressed to think of anything I would want in Linux from AIX (maybe the Fortran 90 compiler).
When it comes to systems like Irix, AIX, JFS, etc., you have to realize that a lot of smart people have worked on them for a long time. Some people may view that as an advantage. I don't. The motivation of those engineers was to be able to point to new features they implemented when their performance review came up every year, to do well on benchmarks, and maybe to write some papers for technical conferences. Leaving "good enough" alone was definitely not in their interest.
And those engineers were backed by big software development organizations that debugged and tested that code for every release, and by big consulting and field support engineers that helped customers configure the zillions of options that those systems had, most of which hardly anybody ever needed.
Linux keeps things simple. It gets good performance using comparatively straightforward code. That's a big win in my book, and I think it's the reason why so many people prefer Linux to proprietary systems. Let's not spoil that advantage by incorporating all those dusty decks from IBM, SGI, and other big companies that fit neither with the Linux code development infrastructure nor with the end user support infrastructure. The only party that benefits if Linux gets overly complex is companies that sell support.
Re:Why lots of choices? (Score:2)
This is why I think we should encourage young programmers interested in free software to think really hard before starting a new project. Aren't there some existing, related projects they can contribute to instead?
Actually, we do, but most of them are built "on top" of Gtk+. For example, there are currently four different C++ toolkits built on top of Gtk. True, they fill different niches, but each of the teams consists of just one or two core developers.
Re:Too many choices are bad (Score:2)
So we wouldn't necessarily get faster progress if everyone piled into one JFS, particularly not when version 1.0 hadn't been released. This way we get several to choose from and they can borrow features from each other.
JFS for plain OS/2? (Score:2)
I mean, how complete is the code provided compared to the OS/2 one provided with OS/2 Warp Server for e-Business?
--
The problem is that integration is hard. (Score:2)
But that wasn't my concern.
My concern was, and is, that it is likely to be prohibitively difficult to get all of these filesystems integrated all at once into the official kernel stream.
They all have somewhat differing expectations as to the interfaces used to get at such things as disk cache. This should not be a big surprise; they were designed independently, and thus have differing ideas as to how to interface with the kernel.
The problem is that since they simultaneously require:
Note that namespace issues have already come up; ReiserFS and EXT2 had clashes due to trying to define functions by the same names. Other similar things are likely to happen.
The point is that doing justice to integration of each FS will take time and effort.
In contrast, doing justice to the wide world of Linux users that may have concerns other than just that of having cool filesystems may involve deciding that instead of working on JFS or XFS integration, they'll work on something else.
Furthermore, the issue isn't necessarily of "justice to Linux users;" it may instead be that Linus will integrate in some FSes, and then decide that the notion of adding in more bores him, and say: | https://slashdot.org/story/00/02/03/0931237/ibm-releases-jfs-to-gpl | CC-MAIN-2017-47 | refinedweb | 7,508 | 72.26 |
I come from Pygame, and there I had pygame.event.get(), which returned a list of all events (So, once I had that list of events inside the mainloop, I could check for multiple keypresses or whatever in real-time, since ALL possible events were available).
Now here's my code:
#include "init.h" #include "texture.h" int main(int argc, char** argv) { Init(); while (running) { while (SDL_PollEvent(&e) != 0) { // PROBLEM <<< I don't want "POLL". auto key = e.key.keysym.sym; if (e.type == SDL_QUIT) { running = false; } else if (e.type == SDL_KEYDOWN) { if (key == SDLK_ESCAPE) { running = false; } if (key == SDLK_UP) { square_rect.y -= PLAYER_SPEED; } else if (key == SDLK_DOWN) { square_rect.y += PLAYER_SPEED; } else if (key == SDLK_LEFT) { square_rect.x -= PLAYER_SPEED; } else if (key == SDLK_RIGHT) { square_rect.x += PLAYER_SPEED; } } } SDL_SetRenderDrawColor(renderer, 0, 0, 0, 255); SDL_RenderClear(renderer); SDL_SetRenderDrawColor(renderer, 255, 0, 0, 255); SDL_RenderFillRect(renderer, &square_rect); SDL_RenderPresent(renderer); } SDL_DestroyWindow(window); SDL_Quit(); return 0; }
In the tutorials I've seen so far, they've used SDL_PollEvent(), and I did not notice the difference until now, when I tried to move my square with the keyboard arrows. Obviously, I can't press two arrow keys at once, since the loop is capturing one event at a time.
And the problem is, there's no such thing as "SDL_Event_Get()" in the API. Only these:
Well, I don't know how Pete Shinners ported SDL 1.2 to Python and handled the events in such a marvelous way, so I'm having a hard time understanding SDL 2.0 (I'm probably not the only one). Is there such a thing as "get all the events ()" in SDL 2.0? What if I wanted to check if ALL possible keys were pressed AT THE SAME TIME? | http://www.howtobuildsoftware.com/index.php/how-do/9ts/c-pygame-sdl-sdl-2-how-do-i-get-a-list-of-all-the-events-in-real-time-in-sdl-20 | CC-MAIN-2018-51 | refinedweb | 288 | 76.11 |
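For what it's worth, SDL 2.0 does have an equivalent of pygame's key.get_pressed(): SDL_GetKeyboardState() returns a snapshot array of every key's state, which you read once per frame instead of once per event. The sketch below isolates that per-frame logic; the SDL call itself is shown in a comment so the movement code stays self-contained, and the scancode names are stand-ins for SDL_SCANCODE_UP and friends.

```cpp
// Sketch of per-frame movement from a full keyboard snapshot.
// In real SDL2 code the snapshot comes from:
//     const Uint8* keys = SDL_GetKeyboardState(NULL);
// and is indexed with SDL_SCANCODE_UP etc.; the array below is a
// plain bool stand-in so the logic compiles on its own.
#include <cassert>

struct Rect { int x, y; };
enum Scan { SC_UP, SC_DOWN, SC_LEFT, SC_RIGHT, SC_COUNT };
const int PLAYER_SPEED = 4;

void applyMovement(const bool* keys, Rect& r)
{
    // No else-if chain: every held key contributes, so UP+LEFT
    // moves diagonally -- the behavior one-event-at-a-time misses.
    if (keys[SC_UP])    r.y -= PLAYER_SPEED;
    if (keys[SC_DOWN])  r.y += PLAYER_SPEED;
    if (keys[SC_LEFT])  r.x -= PLAYER_SPEED;
    if (keys[SC_RIGHT]) r.x += PLAYER_SPEED;
}
```

You would still keep the SDL_PollEvent loop for SDL_QUIT and one-shot keys like Escape; the snapshot handles continuous movement.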
Description
times around the stock tank.
Input
* Line 1: Two space-separated integers: N and M
* Lines 2..M+1: Each line contains two space-separated integers A and B that describe a rope from cow A to cow B in the clockwise direction.
Output
* Line 1: A single line with a single integer that is the number of groups successfully dancing the Round Dance.
Sample Input
5 4
2 4
3 5
1 2
4 1
INPUT DETAILS:
ASCII art for Round Dancing is challenging. Nevertheless, here is a
representation of the cows around the stock tank:
_1___
/**** \
5 /****** 2
/ /**TANK**|
\ \********/
\ \******/ 3
\ 4____/ /
\_______/
Sample Output
1
HINT
Cows 1, 2 and 4 all belong to one group that successfully dances the Round Dance, while cows 3 and 5 are not part of any successful dance.
Here I am grinding easy problems again... I've been really slacking off these past few days.
The task is simply to count, after condensing the graph into strongly connected components, how many components contain at least two nodes.
This problem can pretty much serve as a Tarjan template...
#include<cstdio>
#include<iostream>
#include<cstring>
#include<cstdlib>
#include<algorithm>
#include<cmath>
#include<queue>
#include<deque>
#include<set>
#include<map>
#include<ctime>
#define LL long long
#define inf 0x7ffffff
#define pa pair<int,int>
#define pi 3.1415926535897932384626433832795028841971
#define N 10010
#define M 50010
using namespace std;
int n,m,cnt;
struct edge{
    int to,next;
}e[M];
int head[N];
inline void ins(int u,int v)
{
    e[++cnt].to=v;
    e[cnt].next=head[u];
    head[u]=cnt;
}
int cnt2,cnt3;
int dfn[N],low[N];
int belong[N],size[N];
int zhan[N],top;
bool inset[N];
inline void dfs(int x)
{
    zhan[++top]=x;inset[x]=1;
    low[x]=dfn[x]=++cnt2;
    for (int i=head[x];i;i=e[i].next)
    {
        if(!dfn[e[i].to])
        {
            dfs(e[i].to);
            low[x]=min(low[x],low[e[i].to]);
        }else if(inset[e[i].to])
            low[x]=min(low[x],dfn[e[i].to]);
    }
    if (dfn[x]==low[x])
    {
        cnt3++;
        int p=-1;
        while (p!=x)
        {
            p=zhan[top--];
            inset[p]=0;
            belong[p]=cnt3;
            size[cnt3]++;
        }
    }
}
inline void tarjan()
{
    for (int i=1;i<=n;i++)
        if (!dfn[i])dfs(i);
}
inline LL read()
{
    LL x=0,f=1;char ch=getchar();
    while(ch<'0'||ch>'9'){if(ch=='-')f=-1;ch=getchar();}
    while(ch>='0'&&ch<='9'){x=x*10+ch-'0';ch=getchar();}
    return x*f;
}
int main()
{
    n=read();m=read();
    for (int i=1;i<=m;i++)
    {
        int x=read(),y=read();
        ins(y,x);
    }
    tarjan();
    int tot=0;
    for(int i=1;i<=cnt3;i++)
        tot+=(size[i]>1);
    printf("%d\n",tot);
    return 0;
}
From: David Abrahams (abrahams_at_[hidden])
Date: 2000-12-16 20:38:45
----- Original Message -----
From: "Andrew Green" <ag_at_[hidden]>
> Just my luck that the day I switch to digest mode is also the day I felt
> moved to contribute my two bits, so forgive me if this is has been
obsoleted
> by other posts I haven't seen yet.
FWIW you can always read the latest messages off the web at egroups if
you're so moved.
> I don't think it's a matter of thinking one's code is safe when it's not,
> but more of knowing it's bad when it actually seems to be working. Use of
a
> bad pointer might get caught by the debugger or OS, or it might not trip
> anything up under any test scenario. But it's still a bad pointer, and it
> will cause problems at some time. It came from somewhere, and an assertion
> in the right place might well have indicated its imminent 'escape', a
> significantly easier bug to fix than backtracking the provenance of the
> pointer at its point of use.
I agree with this wholeheartedly. When preconditions can be checked
reasonably efficiently, asserts are an excellent way to do it. A user who
can't afford checking can always define NDEBUG.
Even better would be a "hookable" assert for which the user could supply a
replacement behavior, e.g. drop into the debugger, throw an exception, log a
message, etc. There are usually ways to do this with the standard assert()
macro, but they involve unsavory practices like:
// file: cassert
// put this in your #include path ahead of the standard library
#ifndef MY_CASSERT
# define MY_CASSERT
# include <../include/cassert>
# undef assert
# define assert ...<your definition here>
#endif
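A less unsavory version of the "hookable" assert described above might look like the following sketch (MY_ASSERT and set_assert_handler are invented names, not Boost or standard-library facilities):

```cpp
// Sketch of a hookable assert: the default behavior mimics assert(),
// but the user can swap in a debugger trap, a throw, logging, etc.
#include <cassert>
#include <cstdio>
#include <cstdlib>

typedef void (*AssertHandler)(const char* expr, const char* file, int line);

static void default_handler(const char* expr, const char* file, int line)
{
    std::fprintf(stderr, "assertion failed: %s (%s:%d)\n", expr, file, line);
    std::abort();  // same end result as the standard assert()
}

static AssertHandler g_handler = default_handler;

// Install a new handler, returning the old one so it can be restored.
AssertHandler set_assert_handler(AssertHandler h)
{
    AssertHandler old = g_handler;
    g_handler = h;
    return old;
}

// Unlike assert(), this fires in release builds too; wrap it in
// #ifndef NDEBUG if the standard compile-away behavior is wanted.
#define MY_ASSERT(expr) \
    ((expr) ? (void)0 : g_handler(#expr, __FILE__, __LINE__))
```

A test suite, for instance, can install a handler that records or throws instead of aborting.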
Regards,
Dave
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2000/12/7539.php | CC-MAIN-2019-13 | refinedweb | 315 | 70.94 |
30 April 2013 22:00 [Source: ICIS news]
HOUSTON (ICIS)--US styrene production and inventories were mixed in the first quarter of 2013, according to data released by the American Fuel and Petrochemical Manufacturers (AFPM) on Tuesday.
First-quarter styrene production this year was at 2.391bn lb, up by 5% from 2.282bn lb in the first quarter of 2012. First-quarter production was slightly down, however, by almost 2% from the previous quarter.
Meanwhile, first-quarter styrene inventory was at 466m lb, a 19% drop from 574m lb the same time a year earlier.
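The percentage moves quoted above check out against the raw figures (a quick sanity check of our own, not from the article):

```python
# Figures from the article, in pounds.
q1_2013_production = 2.391e9
q1_2012_production = 2.282e9
q1_2013_inventory = 466e6
q1_2012_inventory = 574e6

# Year-on-year percentage changes.
production_change = 100 * (q1_2013_production / q1_2012_production - 1)
inventory_change = 100 * (q1_2013_inventory / q1_2012_inventory - 1)

print(round(production_change, 1))  # about 4.8, reported as "up by 5%"
print(round(inventory_change, 1))   # about -18.8, reported as a "19% drop"
```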
Major US styrene producers include Americas Styrenics, LyondellBasell, Styrolution, Total and Westlake – all of which contributed production figures included in the AFPM survey.
Total styrene production, in billions | http://www.icis.com/Articles/2013/04/30/9663932/US-Q1-2013-styrene-production-and-inventories-mixed.html | CC-MAIN-2014-52 | refinedweb | 127 | 56.25 |
Artifact 6b9ebf0ef5761b06ce86672574c71b1e9098ef9c:
- File src/vacuum.c — part of check-in [73359037] at 2003-04-06 21:08:24 on branch trunk — Split the implementation of COPY, PRAGMA, and ATTACH into separate source code files. (CVS 902) (user: drh size: 1095)
/*
** 2003
**
** This file contains code used to implement the VACUUM command.
**
** Most of the code in this file may be omitted by defining the
** SQLITE_OMIT_VACUUM macro.
**
** $Id: vacuum.c,v 1.1 2003/04/06 21:08:24 drh Exp $
*/
#include "sqliteInt.h"

/*
** The non-standard VACUUM command is used to clean up the database,
** collapse free space, etc. It is modelled after the VACUUM command
** in PostgreSQL.
**
** In version 1.0.x of SQLite, the VACUUM command would call
** gdbm_reorganize() on all the database tables. But beginning
** with 2.0.0, SQLite no longer uses GDBM so this command has
** become a no-op.
*/
void sqliteVacuum(Parse *pParse, Token *pTableName){
#ifndef SQLITE_OMIT_VACUUM
  /* Do nothing */
#endif
}
Build low-power, clock-controlled devices
Do you want to make a sensor with a battery life you can measure in days rather than hours? Even if it contains a (relatively!) power-hungry device like a Raspberry Pi? By cunning use of a real-time clock module, you can make something that wakes up, does its thing, and then goes back to sleep. While asleep, the sensor will sip a tiny amount of current, making it possible to remotely monitor the temperature of your prize marrow in the greenhouse for days on end from a single battery. Read on to find out how to do it.
A sleeping Raspberry Pi Zero apparently consuming no current!
You’ll need:
- DS3231 powered real-time clock module with battery backup: make sure it has a battery holder and an INT/SQW output pin
- P-channel MOSFET: the IRF9540N works well
- Three resistors: 2.2 kΩ, 4.7 kΩ, and 220 Ω
- A device you want to control: this can be a PIC, Arduino, ESP8266, ESP32, or Raspberry Pi. My software is written in Python and works in MicroPython or on Raspberry Pi, but you can find DS3231 driver software for lots of devices
- Sensor you want to use: we’re using a BME280 to get air temperature, pressure, and humidity
- Breadboard or prototype board to build up the circuit
We’ll be using a DS3231 real-time clock which is sold in a module, complete with a battery. The DS3231 contains two alarms and can produce a trigger signal to control a power switch. To keep our software simple, we are going to implement an interval timer, but there is nothing to stop you developing software that turns on your hardware on particular days of the week or days in the month. The DS3231 is controlled using I2C, which means it can be used with lots of devices.
You can pick up one of these modules from lots of suppliers. Make sure that you get one with the SQW connection, as that provides the alarm signal
MOSFET accompli
The power to our Raspberry Pi Zero is controlled via a P-channel MOSFET device operating as a switch. The 3.3 V output from Raspberry Pi is used to power the DS3231 and our BME280 sensor. The gate on the MOSFET is connected via a resistor network to the SQW output from the DS3231.
You can think of a MOSFET as a kind of switch. It has a source pin (where we supply power), a drain pin (which is the output the MOSFET controls), and a gate pin. If we change the voltage on the gate pin, this will control whether the MOSFET conducts or not.
We use a P-channel MOSFET to switch the power because the gate voltage must be pulled down to cause the MOSFET to conduct, and that is how P-channel devices function.
MOSFET devices are all about voltage. Specifically, when the voltage difference between the source and the gate pin reaches a particular value, called the threshold voltage, the MOSFET will turn on. The threshold voltage is expressed as a negative value because the voltage on the gate must be lower than the voltage on the source. The MOSFET that we’re using turns on at a threshold voltage of around -3.7 volts and off at a voltage of -1.75 volts.
The SQW signal from the DS3231 is controlled by a transistor which is acting as a switch connected to ground inside the DS3231. When the alarm is triggered, this transistor is turned on, connecting the SQW pin to ground. The diagram below shows how this works.
The resistors R1 and R2 are linked to the supply voltage at one end and the SQW pin and the MOSFET gate on the other. When SQW is turned off the voltage on the MOSFET gate is pulled high by the resistors, so the MOSFET turns off. When SQW is turned on, it pulls the voltage on the MOSFET gate down, turning it on.
Unfortunately, current leaking through R1 and R2 to the DS3231 means that we are not going to get zero current consumption when the MOSFET is turned off, but it is much less than 1 milliamp.
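As a rough sanity check of these figures, the divider arithmetic can be worked through numerically. Note that the supply voltage and exact divider topology below are assumptions made for this sketch (a 5 V rail, with the 4.7 kΩ resistor from the supply to the gate and the 2.2 kΩ resistor from the gate to the SQW pin), not values confirmed by the schematic:

```python
# Estimate MOSFET gate-source voltage and divider current for the SQW circuit.
# Assumed topology (not taken from the article): 5 V supply, 4.7 kOhm from
# supply to gate, 2.2 kOhm from gate to the DS3231's open-drain SQW pin.
V_SUPPLY = 5.0
R_TOP = 4700.0     # ohms, supply -> gate
R_BOTTOM = 2200.0  # ohms, gate -> SQW

# SQW released (open drain off): no divider current, gate sits at the supply.
vgs_off = 0.0  # gate == source, so the P-channel MOSFET stays off

# SQW asserted (pin pulled to ground): the divider conducts.
v_gate_on = V_SUPPLY * R_BOTTOM / (R_TOP + R_BOTTOM)
vgs_on = v_gate_on - V_SUPPLY          # gate voltage relative to source
divider_ma = V_SUPPLY / (R_TOP + R_BOTTOM) * 1000

print(round(vgs_on, 2), round(divider_ma, 2))  # prints: -3.41 0.72
```

With these assumed values, Vgs reaches about -3.41 V, short of the quoted -3.7 V turn-on figure (a point one commenter below also raises), and the divider draws roughly 0.72 mA while SQW is asserted, consistent with the "much less than 1 milliamp" figure.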
We’re using a BME280 environmental sensor on this device. It is connected via I2C to Raspberry Pi. You don’t need this sensor to implement the power saving
Power control
Now that we have our hardware built, we can get some code running to control the power. The DS3231 is connected to Raspberry Pi using I2C. Before you start, you must enable I2C on your Raspberry Pi using the raspi-config tool. Use sudo raspi-config and select Interfacing Options. Next, you need to make sure that you have all the I2C libraries installed by issuing this command at a Raspberry Pi console:
sudo apt-get install python3-smbus python3-dev i2c-tools
The sequence of operation of our sensor is as follows:
- The program does whatever it needs to do. This is the action that you want to perform at regular intervals. That may be to read a sensor and send the data onto the network, or write it to a local SD card or USB memory key. It could be to read something and update an e-ink display. You can use your imagination here.
- The program then sets an alarm in the DS3231 at a point in the future, when it wants the power to come back on.
- Finally, the program acknowledges the alarm in the DS3231, causing the SQW alarm output to change state and turn off the power.
Clock setting
The program below only uses a fraction of the capabilities of the DS3231 device. It creates an interval timer that can time hours, minutes, and seconds. Each time the program runs, the clock is set to zero, and the alarm is configured to trigger when the target time is reached.
Put the program into a file called SensorAction.py on your Raspberry Pi, and put the code that you want to run into the section indicated.
import smbus

bus = smbus.SMBus(1)

DS3231 = 0x68
SECONDS_REG = 0x00
ALARM1_SECONDS_REG = 0x07
CONTROL_REG = 0x0E
STATUS_REG = 0x0F

def int_to_bcd(x):
    return int(str(x)[-2:], 0x10)

def write_time_to_clock(pos, hours, minutes, seconds):
    bus.write_byte_data(DS3231, pos, int_to_bcd(seconds))
    bus.write_byte_data(DS3231, pos + 1, int_to_bcd(minutes))
    bus.write_byte_data(DS3231, pos + 2, int_to_bcd(hours))

def set_alarm1_mask_bits(bits):
    pos = ALARM1_SECONDS_REG
    for bit in reversed(bits):
        reg = bus.read_byte_data(DS3231, pos)
        if bit:
            reg = reg | 0x80
        else:
            reg = reg & 0x7F
        bus.write_byte_data(DS3231, pos, reg)
        pos = pos + 1

def enable_alarm1():
    reg = bus.read_byte_data(DS3231, CONTROL_REG)
    bus.write_byte_data(DS3231, CONTROL_REG, reg | 0x05)

def clear_alarm1_flag():
    reg = bus.read_byte_data(DS3231, STATUS_REG)
    bus.write_byte_data(DS3231, STATUS_REG, reg & 0xFE)

def check_alarm1_triggered():
    return bus.read_byte_data(DS3231, STATUS_REG) & 0x01 != 0

def set_timer(hours, minutes, seconds):
    # zero the clock
    write_time_to_clock(SECONDS_REG, 0, 0, 0)
    # set the alarm
    write_time_to_clock(ALARM1_SECONDS_REG, hours, minutes, seconds)
    # set the alarm to match hours minutes and seconds
    # need to set some flags
    set_alarm1_mask_bits((True, False, False, False))
    enable_alarm1()
    clear_alarm1_flag()

#
# Your sensor behaviour goes here
#

set_timer(1, 30, 0)
The set_timer function is called to set the timer and clear the alarm flag. This resets the alarm signal and powers off the sensor. The example above will cause the sensor to shut down for 1 hour 30 minutes.
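One detail worth noting in the listing above is the int_to_bcd helper: the DS3231 stores time values in binary-coded decimal, and the helper converts a decimal integer by taking its last two decimal digits as a string and re-reading them as hexadecimal, which places each digit in its own nibble. A standalone copy of the function (no I2C hardware required) demonstrates the effect:

```python
def int_to_bcd(x):
    # "59" parsed as hex is 0x59: high nibble 5, low nibble 9 -> BCD for 59.
    return int(str(x)[-2:], 0x10)

# Each decimal digit ends up in its own 4-bit nibble.
assert int_to_bcd(7) == 0x07
assert int_to_bcd(45) == 0x45   # 69 decimal
assert int_to_bcd(59) == 0x59   # 89 decimal
# Only the last two decimal digits are kept:
assert int_to_bcd(130) == 0x30
print("all BCD conversions check out")
```

This trick only works for non-negative values whose meaningful digits fit in two places, which holds for the hours, minutes, and seconds the program writes.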
You can use any other microcontroller that implements I2C
Power down
The SensorAction program turns off your Raspberry Pi without shutting it down properly, which is something your mother probably told you never to do. The good news is that in extensive testing, we’ve not experienced any problems with this. However, if you want to make your Raspberry Pi totally safe in this situation, you should make its file system ‘read-only’, which means that it never changes during operation and therefore can’t be damaged by untimely power cuts. There are some good instructions from Adafruit here: hsmag.cc/UPgJSZ.
Note: making the operating system file store read-only does not prevent you creating a data logging application, but you would have to log the data to an external USB key or SD card and then dismount the storage device before killing the power.
If you are using a different device, such as an ESP8266 or an Arduino, you don’t need to worry about this as the software in them is inherently read-only.
The SQW output from the DS3231 will pull the gate of the MOSFET low to turn on the power to Raspberry Pi
Always running
To get the program to run when the Raspberry Pi boots, use the Nano editor to add a line at the end of the rc.local file that runs your program.
sudo nano /etc/rc.local
Use the line above at the command prompt to start editing the rc.local file and add the following line at the end of the file:
python3 /home/pi/SensorAction.py &
This statement runs Python 3, opens the SensorAction.py file, and runs it. Don’t forget the ampersand (&) at the end of the command: this starts your program as a separate process, allowing the boot to complete. Now, when Raspberry Pi boots up, it will run your program and then shut itself down. You can find a full sample application on the GitHub pages for this project (hsmag.cc/Yx7q6t). It logs air temperature, pressure, and humidity to an MQTT endpoint at regular intervals. Now, go and start tracking that marrow temperature!
Issue 30 of HackSpace magazine is out now
The latest issue of HackSpace magazine is on sale now, and you can get your copy from the Raspberry Pi Press online store. You can also download it for free to check it out first.
UK readers can take advantage of our special subscriptions offer at the moment.
3 issues for £10 & get a free book worth £10…
If you’re in the UK, get your first three issues of HackSpace magazine, The MagPi, Custom PC, or Digital SLR Photography delivered to your door for £10, and choose a free book (itself worth £10) on top!
Christopher
I’m not sure what the purpose of R1 is; as it stands, it makes the gate voltage marginal at -3.4V, and the datasheet for the IRF9540N gives Vgs(th) as -2.0V to -4.0V. You would be better off removing R1 completely.
Mark H Tomlin
Honestly, given how stupid powerful the CPU is and how much memory the system has even on the Zero, a Tock-like OS for the Raspberry Pi hardware for actual embedded applications would be a godsend. The hardware is cheap and mass-produced, has a product life cycle of many, many years, and is backed by a foundation with a great track record of delivering hardware for long periods of time. If the Raspberry Pi Foundation started to invest in TockOS as the underlying embedded OS and Rust as the programming language, your devices would be in EVERYTHING in 10 years.
Harry Hardjono
Why not just have a shutdown procedure as part of the scripting process? I imagine that it is easy to do with bash script.
Martin Bonner
British Antarctic Survey has this problem – except they want the sensor to run for *months*. Part of the solution is to use a bigger battery; it turns out that while a car battery struggles to produce tens of amps at low temperatures, it is just fine if you want tens of milliamps. The other part of the solution is: only turn on the power intermittently. The problem is *when* to turn the sensor on; you don’t really understand the phenomena you are observing (otherwise you wouldn’t bother observing it), so there is always the risk that if you turn it on (say) in early Spring, you may miss the really interesting observation in late Autumn.
Seaton
Note that those DS3231 RTC modules require a rechargeable lithium battery, as correctly shown in the photograph (e.g. LIR2032), as they are being constantly trickle-charged at around 4.6V. A normal CR2032 shouldn’t be used unless a simple mod is made to remove the charging circuit (either the diode or the resistor – instructions available on YouTube). If modified, a non-rechargeable CR2032 will last for years anyway.
Seaton
A slight correction to my previous comment: the author has used a standard non-rechargeable CR2032 instead of a LIR2032 as shown in the photograph. Although most buyers of these RTC modules have probably done the same without realizing it, it does mean the CR2032 is at least at risk of a shortened lifespan, and at worst at risk of exploding or catching fire unless the charging circuit is disabled.
Seaton
Another correction (not my day!):
Having re-read the article I now notice the author is supplying the RTC module from 3V3 so it won’t trickle charge the battery as there is a diode voltage drop in the charging circuit, unless the battery voltage drops below 2.7V.
However, my warning to take care with these modules still stands when supplying them from 5V.
Tttttt
Not your day
Michael
Thank you very much for this article.
I tried this 3 times, but in the end it works with an LED (and resistor) and not with a Raspberry Pi.
My circuit receives the 3.3V from an lf33cf (+ 2x capacitors) and should switch the Raspberry Pi on and off.
Any ideas?
Thank you in advance!
Michael
Simon
What do you call a component like the red “USB Power” thing? I want to get one! A link would be much appreciated :)
Helen Lynn
It’s a USB Power Meter, also known as a USB Power Monitor or USB Tester, and we’ve turned up two places where you can buy this particular one. HTH :)
Helen Lynn
Oh – and you can read more about it in this HackSpace magazine article from the beginning of this year: | https://www.raspberrypi.com/news/build-low-power-clock-controlled-devices/ | CC-MAIN-2022-05 | refinedweb | 2,334 | 69.41 |
Default constructor.
Recursively search the given schema for named schemas and adds them to the given container.
Adds a namespace object for the given name into the dictionary if it doesn't exist yet.
Adds a protocol object to generate code for.
Adds a schema object to generate code for.
Creates an XML documentation for the given comment.
Creates the static schema field for class types.
Generates code for the given protocol and schema objects.
Generate list of named schemas from given protocol.
Generate list of named schemas from given schema.
Gets the schema of a union with null.
Creates an enum declaration.
Creates a class declaration for fixed schema.
Generates code for the protocol objects.
Creates a class declaration.
Generates code for the schema objects.
Writes the generated compile unit into one file.
Writes each type in each namespace into an individual file.
List of generated namespaces.
Object that contains all the generated types.
List of protocols to generate code for.
List of schemas to generate code for. | http://avro.apache.org/docs/current/api/csharp/classAvro_1_1CodeGen.html | CC-MAIN-2014-10 | refinedweb | 168 | 63.36 |
American Country: The Faces & Places of Rural America
The American Country project will document the heart and soul of the dwindling rural population and their impact on all of us.
About this project
** INTRO VIDEO COMING SOON! **
What is 'American Country'?
By definition, 'American Country' is the rural lands of the United States. It can be farmland. It can be forestland. It can be mountainous. It's the places you go on vacation when you want to 'get out of the city'. It's the fresh local produce you eat. It's the place you found inspiration for your own project maybe. Each of us have our own 'American Country' memory... which is why I want to document them as a story of the people, places, and memories we sometimes overlook and forget.
Why is this important? Why should I pledge?
Every minute of every day, the United States loses more than an acre of rural land to new development. In terms of size, the United States is 90% rural but only 16% of the population calls this land home (compared to 72% in 1910) and it continues to decline. The combination of urban sprawl and job loss in rural areas will continue to drive the population down... and someday, it might be a thing of the past. Sustainable, organic, and local have become popular words during recent years. I want to put a face and story to the people who live that on a daily basis.
Also... the cost of self-publishing is EXPENSIVE. I'm leaning on the support of my backers to be a critical part in the process. In return, I've promised a high quality product. I'll also keep all backers in the loop of future projects (both on and off of Kickstarter) and hopefully be able to pay back the support in spades down the road!
Tell me more about these rewards...
The rewards are what makes Kickstarter a unique and fun place to develop projects. As a backer, not only do you get the satisfaction of helping someone fulfill a dream, but you also get something cool in return (plus in this case... you'll get your name in a book)! Here's a brief description of each reward - if you have any questions, feel free to contact me!
- Postcard (5x7) - I'll be sending out cards from the road with a photograph from the book using 'Sincerely Ink'... just as a way to say 'thanks' and show you one of the images from the upcoming book before it's published
- Limited Edition Prints (8x10, 16x20, 24x36) - regardless of the size, each print will be done on extremely high quality, color-rich paper. These aren't your Walgreen's instant prints. (retail value: $50 / $115 / $275)
- Standard Edition Softcover (10 x 8) - the standard softcover edition will be done on a premium lustre (100#) paper, with a 10 x 8 (landscape) orientation.
- Standard Edition Hardcover (10 x 8) - the standard hardcover edition will be done on a premium lustre (100#) paper, with a 10 x 8 (landscape) orientation, and will have a dust jacket cover
- Extended Edition Hardcover (10 x 8) - the extended edition hardcover will be done on a pro-grade premium lustre (140#) paper, with a 10 x 8 (landscape) orientation, and will have a dust jacket cover... PLUS there will be approx 25% more pages/images in this edition versus the standard edition
- Limited Edition Hardcover (13 x 11) - the limited edition hardcover will be done on a pro-grade premium lustre (140#) paper, with a larger 13 x 10 (landscape) orientation, and will have a dust jacket cover... PLUS there will be approx 25% more pages/images in this edition versus the standard edition
*NOTE: I've estimated the final delivery of the books as December, but I'm aiming for early to mid-November. Either way, you'll have them in time for the holidays and they make GREAT gifts! Also - all pledge levels include shipping!
Where will you be traveling?
I'm planning on road tripping, camping, and criss-crossing the rural roads of America... from the East Coast to the Rocky Mountains. I'll be finalizing my route based on final pledge commitments.
What's your plan to complete this project?
Phase I - finalize route planning and schedule (July) - thank you emails to all of my backers will be sent and eternal gratitude will also commence from this point forward
Phase II - the criss-crossing rural road adventure begins! (July - Sept) - postcards from the road will be mailed during this phase
Phase III - the road adventure ends... now it's time to review, edit, and finalize images... as well as complete page layout designs, etc (Sept - Nov)
Phase IV - the book is sent to print and copies will be signed & mailed as we receive them! (Nov - Dec)
What does my pledge help to fund?
Most importantly, pledges go towards delivering the rewards to you! I own the equipment necessary to complete the project, so the vast majority of funding goes into the production costs of high quality prints, top notch books, shipping, packaging, etc. There's a small balance that goes into funding the project itself... i.e. travel expenses (predominantly fuel expenses). THIS IS A NOT-FOR-PROFIT PROJECT... all funds go directly towards producing the photographs, stories, and rewards that you receive!
What will you take photos of?
People, places, and things! 'American Country' will include portraits, rolling landscapes, county fairs, wildlife, livestock, candid scenes, and of course Kountry Kitchens! And if luck is on our side... a couple of rodeo clowns.
Can I be in your book?
If you live in rural America... YES! I'll be reaching out to 4H clubs, farming associations, and anyone in rural America that would be interested in spending five minutes, an hour, or a day with me to show me what it means to live American Country! Please contact me and let me know how you think you can make American Country a better project!
Will the book be only photos?
No! American Country is more than photos. It's a story. Whether it's on the farm, at the local diner, on a mountain top, or in the woods... my goal is to meet the people who make up rural America. I'll be including some of the most interesting quips, jokes, and words of wisdom that I come across along the way.
What equipment will you be using?
I will be using professional grade equipment to produce a professional grade product. I will be using a Nikon D800 as my primary camera, along with a Nikon D200 as backup. The lenses I anticipate using for a majority of the images include a Nikon 24-70mm f/2.8, Nikon 70-200mm f/2.8, and Nikon 50mm f/1.4. For a complete equipment list, feel free to contact me.
Do you have a 'stretch goal'? What does the 'stretch goal' fund?
Yes. The S T R E T C H goal is $20,000. The additional funds would go towards lengthening the trip and purchasing additional equipment (primarily portable lighting) to execute more complex images on the road. The reward for my backers, in return, would be longer books with more pages and more images, both in the standard and extended editions! So, after you pledge... run and tell your friends (both real and Facebook), family, neighbors, co-workers, mailman, dentist, and the next stranger you see on the street... ask them to pledge and let's make an even more awesome book!
Sample Images (for reference purposes only)
Sample Images (other work)
KICKSTARTER PROJECTS ARE ALL OR NOTHING... SO IF YOU PLEDGE, YOU'LL ONLY BE CHARGED IF WE CAN REACH MY GOAL! IF I FALL SHORT OF MY PLEDGE GOAL - YOU WON'T OWE ANYTHING... AND I DON'T GET A DIME OF FUNDING. TELL YOUR FRIENDS, FAMILY, AND CO-WORKERS! THANKS FOR YOUR SUPPORT IN ADVANCE!
Risks and challenges
With any project, there are unforeseeable challenges that could arise. The greatest challenge would be equipment failure, but I feel confident that won't be an issue. I will have a back-up camera in the event my primary camera experiences problems (unlikely). I will be shooting to multiple memory cards... and uploading them daily to TWO separate external hard drives for storage.
In the highly unlikely, catastrophic event that I am unable to complete this project - I will issue refunds to all of my backers.
If there are any setbacks whatsoever along the way, I'll be sure to keep all of my backers in the loop. If you have questions or concerns, please contact me!
Funding period
- (30 days) | https://www.kickstarter.com/projects/ryanpett/american-country-the-faces-and-places-of-rural-ame | CC-MAIN-2017-30 | refinedweb | 1,485 | 73.88 |
Hi guys. I am getting the error: LoanProgram.java:17: variable payment might not have been initialized
payment = getPayment (amount, rate, years, months, payment);
I am stumped, and the lab is due tomorrow. If anyone could help me out I would greatly appreciate it. :)
I'll put an asterisk on the line with the error.
import java.util.Scanner;
import java.text.DecimalFormat;

public class LoanProgram
{
    static Scanner input = new Scanner (System.in);

    public static void main (String [] args)
    {
        int years, months;
        double amount, rate, payment;

        amount = getAmount ();
        rate = getRate ();
        years = getYears ();
        months = years * 12;
*       payment = getPayment (amount, rate, years, months, payment);
        System.out.println (payment);
    }

    public static double getAmount ()
    {
        System.out.print ("Enter the amount you are borrowing: ");
        double amount = input.nextDouble ();
        if (amount < 0 || amount > 100000)
        {
            System.out.print ("invalid. Enter amount: ");
            amount = input.nextDouble ();
        }
        return amount;
    }

    public static double getRate ()
    {
        System.out.print ("Enter the annual interest rate as a percent: ");
        double rate = input.nextDouble ();
        if (rate < 0)
        {
            System.out.print ("You can't have a negative rate. Enter amount: ");
            rate = input.nextDouble ();
        }
        return rate;
    }

    public static int getYears ()
    {
        System.out.print ("Enter the length of the loan in years: ");
        int years = input.nextInt ();
        if (years < 1)
        {
            System.out.print ("The minimum amount of years is 1. Enter amount: ");
            years = input.nextInt ();
        }
        return years;
    }

    public static double getPayment (double amount, double rate, int years, int months, double payment)
    {
        months = years * 12;
        payment = amount * rate * (Math.pow(rate + 1, months)/((Math.pow(rate + 1, months)-1)));
        return payment;
    }
}
Here is a short summary of the prerequisites of such a project on a target system:
What to install
Microsoft Office 2003 with the following features enabled:
Why?
There are some DLLs in the following folder which are required for Office to be able to load managed solutions:
c:\Program Files\Microsoft Office\Office11
Explanation
Office 2003 uses the OTKLoadr.dll library to load Visual Studio customizations, smart documents and smart tags.
Office 2003 PIAs (Primary Interop Assemblies)
This will add the Smart Tag Library to the GAC
Managed code that implements the ISmartDocument interface requires a reference to the Microsoft.Office.Interop.SmartTag namespace.
See:
Download for Office 2003 PIA:
VSTOR 2005 SE
The VSTO runtime automatically installs the newest version of OTKLOADR.DLL and additionally adds the CLR Lockback registry key ([HKEY_CURRENT_USER\Software\Classes\Interface\{000C0601-0000-0000-C000-000000000046}]). This enables Office 2003 to run .NET 2.0 code, as initially Office 2003 could not do this.
Download for VSTOR 2005 SE
Alternately, one can install the KB907417, which includes the newest version OTKLoadr, and set up the registry key manually.
Extensibility fix
Adds the Extensibility.dll to the GAC.
This is included in the extensibilityMSM.msi file from KB 908002.
The IDTExtensibility2 interface is required for all add-ins and automation projects (it includes events like loading, unloading, updating)
.NET Framework Runtime 1.1
See the explanation for “FullTrust”
Download:
.NET Framework 2.0
Obviously, the code is written in .NET 2.0 and requires the framework
The installed DLL should be added to the “FullTrust” zone for both .NET 1.1 and .NET 2.0
Automation code that is executed via the OTKLoadr appears to follow the permission guidelines of .NET 1.1. However, as the solution most likely contains code whose call stack goes through the .NET 2.0 Framework (e.g. a simple MessageBox.Show()), the code needs to be trusted also by the .NET 2.0 Framework
Add the DLL to the Full Trust zone using the CASPol tool from the .NET Framework:
C:\Windows\Microsoft.NET\Framework\v2.0.50727\ and
C:\Windows\Microsoft.NET\Framework\v1.1.4322\
CASPol.exe -ag My_Computer_Zone -url "C:\Users\[Username]\Local Settings\Application Data\Microsoft\Schemas\[NameSpace]\[Name].DLL" FullTrust
Here is a sample Visual Studio 2008 project which creates a very basic Smart Document DLL in .NET 2.0. It also contains the manifest and a sample schema:
SmartDocument.ZIP | http://blogs.msdn.com/b/cobold/archive/2010/01/10/create-smart-document-solutions-in-net-framework-2-0.aspx | CC-MAIN-2015-06 | refinedweb | 407 | 50.73 |
James Dyer commented on SOLR-4047:
----------------------------------
Igor, I'm looking at the data-config.xml snippet you posted and I can't figure out where
the "attach" namespace comes from. Is this from a parent entity that you aren't showing,
or from solrconfig.properties, or from System properties? In any case this is a pretty significant
detail, as your problem seems to be that it cannot find "${attach.name}", right?
It would be very helpful fixing this if you can write a failing unit test. Perhaps the best
way is to model your test on something that already exists? Take a look at "TestNestedChildren.java",
which was just added this past week.
This test adds 1 document to a Solr index using nested entities, getting each of 3 fields,
1 from each entity. It then queries the index to see if the document got added and if the
inner-most entity's value is part of the document. Maybe you could copy this one and make
minor changes to mimic what you're trying to do?
For general guidelines on contributing patches, see:
> dataimporter.functions.encodeUrl throughs Unable to encode expression: field.name with
value: null
> --------------------------------------------------------------------------------------------------
>
> Key: SOLR-4047
> URL:
> Project: Solr
> Issue Type: Bug
> Components: contrib - DataImportHandler
> Affects Versions: 4.0
> Environment: Windows 7
> Reporter: Igor Dobritskiy
> Priority: Critical
> Attachments: db-data-config.xml, db.sql, schema.xml, solrconfig.xml
>
>
> For some reason dataimporter.functions.encodeUrl stopped working after update to solr
4.0 from 3.5.
> Here is the error
> {code}
> Full Import failed:java.lang.RuntimeException: java.lang.RuntimeException: org.apache.solr.handler.dataimport.DataImportHandlerException:
Unable to encode expression: attach.name with value: null Processing Document # 1
> {code}
> Here is the data import config snippet:
> {code}
> ...
> <entity name="account"
>
> <entity name="img_index" processor="TikaEntityProcessor"
>
> <field column="text" name="body" />
> </entity>
> </entity>
> ...
> {code}
> When I change it to *not* use dataimporter.functions.encodeUrl it works, but I need
to URL-encode file names as they have special chars in their names.
On Sun, 17 Apr 2005, Petr Baudis wrote: > Dear diary, on Sun, Apr 17, 2005 at 05:24:20PM CEST, I got a letter > where Daniel Barkalow <[EMAIL PROTECTED]> told me that... > > This adds support to revision.h for parsing commit records (but not going > > any further than parsing a single record). Something like this is needed > > by anything that uses revision.h, but older programs open-code it. > > > > Signed-Off-By: Daniel Barkalow <[EMAIL PROTECTED]> > > Could you please convert the current users (rev-tree.c and fsck-cache.c) > to use this in the same patch?
They do things somewhat differently, so it would be more intrusive. Could I send an extra patch to convert them instead of doing them here? > > Index: revision.h > > =================================================================== > > ---; > > }; > > > > @@ -111,4 +112,29 @@ > > } > > } > > > > +static int parse_commit_object(struct revision *rev) > > +{ > > + if (!(rev->flags & SEEN)) { > > + void *buffer, *bufptr; > > + unsigned long size; > > + char type[20]; > > + unsigned char parent[20]; > > + > > + rev->flags |= SEEN; > > + buffer = bufptr = read_sha1_file(rev->sha1, type, &size); > > + if (!buffer || strcmp(type, "commit")) > > + return -1; > > + get_sha1_hex(bufptr + 5, rev->tree); > > + bufptr += 46; /* "tree " + "hex sha1" + "\n" */ > > + while (!memcmp(bufptr, "parent ", 7) && > > + !get_sha1_hex(bufptr+7, parent)) { > > + add_relationship(rev, parent); > > + bufptr += 48; /* "parent " + "hex sha1" + "\n" */ > > + } > > + //rev->date = parse_commit_date(bufptr); > > I don't like this. Yeah, that's left over from the not-quite the same parsing code in the other programs. > > + free(buffer); > > + } > > + return 0; > > +} > > + > > #endif /* REVISION_H */ > > BTW, I think that in longer term having this stuffed in revision.h is a > bad idea, we should have revision.c. I will accept patches putting the > stuff to revision.h for now, though (unless it gets outrageous). I'd actually like to make them commit.{c,h}, since the system calls the things they actually deal in commits, not revisions. But this is getting into stuff that's likely to cause painful divergance from Linus's repo, which is why I'm a bit leary of actually doing it now. -Daniel *This .sig left intentionally blank* - To unsubscribe from this list: send the line "unsubscribe git" in the body of a message to [EMAIL PROTECTED] More majordomo info at | https://www.mail-archive.com/git@vger.kernel.org/msg00456.html | CC-MAIN-2018-05 | refinedweb | 352 | 66.84 |
Fine-grained multilayer virtualized systems analysis
Keywords: Virtualized system, KVM, LXC, Tracing, LTTng
Introduction
Among the advantages of cloud environments, we can cite their flexibility, their lower cost of maintenance, and the possibility to easily create virtual test environments. Those are some of the reasons why they are widely used in industry. However, using this technology also brings its share of challenges in terms of debugging and detecting performance failures. Indeed, it can be more straightforward, when using the right tools, to detect performance anomalies while working with a single layer of virtualization. For instance, if we have information about all the processes running on a machine through time, it is then possible to know, for a specific thread, which processes interrupted it. Because virtual machines (VMs) run in a layer independent of their host, it becomes more tedious to detect direct and indirect interactions between tasks happening inside a VM, on the host, inside a container, or even on nested or parallel VMs.
In this study, we focus on a way to analyze information coming from a host, multiple VMs and Linux containers (LXC) [1], as if all the execution was happening solely on the host. The main objective is to erase as much as possible the boundaries between a host and the different virtual environments, to help a user visualize more clearly how the processes interact with each other.
To achieve this, we use kernel tracing on both the host and VMs, synchronize those traces, aggregate them into a unique structure and finally display the structure inside a view showing the different layers of the virtual environment during the tracing period. Considering the set of recorded traces as a whole system is the core concept of our fused virtualized systems (FVS) analysis presented here.
This paper is structured as follows: Section “Related work” reviews related work on performance anomalies in virtual environments. Section “Fused virtualized systems analysis” explains in more detail the multiple steps of the FVS analysis, including the detection strategies for single-layered VMs (SLVMs), nested VMs (NVMs) and containers. The same section introduces the view created to visualize the whole system. Section “Use cases and evaluation” presents some use cases for the FVS analysis and view. Section “Conclusion and future work” concludes this paper.
Related work
Dean et al. [2] created an online performance bug inference tool for production cloud computing. To accomplish this, they built an offline function signature extraction using closed frequent system call episodes. The advantage of their method is that the signature extraction can be done outside the production environment, without running the workload that usually triggers a performance fault. Using their tool, they can identify a deficient function out of thousands of functions. However, their work is not adapted to performance anomalies involving multiple virtual machines.
The research by Sambasivan et al. [3] proposes an approach to find, categorize and compare similar execution flows of different requests, in order to diagnose performance changes. Their way of extracting similarities between different requests bears some resemblance to our method. However, our solution can be used for different purposes, from comparing the different execution flows to understanding the overall execution of VMs and extracting the relations between the executions of the different processes of the VMs and of the host machine.
In their work, Shao et al. [4] proposed a scheduling analyzer for the Xen Virtual Machine Monitor [5]. The analyzer uses a trace provided by Xen to reconstruct the scheduling history of each virtual CPU. By doing so, it is possible to retrieve interesting metrics like the block-to-wakeup time. However, this approach is limited to Xen and not directly applicable to other hypervisors. Furthermore, a trace produced by Xen is not sufficient to identify a process inside a VM that creates a perturbation across the VMs.
To gain in generality and not rely too much on hypervisors and application code, some work was initiated with the intention to detect performance anomalies across virtual machines by using kernel tracing.
With PerfCompass [6], Dean et al. used kernel tracing on virtual machines and created an online system call trace analysis, able to extract fault features from the trace. The advantage of their work is that it only needs to trace the virtual machine’s system calls and not the host. Consequently, their solution has a low overhead impact and is able to distinguish between external and internal faults. However, it is not possible to see the direct interactions of the VM with neither the host nor the other VMs and the containers.
Another work proposed by Gebai et al. [7] focused more on the interactions between several machines. The authors proposed at first an analysis and a view showing, for each virtual CPU, when it is preempted. They also created a way to recover the execution flow of a specific process by crossing virtual machine boundaries to see which processes preempted it.
Their work is similar to ours but differs on multiple points. For instance, in their work, the Virtual Machine view displays one row for each virtual CPU. This number can easily grow if numerous VMs are traced. Consequently, the readability of the view can be altered. Additionally, by doing so, information about physical CPUs is lost. It is therefore impossible to track a VM, a virtual CPU or a process on the host. Finally, their work is dedicated to the analysis of single layered VMs, unlike our work that focuses also on nested VMs and containers.
In [8], the authors used the recently introduced Intel PT ISA extensions on modern Intel Skylake processors to analyze the performance of VMs. They developed interactive Resource and Process Control Flow visualization tools to analyze the hardware trace data for VMs. They could trace proprietary closed-source operating systems to diagnose abnormal executions. Despite its merits, the approach is limited to new Intel processors and works only for hardware-assisted virtualization; thus it cannot be used with other virtualization methods, which does not meet our flexibility requirement.
Nemati et al. [9] proposed a low-overhead technique that uses the trace from the host hypervisor to detect overcommitment of resources in the host machine. Their work can detect some problems related to resource contention, but is not able to detect problems occurring within the VMs.
To our knowledge, no previous work has tried to retrieve information about containers from a kernel trace. Other projects, like Docker [10], give access to runtime metrics such as CPU and memory usage, memory limit, and network I/O metrics, exposed by the control groups [11] used by LXC. No previous work has tried to represent the full execution of a multilayered system as if everything was happening on the host. Nonetheless, in reality, every process, even in nested VMs, eventually runs on a physical CPU of the host. Our contribution is to fill this gap.
Fused virtualized systems analysis
A multilayered architecture is often the chosen strategy when designing a software architecture. Each layer is dedicated to a specific role, independently of the other layers, and is hosted by a tier, or physical layer, that can contain multiple layers at once.
Examples of different configurations of layers of execution environment
The idea we introduce here is to erase the bounds between L0, its VMs in L1 and L2, and every container, to simplify the analysis and the understanding of complex multilayer architectures. Some methods for detecting performance degradations already exist for single-layer architectures. To reuse some of these techniques on multilayer architectures, one might remodel such systems as if all the activity involved only one layer.
Architecture
Architecture of the fused virtual machines analysis
A trace consists of a chronologically ordered list of events, each characterized by a name, a time stamp and a payload. The name identifies the type of the event, the payload provides information relative to the event, and the time stamp specifies when the event occurred.
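As a minimal sketch, such an event record can be modeled as follows (the class and field names are ours, not LTTng's; the actual event layout is richer):

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class TraceEvent:
    name: str       # event type, e.g. "sched_switch" or "kvm_entry"
    timestamp: int  # nanoseconds, relative to the tracer's clock origin
    payload: dict = field(default_factory=dict)  # event-specific fields

# A scheduler switch as a kernel tracer might record it
# (payload field names are illustrative):
ev = TraceEvent("sched_switch", 1_000_250,
                {"prev_tid": 4211, "next_tid": 4212, "cpu_id": 0})
```

The later sketches in this section assume events shaped roughly like this.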
In this study, we use the Linux Trace Toolkit Next Generation (LTTng) [14] to trace the machine kernels. This low impact tracing framework suits our needs, although other tracing methods can also be adopted. By tracing the kernel, there is no requirement to instrument applications. Therefore, even a program using proprietary code can be analyzed by tracing the kernel. However, some events from the hypervisors managing the VMs are needed for the efficiency of the fused analysis. The analysis needs to know when the hypervisor is letting a VM run its own code or when it is stopped. Since, in our study, we are using KVM [15], merged in the Linux kernel since version 2.6.20 [16], and because the required trace points already exist, there is no need for us to add further instrumentation to the hypervisor. In our case, with KVM using Intel x86 virtualization extensions, VMX [17], the event indicating a return to a VM running mode will always be recorded on L0 and will be generically called a VMEntry. The opposite event will be called a VMExit.
Synchronization is an essential part of the analysis. Since traces are generated on multiple machines by different instances of tracers, we have no guarantee that a time stamp for an event in a first trace will make sense in the context of a second trace. Each machine may have its own timing sources, from the software interrupt timer to the cycle counter. When tracing the operating system kernel, each system instance (i.e., host, VM, container, etc.) uses its own internal clock to time stamp events. In order to obtain a common view of the behavior of all systems, recorded as separate trace events in each system, it is essential to properly measure the offsets and drifts between these clocks.
Traces visualization without synchronization
Wrong analysis due to inaccurate synchronization
There are different possible solutions to synchronize the trace events between the host kernel and the VMs. One is using the TSC (Time Stamp Counter), a 64-bit register built into the processor which counts CPU cycles since system boot. It can be read by a single assembly instruction (rdtsc) and could therefore be considered a time reference anywhere in the system (i.e., kernel, hypervisor, and application). However, using the TSC for timekeeping in a virtual machine has several drawbacks. The TSC_OFFSET field of a VM can change, especially during VM migration, which forces the tracer to keep track of this field in the VMCS. If this event is lost, or the tracer is not started at that time, the computed time will no longer be correct. Furthermore, some processors stop the TSC in their lower-power halt states, which causes time shifting in the VM. Also, this form of timekeeping is not available for full software virtualization, since TSC_OFFSET is part of the Intel and AMD hardware virtualization extensions.
Gebai et al. [7] used the hypercall only between L0 and L1. However, the method also applies between Ln and Ln+1, since a hypercall generated in Ln+1 will necessarily be handled by Ln. In our case, synchronization events are generated between L0 and all its machines in L1, and between machines of L1 and their hosted machines. Consequently, a machine in L2 is synchronized with its host, which has previously been synchronized with L0.
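To give an idea of what the synchronization step produces, the sketch below fits a linear clock transform t_host ≈ a·t_guest + b from matched hypercall event pairs. The actual offline algorithm (e.g., the convex-hull method of [19]) provides accuracy guarantees that a plain least-squares fit does not; this is only an illustration:

```python
def fit_clock_transform(pairs):
    """Fit host_ts ~= a * guest_ts + b from matched synchronization
    event pairs (guest_ts, host_ts). A least-squares fit stands in for
    the convex-hull algorithm used by the real offline synchronization."""
    n = len(pairs)
    sg = sum(g for g, _ in pairs)
    sh = sum(h for _, h in pairs)
    sgg = sum(g * g for g, _ in pairs)
    sgh = sum(g * h for g, h in pairs)
    a = (n * sgh - sg * sh) / (n * sgg - sg * sg)
    b = (sh - a * sg) / n
    return a, b

# Synthetic guest clock: drifts by +2 ppm and is offset by 5_000 ns.
pairs = [(t, t + t // 500_000 + 5_000)
         for t in (0, 1_000_000, 2_000_000, 3_000_000)]
a, b = fit_clock_transform(pairs)
```

Once a and b are known for each guest, every guest event time stamp can be rewritten into the host's time base before the traces are merged.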
The purpose of the data analyzer is to extract from the synchronized traces all relevant data and to add them in a data model. Besides analyzing events specific to VMs and containers, our data analyzer should handle events generally related to the kernel activity. For this reason, the fused analysis is based on a preexisting kernel analysis used in Trace Compass [20], a trace analyzer and visualizer framework. Therefore, the fused analysis will by default handle events from the scheduler, the creation, destruction and waking up of processes, the modification of a thread’s priority, and even the beginning and the end of system calls.
Construction of the fused execution flow
KVM works in a way such that each vCPU of a VM is represented by a single thread on its host. Therefore, to complete the fused analysis, we need to map every VM’s vCPU with its respective thread. This mapping is achieved by using the payloads of both synchronization and VMEntry events. On the one hand, a synchronization event recorded on the host contains the identification number of the VM, so we can match the thread generating the event with the machine. On the other hand, a VMEntry gives the ID of the vCPU going to run. This second information allows the association of the host thread with its corresponding vCPU.
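The pairing described above can be sketched as a single pass over the host events. The event and field names below ("vm_sync", "kvm_entry", "vm_id", "vcpu_id") are illustrative stand-ins for the actual tracepoint payloads:

```python
def map_vcpus_to_threads(host_events):
    """Build {(vm_id, vcpu_id): host_tid} from a host trace.
    A synchronization event ties a host thread to a VM; a subsequent
    VMEntry on the same thread names the vCPU that thread backs."""
    thread_vm = {}    # host tid -> VM it is known to back
    vcpu_thread = {}  # (vm_id, vcpu_id) -> host tid
    for ev in host_events:
        tid = ev["tid"]
        if ev["name"] == "vm_sync":  # synchronization hypercall handled on host
            thread_vm[tid] = ev["vm_id"]
        elif ev["name"] == "kvm_entry" and tid in thread_vm:
            vcpu_thread[(thread_vm[tid], ev["vcpu_id"])] = tid
    return vcpu_thread

events = [
    {"name": "vm_sync", "tid": 501, "vm_id": "vmA"},
    {"name": "kvm_entry", "tid": 501, "vcpu_id": 0},
    {"name": "vm_sync", "tid": 502, "vm_id": "vmA"},
    {"name": "kvm_entry", "tid": 502, "vcpu_id": 1},
]
mapping = map_vcpus_to_threads(events)
```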
Data model
The data analysis needs an adapted structure as its data model. This structure must satisfy multiple criteria. Fast access to the data is required, so that a view can dynamically and responsively display information to users. The structure also needs to provide a way to store and organize the state of the whole system, while keeping information relative to the different layers. For this reason, we need a design that can store information about diverse aspects of the system.
Structure of the data model
Finally, the data model provides a time dimension aspect, since the state of each object attribute in the structure is relevant for a time interval. Those intervals introduce the need for a scalable model, able to record information valid from a few nanoseconds to the full trace duration.
In this study, we chose to work with a State History Tree (SHT) [21]. A SHT is a disk-based data structure designed to manage large streaming interval data. Furthermore, it provides an efficient way to retrieve, in logarithmic access time, intervals stored within this tree organization [22].
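A toy stand-in for the SHT's query interface helps illustrate the requirements above: piecewise-constant attribute states, stored as intervals and retrieved in logarithmic time. The real SHT is a disk-based tree optimized for streaming inserts; this in-memory version only mimics its behavior:

```python
import bisect

class IntervalStore:
    """Toy stand-in for the State History Tree: per-attribute lists of
    closed intervals, queried in O(log n) with bisect."""
    def __init__(self):
        self._starts = {}  # attribute -> sorted interval start times
        self._ivals = {}   # attribute -> list of (start, end, value)

    def insert(self, attr, start, end, value):
        # Intervals for one attribute are assumed non-overlapping and
        # inserted in chronological order, as a trace analysis produces them.
        self._starts.setdefault(attr, []).append(start)
        self._ivals.setdefault(attr, []).append((start, end, value))

    def query(self, attr, t):
        i = bisect.bisect_right(self._starts.get(attr, []), t) - 1
        if i >= 0:
            start, end, value = self._ivals[attr][i]
            if start <= t <= end:
                return value
        return None

store = IntervalStore()
store.insert("pcpu0/status", 0, 99, "idle")
store.insert("pcpu0/status", 100, 250, "usermode")
```

A view only ever asks "what was the state of this attribute at time t?", which is exactly the `query` operation above.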
Algorithm 1 constructs the SHT by parsing the events in the traces. If the event was generated by the host, then the CPU that created the event is directly used to handle the event. However, if the event was generated by a virtual machine, we need to recursively find the CPU of the machine’s parent harboring the virtual CPU that created the event, until the parent is L0. Only then, the right pCPU is recovered and we can handle the event. This process is presented in Algorithm 2.
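The recursive resolution of Algorithm 2 can be sketched as follows, with the SHT's current state abstracted into two plain dictionaries (hypothetical names):

```python
def resolve_physical_cpu(machine, cpu, parent_of, hosting_cpu):
    """Sketch of Algorithm 2: walk up the virtualization layers until L0.
    parent_of[machine] names the host one layer down (None for L0);
    hosting_cpu maps (machine, vcpu) to the parent's CPU currently
    running that vCPU (in the real analysis this comes from the SHT)."""
    while parent_of.get(machine) is not None:
        cpu = hosting_cpu[(machine, cpu)]
        machine = parent_of[machine]
    return cpu

parent_of = {"L0": None, "vm1": "L0", "vm2": "vm1"}  # vm2 is nested (L2)
hosting_cpu = {("vm2", 0): 1,   # vm2's vCPU0 runs on vm1's vCPU1
               ("vm1", 1): 3}   # vm1's vCPU1 runs on pCPU3 of L0
```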
The fundamental aspect of the construction of the SHT is the detection of the frontiers between the execution of the different machines and the containers. This detection is achieved by handling specific events and the application of multiple strategies.
Single layered VMs detection
In the case of SLVMs, the strategy is straightforward. The mapping is direct between the vCPUs of a VM in L1 and its threads in L0: a VM will be running its vCPU immediately after the recording of a VMEntry on the corresponding thread. Conversely, L0 stops a vCPU immediately before the recording of a VMExit.
Algorithm 3 describes the handling of a VMEntry event for the construction of the SHT. In this case, we query the virtual CPU that is going to run on the physical CPU. Then, we restore the state of the virtual CPU in the SHT, while we save the state of the physical CPU. The exact opposite treatment is done for handling a VMExit event.
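A minimal sketch of this save/restore logic, with the SHT reduced to a plain dictionary, might look like this:

```python
def handle_vm_entry(sht, pcpu, vcpu, t):
    """Algorithm 3 sketch: on a VMEntry, the physical CPU row stops
    showing the host's state and starts showing the vCPU's saved state."""
    sht["saved_host_state"][pcpu] = sht["pcpu_state"][pcpu]
    sht["pcpu_state"][pcpu] = sht["vcpu_state"].get(vcpu, "idle")
    sht["vcpu_on_pcpu"][pcpu] = vcpu
    sht["log"].append((t, "entry", pcpu, vcpu))

def handle_vm_exit(sht, pcpu, t):
    """Inverse treatment: save the vCPU state, restore the host's."""
    vcpu = sht["vcpu_on_pcpu"].pop(pcpu)
    sht["vcpu_state"][vcpu] = sht["pcpu_state"][pcpu]
    sht["pcpu_state"][pcpu] = sht["saved_host_state"].pop(pcpu)
    sht["log"].append((t, "exit", pcpu, vcpu))

sht = {"pcpu_state": {0: "usermode(qemu)"},
       "vcpu_state": {("vm1", 0): "syscall"},
       "saved_host_state": {}, "vcpu_on_pcpu": {}, "log": []}
handle_vm_entry(sht, 0, ("vm1", 0), t=100)
handle_vm_exit(sht, 0, t=180)
```

The real analysis stores these state changes as intervals in the SHT rather than mutating a dictionary, but the save/restore symmetry is the same.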
Nested VMs detection
Entering and exiting L2
This architecture invalidates the strategy used for SLVMs. A VMEntry recorded in L1 does not imply that a vCPU of a VM in L2 is going to run immediately after. Likewise, L2 does not yield a pCPU shortly before an occurrence of a VMExit in L1, but when the hypervisor in L0 is running, preceded by its own VMExit.
The challenge we overcome here is to distinguish which VMEntries in L0 are meant for a VM in L1 and which for a VM in L2. Knowing that a VM of L2 is stopped is straightforward once this distinction is made: if a thread of L0 resumes a vCPU of L1 or L2 with a VMEntry, then a VMExit from this same thread means that the vCPU was stopped.
We created two lists of threads in L0: the waiting list and the ready list. If a thread is in the ready list, it means that the next VMEntry generated by this thread is meant to run a vCPU of a VM in L2. The second part of Algorithm 4 shows that we retrieve the vCPU of L2 that is going to run by querying it from the vCPU of L1 associated with the thread. The pairing between the vCPUs of L1 and L2 is done in the first part of the algorithm, during the previous VMEntry recorded on L1. It is also at this moment that the thread of L0 is put in the waiting list.
Algorithm 5 shows that the same principle is used for handling a VMExit in L0. If the thread was ready, then we need again to query the vCPU of L2 before modifying the SHT.
When a thread of L0 is put in the waiting list, it means that a vCPU of L2 is going to be resumed. However, at this point, we do not know for sure which VMEntry will resume the vCPU. The kvm_mmu_get_page event resolves this uncertainty by indicating that the next VMEntry of a waiting thread will be for L2. Algorithm 6 shows the handling of this event and the shifting of the thread from the waiting list to the ready list.
As seen in Fig. 7, it is possible to have multiple entries and exits between L0 and L2 without going back to L1. This means that a VMExit recorded on L0 does not necessarily imply that the thread stopped being ready. In fact, the thread stops being ready when L1 needs to handle the VMExit. To do so, L0 must inject the VMExit into L1, and this action is recorded by the kvm_nested_vmexit_inject event. Algorithm 7 shows that the handling of this event consists of removing the thread from the ready list.
The process will repeat itself with the next occurrence of a VMEntry in L1.
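The interplay of Algorithms 4–7 amounts to a small state machine per L0 thread, which can be sketched as follows (event handling is simplified to the list transitions only; payload shapes are ours):

```python
class NestedVmTracker:
    """Sketch of the waiting/ready-list strategy (Algorithms 4-7).
    Tracks, per L0 thread, whether its next VMEntry resumes an L1 or
    an L2 vCPU. Event names match the KVM tracepoints cited above."""
    def __init__(self):
        self.waiting = set()
        self.ready = set()

    def on_l1_vmentry(self, tid):
        # L1 issued a VMEntry: an L2 vCPU may be resumed soon.
        self.waiting.add(tid)

    def on_kvm_mmu_get_page(self, tid):
        # Resolves the uncertainty: next L0 VMEntry on tid is for L2.
        if tid in self.waiting:
            self.waiting.discard(tid)
            self.ready.add(tid)

    def on_l0_vmentry(self, tid):
        return "L2" if tid in self.ready else "L1"

    def on_nested_vmexit_inject(self, tid):
        # L1 must handle the exit: the thread stops being ready.
        self.ready.discard(tid)

trk = NestedVmTracker()
trk.on_l1_vmentry(501)
trk.on_kvm_mmu_get_page(501)
level_first = trk.on_l0_vmentry(501)  # this entry runs the L2 vCPU
trk.on_nested_vmexit_inject(501)
level_after = trk.on_l0_vmentry(501)  # back to running L1
```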
Containers detection
The main difference between a container and a VM is that the container shares its kernel with its host, while a VM has its own. As a consequence, there is no need to trace a container, since the kernel trace of the host suffices. Furthermore, all the processes in containers are also processes of the host. Knowing if a container is currently running comes down to whether the currently running process belongs to that container or not.
The strategy we propose here is to handle specific events from the kernel traces to detect all the PID namespaces inside a machine. Then, we find out the virtual IDs of each thread (vTID) contained in a PID namespace.
Payload of lttng_statedump_process_state events
Virtual TIDs hierarchy in the SHT
The analysis also needs to handle the process fork events to detect the creation of a new namespace, or of a new thread inside a namespace. In LTTng, the payload of this event provides the list of vTIDs of the new thread, as well as the NSID of the namespace containing it. Because the new thread's parent process was already handled by a previous process fork or a state dump, the payload combined with the SHT contains enough information to identify all the namespaces and vTIDs of a new thread.
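A sketch of this namespace bookkeeping, driven by the two event types just mentioned (payload field names are illustrative; LTTng's actual statedump payload differs in detail):

```python
def collect_namespaces(events):
    """Sketch of PID-namespace detection: statedump and fork events
    carry, for each thread, its vTIDs and the id (NSID) of each nested
    PID namespace it belongs to."""
    ns_threads = {}  # nsid -> {host tid: vtid inside that namespace}
    for ev in events:
        if ev["name"] in ("lttng_statedump_process_state",
                          "sched_process_fork"):
            for nsid, vtid in zip(ev["ns_ids"], ev["vtids"]):
                ns_threads.setdefault(nsid, {})[ev["tid"]] = vtid
    return ns_threads

events = [
    # Thread 4211 lives in the root namespace (nsid 0, vtid == tid)
    # and in a container namespace 26, where it is seen as pid 1.
    {"name": "lttng_statedump_process_state", "tid": 4211,
     "ns_ids": [0, 26], "vtids": [4211, 1]},
    {"name": "sched_process_fork", "tid": 4300,
     "ns_ids": [0, 26], "vtids": [4300, 7]},
]
ns = collect_namespaces(events)
```

With this mapping, deciding whether container 26 is running on a CPU reduces to checking whether the CPU's current tid appears in `ns[26]`.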
Visualization
After the fused analysis phase, we obtain a structure containing state information about threads, physical CPUs, virtual CPUs, VMs and containers through the traces duration. Our intention at this step is to create a view made especially for kernel analysis and able to manipulate all the information about the multiple layers contained inside our SHT. The objective is also to allow the user to see the complete hierarchy of virtualized systems. This view is called the Fused Virtualized Systems (FVS) view.
This view shows at first a machine's entry representing L0. Each machine's entry of the FVS view can have at most three nodes: a PCPUs node, displaying the physical CPUs used by the machine; a Virtual Machine node, containing an entry for each of the machine's VMs; and a Containers node, displaying one entry for each container. Because VMs are considered machines, their nodes can contain the three previously mentioned nodes. However, a container will at most contain the PCPUs and Containers nodes. Even if it is possible to launch a VM from a container, we decided to regroup the VMs only under their host's node.
High level representation of a multilayered virtualized system
Reconstruction of the full hierarchy in the FVS view
The PCPUs entries will display the state of each physical CPU during a tracing session. This state can either be idle, running in user space, or running in kernel space. Those states are respectively represented in gray, green and blue. However, there is technically no restriction on the number of CPU states, if an extension of the view is needed.
Comparison between FVS view and resources view
In this set, servers 1, 2 and 3 are VMs running on the host. All VMs are trying to take some CPU resources. As intended, the FVS view shows all the traces as a whole, instead of creating separate displays as seen in the Resources view. The first advantage of this configuration is that we only need to display the physical CPU rows, instead of one row for each CPU, physical or virtual. With this structure, we gain in visibility. The information from multiple layers is condensed within the rows of the physical CPUs.
Tooltip displayed to give more information regarding a PCPU
We noticed that, in the Resources view, the information is often too condensed. For instance, if several processes are using the CPUs, it can become tedious to distinguish them. This situation is even worse in the FVS view, because more layers come into play. For this reason, we developed a new filter system in Trace Compass that allows developers of time graph views to highlight any part of their view, depending on information contained in their data model.
Using this filter, it is possible to highlight one or more physical or virtual machines, containers, some physical or virtual CPUs, and some specifically selected processes. In practice, the filter dims what the user does not want to see, as if it were covered with a semi-opaque white band; the selected areas appear highlighted by comparison. Consequently, it is possible to see the execution of a specific machine, container, CPU or process directly in that view.
VM server1 real execution on the host
PCPUs entries of each virtualized system
Use cases and evaluation
Use cases
The concept of fusing kernel traces can have very interesting applications. In this section, we expose multiple use cases.
Our first use case is selecting a specific process, running in a container inside a virtual machine, in order to observe with the FVS view when and where the process was running.
Highlighted process in the FVS view
Our next use case benefits from the fact that, by erasing the bounds between virtualized systems and the physical host, this analysis and view provide a tool to better understand the execution of a hypervisor. With the FVS view, it is possible to precisely see the interactions between the hypervisor and the host, depending on the instrumentation used.
Process wake up time for L1 and L2
Handling of an ata_piix I/O interruption by the hypervisor on the physical CPU 1
The study of those situations was highly simplified by the use of our tool. To determine if a thread of L2 is currently running on a pCPU, someone not using our tool would need to know the inner workings of the hypervisor. They would need to determine if one of the threads currently running on L0 is associated with a vCPU of L1, itself running a thread associated with a vCPU of L2, executing the thread of interest. This long process is tedious for a human being. Our tool spares the user this waste of time by showing clearly and directly what is wanted, without requiring any knowledge of the internal functioning of the hypervisor.
Evaluation
SHT’s generation time
If we compare the time needed to complete a fused analysis for a set of traces with the time needed to complete a simple kernel analysis for the same set, we come to the conclusion that the simple kernel analysis is faster. Let \(T_i\) be the time needed to analyze trace \(i\). Since the simple kernel analysis does not consider the set of traces as a whole, but each trace independently, the analysis of the set can be done in parallel, each core dedicated to one trace. If we suppose that we have more cores than traces, then the elapsed time during the analysis will be \(\max_{1 \leq i \leq n} T_i\), where \(n\) is the number of traces.
If the set is considered as a whole, then it is difficult to process the traces in parallel. The elapsed time during the fused analysis will consequently be \(\sum_{i=1}^{n} T_i\).
Comparison of construction time between FusedVS analysis and Kernel analysis
SHT’s size on disk
Comparison of the SHT’s size between FusedVS analysis and Kernel analysis
Those results were obtained with an Intel Core i7-3770 and 16 GB of memory.
Conclusion and future work
In this paper, we presented a new concept of kernel trace analysis adapted to cloud computing and virtualized systems, which can help with the monitoring and tuning of such systems and with the development of those technologies. This concept is independent of the kernel tracer and hypervisor used. By creating a new view in Trace Compass, we showed that it is possible to display an overview of the full hierarchy of the virtualized systems running on a physical host, including VMs and containers. Finally, by adding a new dynamic filter feature to the FVS view, in addition to a permanent filter for any virtualized system, we showed how it is possible to observe the real execution, on the host, of a virtual machine, one of its virtual CPUs, its processes and its containers.
In the future, we expect the concept of the fused analysis to be reused and adapted for more specific purposes, such as the analysis of I/O or memory usage. We could also use the same principles to analyze more thoroughly systems running applications in virtual execution environments, such as Java or Python. Finally, we could extend our work to visualize VM interactions between nodes, to better understand the internal activity of cloud systems.
Acknowledgements
The authors would like to thank Francis Giraldeau for resolving some intricate bugs and Naser Ezzati-Jivan for reviewing this paper.
Authors’ contributions
CB built the state of the art of the field, defined the objectives of this research, and did the analysis of the current virtual machine monitoring tools and their limitations. He implemented the analysis tool presented in this paper, as well as the experiments. MRD initiated and supervised this research, led and approved its scientific contribution, provided general input, reviewed the article and issued his approval for the final version. Both authors read and approved the final manuscript.
Competing interests
The authors declare that they have no competing interests.
References
- 1. Vaughan-Nichols SJ (2006) New approach to virtualization is a lightweight. Computer 39(11): 12–14.
- 2. Dean DJ, Nguyen H, Gu X, Zhang H, Rhee J, Arora N, Jiang G (2014) PerfScope: practical online server performance bug inference in production cloud computing infrastructures. In: Proceedings of the ACM Symposium on Cloud Computing, 1–13. ACM, New York.
- 3. Sambasivan RR, Zheng AX, De Rosa M, Krevat E, Whitman S, Stroucken M, Wang W, Xu L, Ganger GR (2011) Diagnosing performance changes by comparing request flows. In: Proceedings of the 8th USENIX Conference on Networked Systems Design and Implementation, NSDI'11, 43–56. USENIX Association, Berkeley.
- 4. Shao Z, He L, Lu Z, Jin H (2013) VSA: an offline scheduling analyzer for Xen virtual machine monitor. Futur Gener Comput Syst 29(8): 2067–2076.
- 5. Barham P, Dragovic B, Fraser K, Hand S, Harris T, Ho A, Warfield A (2003) Xen and the art of virtualization. In: ACM SIGOPS Operating Systems Review, vol 37, no 5, 164–177. ACM.
- 6. Dean DJ, Nguyen H, Wang P, Gu X (2014) PerfCompass: toward runtime performance anomaly fault localization for infrastructure-as-a-service clouds. In: 6th USENIX Workshop on Hot Topics in Cloud Computing (HotCloud 14). USENIX Association, Philadelphia.
- 7. Gebai M, Giraldeau F, Dagenais MR (2014) Fine-grained preemption analysis for latency investigation across virtual machines. J Cloud Comput 3(1): 1.
- 8. Sharma S, Nemati H (2016) Low overhead hardware assisted virtual machine analysis and profiling. In: IEEE Globecom Workshops (GC Workshops), Washington DC.
- 9. Nemati H, Dagenais MR (2016) Virtual CPU state detection and execution flow analysis by host tracing. In: 2016 IEEE International Conferences on Big Data and Cloud Computing (BDCloud), Social Computing and Networking (SocialCom), Sustainable Computing and Communications (SustainCom) (BDCloud-SocialCom-SustainCom), 7–14, Atlanta.
- 10. Merkel D (2014) Docker: lightweight Linux containers for consistent development and deployment. Linux J 2014(239): 2.
- 11. Process Containers. Accessed 04 July 2016.
- 12. Soltesz S, Pötzl H, Fiuczynski ME, Bavier A, Peterson L (2007) Container-based operating system virtualization: a scalable, high-performance alternative to hypervisors. In: ACM SIGOPS Operating Systems Review, vol 41, 275–287. ACM, New York.
- 13. Marouani H, Dagenais MR (2008) Internal clock drift estimation in computer clusters. J Comput Syst Netw Commun 2008: 9.
- 14. Desnoyers M, Dagenais MR (2006) The LTTng tracer: a low impact performance and behavior monitor for GNU/Linux. In: Hutton AJ (ed) OLS (Ottawa Linux Symposium), 209–224. Linux Symposium, Ottawa.
- 15. Kivity A, Kamay Y, Laor D, Lublin U, Liguori A (2007) kvm: the Linux virtual machine monitor. In: Proceedings of the Linux Symposium, 225–230.
- 16. Linux 2.6.20. Accessed 04 July 2016.
- 17. Uhlig R, Neiger G, Rodgers D, Santoni AL, Martins FC, Anderson AV, Bennett SM, Kagi A, Leung FH, Smith L (2005) Intel virtualization technology. Computer 38(5): 48–56.
- 18. Jabbarifar M (2013) On line trace synchronization for large scale distributed systems. PhD thesis, École Polytechnique de Montréal, Montreal.
- 19. Poirier B, Roy R, Dagenais M (2010) Accurate offline synchronization of distributed traces using kernel-level events. ACM SIGOPS Oper Syst Rev 44(3): 75–87.
- 20. Trace Compass. Accessed 04 July 2016.
- 21. Montplaisir-Gonçalves A, Ezzati-Jivan N, Wininger F, Dagenais MR (2013) State history tree: an incremental disk-based data structure for very large interval data. In: Social Computing (SocialCom), 2013 International Conference on, 716–724. IEEE, Washington DC.
- 22. Montplaisir A, Ezzati-Jivan N, Wininger F, Dagenais M (2013) Efficient model to query and visualize the system states extracted from trace data. In: International Conference on Runtime Verification, 219–234. Springer, Rennes.
- 23. Ben-Yehuda M, Day MD, Dubitzky Z, Factor M, Har'El N, Gordon A, Liguori A, Wasserman O, Yassour BA (2010) The Turtles project: design and implementation of nested virtualization. In: 9th USENIX Symposium on Operating Systems Design and Implementation (OSDI 10), vol 10, 423–436. USENIX Association, Vancouver.
Talk:Wiki/Archive 5
- Thanks for this long explanation! After enabling the page content language feature on this wiki, the concern about the language of the base content is resolved. I am also pleased to report that the primary topic of this section is solved, too! iriman (talk) 21:23, 15 June 2019 (UTC)
- Ok, final draft done. Please comment! Code and documentation can be found at Module:Sandbox/Tigerfell. --Tigerfell (Let's talk)
- Thanks a lot for your efforts! It's a very well documented change.
- If I were looking for something to nitpick, then I might suggest using the term "id" for an OSM element's numeric identifier – it's a more standard term than "number"/"no". (The variable name "relationNo" made me think of a yes/no distinction at first.) But that's really scraping the bottom of the barrel in terms of criticism. ;) As far as I'm concerned, please go ahead! --Tordanik 15:12, 27 September 2018 (UTC)
- Thanks for the feedback. As this seemed to be turning into some kind of mass edit, and I lack experience with that, I just wanted to include other opinions if possible. In addition, I wanted no roll-backs later on. I will change that. --Tigerfell
(Let's talk) 22:05, 27 September 2018 (UTC)
Rewriting Template:Node, Template:Way, and Template:Area, and proposing the automated bot account User:TigerfellBot. --Tigerfell
(Let's talk) 18:27, 3 October 2018 (UTC)
- Like last time, there is a final draft at Module:Sandbox/Tigerfell. Please comment if interested. --Tigerfell
(Let's talk) 12:50, 5 October 2018 (UTC)
Roll back {{Languages}} template?
The version of the {{Languages}} template and its associated templates including {{Languages/div}}, {{LanguageLink}} and {{Available languages}} from September 2016 has several advantages over the current version:
- The implicit syntax ({{languages}} on its own) worked for all pages unless the page name differs from the English one, as with Uk:Об'єкти мапи. It no longer works for pages (such as Map Features templates) with a colon as part of the page name, but only in some languages.
- The pages were automatically sorted in categories by the name without language prefix (with the ability to override if needed). This no longer works.
- When viewing pages in mobile mode the languages that a page is not translated into only flashed up once when the script and style sheet first loaded. They now flash up every time.
- A large number of languages not used on this wiki, such as Sanskrit, have been added to the template. This means that the box takes up the whole screen on mobile devices while the red links flash up, and you have to wait until you can read the page.
- The 2016 template always accessed the {{Langcode}} template without arguments, in which case MediaWiki evaluates it only once. The current version uses the {{langcode|{{{lang|}}}}} syntax, which can stop large pages from rendering because of the repeated evaluation.
I am therefore proposing reverting the whole set of templates to September 2016.
As a matter of conflict of interest I was the one who created the September 2016 pages.
--Andrew (talk) 07:06, 23 September 2018 (UTC)
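Point 2 above (automatic category sorting by the name without the language prefix) can be sketched as follows. This is an illustrative assumption about the behaviour, not the actual template code, and the language-code list is a made-up sample:

```javascript
// Hypothetical sketch: derive a category sort key by stripping a
// language prefix such as "Uk:" or "Pt-br:" from a wiki page title.
// The list of codes is illustrative, not the wiki's real set.
const LANG_CODES = ["de", "fr", "ja", "pt", "pt-br", "uk"];

function categorySortKey(title) {
  const idx = title.indexOf(":");
  if (idx > 0) {
    const prefix = title.slice(0, idx).toLowerCase();
    if (LANG_CODES.includes(prefix)) {
      return title.slice(idx + 1); // sort under the unprefixed name
    }
  }
  return title; // no recognised language prefix: keep the title as-is
}

console.log(categorySortKey("Uk:Об'єкти мапи"));    // "Об'єкти мапи"
console.log(categorySortKey("Map Features:Water")); // unchanged
```

The point of the override mentioned above would be a second, optional parameter that wins over this computed key.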
- Just an idea, that might actually be far more performant and flexible compared to the current template solution: store all translated pages in Wikibase. I created one such item by hand just to see what it would look like: Item:Q104#P31 -- using this data, we can generate a list of available translations with a very quick Lua script. PROs: translation template is very quick to generate, 3rd party software/bots/tool sites can easily find all available translations of a subject. CONs: a new translation has to be added to the data item (can be solved with a simple bot). --Yurik (talk) 22:38, 5 October 2018 (UTC)
- Unfortunately it seems all subsequent edits were thrown away. Do we all have to do them again? Here is one I just redid: removing pt-br since it was merged into pt; otherwise we would see people creating pages in "pt-br" again. That edit was already made by another user in 2016! Zermes (talk) 14:43, 6 December 2018 (UTC)
- That is good to know. I did not know that the merge was final and complete. If that is the case, I would suggest merging the last pages and templates. Maybe, you should also update the table at Wiki_Translation#OSM_Wiki_language_codes and add a note below. (Sorry for posting this slightly off-topic comment.) --Tigerfell
(Let's talk) 17:32, 6 December 2018 (UTC)
New Wikibase-based KeyDescription and ValueDescription templates
The new versions of the {{KeyDescription/Sandbox}} and {{ValueDescription/Sandbox}} templates (infoboxes) are ready for review. They work as before, except that if some parameter is not given, it will be taken from the corresponding Wikibase item. So far, they support: description, image, elements (onWay/onNode/...), groups, and statuses. In some cases they will highlight if a parameter differs from the item - this way it will be easy to keep them in sync until we can safely remove the parameters from the wiki pages. Also, for ValueDescription, if the Wikibase item does not have a status or onWay/onNode/..., it will automatically take those values from the corresponding key item. Lastly, if the item defines that a certain value is restricted to just some region, e.g. noexit=yes should not be used on ways in DE-speaking regions, the infobox will still be shown properly depending on the language. Let me know what you think. --Yurik (talk) 08:51, 29 September 2018 (UTC)
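The fallback order described above (explicit template parameter first, then the value's data item, then the corresponding key item, with a highlight on mismatch) can be sketched roughly like this. The object shapes are invented for illustration and do not reflect the real template or Lua code:

```javascript
// Rough sketch of the described fallback: an explicit template
// parameter wins; otherwise the value's data item is consulted,
// and for fields like "status" the key's item is the last resort.
// All objects here are illustrative stand-ins, not real wiki data.
function resolveField(name, templateParams, valueItem, keyItem) {
  if (templateParams[name] !== undefined) {
    // Flag a mismatch so editors can re-sync wiki text and data item.
    const itemValue = valueItem[name] !== undefined ? valueItem[name] : keyItem[name];
    const conflict = itemValue !== undefined && itemValue !== templateParams[name];
    return { value: templateParams[name], conflict };
  }
  if (valueItem[name] !== undefined) return { value: valueItem[name], conflict: false };
  return { value: keyItem[name], conflict: false };
}

const result = resolveField(
  "status",
  {},                      // no template parameter given
  {},                      // value item lacks a status
  { status: "approved" }   // fall back to the key item
);
console.log(result.value); // "approved"
```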
-:02, 6 December 2018 (UTC)
- Great to hear that. I was looking forward to seeing this in action. Is there any way we can avoid having two short descriptions (one in the template, one in Wikibase) in the long run? --Tigerfell
(Let's talk) 17:14, 6 December 2018 (UTC)
Please update the documentation of the changed templates accordingly! I would suggest you also add an ambox to all translations. In addition, I would like to know how you suggest changing the Proposal process. I already started a thread there, Talk:Proposal process#Changes after the installation of Wikibase. Thank you. --Tigerfell
(Let's talk) 15:54, 9 December 2018 (UTC)
- Tigerfell, I agree that we should change it, but currently there is very little change needed just yet - not until taginfo starts using data items directly. Without it, we are forced to continue using all of the existing template parameters for everything, or else taginfo will show no documentation for any of the tags. I will add a few sentences at the top, with the expectation that eventually many (all?) template params will be gone. Thx for the heads up about the proposals. --Yurik (talk) 16:12, 9 December 2018 (UTC)
Optimizing wiki templates
There is a magical command to see the performance of each page:
mw.config.get('wgPageParseReport') (run from the browser's debugging tools), or more specifically
console.table(mw.config.get('wgPageParseReport').limitreport.timingprofile), which shows the top ten slowest templates used on the page. For example, the main page shows
53.23% 1377.800 922 Template:Langcode as the top offender; Key:name shows
93.36% 2951.955 1 Template:KeyDescription, etc. Yet, I think it is {{Langcode}} that ends up causing the most slowdowns, and it may need to be rewritten. Just a few observations to make the wiki faster. --Yurik (talk) 17:36, 5 October 2018 (UTC)
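A small helper in the same spirit as the console commands above can sort such timing lines. The profile data below is mocked from the two examples quoted, and the line format ("percent, milliseconds, call count, template name") is an assumption about how the report prints:

```javascript
// Hypothetical helper: sort template timing entries (as printed by
// console.table above) by percentage, descending, and keep the top n.
// On a real page the lines would come from
// mw.config.get('wgPageParseReport').limitreport.timingprofile.
function slowestTemplates(timingProfile, n) {
  return timingProfile
    .map(line => {
      // Assumed line shape: "53.23% 1377.800 922 Template:Langcode"
      const [pct, ms, calls, name] = line.trim().split(/\s+/);
      return { pct: parseFloat(pct), ms: parseFloat(ms), calls: +calls, name };
    })
    .sort((a, b) => b.pct - a.pct)
    .slice(0, n);
}

const mockProfile = [
  "12.10% 313.200 40 Template:Description",
  "53.23% 1377.800 922 Template:Langcode",
];
console.log(slowestTemplates(mockProfile, 1)[0].name); // "Template:Langcode"
```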
- That is interesting. Where can I find this debugging tools page? I currently render pages more or less at random using the preview functionality if I want to see the performance of my templates.
- Regarding the Langcode template, please see #Roll_back_.7B.7BLanguages.7D.7D_template.3F further up. --Tigerfell
(Let's talk) 17:58, 5 October 2018 (UTC)
Map extensions
This wiki currently features two extensions for displaying maps (Simple image MediaWiki Extension and Slippy Map MediaWiki Extension). The Simple image extension currently suffers from some problems regarding the limitations of the service and the lack of HTTPS support. The Slippy Map extension has some issues as well and has been marked as deprecated since 2013 [1].
That is why I suggest installing a new extension on this wiki. It would gradually replace the current ones or could be used as a backup if the others work irregularly. Reading through the repositories and MediaWiki_extension#Ideas_for_improvements, I came up with the following requirements for such an extension:
- Dependencies towards a singular website should be avoided (see issues with Simple image extension).
- No direct dependency towards OpenStreetMap tile servers (puts loads directly on the servers).
- No extensions that require an arbitrary amount of maintenance or self-coding (missing coding/maintenance capacities in the wiki).
I found two suggestions at MediaWiki_extension#Suggested_map_extensions, Kartographer and Maps. Both seem reasonable to me, but certainly a more detailed check is necessary. Any suggestions/comments? --Tigerfell
(Let's talk) 11:01, 6 October 2018 (UTC)
- Disagreement with point 2: if we can't serve our own tiles in our Wiki, we're doing something wrong. Mmd (talk) 11:31, 6 October 2018 (UTC)
- Point 2 refers to the first version of the MediaWiki extension article written by Harry Wood. It reads:
A straightforward reference to openstreetmap.org within an <IMG> tag is probably not satisfactory because (A) This would put server load on OSM for wikipedia's traffic (B) OSM's dev server goes down, the wikipedia articles get screwed up (C) OSM's dev machine runs slowly, wikipedia articles load slowly.
- But I guess it makes the search very narrow... If asked whether I would rather put load on the OSM servers or depend on a singular website with no direct affiliation to OSM or no high interest in this service (like staticmaps.openstreetmap.de), I would choose the first option. Well, maybe you are right. @Harry Wood: might be able to say something about this? --Tigerfell
(Let's talk) 19:49, 6 October 2018 (UTC)
- Aren't these objections specifically about Wikipedia (as opposed to wiki.osm.org) depending on OSM's tile servers? I don't think this would restrict the choice of extensions for our own wiki. --Tordanik 08:28, 10 October 2018 (UTC)
- Yes. That's talking about use on other wikis and (the ambition that time was) on wikipedia. So not really relevant although it probably relates to part of the reason Firefishy decided to quickly label that repo as "deprecated": discourage anyone installing the code on other wikis. -- Harry Wood (talk) 09:54, 10 October 2018 (UTC)
Ok, then we can drop point 2. --Tigerfell
(Let's talk) 18:23, 10 October 2018 (UTC)
- If users select maps with Google as the service, will it be charged? Admins would have to check that frequently. Does the Maps extension support Wikimedia Maps? One advantage of Wikimedia Maps is that multilingual support is there, plus the support for Wikidata queries -- Naveenpf (talk) 12:04, 6 October 2018 (UTC)
- As far as I can see, it is possible to specify the available mapping services in a file:
JeroenDeDauw/Maps/blob/master/Maps_Settings.php#L15. I would limit that to Leaflet as we do not need the other services. There is no way to get charged if no API key is provided (the map would not be displayed, however). As far as I can see, the Maps extension does not support Wikimedia Maps. It has some querying functionality using Nominatim, for instance [2].
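If the linked Maps_Settings.php is any guide, restricting the extension to Leaflet might look roughly like the following LocalSettings.php fragment. The setting names are reproduced from memory of that file and should be verified against the extension's documentation before use:

```php
// Hypothetical LocalSettings.php fragment (verify the setting names
// against Maps_Settings.php): allow only the Leaflet service, so no
// commercial providers (and no API keys) are ever used.
$egMapsAvailableServices = [ 'leaflet' ];
$egMapsDefaultService = 'leaflet';
```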
- Those are surely advantages, not to mention that Yurik, who is one of the authors, is also a wiki admin here. The maps are rendered by the Wikimedia Foundation and we would have to accept their Maps Terms of Use. --Tigerfell
(Let's talk) 19:49, 6 October 2018 (UTC)
- My strong request would be to be able to continue to select from among various layers in slippymap directives, like layer=transport or layer=cycle. I can do that now (and do in many wikis I've written, example1, example2) and I want to see those (or something like them) continue. Rail and biking are very important to my mapping and wiki-writing here in the USA (I've spoken at SOTM-US conferences on each, in 2016 and 2014). Yes, there is a watermark stamped across the front of each such map "API Key Required" and I can live with that. To be very clear: both of these layer directives WORK TODAY and while slippymap is said to be "deprecated," part of the definition in my dictionary for that word includes "typically due to having been superseded." It does not appear that slippymap HAS been superseded and if this discussion is about fixing that, OK, fine. But if I find that capability disappearing with no good alternative, I will be upset that we are destroying working (or very nearly working) wiki functionality. Anybody would be upset with that. Let's not make ourselves upset, please. Stevea (talk) 08:25, 10 October 2018 (UTC)
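For context, a layer directive of the kind described above is written in SlippyMap wikitext roughly like this. The attribute names are reproduced from memory, so treat them as approximate rather than authoritative:

```
<slippymap lat="36.97" lon="-122.03" zoom="12" w="500" h="300" layer="cycle"></slippymap>
```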
- Well, currently this is the only extension displaying a map. As I understand Harry, it is not maintained. I fear that there will be a day when it does not work any more (because it is not maintained and the wiki software needs to be updated for safety reasons). Installing the currently absent Maps extension seems to me a good precaution, as I do not want to have a broken wiki either. Could you have a look and say whether it satisfies your needs? It looked OK to me, but I might have missed something. --Tigerfell
(Let's talk) 18:23, 10 October 2018 (UTC)
The current situation is... the SlippyMap extension works OK-ish, but could be improved. From a code point of view it's quite a mess. Not following modern mediawiki extension code layout/conventions. Developing it may be a bit of dead-end (hence "deprecated") because really the way to improve it would be to ditch it and switch this wiki to use a more fully featured well developed extension. There are a few, but we'd need to investigate and find one which has a "no support for google maps" config. Supported wikitext syntax would also be an issue.
The StaticMap extension is the one which is actually broken at the moment, because StaticMapsLite service is broken, so this needs more urgent attention. Other than people annoyingly failing to keep services running, I feel this extension is kind of OK actually. It doesn't really need to be more fully featured (although it's also not following MediaWiki conventions). Not sure which repo that extension is getting fetched from these days. Is it also labelled as "deprecated"? Probably
-- Harry Wood (talk) 10:01, 10 October 2018 (UTC)
- Regarding the operations of the SlippyMap extension, right now there is no option to have two maps on one wiki page (StaticMap broken, SlippyMap cannot deal with that). Talking of switching to a well-developed extension, Maps seems reasonable in that case. As already pointed out, it is possible to disable Google Maps support. In addition, many renderings can be displayed (including the currently available ones). Please have a look at the configuration files (link above). The extension uses a different syntax: {{display: ... }}.
- The current situation regarding the StaticMap extension was the cause for writing point 1: dependencies towards a singular website should be avoided. Apparently (I cannot confirm that), the extension works, but the map-providing service does not, and it sounds to me as if this will take some time. Therefore, a map option that works better than OK-ish would be desirable for me. I acknowledge that replacing the StaticMap extension with some slippy map is not something to simply go ahead with. --Tigerfell
(Let's talk) 18:23, 10 October 2018 (UTC)
- I agree with Harry that "the SlippyMap extension works OK-ish..." as IMO it does (except for the API watermark). It is not a major limitation (for me) that I can't display more than one slippymap on a wiki page, but I'm guessing that's a problem for others. Tigerfell, while it is wise to look to a future where if (because of a yet-to-exist security threat?) updating wiki architecture wholesale breaks an existing extension (like SlippyMap) I'm not sure that "fearing" that day (living one's wiki life in fear?) is a healthy motivating factor. Still, I recognize that SlippyMap is code its author calls "quite a mess" and "not following modern...conventions." I don't quite understand what is meant by "supported wikitext syntax would also be an issue" but as "an issue," it seems it could be worked out. Although, the direction to go in the direction of Maps seems "healthier" for OSM's wiki future: it's more modern, maintainable code, though "it also doesn't follow MediaWiki conventions" causes me some concern that we'll go through this entire exercise again someday.
- Yes, it appears (from the Gallery and User Documentation glances I gave them) that Maps is a suitable substitute which might one day supersede SlippyMap, so again I agree with Harry. But as it is not installed here, I can't give it the "QA Test Drive" I would like to so I may fully answer that. It looks like a richer and more modern version of what we now have, so for those reasons I would support a migration towards integrating it. However, I do not understand fully the technical issues, except that "people annoyingly fail to keep services running." (I am especially enamored of the "disable Google Maps support" feature, good for that capability, good for OSM for flipping that switch On if/as we install it). Perhaps this is pie-in-the-sky of me (pleasant to contemplate, very unlikely to be realized) but it seems WE (OSM, somewhere, somehow) can be those "who keep services running." Much about how all the pieces work together is opaque to me (I'm learning), though as has been said earlier, "if we can't serve our own map tiles in our wiki, we're doing something wrong." Stevea (talk) 19:29, 10 October 2018 (UTC)
I added a request at
openstreetmap/operations/issues/249. --Tigerfell
(Let's talk) 22:11, 13 October 2018 (UTC)
Unfortunately, this had to be ruled out as the Maps extension requires the installation of 'Composer'. --Tigerfell
(Let's talk) 12:15, 15 October 2018 (UTC)
After this rather unsuccessful attempt, I would suggest taking the following steps. If one fails, I would continue with the next.
- examining whether the Kartographer extension works with maps from our tile servers
- asking whether the Maps extension can be altered to drop the requirement of using Composer
- changing SlippyMap extension
Any suggestions, comments? --Tigerfell
(Let's talk) 11:17, 22 October 2018 (UTC)
Now I found the MultiMaps extension. It looks okay to me, but we would have to check if/how to use tiles from different URLs (like the Bicycle layer vs. Mapnik) and its integration with Composer. Commercial layers can be disabled. --Tigerfell
(Let's talk) 08:20, 16 November 2018 (UTC)
- I think we should try the Kartographer extension. -- Naveenpf (talk) 13:44, 14 August 2019 (UTC)
- MultiMaps extension is available in this wiki for about two months now. The only thing that keeps me from changing the old style slippy maps is the fact that the currently installed version 0.7.2 does not display Transport Map tiles and for compatibility reasons the system administrators do not want to install version 0.7.3 which was tested on MediaWiki >1.33.x (newest "pre-release") while we are running 1.31. So, we are essentially waiting for a new MediaWiki release. Meanwhile, I listed all the options for displaying maps (except for the old extension) at Wiki:Maps.
- Or are you looking for features that MultiMaps does not offer? --Tigerfell
(Let's talk) 18:51, 14 August 2019 (UTC)
- Hello Tigerfell, we are mapping admin boundaries. If we have Kartographer we can include here or -- Naveenpf (talk) 00:32, 15 August 2019 (UTC)
- How many maps do you want to create? If it is just a few, it might be easier to take a screenshot. Unfortunately, I have not found any option for changing the map style to OSM Carto, and I think it might be confusing to have a different style for the wiki maps. --Tigerfell
(Let's talk) 16:00, 15 August 2019 (UTC)
Lua errors with empty BrowseRelation value
I'm still not exactly sure how all the moving parts work together, and this might be a better communication channel to discuss this.
It appears that because of the way that our wiki's Relation code (Lua script?) works (specifically the BrowseRelation function), a blurb of wiki markup text of two open curly braces followed by BrowseRelation then a vertical bar then two close curly braces (note the empty value after the vertical bar) USED TO return a polite warning of Relation not defined yet. Now, it returns a clickable link with larger-type red text:
Lua error in Module:Element at line 11: Given relation number parameter is not a number.
While true (it ISN'T a number, it is simply empty), it sure would be nice if the "old behavior" of Relation not defined yet were to return to OSM's wiki. Clicking the link shows a backtrace erring in function "chunk."
An example is at
In short, "recent changes for apparent improvements have broken something." Perhaps it will remain broken for the sake of the apparent improvements, perhaps it can be patched/fixed/improved so the original behavior (supporting a certain imperfect syntax, I agree) returns and massive amounts of existing wiki don't have to be modified to back-fix for the sake of the "apparent improvements."
Thank you. Stevea (talk) 14:07, 9 October 2018 (UTC)
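The requested behaviour (an empty parameter yielding a polite notice instead of a script error) can be sketched like this. The real module is written in Lua, so this JavaScript is only an illustration of the logic, and the strings and relation number are examples:

```javascript
// Illustrative sketch (the real Module:Element is written in Lua):
// return a friendly placeholder for an empty id instead of raising
// the "parameter is not a number" error, but still reject garbage.
function browseRelation(param) {
  const trimmed = (param === undefined || param === null) ? "" : param.trim();
  if (trimmed === "") {
    return "Relation not defined yet"; // old, polite behaviour
  }
  if (!/^\d+$/.test(trimmed)) {
    throw new Error("Given relation number parameter is not a number.");
  }
  return "relation " + trimmed; // would render a browse link on the wiki
}

console.log(browseRelation(""));       // "Relation not defined yet"
console.log(browseRelation(" 62761")); // "relation 62761"
```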
- {{BrowseRelation}} links to {{Relation}}, so please have a look at this template's documentation.
- If you have any questions regarding the change of this, please ask them here or at the template's talk page.
- This behaviour will be simulated for the existing pages by changing the wikitext in an automated process. This is described at Task 'Undefined Elements'. The fact that this takes a while to change is mostly due to the Automated Edits Code of Conduct. --Tigerfell
(Let's talk) 18:36, 9 October 2018 (UTC)
This wiki needs a better image refresh configuration
(bugtrack here?)
Hi. This Wiki is used for software documentation, software use guidelines, etc. and we need illustrations...
A good wiki interface for illustrations needs a fast response, so there is a kind of bug: the wiki (nowadays) takes a long time to update an illustration. As users, we need to wait less (seconds, not half an hour or hours) to see our changes. --Krauss (talk) 14:20, 12 October 2018 (UTC)
PS: it is not the browser cache; I tested many times.
- Could you name an example case, so someone can look at this specific image case? Thanks! --Tigerfell
(Let's talk) 11:37, 15 October 2018 (UTC)
- Hi @Tigerfell:, when I updated File:UMLclassOf-osm2pgsql-schema.png, the page Osm2pgsql/schema was not updated. --Krauss (talk) 22:35, 15 October 2018 (UTC)
- Sorry, I forgot to answer. It sounds to me as if this is an "issue" with the MediaWiki software. When updating templates, for instance, it adds all pages that need changes to a waiting queue and works on them with some delay. If the server load is not arbitrarily high, one can "purge" a page, forcing the server to render the most current version. Does that work? --Tigerfell
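For reference, the "purge" mentioned here is standard MediaWiki behaviour: appending action=purge to a page URL forces the server to re-render that page, for example:

```
https://wiki.openstreetmap.org/wiki/Osm2pgsql/schema?action=purge
```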
(Let's talk) 09:57, 22 October 2018 (UTC)
- After facing the problems myself, I figured out that in my situation it helps to clear the browser cache, because otherwise the old images are still used. --Tigerfell
(Let's talk) 17:00, 26 January 2019 (UTC)
Installation instructions for Maps
I updated MediaWiki to version 1.31 after 5 years. Since SimpleMap is not maintained anymore, I had to look for a replacement. After hours of searching I finally found that Kartographer and Maps run on the current version of MediaWiki. At the moment I am using Kartographer, but I want to install Maps because of its variety of possibilities. I got as far as installing Composer. After that I failed with the installation instructions that are available on MediaWiki.
Question: Is there an installation guide in the OSM section that helps me here? (Bks29) 5:30, 22 October 2018
- Hello again. We do not use the "Maps" extension because Composer (which it requires) does not work with our technical setup (see the current end of section #Map extensions; the red cross indicates failure). You can view a list of all currently installed extensions in this wiki at Special:Version --Tigerfell
(Let's talk) 08:50, 22 October 2018 (UTC)
No problem? Please, wiki community, a declaration of agreement
Hi, our community is working on contract models (CMs), examples CM/pt/BR/004, CM/pt/BR, CM/pt... No problem?
We are building a "namespace infrastructure" from CM and a first draft of contract model redaction process and foundations... Perhaps in 2019 we have good local-results and a good proposal for all other OSM-communities.
So (sorry for my English), to go forward with our local OSM community work, and not be exposed to the risk of a "PLEASE DELETE-ALL" or similar thing, we need your "declaration of agreement". --Krauss (talk) 22:55, 1 November 2018 (UTC)
- Hi, I do not have the power to allow or disallow anything in this wiki. I can just speak from the perspective of an ordinary user.
- To me your idea seems to be "okay", because it is related to OSM. The naming however seems to be a bit overcomplicated. I would suggest something like
Pt:Contract models/BR/A. B.-School.
- If the page is in Portuguese, it should be located at Pt:... or Pt-br:... (I do not know which one you currently use). These prefixes work fine with current templates which do automatic translation. In addition, I would propose Contract models instead of CM because I think that abbreviations in page titles are confusing, and it would be consistent with Proposed features/.... I am not a fan of numbering pages and would use the name of the addressee instead. --Tigerfell
(Let's talk) 20:40, 5 November 2018 (UTC)
- Thanks @Tigerfell:, if seems "okay" for all, we will continue/expand the proposal with more Contract Models.
About naming: it is not "a bit overcomplicated" if you see it as a short label (the complicated thing would be an ugly long name).
Any document, such as a book or scientific article, has a public ID: books use ISBN, articles use DOI, legislation uses ELI, and collections of articles (journals) use ISSN... All are short; we love short IDs, not big IDs.
And on this wiki it is easy to redirect the ID to an expanded name; for example CM/pt/BR/004 is automatically expanded to "CM/pt/BR/004 - Comunicado para faculdades", see the link CM/pt/BR/004.
--Krauss (talk) 20:08, 14 November 2018 (UTC)
- I have the following problem with IDs as page names: If you look at a category (like Category:Labelled for deletion), you see the page names only. You cannot tell whether a topic is in the category if the page name does not mention the topic. All pages in this wiki have a page ID anyway (e.g. 243873 for this page, see the MediaWiki handbook). In addition, we also have Template:Shortcut, which could be used. I have never used it, though.
- As a compromise, I would suggest to name the pages something like
Pt:CM/BR/004 - Comunicado para faculdades. Would that be okay for everyone? --Tigerfell
(Let's talk) 08:42, 16 November 2018 (UTC)
- I would try to gather some opinions on the osm-talk mailing list, hope someone there speaks Portugese. RicoZ (talk) 20:33, 6 November 2018 (UTC)
- When searching the wiki, I found some pages referring to such contracts already. You might want to add them to your schema (?):
- Permissions - a country-specific documentation of obtained permissions mostly for data imports
- Import/GettingPermission - containing three templates for letters like yours
- Category:Import - a category containing further possibly relevant pages
- --Tigerfell
(Let's talk) 15:29, 7 November 2018 (UTC)
- Thanks @Tigerfell:, I will use it in the next step, and perhaps we can unify all of it in the CM namespace... Specifically for English permissions, the main ones were consolidated by OSMF at
--Krauss (talk) 20:08, 14 November 2018 (UTC)
Formatting pages with historic content
Some pages in this wiki describe historical services or components and may be worth keeping in order to document the history of OSM. In order to mark such pages clearly and similarly I created a draft page which I would later transform into a template. I would also propose to name a category "Historical artefacts" and categorise the articles there. Comments? Suggestions? --Tigerfell
(Let's talk) 21:02, 14 December 2018 (UTC)
The two templates {{Historic artifact start}} and {{Historic artifact end}} do the job now. --Tigerfell
(Let's talk) 14:10, 23 January 2019 (UTC)
MediaWiki "Thanks" extension
I like the Thanks extension and would be happy to see it rolled out to the OSM wiki. See the image for what it does. This may all seem a bit silly, but let me explain:
I have quite a lot of pages on my watchlist and regularly check the recent changes. I actually like most of the changes I see. However, the contributors making those changes never learn that someone saw and appreciated their efforts. Instead, the only time people tend to actually contact other contributors is when they criticise and/or revert their contributions. This plugin offers an easy way to say "Hey, I like your edits!" every now and then, and hopefully balance out the negativity a bit. --Tordanik 20:34, 14 January 2019 (UTC)
- I have also been thinking that it would benefit the wiki editing dynamics. I do not know what's involved tbh, but I think it should be fairly easy to set it up.--Yurik (talk) 22:59, 14 January 2019 (UTC)
- I would also appreciate it. Technically, this depends on the w:mw:Extension:Echo which is used for the messaging, and there might be problems in case one of these extensions needs w:mw:Composer, because this might interfere with the wiki server configuration management "Chef". "Composer" wants to 'update' libraries and creates additional files for that, which in turn get stuck in "Chef", which builds the settings files based on files "Composer" previously changed. I do not really have time to check this now, but this should not stop you if you want to add it now. Otherwise, I can take a look in about a week, or Yurik just creates a pull request to
openstreetmap/operations. --Tigerfell
(Let's talk) 00:00, 18 January 2019 (UTC)
- Someone opened an issue and I commented on it today.
openstreetmap/operations/issues/265 --Tigerfell
(Let's talk) 11:48, 23 January 2019 (UTC)
Done in
openstreetmap/chef/commit/ceab552534e0b204ce2eecd05584603d0ad23cfc. --Tigerfell
(Let's talk) 16:57, 26 January 2019 (UTC)
- Thanks for Thanks.
;^)By the way, this issue proposes implementing something similar for osm.org. – Minh Nguyễn 💬 13:14, 28 January 2019 (UTC)
Wiki Adminship for Minh Nguyen
I would like to nominate Minh Nguyen to gain this wiki's administration rights. Minh is an experienced wiki contributor, both here and on various Wikipedia languages/sites, and he has extensive technical skills. His latest work is the creation of the dataitemlinks gadget that I just deployed -- it modified our Data items pages to convert wiki documentation text (P31) into proper links. I think he will be a great addition to the admins. --Yurik (talk) 01:36, 28 January 2019 (UTC)
Support - for the reasons stated above. Yurik (talk) 01:36, 28 January 2019 (UTC)
Support Minh has extensive experience in Wikimedia projects and he would be a good addition to the people who can help out here. —seav (talk) 16:58, 28 January 2019 (UTC)
Support Minh has been very supportive of mappers on the OSM-US Slack (including me), is very knowledgable about OSM and Mediawiki, and has been instrumental in getting Wikibase working on the OSM Wiki. —Todrobbins (talk) 22:09, 28 January 2019 (UTC)
Support Minh is a SUPER-qualified OSM "technician/engineer/supervisor/administrator" as he is active in Wikipedia, OSM and many excellent endeavors that show both his good spirit as an OSM volunteer and as a highly competent technical guru. He has the chops, he has the right attitude and spirit. Stevea (talk) 03:19, 29 January 2019 (UTC)
Support Good to have Minh as wiki admin -- Naveenpf (talk) 05:13, 29 January 2019 (UTC)
Support Minh is a great steward of OSM community in San Jose, happy to have him confirmed as admin --Smaffulli (talk) 22:27, 29 January 2019 (UTC)
Support - Agree he's a great asset to the wiki for all reasons stated above.—Nicomar (talk) 22:32, 29 January 2019 (UTC)
Support Mmd (talk) 07:18, 30 January 2019 (UTC)
Support--Władysław Komorek (talk) 09:43, 30 January 2019 (UTC)
Seems there is a consensus. Could anyone of the bureaucrats @Firefishy:, @Harry Wood:, @Lyx:, @Pigsonthewing:, @Steve: grant Minh the adminship? Thanks! --Yurik (talk) 01:58, 12 March 2019 (UTC)
- Done. Congratulations, Minh! Andy Mabbett (User:Pigsonthewing); Andy's talk; Andy's edits 15:48, 15 March 2019 (UTC)
Wiki Adminship for Tigerfell (renounced)
I would like to nominate Tigerfell to gain this wiki's admin rights. Tigerfell has been instrumental at many changes at this wiki, including many technical changes like the Thanks extension with his pull request. Tigerfell is a good contributor with extensive skillset, and would help this wiki grow. --Yurik (talk) 01:36, 28 January 2019 (UTC)
Support - for the reasons stated above. Yurik (talk) 01:36, 28 January 2019 (UTC)
Support - Active OSM wiki editor + good handle on mediawiki -- Naveenpf (talk) 05:42, 28 January 2019 (UTC)
Oppose - Tigerfell is still very new to OSM and describes themselves as a "Wiki" person and obviously has some technical skillset. Unfortunately, Tigerfell lost lots of goodwill in their local community due to the way they handled a recent proposal voting process. Copy-and-pasting forum content and unilateral kicking off a voting process without prior consultation, and unilaterally changing the end date of an ongoing voting process was generally considered as crossing red lines (many have seen this as "tweaking the results in a certain direction", although Tigerfell wanted to give others the opportunity to voice their opinion, in stark contrast to how proposal voting normally works). Link to discussion in the osm forum: ... Although I see the general will to improve the wiki and the good intent behind their actions, I wouldn't be very comfortable seeing someone with such basic understanding how OSM processes work in an OSM wiki admin position. Connecting more with the local OSM community would certainly help in this situation. My suggestion would be to try again in 1 year, and keep going improving the Wiki! Mmd (talk) 06:48, 30 January 2019 (UTC)
Support - Has been very supportive. --Władysław Komorek (talk) 09:49, 30 January 2019 (UTC)
Thank you for the nomination and the feedback. I have thought about this by myself now and I have drawn the conclusion that I lack some experience to do this now, but I would be happy to take the responsibility at a later time. --Tigerfell
(Let's talk) 19:47, 30 January 2019 (UTC)
Criteria for deleting wiki content
There is an ongoing discussion about when to delete proposal pages in the forum:.
This is based on recent reverts in this wiki as well as the need to formulate rules to avoid such actions in the future. Please feel free to join. --Tigerfell
(Let's talk) 12:42, 3 February 2019 (UTC)
- We crafted a more general deletion policy for all wiki content which is currently located at User:Tigerfell/Crafting. Please feel free to review or comment. We plan to vote on the draft in about a week (depending how the discussion goes on), so this is somewhat of a last call. :-) --Tigerfell
(Let's talk) 13:13, 22 March 2019 (UTC)
Proposal to change Proposal process
I would like to propose a change to how we discuss new tags. I think we can make the process a bit more streamlined, with the fewer "gotchas". In short, I think we should just create new Key:* or Tag:* pages, add some warning at the top (e.g. "this tag has not yet been approved by the community"), and use the corresponding "talk" page for discussion and voting. There are a number of advantages of this method:
- All documentation is always in the same structure, without the need to copy some of the discussed content to the tag pages after the discussion is over.
- New users already tend to create new tag:* or key:* pages when making proposals, and wiki maintainers often have to move those to the Proposals namespace, so the new way will be more intuitive.
- Corresponding data items will become more useful, because they will be associated with the right pages and contain proper description, and thus if someone already uses those tags in OSM, iD and taginfo will show proper description from the start, even before the tag is approved.
- Discussion will always be attached to the tag (as part of the talk page)
- For non-English speakers, it will be more straightforward to make a proposals in another language, rather than trying to figure out if it should be Proposal Pt:* vs FR:Proposal/*.
- Translating proposals would be the same as translating any other tag - e.g. given Pt:Key:Foo, someone could create a corresponding Key:Foo for the English speakers and a more general discussion.
- If the proposal is rejected, there still will be a dedicated page to the rejected tag, with a clear note at the top discouraging its usage.
--Yurik (talk) 18:37, 15 February 2019 (UTC)
- I see several issues with that idea:
- Many proposals suggest more than one key or tag, and they only make sense as a whole.
- Proposals do change or deprecate tags, they don't just introduce new ones.
- Sometimes, there are competing ideas for the same tag.
- Overall, a proposal isn't a key or tag – it's a suggestion to add, change, or remove one or more keys/tags. If we imagine key/tag pages as source files in a repository, then a proposal would be a pull request (which contain parts of source files, but are not themselves a source file).
- I absolutely agree that there should be data items for proposals, by the way. But I feel it would be more appropriate for them to be instances of "proposal" (rather than key or tag) with appropriate properties: Proposer, dates of draft/RFC/vote, vote outcome, affected keys and tags, ... --Tordanik 20:14, 15 February 2019 (UTC)
- @Tordanik: thanks! I have been thinking about some of those concerns too. I think the main difference is how we approach OSM tag documentation - proactive vs reactive. In proactive, we plan before tagging. A mapper would create a proposal detailing all the changes, participate in a community discussion, vote, and finally move proposal content to the Key:* and Tag:* pages + translate it. In reactive, some mapper would simply start adding new tag to the objects in OSM, possibly after agreeing about it with their local community. While we may desire the former, the reality is usually the later. We have 70,000+ unique keys, not even counting tag values - compare that with the number of proposals :). So I think wiki should reflect that reality, no matter how much we may wish for it to be different. It is better to have "some" documentation for the tags used in the live OSM data, and steer the discussion/voting process afterwards, and have discussion attached to the wiki page after it completes, rather than to try to have a top-down process that is not followed far too often, thus creating a duality - some keys have wiki pages, some have proposals, some just exist without much documentation.
- My idea would allow us to address that - by having a data item for each unique key and tag (when warranted), plus a wiki page if there is more content than basic infobox, plus an easy place to discuss it on the talk pages, even if the key/tag is already in use. Essentially we would have the same process for both proactive and reactive - regardless how the key/tag was first introduced. Moreover, the data item could be created directly by iD or JOSM when user tries to add a new key with a short description - thus we will always know what the user meant when they added it, and experienced mappers will have a chance to steer them towards better practices. To start a discussion about a data item/wiki page, simply add a talk page with some header template, e.g. {{Discussion}}.
- Changing existing tags or keys is essentially a discussion about a subject -- that's what talk pages were initially created to handle - you simply create a new section with the proposed change, and have a well defined way to discuss that change. Multiple changes go under different headers. In case it goes too long, you can always use subpages.
- Multiple keys can be discussed in two ways: either on a "primary" key talk page, with other key pages linking to it, or on a dedicated proposal page, with every related key/tag page linking to it. This is similar to what we have now, but the important difference is that the key:* page will exist and reflect that it is already being used in OSM data, even if used by 10 objects, and provide documentation for those 10 objects. I don't think there will be too many of those "meta discussions".
- I think adding "instance-of = proposal" data item would have very little value -- it will not be discoverable by iD or JOSM (which use
Key:foowhen looking up documentation). Much easier to add status (P6) = under discussion / approved / rejected / ... to the proper key/tag data item, and have iD/JOSM react properly to that. --Yurik (talk) 21:26, 15 February 2019 (UTC)
- There are a lot of tags and keys (probably a vast majority of those 70,000) which do indeed not require any long-winded proposals – adding a new shop=* value or such. And those most of the time do not go through the proposal process at all, which is perfectly fine. The same goes for the kind of minor changes that can be discussed on a talk page or the tagging mailing list.
- But when it comes to big steps for our data model, my experience has been that these tend to succeeded only with a lot of documentation, discussion, and planning in advance. I'm thinking of concepts like the :lanes suffix, Simple 3D Buildings or Conditional restrictions. Mappers spontaneously making up a new tag and typing an explanatory sentence into iD/JOSM is just not going to result in a solution for these more complex problems. Not one that can be used robustly by a wide range of data consumers, in any case. This, too, is a reality we need to accept.
- So while topics requiring a "meta discussion" may be less common in absolute numbers, they also seem far more important for our project's future. Therefore, I feel it's essential that our proposal process is designed to handle them especially well. They should not just be an afterthought compared to simpler, single-tag proposals. --Tordanik 19:54, 16 February 2019 (UTC)
- @Tordanik: 70,000 are just the keys, I did not even look at the number of values we would have for the "enum-like" keys. Also, just in the past 2 months, at least 4 key/tag pages were renamed from "Key:..." to "Proposed features/..." form, e.g. @Woodpeck: has moved motorcycle:scale to proposals because it was not well established, even though there are over 100 usages of that key in OSM data. This means that while in discussion, there is no proper documentation for motorcycle:scale key, and it is less likely that many users will even bother documenting a new key if there is a documentation delay like that.
- I agree with you about big strategic discussions. E.g. disputed borders was clearly a proposal that spans far wider audience than a specific tag. I think my proposal should only target simple new keys and tags, and also changes that are scoped to a singel key/tag. --Yurik (talk) 18:27, 18 February 2019 (UTC)
- "Corresponding data items will start working right away" - that is not a good reason to change anything in proposal process Mateusz Konieczny (talk) 21:32, 15 February 2019 (UTC)
- @Mateusz Konieczny: That was a side benefit, not the primary reason :) Data items get created right away as soon as they are added to the OSM data. When created, they are missing description, making them less useful. Following this process makes them far more useful, while still allowing us to discuss it, vote on it, change status properly, etc. --Yurik (talk) 21:43, 15 February 2019 (UTC) (P.S. I updated it above)
- I think that this is a big step forward in allowing many "shy" mappers to create proposal pages. It will also trigger the involvement of people who do not speak English, and it would make the whole proposal process much more streamlined. For example, among the Polish community, we have a few very active people, but they are afraid to write a proposal because their knowledge of English is too weak. Good job! --Władysław Komorek (talk) 07:51, 16 February 2019 (UTC)
- Hi, I liked the idea at the first glance, but than I had second thoughts.
- I still think it's a good idea, but only when limited to creating new tags. When changing something either we will have the actual change somewhere among existing information and it will be difficult to see what we are exactly talking about, or the changed part will be separate (like at the end of wiki page) and it will be easy to discuss, but what after voting? Again, two possibilities - either it will stay at the end, then we will see the process of changes, but wiki page will be inconsistent; or the author will have to edit the page so that documentation for tag is consistent, but that means that the page needs editing anyway, so we lose the advantage of having wiki page ready even before the proposal.
- So, your idea is good as an alternative way of creating a proposal for a new tag, but it can't replace the existing proposal process.
- Rmikke (talk) 14:01, 18 February 2019 (UTC)
- @Rmikke: thanks, I agree about significant changes. For minor changes limited to a single key or tag, you could use the "discussion/talk" page of that Key:... (just like we are doing now with this discussion) - create a section saying "lets change this key to be X", discuss it, vote for it, and when the voting has been decided, update the primary key page with the new information. This way all single key-related discussions will stay attached to the Key:... page. So yes, someone will have to update the primary page after the discussion. --Yurik (talk) 17:17, 18 February 2019 (UTC) | https://wiki.openstreetmap.org/wiki/Talk:Wiki/Archive_5 | CC-MAIN-2022-27 | refinedweb | 7,958 | 60.55 |
cinch_gen 1.0.2
cinch_gen: ^1.0.2
cinch is a powerful HTTP client for Dart, supporting both native and JS; it uses dio as the underlying HTTP client.
Use this package as a library
Depend on it
Run this command:
With Dart:
$ dart pub add cinch_gen
This will add a line like this to your package's pubspec.yaml (and run an implicit
dart pub get):
dependencies: cinch_gen: ^1.0.2
Alternatively, your editor might support `dart pub get`. Check the docs for your editor to learn more.
Import it
Now in your Dart code, you can use:
import 'package:cinch_gen/cinch_gen.dart'; | https://pub.dev/packages/cinch_gen/install | CC-MAIN-2021-17 | refinedweb | 101 | 75.3 |
parboiled2 – A Macro-Based PEG Parser Generator for Scala 2.11+
Contents of this Document
- Introduction
- Features
- Installation
- Example
- Quick Start
- How the Parser matches Input
- The Rule DSL
- Error Reporting
- Advanced Techniques
- Common Mistakes
- Grammar Debugging
- Access to Parser Results
- Running the Examples
- Alternatives
- Roadmap
- Contributing
- Support
- References
- Credits
- License
Introduction
parboiled2 is a Scala 2.11+ library enabling lightweight and easy-to-use, yet powerful, fast and elegant parsing of arbitrary input text. It implements a macro-based parser generator for Parsing Expression Grammars (PEGs), which runs at compile time and translates a grammar rule definition (written in an internal Scala DSL) into corresponding JVM bytecode.
PEGs are an alternative to Context-Free Grammars (CFGs) for formally specifying syntax; they make a good replacement for regular expressions and have some advantages over the "traditional" way of building parsers via CFGs (like not needing a separate lexer/scanner phase).
parboiled2 is the successor of parboiled 1.x , which provides a similar capability (for Scala as well as Java) but does not actually generate a parser. Rather parboiled 1.x interprets a rule tree structure (which is also created via an internal DSL) against the input, which results in a much lower parsing performance. For more info on how parboiled 1.x and parboiled2 compare see parboiled2 vs. parboiled 1.x. You might also be interested in reading about parboiled2 vs. Scala Parser Combinators and parboiled2 vs. Regular Expressions.
Features
- Concise, flexible and type-safe DSL for expressing parsing logic
- Full expressive power of Parsing Expression Grammars, for effectively dealing with most real-world parsing needs
- Excellent reporting of parse errors
- Parsing performance comparable to hand-written parsers
- Easy to learn and use (just one parsing phase (no lexer code required), rather small API)
- Light-weight enough to serve as a replacement for regular expressions (also strictly more powerful than regexes)
Installation
The artifacts for parboiled2 live on Maven Central and can be tied into your SBT-based Scala project like this:
libraryDependencies += "org.parboiled" %% "parboiled" % "2.1.5"
The latest released version is 2.1.5. It is available for Scala 2.11, 2.12, 2.13-M4 as well as Scala.js 0.6.
parboiled2 has only one single dependency that it will transitively pull into your classpath: shapeless (currently version 2.3.3).
Note: If your project also uses
"io.spray" %% "spray-routing" you'll need to change this to
"io.spray" %% "spray-routing-shapeless2" in order for your project to continue to build since the "regular" spray builds use shapeless 1.x.
Once on your classpath you can use this single import to bring everything you need into scope:
import org.parboiled2._
There might be potentially newer snapshot builds available in the sonatype snapshots repository located at:
You can find the latest ones here: (Scala 2.11) and (Scala 2.12)
Example
This is what a simple parboiled2 parser looks like:
import org.parboiled2._ class Calculator(val input: ParserInput) extends Parser { def InputLine = rule { Expression ~ EOI } def Expression: Rule1[Int] = rule { Term ~ zeroOrMore( '+' ~ Term ~> ((_: Int) + _) | '-' ~ Term ~> ((_: Int) - _)) } def Term = rule { Factor ~ zeroOrMore( '*' ~ Factor ~> ((_: Int) * _) | '/' ~ Factor ~> ((_: Int) / _)) } def Factor = rule { Number | Parens } def Parens = rule { '(' ~ Expression ~ ')' } def Number = rule { capture(Digits) ~> (_.toInt) } def Digits = rule { oneOrMore(CharPredicate.Digit) } } new Calculator("1+1").InputLine.run() // evaluates to `scala.util.Success(2)`
This implements a parser for simple integer expressions like
1+(2-3*4)/5 and runs the actual calculation in-phase with the parser. If you'd like to see it run and try it out yourself check out Running the Examples.
Quick Start
A parboiled2 parser is a class deriving from
org.parboiled2.Parser, which defines one abstract member:
def input: ParserInput
holding the input for the parsing run. Usually it is best implemented as a
val parameter in the constructor (as shown in the Example above). As you can see from this design you need to (re-)create a new parser instance for every parsing run (parser instances are very lightweight).
The "productions" (or "rules") of your grammar are then defined as simple methods, which in most cases consist of a single call to the
rule macro whose argument is a DSL expression defining what input the rule is to match and what actions to perform.
In order to run your parser against a given input you create a new instance and call
`run()` on the top-level rule, e.g.:
val parser = new MyParser(input) parser.topLevelRule.run() // by default returns a ``scala.util.Try``
For more info on what options you have with regard to accessing the results of a parsing run check out the section on Access to Parser Results.
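By default `run()` returns a `scala.util.Try`, so a common pattern is to match on the result and render parse errors with `formatError`. Here is a minimal, self-contained sketch; the `IntParser` class and its rule are made up for illustration:

```scala
import org.parboiled2._
import scala.util.{Failure, Success}

// Hypothetical minimal parser: matches one or more digits followed by EOI
// and pushes the parsed Int onto the value stack.
class IntParser(val input: ParserInput) extends Parser {
  def Number: Rule1[Int] = rule {
    capture(oneOrMore(CharPredicate.Digit)) ~ EOI ~> ((_: String).toInt)
  }
}

val parser = new IntParser("12a4")
parser.Number.run() match {
  case Success(n)             => println(s"Parsed: $n")
  case Failure(e: ParseError) =>
    // formatError renders the error position and the expected input
    println("Invalid input: " + parser.formatError(e))
  case Failure(e)             => throw e // unexpected non-parse error
}
```

Note that `formatError` must be called on the same parser instance that produced the `ParseError`.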
How the Parser matches Input
PEG parsers are quite easy to understand as they work just like most people without a lot of background in parsing theory would build a parser "by hand": recursive-descent with backtracking. They have only one parsing phase (not two, like most parsers produced by traditional parser generators like ANTLR), do not require any look-ahead and perform quite well in most real-world scenarios (although they can exhibit exponential runtime for certain pathological languages and inputs).
A PEG parser consists of a number of rules that logically form a "tree", with one "root" rule at the top calling zero or more lower-level rules, which can each call other rules and so on. Since rules can also call themselves or any of their parents the rule "tree" is not really a tree but rather a potentially cyclic directed graph, but in most cases the tree structure dominates, which is why its useful to think of it as a tree with potential cycles.
When a rule is executed against the current position in an input buffer it applies its specific matching logic to the input, which can either succeed or fail. In the success case the parser advances the input position (the cursor) and potentially executes the next rule. Otherwise, when the rule fails, the cursor is reset and the parser backtracks in search of another parsing alternative that might succeed.
For example consider this simple parboiled2 rule:
def foo = rule { 'a' ~ ('b' ~ 'c' | 'b' ~ 'd') }
When this rule is confronted with the input
abd the parser matches the input in these steps:
1. Rule `foo` starts executing, which calls its first sub-rule `'a'`. The cursor is at position 0.
2. Rule `'a'` is executed against input position 0, matches (succeeds) and the cursor is advanced to position 1.
3. Rule `'b' ~ 'c' | 'b' ~ 'd'` starts executing, which calls its first sub-rule `'b' ~ 'c'`.
4. Rule `'b' ~ 'c'` starts executing, which calls its first sub-rule `'b'`.
5. Rule `'b'` is executed against input position 1, matches (succeeds) and the cursor is advanced to position 2.
6. Rule `'c'` is executed against input position 2 and mismatches (fails).
7. Rule `'b' ~ 'c' | 'b' ~ 'd'` notices that its first sub-rule has failed, resets the cursor to position 1 and calls its 2nd sub-rule `'b' ~ 'd'`.
8. Rule `'b' ~ 'd'` starts executing, which calls its first sub-rule `'b'`.
9. Rule `'b'` is executed against input position 1, matches and the cursor is advanced to position 2.
10. Rule `'d'` is executed against input position 2, matches and the cursor is advanced to position 3.
11. Rule `'b' ~ 'd'` completes successfully, as its last sub-rule has succeeded.
12. Rule `'b' ~ 'c' | 'b' ~ 'd'` completes successfully, as one of its sub-rules has succeeded.
13. Rule `foo` completes execution successfully, as its last sub-rule has succeeded. The whole input "abd" was matched and the cursor is left at position 3 (after the last-matched character).
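The walk-through above can be reproduced directly. This is a hedged sketch: the `FooParser` wrapper class is made up, and `~ EOI` is appended so the rule only succeeds if the entire input is consumed:

```scala
import org.parboiled2._

// The rule from the walk-through: 'a' followed by either "bc" or "bd".
// EOI is appended so that partial matches of longer inputs are rejected.
class FooParser(val input: ParserInput) extends Parser {
  def foo = rule { 'a' ~ ('b' ~ 'c' | 'b' ~ 'd') ~ EOI }
}

new FooParser("abd").foo.run()  // succeeds: "bd" matches after backtracking
new FooParser("abe").foo.run()  // fails: neither alternative matches
```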
The Rule DSL
In order to work with parboiled2 effectively you should understand the core concepts behind its rule DSL, mainly the "Value Stack" and how parboiled2 encodes value stack operations in the Scala type system.
Rule Types and the Value Stack
Apart from the input buffer and the cursor the parser manages another important structure: the "Value Stack". The value stack is a simple stack construct that serves as temporary storage for your Parser Actions. In many cases it is used for constructing an AST during the parsing run but it can also be used for "in-phase" computations (like in the Example above) or for any other purpose.
When a rule of a parboiled2 parser executes it performs any combination of the following three things:
- match input, i.e. advance the input cursor
- operate on the value stack, i.e. pop values off and/or push values to the value stack
- perform side-effects
Matching input is done by calling Basic Character Matching rules, which do nothing but match input and advance the cursor. Value stack operations (and other potential side-effects) are performed by Parser Actions.
It is important to understand that rules in parboiled2 (i.e. the rule methods in your parser class) do not directly return some custom value as a method result. Instead, all their consuming and producing values happens as side-effects to the value stack. Thereby the way that a rule interacts with value stack is encoded in the rule's type.
This is the general definition of a parboiled2 rule:
class Rule[-I <: HList, +O <: HList]
This can look scary at first but is really quite simple. An
HList is defined by shapeless and is essentially a type of list whose element number and element types are statically known at compile time. The
I type parameter on
Rule encodes what values (the number and types) the rule pops off the value stack and the
O type parameter encodes what values (the number and types) the rule then pushes onto the value stack.
Luckily, in most cases, you won't have to work with these types directly as they can either be inferred or you can use one of these predefined aliases:
type Rule0 = RuleN[HNil] type Rule1[+T] = RuleN[T :: HNil] type Rule2[+A, +B] = RuleN[A :: B :: HNil] type RuleN[+L <: HList] = Rule[HNil, L] type PopRule[-L <: HList] = Rule[L, HNil]
Here is what these type aliases denote:
- Rule0
- A rule that neither pops off nor pushes to the value stack, i.e. has no effect on the value stack whatsoever. All Basic Character Matching rules are of this type.
- Rule1[+T]
- Pushes exactly one value of type `T` onto the value stack. After `Rule0` this is the second-most frequently used rule type.
- Rule2[+A, +B]
- Pushes exactly two values of types `A` and `B` onto the value stack.
- RuleN[+L <: HList]
- Pushes a number of values onto the value stack, which correspond to the given `L <: HList` type parameter.
- PopRule[-L <: HList]
- Pops a number of values off the value stack (corresponding to the given `L <: HList` type parameter) and does not produce any new value itself.
The rule DSL makes sure that the rule types are properly assembled and carried through your rule structure as you combine Basic Character Matching with Rule Combinators and Modifiers and Parser Actions, so as long as you don't write any logic that circumvents the value stack your parser will be completely type-safe and the compiler will be able to catch you if you make mistakes by combining rules in an unsound way.
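To see these types in action, here is a small sketch showing how each rule's stack effect is reflected in its type; the parser class and rule names are made up for illustration:

```scala
import org.parboiled2._

class StackDemo(val input: ParserInput) extends Parser {
  // Rule0: matches input but leaves the value stack untouched
  def Ws: Rule0 = rule { zeroOrMore(' ') }

  // Rule1[Int]: pushes exactly one Int onto the value stack
  def Number: Rule1[Int] = rule {
    capture(oneOrMore(CharPredicate.Digit)) ~> ((_: String).toInt)
  }

  // Rule2[Int, Int]: pushes two Ints (two numbers separated by a comma)
  def Pair: Rule2[Int, Int] = rule { Number ~ ',' ~ Ws ~ Number }

  // The action pops the two Ints off the stack and pushes their sum,
  // so the overall rule is again a Rule1[Int].
  def Sum: Rule1[Int] = rule { Pair ~> ((a: Int, b: Int) => a + b) ~ EOI }
}

new StackDemo("3, 4").Sum.run()  // evaluates to Success(7)
```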
Basic Character Matching
The following basic character matching rules are the only way to cause the parser to match actual input and "make progress". They are the "atomic" elements of the rule DSL which are then used by the Rule Combinators and Modifiers to form higher-level rules.
- implicit def ch(c: Char): Rule0
`Char` values can be directly used in the rule DSL and match themselves. There is one notable case where you will have to use the explicit `ch` wrapper: you cannot use the `|` operator directly on chars, as it denotes the built-in Scala binary "or" operator defined on numeric types (`Char` is an unsigned 16-bit integer). So rather than saying `'a' | 'b'` you will have to say `ch('a') | 'b'`.
- implicit def str(s: String): Rule0
`String` values can be directly used in the rule DSL and match themselves.
- implicit def predicate(p: CharPredicate): Rule0
- You can use `org.parboiled2.CharPredicate` values directly in the rule DSL. `CharPredicate` is an efficient implementation of character sets and already comes with a number of pre-defined character classes like `CharPredicate.Digit` or `CharPredicate.LowerHexLetter`.
- implicit def valueMap[T](m: Map[String, T]): R
- Values of type `Map[String, T]` can be directly used in the rule DSL; they match any of the given map's keys and push the respective value upon a successful match. The resulting rule type depends on `T`.
- def anyOf(chars: String): Rule0
- This constructs a `Rule0` which matches any of the given string's characters.
- def noneOf(chars: String): Rule0
- This constructs a `Rule0` which matches any single character except the ones in the given string and except EOI.
- def ignoreCase(c: Char): Rule0
- Matches the given single character case insensitively. Note: The given character must be specified in lower-case! This requirement is currently NOT enforced!
- def ignoreCase(s: String): Rule0
- Matches the given string of characters case insensitively. Note: The given string must be specified in all lower-case! This requirement is currently NOT enforced!
- def ANY: Rule0
- Matches any character except EOI (end-of-input).
- def EOI: Char
- The EOI (end-of-input) character, which is a virtual character that the parser "appends" after the last character of the actual input.
- def MATCH: Rule0
- Matches no character (i.e. doesn't cause the parser to make any progress) but succeeds always. It's the "empty" rule that is mostly used as a neutral element in rule composition.
- def MISMATCH[I <: HList, O <: HList]: Rule[I, O]
- A rule that always fails. Fits any rule signature.
- def MISMATCH0: Rule0
- Same as `MISMATCH` but with a clearly defined type. Use it (rather than `MISMATCH`) if the call site doesn't clearly "dictate" a certain rule type and using `MISMATCH` therefore gives you a compiler error.
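The basic matchers above can be combined freely. Here is a hedged sketch (class and rule names are made up) showing several of them together:

```scala
import org.parboiled2._

class BasicsDemo(val input: ParserInput) extends Parser {
  // the explicit `ch` wrapper is needed on the left of `|` (see above)
  def AorB: Rule0 = rule { ch('a') | 'b' }

  // strings and CharPredicates match themselves
  def Hex: Rule0 = rule { "0x" ~ oneOrMore(CharPredicate.HexDigit) }

  // anyOf matches any single character from the given string
  def Sign: Rule0 = rule { anyOf("+-") }

  // ignoreCase: the argument must be given in lower case
  def TrueLit: Rule0 = rule { ignoreCase("true") ~ EOI }

  def HexNumber: Rule0 = rule { optional(Sign) ~ Hex ~ EOI }
}

new BasicsDemo("-0x1F").HexNumber.run()  // succeeds
new BasicsDemo("TRUE").TrueLit.run()     // succeeds (case-insensitive)
```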
Rule Combinators and Modifiers
Rules can be freely combined/modified with these operations:
- a ~ b
Two rules `a` and `b` can be combined with the `~` operator, resulting in a rule that only matches if first `a` matches and then `b` matches. The computation of the resulting rule type is somewhat involved, since values pushed by `a` may be popped again by `b`.
- a | b
- Two rules `a` and `b` can be combined with the `|` operator to form an "ordered choice" in PEG speak. The resulting rule tries to match `a` and succeeds if this succeeds. Otherwise the parser is reset and `b` is tried. This operator can only be used on compatible rules.
- &(a)
Creates a "positive syntactic predicate", i.e. a rule that tests if the underlying rule matches but doesn't cause the parser to make any progress (i.e. match any input) itself. Also, all effects that the underlying rule might have had on the value stack are cleared out, the resulting rule type is therefore always
Rule0, independently of the type of the underlying rule.
Note that
¬ itself consuming any input can have surprising implications in repeating constructs, see Non-Termination when using Syntactic Predicates for more details.
- !a
Creates a "negative syntactic predicate", i.e. a rule that matches only if the underlying one mismatches and vice versa. A syntactic predicate doesn't cause the parser to make any progress (i.e. match any input) and also clears out all effects that the underlying rule might have had on the value stack. The resulting rule type is therefore always
Rule0, independently of the type of the underlying rule.
Note that
!not itself consuming any input can have surprising implications in repeating constructs, see Non-Termination when using Syntactic Predicates for more details.
- optional(a)
Runs its inner rule and succeeds even if the inner rule doesn't. The resulting rule type depends on the type of the inner rule:
The last case is a so-called "reduction rule", which leaves the value stack unchanged on a type level. This is an example of a reduction rule wrapped with `optional`:
capture(CharPredicate.Digit) ~ optional(ch('h') ~> ((s: String) => s + "hex"))
The inner rule of `optional` here has type `Rule[String :: HNil, String :: HNil]`, i.e. it pops one `String` off the stack and pushes another one onto it, which means that the number of elements on the value stack as well as their types remain the same, even though the actual values might have changed.

As a shortcut you can also use `a.?` instead of `optional(a)`.
- zeroOrMore(a)
Runs its inner rule until it fails, always succeeds. The resulting rule type depends on the type of the inner rule:
The last case is a so-called "reduction rule", which leaves the value stack unchanged on a type level. This is an example of a reduction rule wrapped with `zeroOrMore`:
(factor :Rule1[Int]) ~ zeroOrMore('*' ~ factor ~> ((a: Int, b) => a * b))
The inner rule of `zeroOrMore` here has type `Rule[Int :: HNil, Int :: HNil]`, i.e. it pops one `Int` off the stack and pushes another one onto it, leaving the number and types of the value stack elements unchanged.

As a shortcut you can also use `a.*` instead of `zeroOrMore(a)`.
- oneOrMore(a)
Runs its inner rule until it fails, succeeds if its inner rule succeeded at least once. The resulting rule type depends on the type of the inner rule:
The last case is a so-called "reduction rule", which leaves the value stack unchanged on a type level. This is an example of a reduction rule wrapped with `oneOrMore`:
(factor :Rule1[Int]) ~ oneOrMore('*' ~ factor ~> ((a: Int, b) => a * b))
The inner rule of `oneOrMore` here is again a reduction rule of type `Rule[Int :: HNil, Int :: HNil]`.

As a shortcut you can also use `a.+` instead of `oneOrMore(a)`.
- xxx.times(a)
Repeats a rule a given number of times.
`xxx` can be either a positive `Int` value or a range `(<x> to <y>)` whereby both `<x>` and `<y>` are positive `Int` values. The resulting rule type depends on the type of the inner rule:
The last case is a so-called "reduction rule", which leaves the value stack unchanged on a type level. This is an example of a reduction rule wrapped with `times`:
(factor :Rule1[Int]) ~ (1 to 5).times('*' ~ factor ~> ((a: Int, b) => a * b))
The inner rule here is the same reduction rule as in the oneOrMore example above.
- a.separatedBy(separator: Rule0)
You can use a.separatedBy(b) to create a rule with efficient and automatic support for element separators if a is a rule produced by the zeroOrMore, oneOrMore or xxx.times modifier and b is a Rule0. The resulting rule has the same type as a but expects the individual repetition elements to be separated by a successful match of the separator rule.
As a shortcut you can also use a.*(b) or (a * b) instead of zeroOrMore(a).separatedBy(b). The same shortcut also works for + (oneOrMore).
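To illustrate (a sketch with assumed rule names, not from the original text), a comma-separated list of integers could be matched like this:

```scala
// Hypothetical sketch: parses input like "1,2,3" into a Seq[Int].
def number: Rule1[Int] =
  rule { capture(oneOrMore(CharPredicate.Digit)) ~> ((s: String) => s.toInt) }

def intList: Rule1[Seq[Int]] =
  rule { oneOrMore(number).separatedBy(',') ~ EOI }
// equivalently, using the shortcut: rule { number.+(',') ~ EOI }
```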
- a ~!~ b
- Same as ~ but with "cut" semantics, meaning that the parser will never backtrack across this boundary. If the rule being concatenated doesn't match, a parse error is triggered immediately. Usually you don't need this "cut" operator, but in certain cases it can help simplify grammar construction.
Parser Actions
The Basic Character Matching rules and the Rule Combinators and Modifiers allow you to build recognizers for potentially complex languages, but usually your parser is supposed to do more than simply determine whether a given input conforms to the defined grammar. In order to run custom logic during parser execution, e.g. for creating custom objects (like an AST), you will have to add some "actions" to your rules.
- push(value)
push(value) creates a rule that matches no input (but always succeeds, as a rule) and pushes the given value onto the value stack. Its rule type depends on the given value:
Also note that, due to the macro expansion the parboiled2 rule DSL is based on, the given value expression behaves like a call-by-name parameter even though it is not marked as one! This means that the argument expression to push is (re-)evaluated for every rule execution.
- capture(a)
Wrapping a rule a with capture turns that rule into one that pushes an additional String instance onto the value stack (in addition to all values that a already pushes itself): the input text matched by a.
For example capture(oneOrMore(CharPredicate.Digit)) has type Rule1[String] and pushes one value onto the value stack: the string of digit characters matched by oneOrMore(CharPredicate.Digit).
Another example: capture("foo" ~ push(42)) has type Rule2[Int, String] and will match input "foo". After successful execution the value stack will have the String "foo" as its top element and 42 underneath.
- test(condition: Boolean): Rule0
test implements "semantic predicates". It creates a rule that matches no input and succeeds only if the given condition expression evaluates to true. Note that, due to the macro expansion the parboiled2 rule DSL is based on, the given argument behaves like a call-by-name parameter even though it is not marked as one! This means that the argument expression to test is (re-)evaluated for every rule execution, just as if test had been defined as def test(condition: => Boolean): Rule0.
- a ~> (...)
The ~> operator is the "action operator" and as such the most frequently used way to add custom logic to a rule. It can be applied to any rule and appends action logic to it. The argument to ~> is always a function; which functions are allowed and what the resulting rule type is depends on the type of a.
The basic idea is that the input of the function is popped off the value stack and the result of the function is pushed back onto it. In its basic form the ~> operator therefore transforms the top elements of the value stack into some other object(s).
Let's look at some examples:
(foo: Rule1[Int]) ~> (i => i * 2)
This results in a Rule1[Int] which multiplies the "output" of rule foo by 2.
(foo: Rule2[Int, String]) ~> ((i, s) => s + i.toString)
This results in a Rule1[String] which combines the two "outputs" of rule foo (an Int and a String) into one single String.
(foo: Rule2[Int, String]) ~> (_.toDouble)
This results in a Rule2[Int, Double]. As you can see, the function argument to ~> doesn't always have to "take" the complete output of the rule it's applied to. It can also take fewer or even more elements. Its parameters are simply matched left to right against the top of the value stack (the right-most parameter matching the top-level element).
(foo: Rule1[String]) ~> ((i :Int, s) => s + i.toString)
This results in a Rule[Int :: HNil, String :: HNil], i.e. a rule that pops one Int value off the stack and replaces it with a String. Note that, while the parameter types to the action function can be inferred if they can be matched against an "output" of the underlying rule, this is not the case for parameters that don't directly correspond to an underlying output. In these cases you need to add an explicit type annotation to the respective action function parameter(s).
If an action function returns Unit it doesn't push anything on the stack. So this rule
(foo: Rule1[String]) ~> (println(_))
has type Rule0.
Also, an action function can be a Function0, i.e. a function without any parameters:
(foo: Rule1[String]) ~> (() => 42)
This rule has type Rule2[String, Int] and is equivalent to this:
(foo: Rule1[String]) ~ push(42)
An action function can also produce more than one output by returning an HList instance:
(foo: Rule1[String]) ~> (s => s.toInt :: 3.14 :: HNil)
This has type Rule2[Int, Double].
One more very useful feature is special support for case class instance creation:
case class Person(name: String, age: Int)

(foo: Rule2[String, Int]) ~> Person
This has type Rule1[Person]. The top elements of the value stack are popped off and replaced by an instance of the case class if they match in number, order and types to the case class members. This is great for building AST-like structures! Check out the Calculator2 example to see this form in action.
Note that there is one quirk: for some reason this notation stops working if you explicitly define a companion object for your case class. You'll have to write ~> (Person(_, _)) instead.
And finally, there is one more very powerful action type: the action function can itself return a rule! If an action returns a rule, this rule is immediately executed after the action application, just as if it had been concatenated to the underlying rule with the ~ operator. You can therefore do things like
(foo: Rule1[Int]) ~> (i => test(i % 2 == 0) ~ push(i))
which is a Rule1[Int] that only produces even integers and fails for all others. Or, somewhat unusual but still perfectly legal:
capture("x") ~> (str(_))
which is a Rule0 that is identical to 'x' ~ 'x'.
- run(expression)
run is the most versatile parser action. It can have several shapes, depending on the type of its argument expression. If the argument expression evaluates to
- a rule (i.e. has type R <: Rule[_, _]), the result type of run is this rule's type (i.e. R) and the produced rule is immediately executed.
- a function with 1 to 5 parameters: these parameters are mapped against the top of the value stack, popped off and the function executed. Thereby the function behaves just like an action function for the ~> operator, i.e. if it produces a Unit value this result is simply dropped. HList results are pushed onto the value stack (all their elements individually), rule results are immediately executed and other result values are pushed onto the value stack as a single element. The difference between using run and attaching an action function with the ~> operator is that in the latter case the compiler can usually infer the types of the function parameters (if they map to "output" values of the base rule), while with run you always have to explicitly attach type annotations to the function parameters.
- a function with one HList parameter: the behavior is similar to the previous case, with the difference that the elements of this parameter HList are matched against the value stack top. This allows for consumption of an arbitrary number of value stack elements. (Note: This feature of run is not currently implemented.)
- any other value: the result type of run is an always succeeding Rule0. Since in this case it doesn't interact with the value stack and doesn't match any input, all it can do is perform "unchecked" side effects. Note that by using run in this way you are leaving the "safety net" that the value stack and the rule type system give you! Make sure you understand what you are doing before using these kinds of run actions!
Also note that, due to the macro expansion the parboiled2 rule DSL is based on, the given block behaves like a call-by-name parameter even though it is not marked as one! This means that the argument expression to run is (re-)evaluated for every rule execution.
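As a loose sketch of the first shape (assumed, not from the original text), a run expression that evaluates to a rule is executed at exactly this point in the parse:

```scala
// Hypothetical sketch: the rule to run is only constructed at parse time,
// based on a value available to the action logic.
def exactlyNDigits(n: Int): Rule0 =
  rule { run { n.times(CharPredicate.Digit) } }
```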
- runSubParser(f: ParserInput ⇒ Rule[I, O]): Rule[I, O]
- This action allows creation of a sub-parser and running of one of its rules as part of the current parsing process. The sub-parser will start parsing at the current input position and the outer parser (the one calling runSubParser) will continue where the sub-parser stopped.
There are a few more members of the Parser class that are useful for writing efficient action logic:
- def cursor: Int
- The index of the next (yet unmatched) input character. Note: might be equal to input.length if the cursor is currently behind the last input character!
- def cursorChar: Char
- The next (yet unmatched) input character, i.e. the one at the cursor index. Identical to if (cursor < input.length) input.charAt(cursor) else EOI but more efficient.
- def lastChar: Char
- Returns the last character that was matched, i.e. the one at index cursor - 1, and as such is equivalent to charAt(-1). Note that for performance optimization this method does not do a range check, i.e. depending on the ParserInput implementation you might get an exception when calling this method before any character was matched by the parser.
- def charAt(offset: Int): Char
- Returns the character at the input index with the given delta to the cursor and as such is equivalent to input.charAt(cursor + offset). Note that for performance optimization this method does not do a range check, i.e. depending on the ParserInput implementation you might get an exception if the computed index is out of bounds.
- def charAtRC(offset: Int): Char
- Same as charAt but range-checked. Returns the input character at the index with the given offset from the cursor. If this index is out of range the method returns EOI.
You can use these to write efficient character-level logic like this:
def hexDigit: Rule1[Int] = rule { CharPredicate.HexDigit ~ push(CharUtils.hexValue(lastChar)) }
Additional Helpers
- Base64Parsing
- For parsing RFC 2045 (Base64) encoded strings parboiled provides the Base64Parsing trait which you can mix into your Parser class. See its source for more info on what exactly it provides. parboiled also comes with the org.parboiled2.util.Base64 class which provides an efficient Base64 encoder/decoder for the standard as well as custom alphabets.
- DynamicRuleDispatch
- Sometimes an application cannot fully specify at compile-time which of a given set of rules is to be called at runtime. For example, a parser for parsing HTTP header values might need to select the right parser rule for a header name that is only known once the HTTP request has actually been read from the network. To prevent you from having to write a large (and not really efficient) match against the header name for separating out all the possible cases, parboiled provides the DynamicRuleDispatch facility. Check out its test for more info on how to use it.
- StringBuilding
- For certain high-performance use-cases it is sometimes better to construct the Strings that the parser is to produce/extract from the input in a char-by-char fashion. To support you in doing this, parboiled provides the StringBuilding trait which you can mix into your Parser class. It provides convenient access to a single, mutable StringBuilder instance. As such it operates outside of the value stack and therefore without the full "safety net" that parboiled's DSL otherwise gives you. If you don't understand what this means you probably shouldn't be using the StringBuilding trait, but resort to capture and ordinary parser actions instead.
Error Reporting
In many applications, especially with grammars that are not too complex, parboiled provides good error reports right out of the box, without any additional requirements on your part. However, there are cases where you want to have more control over how parse errors are created and/or formatted. This section gives an overview over how parse error reporting works in parboiled and how you can influence it.
The Error Collection Process
As described in the section about How the Parser matches Input above, the parser consumes input by applying grammar rules and backtracking in the case of mismatches. As such, rule mismatches are an integral part of the parser's operation and do not generally mean that there is something wrong with the input. Only when the root rule itself mismatches and the parser has no backtracking options remaining does it become clear that a parse error is present. At that point, however, the information about where exactly the problematic input was, and which of the many rule mismatches that the parser experienced during the run were the "bad" ones, is already lost.
parboiled overcomes this problem by simply re-running the failed parser, potentially many times, and "watching" it as it tries to consume the erroneous input. With every re-run parboiled learns a bit more about the position and nature of the error, and when this analysis is complete a ParseError instance is constructed and handed to the application as the result of the parsing run, which can then use the error information on its level (e.g. for formatting it and displaying it to the user). Note that re-running the parser in the presence of parse errors does result in unsuccessful parsing runs being potentially much slower than successful ones. However, since in the vast majority of use cases failed runs constitute only a small minority of all parsing runs and the normal flow of application logic is disrupted anyway, this slow-down is normally quite acceptable, especially if it results in better error messages. See the section on Limiting Error Re-Runs if this is not true for your application.
In principle the error reporting process looks like this:
- The grammar's root rule is run at maximum speed against the parser input. If this succeeds then all is well and the parsing result is immediately dispatched to the user.
- If the root rule did not match we know that we have a parse error. The parser is then run again to establish the "principal error location". The principal error location is the first character in the input that could not be matched by any rule during the parsing run. In other words, it is the maximum value that the parser's cursor member had during the parsing run.
- Once the error location is known the parser is run again. This time all rule mismatches against the input character at error location are recorded. These rule mismatches are used to determine what input the grammar "expects" at the error location but failed to see. For every such "error rule mismatch" the parser collects the "rule trace", i.e. the stack of rules that led to it. Currently this is done by throwing a special exception that bubbles up through the JVM call stack and records rule stack information on its way up. A consequence of this design is that the parser needs to be re-run once per "error rule mismatch".
- When all error rule traces have been collected, all the relevant information about the parse error has been extracted and a ParseError instance can be constructed and dispatched to the user.
Note: The real process contains a few more steps to properly deal with the atomic and quiet markers described below. However, knowledge of these additional steps is not important for understanding the basic approach for how ParseError instances are constructed.
Formatting Parse Errors
If a parsing run fails and you receive a ParseError instance you can call the formatError method on your parser instance to get the error rendered into an error message string:
val errorMsg = parser.formatError(error)
The formatError method can also take an explicit ErrorFormatter as a second argument, which allows you to influence how exactly the error is to be rendered. For example, in order to also render the rule traces you can do:
val errorMsg = parser.formatError(error, new ErrorFormatter(showTraces = true))
Look at the signature of the ErrorFormatter constructor for more information on what rendering options exist.
If you want even more control over the error rendering process you can extend the ErrorFormatter and override its methods where you see fit.
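Putting the pieces together, a typical call site might look like this (a hedged sketch; MyParser and rootRule are assumed names):

```scala
// Hypothetical usage sketch: run a rule and pretty-print any parse error.
import scala.util.{Failure, Success}

val parser = new MyParser("some input")
parser.rootRule.run() match {
  case Success(result) => println(s"Parsed: $result")
  case Failure(e: ParseError) =>
    // render the error, including the collected rule traces
    println(parser.formatError(e, new ErrorFormatter(showTraces = true)))
  case Failure(e) => throw e // some other problem during the run
}
```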
Tweaking Error Reporting
While the error collection process described above yields all the information required for a basic "this character was not matched and these characters were expected instead" message, you sometimes want more control over what exactly is reported as "found" and as "expected".
The atomic Marker
Since PEG parsers are scanner-less (i.e. without an intermediate "TOKEN-stream") they operate directly on the input buffer's character level. As such, by default, parboiled reports all errors on this character level.
For example, if you run the rule "foo" | "fob" | "bar" against input "foxes" you'll get this error message:
Invalid input 'x', expected 'o' or 'b' (line 1, column 3):
foxes
  ^
While this error message is certainly correct, it might not be what you want to show your users, e.g. because foo, fob and bar are regarded as "atomic" keywords of your language that should either be matched completely or not at all. In this case you can use the atomic marker to signal this to the parser. For example, running the rule atomic("foo") | atomic("fob") | atomic("bar") against input "foxes" yields this error message:
Invalid input "fox", expected "foo", "fob" or "bar" (line 1, column 1):
foxes
^
Of course you can use the atomic marker on any type of rule, not just string rules. It essentially moves the reported error position forward from the principal error position and lifts the level at which errors are reported from the character level to a rule level of your choice.
The quiet Marker
Another problem that frequently occurs with parboiled's default error reporting is that the list of "expected" things becomes too long. Often the reason for this are rules that match input which can appear pretty much anywhere, like whitespace or comments.
Consider this simple language:
def Expr = rule { oneOrMore(Id ~ Keyword ~ Id).separatedBy(',' ~ WS) ~ EOI }
def Id = rule { oneOrMore(CharPredicate.Alpha) ~ WS }
def Keyword = rule { atomic(("has" | "is") ~ WS) }
def WS = rule { zeroOrMore(anyOf(" \t \n")) }
When we run the Expr rule against input "Tim has money, Tom Is poor" we get this error:
Invalid input 'I', expected [ \t \n] or Keyword (line 1, column 20):
Tim has money, Tom Is poor
                   ^
Again the list of "expected" things is technically correct, but we don't want to bother the user with the information that whitespace is also allowed at the error location. The quiet marker lets us suppress a certain rule from the expected list if there are also non-quiet alternatives:
def WS = rule { quiet(zeroOrMore(anyOf(" \t \n"))) }
With that change the error message becomes:
Invalid input 'I', expected Keyword (line 1, column 20):
Tim has money, Tom Is poor
                   ^
which is what we want.
Naming Rules
parboiled uses a somewhat involved logic to determine what exactly to report as "mismatched" and "expected" for a given parse error. Essentially the process looks like this:
- Compare all rule traces for the error and drop a potentially existing common prefix. This is done because, if all traces share a common prefix, this prefix can be regarded as the "context" of the error, which is probably apparent to the user and as such doesn't need to be reported.
- For each trace (suffix), find the first frame that tried to start its match at the reported error position. The string representation of this frame (which might be an assigned name) is selected for "expected" reporting.
- Duplicate "expected" strings are removed.
So, apart from placing atomic and quiet markers, you can also influence what gets reported as "expected" by explicitly naming rules. One way to do this is to pick good names for the rule methods, as they automatically attach their name to their rules. The names of val or def members that you use to reference CharPredicate instances also automatically name the respective rule.
If you don't want to split out rules into their own methods you can also use the named modifier. With it you can attach an explicit name to any parser rule. For example, if you run the rule foo from this snippet:
def foo = rule { "aa" | atomic("aaa").named("threeAs") | 'b' | 'B'.named("bigB") }
against input x you'll get this error message:
Invalid input 'x', expected 'a', threeAs, 'b' or bigB (line 1, column 1):
x
^
Manual Error Reporting
If you want to completely bypass parboiled's built-in error reporting logic you can do so by exclusively relying on the fail helper, which causes the parser to immediately and fatally terminate the parsing run with a single one-frame rule trace with a given "expected" message.
For example, the rule "foo" | fail("a true FOO") will produce this error when run against x:
Invalid input 'x', expected a true FOO (line 1, column 1):
x
^
Limiting Error Re-Runs
Really large grammars, especially ones with bugs as they commonly appear during development, can exhibit a very large number of rule traces (potentially thousands) and thus cause the parser to take longer than convenient to terminate an error parsing run. In order to mitigate this, parboiled has a configurable limit on the maximum number of rule traces the parser will collect during a single error run. The default limit is 24; you can change it by overriding the errorTraceCollectionLimit method of the Parser class.
Recovering from Parse Errors
Currently parboiled only ever parses up to the very first parse error in the input. While this is all that's required for a large number of use cases there are applications that do require the ability to somehow recover from parse errors and continue parsing. Syntax highlighting in an interactive IDE-like environment is one such example.
Future versions of parboiled might support parse error recovery. If your application would benefit from this feature please let us know in this github ticket.
Advanced Techniques
Meta-Rules
Sometimes you might find yourself in a situation where you'd like to DRY up your grammar definition by factoring out common constructs from several rule definitions into a "meta-rule" that modifies/decorates other rules. Essentially you'd like to write something like this (illegal code!):
def expression = rule { bracketed(ab) ~ bracketed(cd) }
def ab = rule { "ab" }
def cd = rule { "cd" }
def bracketed(inner: Rule0) = rule { '[' ~ inner ~ ']' }
In this hypothetical example bracketed is a meta-rule which takes another rule as parameter and calls it from within its own rule definition.
Unfortunately, enabling a syntax such as the one shown above is not directly possible with parboiled. When looking at how the parser generation in parboiled actually works the reason becomes clear. parboiled "expands" the rule definition that is passed as argument to the rule macro into actual Scala code. The rule methods themselves however remain what they are: instance methods on the parser class. And since you cannot simply pass a method name as argument to another method, the calls bracketed(ab) and bracketed(cd) from above don't compile.
However, there is a work-around which might be good enough for your meta-rule needs:
def expression = rule { bracketed(ab) ~ bracketed(cd) }
val ab = () ⇒ rule { "ab" }
val cd = () ⇒ rule { "cd" }
def bracketed(inner: () ⇒ Rule0) = rule { '[' ~ inner() ~ ']' }
If you model the rules that you want to pass as arguments to other rules as Function0 instances you can pass them around. Assigning those function instances to val members avoids re-allocation during every execution of the expression rule, which would come with a potentially significant performance cost.
Common Mistakes
Disregarding Order Choice
There is one mistake that new users frequently make when starting out with writing PEG grammars: disregarding the "ordered choice" logic of the | operator. This operator always tries all alternatives in the order that they were defined and picks the first match.
As a consequence, earlier alternatives that are a prefix of later alternatives will always "shadow" the later ones; the later ones will never be able to match!
For example in this simple rule
def foo = rule { "foo" | "foobar" }
"foobar" will never match. The canonical solutions are to reorder the alternatives: either "factor out" all common prefixes or put the more specific alternatives first.
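A sketch of the two canonical fixes (hypothetical rule names, not from the original text):

```scala
// Put the longer, more specific alternative first...
def foo = rule { "foobar" | "foo" }

// ...or factor out the common prefix entirely.
def foo2 = rule { "foo" ~ optional("bar") }
```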
If your parser is not behaving the way you expect it to watch out for this "wrong ordering" problem, which might be not that easy to spot in more complicated rule structures.
Non-Termination when using Syntactic Predicates
The syntactic predicate operators, & and !, don't themselves consume any input, so directly wrapping them with a repeating combinator (like zeroOrMore or oneOrMore) will lead to an infinite loop, as the parser continuously runs the syntactic predicate against the very same input position without making any progress.
If you use syntactic predicates in a loop make sure to actually consume input as well. For example:
def foo = rule { capture(zeroOrMore( !',' )) }
will never terminate, while
def foo = rule { capture(zeroOrMore( !',' ~ ANY )) }
will capture all input until it reaches a comma.
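The same pattern generalizes to any terminator; a sketch with assumed names (not from the original text):

```scala
// Hypothetical sketch: capture everything up to (but not including) "end",
// then consume the terminator itself.
def upToEnd: Rule1[String] =
  rule { capture(zeroOrMore(!str("end") ~ ANY)) ~ str("end") }
```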
Unchecked Mutable State
parboiled2 parsers work with mutable state as a design choice for achieving good parsing performance. Matching input and operating on the value stack happen as side effects of rule execution and mutate the parser state. However, as long as you confine yourself to the value stack and do not add parser actions that mutate custom parser members, the rule DSL will protect you from making mistakes.
It is important to understand that, in case of rule mismatch, the parser state (cursor and value stack) is reset to what it was before the rule execution was started. However, if you write rules that have side effects beyond matching input and operating on the value stack, then these side effects cannot be automatically rolled back! This means that you will have to make sure that your action logic "cleans up after itself" in the case of rule mismatches, or is only used in locations where you know that rule execution can never fail. These techniques are considered advanced and are not recommended for beginners.
The rule DSL is powerful enough to support even very complex parsing logic without the need to resort to custom mutable state; we consider the addition of mutable members an optimization that should be well justified.
Handling Whitespace
One disadvantage of PEGs over lexer-based parsers can be the handling of white space. In a "traditional" parser with a separate lexer (scanner) phase this lexer can simply skip all white space and only generate tokens for the actual parser to operate on. This frees the higher-level parser grammar from all white space treatment.
Since PEGs do not have a lexer but directly operate on the raw input, they have to deal with white space in the grammar itself. Language designers with little experience in PEGs can sometimes be unsure of how to best handle white space in their grammar.
The common and highly recommended pattern is to match white space always immediately after a terminal (a single character or string) but not in any other place. This helps with keeping your grammar rules properly structured and white space "taken care of" without it getting in the way.
In order to reduce boilerplate in your grammar definition, parboiled allows for cleanly factoring out whitespace matching logic into a dedicated rule. By defining a custom implicit conversion from String to Rule0 you can implicitly match whitespace after a string terminal:
class FooParser(val input: ParserInput) extends Parser {
  implicit def wspStr(s: String): Rule0 = rule { str(s) ~ zeroOrMore(' ') }

  def foo = rule { "foobar" | "foo" } // implicitly matches trailing blanks
  def fooNoWSP = rule { str("foobar") | str("foo") } // doesn't match trailing blanks
}
In this example all usages of plain string literals in the parser rules will implicitly match trailing space characters. In order to not apply the implicit whitespace matching in a particular case, simply write str("foo") instead of just "foo".
Parsing the whole Input
If you don't explicitly match EOI (the special end-of-input pseudo-character) in your grammar's root rule, the parser will not produce an error if, at the end of a parsing run, there is still unmatched input left. This means that if the root rule matches only a prefix of the whole input the parser will report a successful parsing run, which might not be what you want.
As an example, consider this very basic parser:
class MyParser(val input: ParserInput) extends Parser {
  def InputLine = rule { "foo" | "bar" }
}

new MyParser("foo").InputLine.run()  // Success
new MyParser("foot").InputLine.run() // also Success!!
In the second run of the parser, instead of failing with a ParseError as you might expect, it successfully parses the matching input foo and ignores the rest of the input.
If this is not what you want you need to explicitly match EOI, for example as follows:
def InputLine = rule { ("foo" | "bar") ~ EOI }
Grammar Debugging
TODO
(e.g., use parser.formatError(error, new ErrorFormatter(showTraces = true)))
Access to Parser Results
In order to run the top-level parser rule against a given input you create a new instance of your parser class and call run() on it, e.g.:
val parser = new MyParser(input)
val result = parser.rootRule.run()
By default the type of result in this snippet will be a Try[T] whereby T depends on the type of rootRule:
The contents of the value stack at the end of the rootRule execution constitute the result of the parsing run. Note that run() is not available on rules that are not of type RuleN[L <: HList].
If the parser is not able to match the input successfully it creates an instance of class ParseError, which is defined like this:
case class ParseError(position: Position, charCount: Int, traces: Seq[RuleTrace]) extends RuntimeException
In such cases the Try is completed with a scala.util.Failure holding the ParseError. If other exceptions occur during the parsing run (e.g. because some parser action failed) these will also end up as a Try failure.
parboiled2 has quite powerful error reporting facilities, which should help you (and your users) to easily understand why a particular input does not conform to the defined grammar and how this can be fixed. The formatError method available on the Parser class is of great utility here, as it can "pretty print" a parse error instance to display something like this (excerpt from the ErrorReportingSpec):
Invalid input 'x', expected 'f', Digit, hex or UpperAlpha (line 1, column 4):
abcx
   ^

4 rules mismatched at error location:
  targetRule / | / "fgh" / 'f'
  targetRule / | / Digit
  targetRule / | / hex
  targetRule / | / UpperAlpha
Alternative DeliverySchemes
Apart from delivering your parser results as a Try[T], parboiled2 allows you to select another one of the pre-defined Parser.DeliveryScheme alternatives, or even define your own. They differ in how they wrap the three possible outcomes of a parsing run:
- parsing completed successfully, delivering a result of type T
- parsing failed with a ParseError
- parsing failed due to another exception
This table compares the built-in Parser.DeliveryScheme alternatives (the first one being the default):
Running the Examples
Follow these steps to run the example parsers defined here on your own machine:
Clone the parboiled2 repository:
git clone git://github.com/sirthias/parboiled2.git
Change into the base directory:
cd parboiled2
Run SBT:
sbt "project examples" run
Alternatives
parboiled2 vs. parboiled 1.x
TODO
(about one order of magnitude faster, more powerful DSL, improved error reporting, fewer dependencies (more lightweight), but Scala 2.10.3+ only, no error recovery (yet) and no Java version (ever))
parboiled2 vs. Scala Parser Combinators
TODO
(several hundred times (!) faster, better error reporting, more concise and elegant DSL, similarly powerful in terms of language class capabilities, but Scala 2.10.3+ only, 2 added dependencies (parboiled2 + shapeless))
parboiled2 vs. Regular Expressions
TODO
(much easier to read and maintain, more powerful (e.g. regexes do not support recursive structures), faster, but Scala 2.10.3+ only, 2 added dependencies (parboiled2 + shapeless))
Roadmap
TODO
Contributing
TODO
Support
In most cases the parboiled2 mailing list is probably the best place for your needs with regard to support, feedback and general discussion.
Note: Your first post after signup is going to be moderated (for spam protection), but we'll immediately give you full posting privileges if your message doesn't unmask you as a spammer.
You can also use the gitter.im chat channel for parboiled2:
References
TODO
Credits
Much of parboiled2 was developed by Alexander Myltsev during GSoC 2013; a big thank you to him for his great work!
Also, without the Macro Paradise made available by Eugene Burmako, parboiled2 would probably still not be ready and its codebase would look a lot messier.
License
parboiled2 is released under the Apache License 2.0 | https://index.scala-lang.org/sirthias/parboiled2/parboiled/1.0.0?target=_2.12 | CC-MAIN-2019-13 | refinedweb | 8,564 | 51.07 |
In today’s Programming Praxis exercise we need to write an improved version of a factorization algorithm. I was on vacation when the original exercise was posted, so let’s see what we can do with it.
As usual, some imports:
import Data.Bits
import Data.List
We need the same expm function we have used in several previous exercises. Alternatively we could use the expmod function from Codec.Encryption.RSA.NumberTheory, but it’s a lot slower than this version.
expm :: Integer -> Integer -> Integer -> Integer
expm b e m = foldl' (\r (b', _) -> mod (r * b') m) 1 .
             filter (flip testBit 0 . snd) .
             zip (iterate (flip mod m . (^ 2)) b) .
             takeWhile (> 0) $ iterate (`shiftR` 1) e
The scheme solution has some duplication in it: the start of the pollard1 and pollard2 functions is nearly identical. Since programmers hate repeating themselves, let’s factor that out into a separate function.
pollard :: (Integer -> t) -> (Integer -> t) -> Integer -> Integer -> t
pollard found notFound n b1 = f 2 2 where
    f a i | i < b1         = f (expm a i n) (i + 1)
          | 1 < d && d < n = found d
          | otherwise      = notFound a
          where d = gcd (a - 1) n
pollard1 then becomes very simple: if we don’t find anything we stop, otherwise we return the result.
pollard1 :: Integer -> Integer -> Maybe Integer
pollard1 = pollard Just (const Nothing)
pollard2 is a bit more involved, because we now have an extra step if we don’t find anything. The structure of this is very similar to the pollard function, but there are enough differences that it’s not worth the bother of abstracting it.
pollard2 :: Integer -> Integer -> Integer -> Maybe (String, Integer)
pollard2 n b1 b2 = pollard (Just . (,) "stage1") (f b1) n b1 where
    f j a | j == b2        = Nothing
          | 1 < d && d < n = Just ("stage2", d)
          | otherwise      = f (j + 1) a
          where d = gcd (expm a j n - 1) n
And of course the test to see if everything’s working correctly:
main :: IO ()
main = do print $ pollard1 15770708441 150
          print $ pollard1 15770708441 180
          print $ pollard2 15770708441 150 180
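If you want to sanity-check the algorithm outside Haskell, the same two-stage scheme can be transcribed into Python in a few lines (my rough transcription, not code from the exercise):

```python
from math import gcd

def pollard1(n, b1):
    # stage 1 of Pollard's p-1: a = 2 raised to the product 2*3*...*(b1-1), mod n,
    # then look for a non-trivial gcd with n
    a = 2
    for i in range(2, b1):
        a = pow(a, i, n)
    d = gcd(a - 1, n)
    return d if 1 < d < n else None

def pollard2(n, b1, b2):
    # stage 2: if stage 1 finds nothing, try one extra exponent j
    # at a time for b1 <= j < b2
    a = 2
    for i in range(2, b1):
        a = pow(a, i, n)
    d = gcd(a - 1, n)
    if 1 < d < n:
        return ("stage1", d)
    for j in range(b1, b2):
        d = gcd(pow(a, j, n) - 1, n)
        if 1 < d < n:
            return ("stage2", d)
    return None

print(pollard1(15770708441, 150))       # None
print(pollard1(15770708441, 180))       # 135979
print(pollard2(15770708441, 150, 180))  # ('stage2', 135979)
```

With the bounds used above this behaves as expected: 15770708441 = 135979 × 115979, and 135978 = 2 · 3 · 131 · 173, so the stage-1 bound must reach 173 — which is why b1 = 150 fails and only the stage-2 search (or b1 = 180) recovers the factor.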
Tags: algorithm, bonsai, code, factorization, Haskell, kata, pollard, praxis, programming | https://bonsaicode.wordpress.com/2010/03/19/programming-praxis-extending-pollard%E2%80%99s-p-1-factorization-algorithm/ | CC-MAIN-2017-30 | refinedweb | 352 | 59.74 |
Details
- Type:
Bug
- Status:
Resolved
- Priority:
Minor
- Resolution: Fixed
- Affects Version/s: None
- Fix Version/s: JRuby 1.7.0.pre2
- Component/s: None
- Labels: None
- Number of attachments:
Description
Using /home/conrad/.rvm/gems/ruby-1.9.3-p194

def Object.foo; class_eval("binding").eval("__method__"); end
Object.foo
=> :foo
Using /home/conrad/.rvm/gems/jruby-1.6.7

def Object.foo; class_eval("binding").eval("__method__"); end
Object.foo
=> :class_eval
Activity
Hi Charles,
Certainly not important at all, but it was causing pry some test failures.
We use the name of the binding to determine how to treat commands like "show-source". If you're inside a binding (binding.pry) we show you the source of the method; if you're inside a module (YARD.pry) we show you the source of the module.
I've fixed this in pry too.
Conrad
Will it affect stack traces? In that case it is rather important to make development with JRuby easier.
JRuby generates stack traces for evaluated bindings based on the place they're actually called anyway, so that's already a minor divergence.
Looks like it's limited to class_eval, so it may be trivial:
system ~/projects/jruby $ jruby -e "def Object.foo; eval('binding').eval('__method__'); end; p Object.foo"
:foo
system ~/projects/jruby $ jruby -e "def Object.foo; class_eval('binding').eval('__method__'); end; p Object.foo"
:class_eval
commit cc6e09f4b3e83964fb2603fb9aec23e1718d4e4d
Author: Charles Oliver Nutter <headius@headius.com>
Date:   Tue Jul 3 10:50:03 2012 -0500

    Fix JRUBY-6753 class_eval should inherit __name__ from the caller

    class_eval/module_eval were still being framed, which caused them to
    interfere with getting the actual caller name. Removed the framing and
    the bug is fixed.

:100644 100644 22aa412... bd18837... M  src/org/jruby/RubyModule.java
For some reason this doesn't feel like a high-priority item. Did this affect something real or did you just stumble upon it? | http://jira.codehaus.org/browse/JRUBY-6753 | CC-MAIN-2014-35 | refinedweb | 317 | 61.63 |
Send Lora message to non pycom device
Hi,
I just got my LoPy device up and running. I'm using a simple script to send a "hello" message via LoRa to another LoRa board, the Connect2Pi.
Here's my script:
from network import LoRa
import socket
import machine
import time

lora = LoRa(mode=LoRa.LORA, frequency=863000000)
s = socket.socket(socket.AF_LORA, socket.SOCK_RAW)

counter = 0
while True:
    s.setblocking(True)
    s.send('Hello')
    print('Hello sent...{}'.format(counter))
    counter += 1
    if counter >= 100:  # reset counter at 100
        counter = 1
    time.sleep(1)
My connect2Pi does not seem to be receiving any data. It has the following settings:
channel0
Power level: P9
Bandwidth: B0
Band Plan: b0
I'm by no means a Lora expert but has anyone any ideas/ suggestions?
Thanks,
Paul
@jmarcelino said in Send Lora message to non pycom device:
You just need to come up with a common set of values for both modules and in the LoPy case set those when initializing the LoRa class
Hi again, sorry for the delay in the update. So below is what I've set my LOPY to:
lora = LoRa(mode=LoRa.LORA, region=LoRa.EU868, sf=12, frequency=869850000, bandwidth=LoRa.BW_125KHZ, public=True)
s = socket.socket(socket.AF_LORA, socket.SOCK_RAW)
Seems fairly standard, but what does the "public" parameter do?
On the ERA-LORA side, via the companion GUI software, I have the following settings:
Bandplan (frequency): 869.85MHz
Bandwidth: 125KHz
Channel: 0
Spreading Factor: 12
This seems to be a match on both sides though I still cannot get them to communicate :(
Does the LOPY use any encryption by default?
Any other defaults I need to be aware of?
Thanks,
Paul
@jmarcelino Hello again,
thanks for your reply, below is my current script on the LoPy:
from network import LoRa
import socket
import machine
import time

# initialize LoRa in LORA mode
# more params can also be given, like frequency, tx power and spreading factor
lora = LoRa(mode=LoRa.LORA, frequency=869850000, sf=10, bandwidth=LoRa.BW_250KHZ, public=True)

# create a raw LoRa socket
s = socket.socket(socket.AF_LORA, socket.SOCK_RAW)

counter = 0
while True:
    # send some data
    s.setblocking(True)
    s.send('Hello_from_LoPy')
    print('Hello_from_LoPy...{}'.format(counter))
    counter += 1
    # get any data received...
    s.setblocking(False)
    data = s.recv(64)
    print(data)
    if counter >= 100:  # reset counter at 100
        counter = 1
    time.sleep(1)
    # wait a random amount of time
    time.sleep(5)
    #time.sleep(machine.rng() & 0x0F)
- jmarcelino last edited by jmarcelino
@pkilcoyne
What does your LoRa initialisation look like? Are you setting LoRa.BW_500KHZ etc?
You may have to play with the public and rx_iq settings. Sorry I don’t know what your other device expects for those.
@jmarcelino Hi again,
so what I've done since is send messages between two LPRS devices successfully. The LoPy does not seem to receive these messages, nor are its programmed TX messages received by either LPRS device.
Right now I have two parallel "LoRa" networks: one of two LPRS devices sending and receiving messages, and one of two LoPy devices.
Maybe they just aren't meant to work together.
Paul
- jmarcelino last edited by
@pkilcoyne
Group ID is not a LoRa radio parameter so I'd guess it's just some data that gets sent - like a number prefix.
Once you match the two unit's radio parameters try to see if you receive any data on the LoPy if you try to send from your device... maybe the Group ID will show up there.
I guess you can also ask the manufacturer to explain what Group ID is.
@jmarcelino said in Send Lora message to non pycom device:
Hi again,
thanks for the link, I've looked through it and I've matched relevant parameters. Also, I've contacted the manufacturer and they said I also need to match the "group ID", default is 1234, but I cannot see any reference to this parameter in the Pycom documentation.
Any ideas?
Thanks,
Paul
- jmarcelino last edited by jmarcelino
@pkilcoyne
All the information you need how to set the module and what frequencies their “channels” really mean is in their datasheet.
You just need to come up with a common set of values for both modules and in the LoPy case set those when initializing the LoRa class
os.uname()
(sysname='LoPy', nodename='LoPy', release='1.10.1.b1', version='v1.8.6-839-g536c958c on 2017-11-15', machine='LoPy with ESP32', lorawan='1.0.0')
Antenna attached.
I don't know how to match band plan / channel to the frequency I'm sending on :(
Do you mean UART data rates or rates at which I send over the air? Not sure how to adjust these.
Thanks,
Paul
@xykon said in Send Lora message to non pycom device:
Can you please confirm which module you're trying to use?
Yes sorry about my first post, the latter module is the one I'm actually using, the ERA-LORA.
For the ERA-LORA there is a companion software where you can select the following attributes:
Radio power level: P9 - the highest
Channel: 0 ( 0-31)
bandwidth: 500kHz
band plan: b0 - 869.85MHz
Spreading Factor: SF10
I just don't know how to match the above with the LoPy modules so they can communicate.
Regards,
Paul
- Xykon administrators last edited by
@pkilcoyne That is not the same module you linked in your initial post.
Can you please confirm which module you're trying to use?
@xykon said in Send Lora message to non pycom device:
Maybe I'm missing something but I can't see any mention of that thing using LoRa.
I found this, which mentions LoRa, on their website:
Curious...
Thanks all,
so the EasyRadio modules not being LoRa I wasn't testing like with like...got it.
Many thanks,
Paul
- jmarcelino last edited by
Yes, the LPRS easyRadio system is based on the TI CC430F5137 and uses FSK not LoRa.
- Xykon administrators last edited by
@pkilcoyne said in Send Lora message to non pycom device:
According to the datasheet: Utilising proprietary LPRS easyRadio technology operating in the 868MHz (UK & Europe) & 915MHz (US) Industrial Scientific & Medical (ISM) bands the Connect2Pi USB ‘dongle’ provides a simple ‘wireless bridge’ between Raspberry Pi (Pi2Pi), a Raspberry Pi and a PC or any other device that supports USB serial communications.
Maybe I'm missing something but I can't see any mention of that thing using LoRa.
@pkilcoyne A few things to check:
- have you properly upgraded the LoPy firmware (os.uname will tell you the version you're running)?
- do you have an antenna connected to the LoPy, on the right port (the one next to the LED)?
- does the band plan / channel match the frequency you're sending on?
- have you tried setting different data rates? | https://forum.pycom.io/topic/2174/send-lora-message-to-non-pycom-device/16 | CC-MAIN-2020-50 | refinedweb | 1,140 | 62.88 |
2007/1/19, Stepan Mishura <stepan.mishura@gmail.com>:
I'm afraid it is... Both RI and Harmony do not parse these symbols in the
URL class. Both of them passed the following test case:
public class URLTest extends TestCase {
    public void test_pathname() throws Exception {
        URL url1 = new URL("file:/home/../home/1.txt");
        assertEquals("/home/../home/1.txt", url1.getFile());
        URL url2 = new URL("file:/home/1.txt");
        assertEquals("/home/1.txt", url2.getFile());
        assertFalse(url1.equals(url2));
    }
}
Thus, perhaps it is a good choice to add a method such as
PolicyUtils.normalizeURL(String url). Any comments?
--
Best regards,
Ruth Cao
China Software Development Lab, IBM | http://mail-archives.apache.org/mod_mbox/harmony-dev/200701.mbox/%3C843dd4f00701190431w1bb22b92h3c27694ae5355c1b@mail.gmail.com%3E | CC-MAIN-2015-06 | refinedweb | 106 | 51.85 |
In Machine Learning, StandardScaler is used to rescale the distribution of values so that the mean of the observed values is 0 and the standard deviation is 1. In this article, I will walk you through how to use StandardScaler in Machine Learning.
StandardScaler is an important technique that is mainly applied as a preprocessing step before many machine learning models, in order to standardize the range of the features of the input dataset.
Also, Read – Why Python is the best language for Machine Learning.
Some machine learning practitioners tend to standardize their data blindly before each machine learning model without making the effort to understand why it should be used, or even whether it is needed or not. So you need to understand when you should use the StandardScaler to scale your data.
When and How To Use StandardScaler?
StandardScaler comes into play when the features of the input dataset differ greatly in their ranges, or simply when they are measured in different units.

StandardScaler removes the mean and scales the data to unit variance. However, outliers have an influence when computing the empirical mean and standard deviation, which narrows the range of feature values.

These differences in the initial features can cause problems for many machine learning models. For example, for models based on distance computations, if one of the features has a wide range of values, the distance will be governed by that particular feature.
The idea behind StandardScaler is that variables measured at different scales do not contribute equally to the model's fit and learned function, and could end up creating a bias.

So, to deal with this potential problem, we typically standardize the data (μ = 0, σ = 1) before feeding it into the machine learning model.
Now, let’s see how to use StandardScaler using Scikit-learn:
from sklearn.preprocessing import StandardScaler
import numpy as np

# 4 samples/observations and 2 variables/features
X = np.array([[0, 0], [1, 0], [0, 1], [1, 1]])

# the scaler object (model)
scaler = StandardScaler()

# fit and transform the data
scaled_data = scaler.fit_transform(X)
print(X)
[[0 0]
 [1 0]
 [0 1]
 [1 1]]
print(scaled_data)
[[-1. -1.] [ 1. -1.] [-1. 1.] [ 1. 1.]]
To verify the mean of features is 0:
scaled_data.mean(axis = 0)
array([0., 0.])
I hope you liked this article on the StandardScaler in Machine Learning. Feel free to ask your valuable questions in the comments section below.
Also, Read – How to predict IPL winner with Machine Learning. | https://thecleverprogrammer.com/2020/09/22/standardscaler-in-machine-learning/ | CC-MAIN-2021-04 | refinedweb | 448 | 52.39 |
| I would also want to echo Simon's comment that the new hierarchical
| namespace and library proposal should go a long way to making the
| library situation better in the very near future. We are currently
| discussing a general layout and re-naming of the existing modules from
| hslibs. If Hugs users and developers don't join in the discussion,
| then we could be wasting our effort. We really want to develop
| a de-facto cross-compiler standard, not just a one-compiler or
| two-compiler standard.

...and indeed, no one from OGI is subscribed to either the FFI or the Libraries discussion list. I'm ccing some of them so they can take a look at the thread:

Staying out of these discussions is a perfectly reasonable choice on their part (there are only so many hours in the day) but it does mean that it's unreasonable to expect Hugs to track either debate.

In the minutes of the Haskell Implementors Meeting in Jan we had:

    Hugs: OGI in maintenance mode. There's a danger that Hugs will
    gradually die, which none of us want. One idea: advertise openly for
    a home for Hugs. JOHN will take this suggestion to OGI.

John, did anything come of this?

Simon
Next: Mixed-radix FFT routines for complex data, Previous: Overview of complex data FFTs, Up: Fast Fourier Transforms [Index]
The radix-2 algorithms described in this section are simple and compact, although not necessarily the most efficient. They use the Cooley-Tukey algorithm to compute in-place complex FFTs for lengths which are a power of 2—no additional storage is required. The corresponding self-sorting mixed-radix routines offer better performance at the expense of requiring additional working space.
All the functions described in this section are declared in the header file gsl_fft_complex.h.
These functions compute forward, backward and inverse FFTs of length
n with stride stride, on the packed complex array data
using an in-place radix-2 decimation-in-time algorithm. The length of
the transform n is restricted to powers of two. For the
transform version of the function the sign argument can be
either
forward (-1) or
backward (+1).
The functions return a value of
GSL_SUCCESS if no errors were
detected, or
GSL_EDOM if the length of the data n is not a
power of two.
These are decimation-in-frequency versions of the radix-2 FFT functions.
Here is an example program which computes the FFT of a short pulse in a sample of length 128. To make the resulting Fourier transform real the pulse is defined for equal positive and negative times (-10 … 10), where the negative times wrap around the end of the array.
#include <stdio.h>
#include <math.h>
#include <gsl/gsl_errno.h>
#include <gsl/gsl_fft_complex.h>

#define REAL(z,i) ((z)[2*(i)])
#define IMAG(z,i) ((z)[2*(i)+1])

int
main (void)
{
  int i;
  double data[2*128];

  for (i = 0; i < 128; i++)
    {
      REAL(data,i) = 0.0;
      IMAG(data,i) = 0.0;
    }

  REAL(data,0) = 1.0;

  for (i = 1; i <= 10; i++)
    {
      REAL(data,i) = REAL(data,128-i) = 1.0;
    }

  for (i = 0; i < 128; i++)
    {
      printf ("%d %e %e\n", i, REAL(data,i), IMAG(data,i));
    }
  printf ("\n");

  gsl_fft_complex_radix2_forward (data, 1, 128);

  for (i = 0; i < 128; i++)
    {
      printf ("%d %e %e\n", i,
              REAL(data,i)/sqrt(128), IMAG(data,i)/sqrt(128));
    }

  return 0;
}
Note that we have assumed that the program is using the default error
handler (which calls
abort for any errors). If you are not using
a safe error handler you would need to check the return status of
gsl_fft_complex_radix2_forward.
The transformed data is rescaled by 1/\sqrt n so that it fits on the same plot as the input. Only the real part is shown, by the choice of the input data the imaginary part is zero. Allowing for the wrap-around of negative times at t=128, and working in units of k/n, the DFT approximates the continuum Fourier transform, giving a modulated sine function.
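The same computation is easy to cross-check outside GSL, for example with NumPy (not part of GSL; numpy.fft.fft uses the same forward sign convention, and we apply the same 1/√n rescaling as the C program):

```python
import numpy as np

n = 128
data = np.zeros(n, dtype=complex)

# same pulse as the C program: defined for times -10..10,
# with negative times wrapped around the end of the array
data[0] = 1.0
for i in range(1, 11):
    data[i] = data[n - i] = 1.0

# forward FFT, rescaled by 1/sqrt(n) just like the C example
fft = np.fft.fft(data) / np.sqrt(n)

print(np.allclose(fft.imag, 0))  # True: the input is real and even
print(fft.real[0])               # peak value 21/sqrt(128)
```

Because the input sequence is real and symmetric (data[i] == data[n-i]), the transform is purely real, in agreement with the remark above that the imaginary part is zero.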
Master Java In A Week
Master Java In A Week
Master Java Programming Language in a week. This tutorial is good for
beginners in Java and it will teach you the basics and advanced concepts of Java
programming Language. Java is versatile, platform
Java Programming: Chapter 7 Index
| Main Index int buffer.
;intBuf.get(index));
}
}
Output
C:>java...How to get specific index value from int buffer.
In this tutorial, we will discuss how to get specific index value from
int buffer.
IntBuffer
interfaces - Java Beginners
visit the following links:
Hope that it will be helpful for you.
Thanks
java - Java Beginners
://
http.../java/master-java/abstract-class.shtml your website is best
look i am intension of this java
java
java hi im new to java plz suggest me how to master java....saifjunaid@gmail.com
Masters of Java Assignment Plugin
it significantly easier to develop Master of Java
Assignments by making use of the Eclipse Java IDE.
The plugin has the following features... Masters of Java Assignment Plugin
java what are abstract methods
Please visit the following link:
charAt() method in java
charAt() method in java
In this section you will get detail about charAt() in java. This method comes
in java.lang.String package. charAt() return the character at
the given index within the string, index starting from 0
java from Scratch - Java Beginners
/java/master-java/index.shtml
Thanks...java from Scratch Hi experts,
I am new one in dotnet ,I want to switch to java side,& having theritically knowledge of java,Kindly suggest me
java
, visit the following links:
polymorphism - Java Beginners
,
Please visit the following links:
Hope that it will be helpful for you.
Thanks
java
://
java
??all the database must be in master computer only and all other computer must retrieve that data from the master computer only.please help
abstract class - Java Interview Questions
://
Hope that it will be helpful for you.
Thanks
java
and explain how to correct it.
i) For(index=0.1;index!=1.0;index+=0.1)
System.out.println("index="+index);
ii)Switch(x)
{
case 1:
System.out.println... of all your for statement is not correct.
Do correction: for(double index=0.1;index
Java Courses
RoseIndia Java Courses provided online for free can be utilized by beginners
in Java anywhere in the world. They can use this course to master Java language
and become Java developers. The courses provided here cover every topic
java
java based on id the message should display in bean or java file[tool tip]
Hi Friend,
Please visit the following link:
Hope that it will be helpful for you.
Thanks
java - Java Interview Questions
Helpful Java Interview questions Need Helpful Java Interview questions
about package - Java Beginners
://
Hope that it will be helpful for you.
Thanks... in the root directory.In my system java is installed in c:\ibm\java142\bin how can i run
What is Index?
What is Index? What is Index
java - Java Interview Questions
link:
Dynamic... following URL.
Hope
Java - Java Beginners
Java Java Master get ready....
Can we make plugin for Browser in java ?
Any reply is appriciated
A Program to find area and perimeter of Rectangle
in
java. If you are new to java programming language then the example will help you...
and perimeter of a rectangle in java.
Code Description : Using this example
Real time examples - Java Beginners
/java/master-java/method_overloading.shtml overloading concept in java and explain with real time examples?
java code - Java Beginners
,
Please visit the following links: code when will go for abstract and when will go for interface
Java made easy
but to master it. The duration of the Java course is not fixed...RoseIndia has made learning Java easy with its online Java course, Java
tutorials, java projects, Java programs, Java video tutorials and Java lectures
Java for complete beginners
Java Guide is available at RoseIndia that help beginners to master...
and application development for mobile phone, Java is in great demand and so are
Java developers. Java platform is flexible, a program developed in it can run
SCJP Module-1 Question-11
SCJP Module-5 Question Web Services Online Training
of Java Web services.
syllabus and what it takes to master java web services...Java Web Services Online Training
Java Web Services online training enables students and learners to make
interactive web based services using Java and its
Java
Java 1) WAP in java to accept the full name of a person and output...();
input.close();
int index = name.lastIndexOf( " " ) + 1;
String st...)+".");
}
scan.close();
System.out.print(name.substring(index
java
helpful for my project so plsss kindly respond soon.
java
java .doc to html converter in java
it's urgent buddies
Hi Friend,
Try the following code:
import java.io.*;
import...
poi-3.7-20101029.jar
Hope that the above code will be helpful for you.
Thanks
java - Java Beginners
:// hi,
i'm chandrakanth.k. i dont know about java. but i'm
Where to learn java programming language
and want to learn Java and become master of the Java programming language? Where... fast and easily.
New to programming
Learn Java In A Day
Master Java Tutorials
Installing Java (JDK 7) on Windows 7
Java tutorials index page
Java frameworks
Getting a absolute path
;
If you are new in Java programming then our tutorials
and examples will be helpful in understanding Java programming in the most
simplest way. Here...( "java" + File.separatorChar+ str);
java lab programs - Java Beginners
java lab programs 1. Develop a Java package with simple Stack... for Complex numbers in Java. In addition to methods for basic operations on complex... to demonstrate dynamic polymorphism.
5. Design a Java interface for ADT Stack. Develop two
Infix to Prefix - Java Beginners
infix) {
StringBuffer sb = new StringBuffer(infix);
int index...);
if (tempIndex < index && tempIndex >= 0) {
index = tempIndex;
operand = operators[x
core java - Java Beginners
/java/language/java-keywords.shtml
Thanks...core java how many keywords are in java? give with category?
java with xml parsing - Java Beginners
java with xml parsing Hi,
I need the sample code for parsing complex data xml file with java code.
Example product,category,subcategory these type of xml files and parse using java.
Please send the code immediately its very
Learn Java online
and
eventually master it.
The other fact is learning Java online is free...Learning Java is now as easy as never before. Many websites today provide the
facility of learning Java programming online by providing enough material like
Write a program for calculating area and perimeter of a rectangle
are a newbie in Java programming then our
tutorials and examples will be helpful in understanding Java programming in the
most simplest way. Here after reading
Search index
core java - Java Beginners
");
}
}
-------------------------------------------
Read for more information.
Thanks...core java how to write a simple java program? Hi friend
java - Java Beginners
java what is inheritance Hi Friend,
Please visit the following link:
Hope that it will be helpful for you.
Thanks
Declaring Data Types in Java
Declaring Data Types in Java What is the best way of declaring data types in Java?... this will be helpful for you.
Java Code - Java Beginners
Java Code Write a Java Program that display an Image and Apply... the following links:
Drop Index
Drop Index
Drop Index is used to remove one or more indexes from the current database.
Understand with Example
The Tutorial illustrate an example from Drop Index
java servlet - Java Beginners
java servlet how to use java servlet? and what the purpose of servlet? Hi Friend,
Please visit the following link:
Hope that it will be helpful for you.
Thanks
java - Java Interview Questions
java How to form a Singleton class? Hi Friend,
Please visit the following link:
Hope that it will be helpful for you.
Thanks
java - Java Beginners
java Develop a multi-threaded GUI application of your choice. Hi Friend,
Please visit the following link:
Hope that it will be helpful
java - Java Interview Questions
Friend,
Please visit the following links:
java - Java Beginners
the following links:
http... may refer to different methods.
In java,there are two type of polymorph | http://www.roseindia.net/tutorialhelp/comment/100092 | CC-MAIN-2014-23 | refinedweb | 1,358 | 56.45 |
request
Hai sir, i well understand the concepts at same time plz responds of ours with suitable examples and deployement description. and my personal request plz send a full dsetail of j2ee notes to my email sir plzzzzzzzzzzzzzzzz
throws Java Keyword
throws Java Keyword
throws " is a keyword defined in the java
programming language. Keywords... :
-- The throws keyword
in java programming language is applicable to a method
help please!!! T_T
help please!!! T_T what is wrong in this?:
import java.io.*;
class...(String[]args) throws Exception
{
String name1;
String name2;
System.out.println... BufferedReader (reader);
public static void main(String[]args) throws Exception
throws example program java
throws example program java how to use throws exception in java?
The throws keyword is used to indicate that the method raises.... The throws keyword performs exception handling and display the exception
throws IOException - Java Beginners
://
Thanks...throws IOException throws IOException means
Hi Friend
request to help
request to help how to write the program for the following details in java
Employee Information System
An organization maintains the following data about each employee.
Employee Class
Fields:
int Employee ID
String Name
T - Java Terms
java keyword
throws " is a keyword defined in the java
programming...
T - Java Terms
....
Java ?throw? Keyword
Exceptions are thrown to signal
Difference between throw and throws in java.
Difference between throw and throws in java.
Throws and throw both are keywords in java, used for handling the exception.
When a method is not able to handle the checked exception, it
should declared with throws keyword
Java throw and throws Keyword Example
Java throw and throws Keyword Example
In this section we will read about how to throw the caught exception using
throw and throws keywords in Java.
throws and throw keywords in Java are used
in the Java Exceptions. Keyword throw
Java throw ,throws
Java throw ,throws What is the difference between throw and throws
Java throw and throws
Java throw and throws What is the difference between throw and throws | http://www.roseindia.net/tutorialhelp/allcomments/35932 | CC-MAIN-2015-18 | refinedweb | 334 | 61.56 |
Interesting thing: technically there is a way to call new on an interface to create an object. How? Using a feature in the C# compiler for COM-interop support:
using System.Runtime.InteropServices;

class Program
{
    static void Main(string[] args)
    {
        IFoo foo = new IFoo();
    }
}

class Foo : IFoo { }

[ComImport]
[Guid("DC1CB768-0BE5-4200-8D0A-C844BFBE3DE7")]
[CoClass(typeof(Foo))]
interface IFoo { }
Here you specify that Foo is a co-class for the interface IFoo using the three attributes CoClass, ComImport and Guid. It does not matter that no real COM objects are involved; the C# compiler is fine with that. What it does is replace the call to the IFoo() "constructor" with the equivalent constructor on the co-class Foo.
Interestingly enough, Foo doesn't even have to implement IFoo - the program will compile just fine and it will create an instance of type Foo at runtime, but it will fail when we try to put an object of type Foo into a local variable of type IFoo.
It's yet another way to instantiate a type without mentioning it in source code. As such, it can potentially be a way to achieve what factory methods do - instead of mentioning the concrete type in instantiations all over your code, you can just have a centralized place where you say what type to instantiate. With this you can easily substitute the concrete type via the CoClass attribute.
However, this is not as powerful as factory methods (you have to recompile your app to change concrete class and you can't have multiple concrete types at the same time).
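For contrast, a conventional factory method achieves the same substitution without any compiler trickery: callers never name the concrete class, so swapping implementations is a one-line change in the factory. A minimal sketch follows (in Java rather than C#, and with entirely hypothetical names):

```java
// A conventional factory method: the concrete class is named in exactly
// one place, so it can be swapped without touching call sites.
interface Greeter {
    String greet();
}

class EnglishGreeter implements Greeter {
    public String greet() { return "hello"; }
}

class GreeterFactory {
    // Centralized instantiation point, analogous to the CoClass indirection.
    static Greeter create() {
        return new EnglishGreeter();
    }
}
```

Callers write `GreeterFactory.create()` instead of `new EnglishGreeter()`, which is what gives the factory approach its flexibility at runtime.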
I wouldn't encourage using this stuff anyway, because it will probably confuse readers of your source. But regardless, this *is* an interesting technique.
Every time a developer discovers one of the ever increasing number of ways to subvert the intent of the type system in C#, a fairy dies.
"However, this is not as powerful as factory methods (you have to recompile your app to change concrete class and you can’t have multiple concrete types at the same time)."
In this case, Unity is probably the solution.
commongenius: I respectfully disagree. My version below:
Every time a developer USES one of the ever increasing number of ways to subvert the intent of the type system in C# WITHOUT THINKING, a fairy dies.
Every time a developer LEARNS more about the details and intricacies of the type system in C# TO SOLVE PROBLEMS MORE EFFECTIVELY, a rainbow shines and a unicorn is born.
Matthieu: wow, I didn't even know about that. It all looks like MEF to me 🙂 Need to check it out.
Kirill, when would anyone EVER need to use this to solve OO problems more efficiently?
Awesome 🙂
When I knew that you don’t have to implement IEnumerable in your collection to use it in foreach statement, I had about the same feeling 🙂
Matt: I have NO IDEA. But I personally feel it’s important to know all the options. It’s like having a rich toolset – you never know when you might need a tool.
Eugene: we understand each other 🙂
Kirill,
Sadly I have become rather cynical about the state of the programming profession today, to the point where I consider my version and the first half of your version to be roughly equivalent.
I will concede however, that, strictly speaking, your version is more accurate, to the relief of fairies everywhere.
commongenius: ha-ha, welcome to the club. Being cynical about the state of the programming profession today is my permanent state. One discovery I made is that our profession nowadays is all about human nature, not computers. In the end, computers and software fade away in importance and are just an invisible medium that connects humans. When I look at certain code, I clearly see a human who wrote it back then and it is this human and their nature at its purest that I connect to by means of looking at the produced code. I won’t be surprised if we’ll require a degree in psychology to become a computer scientist in the near future 😉 Surprisingly, computer science is about humans, not computers.
The classic COM factory method :). Thanks Kirill for posting this. By the way, you know that the CoClass can be looked up in the registry if it is not specified in the attributes (via the interface Guid), giving you more powerful ways to consume COM.
I have a class named Md which has a method named setLI:
public void setLI(String loan) {
this.onloan = loan;
}
I am trying to call this method from a class named GUI in the following way:
public void loanItem() {
Md.setLI("Yes");
}
But I am getting the error:

non-static method setLI(java.lang.String) cannot be referenced from a static context

I have looked at other topics with the same error message but nothing is clicking!
setLI() isn't a static method, it's an instance method, which means it belongs to a particular instance of the class rather than to the class itself.

Essentially, you haven't specified which Md object you want to call the method on; you've only specified the class name. There could be thousands of Md objects, and the compiler has no way of knowing which one you meant, so it generates an error accordingly.
You probably want to pass in an Md object on which to call the method:
public void loanItem(Md m) {
m.setLI("Yes");
}
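Putting the whole scenario together as a runnable sketch (class names simplified; the loan field is assumed to start as "No" for illustration):

```java
// The non-static method must be called on an instance, not on the class.
class Md {
    private String onloan = "No";

    public void setLI(String loan) {
        this.onloan = loan;
    }

    public String getLI() {
        return onloan;
    }
}

class Gui {
    // The fix: accept (or create) an Md instance and call the method on it.
    public String loanItem(Md m) {
        m.setLI("Yes");
        return m.getLI();
    }
}
```

Calling `new Gui().loanItem(new Md())` compiles and runs because the method is invoked on an object, not on the class name.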
Help with constructing a selection sort

Using a selection sort, I need to sort a two-dimensional array by its first dimension, with the values in each row separated by : (colon). I already have the two-dimensional array figured out. Thanks in advance!
Selection Sort in Java

Selection sort is used to sort the unsorted values in an array. The algorithm keeps an index for the minimum of the unsorted part: starting from the first unsorted position, it scans the rest of the array for a smaller value, then swaps that value into place. Repeating this step for each position sorts the whole list.
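A selection sort over a two-dimensional array, ordering rows by their first element and printing each row colon-separated as the question asks, can be sketched like this (method names are my own):

```java
// Selection sort of a 2-D array, ordering rows by their first element.
class SelectionSort2D {
    static void sortByFirst(int[][] rows) {
        for (int i = 0; i < rows.length - 1; i++) {
            int min = i;
            // Find the row with the smallest first element in the unsorted part.
            for (int j = i + 1; j < rows.length; j++) {
                if (rows[j][0] < rows[min][0]) {
                    min = j;
                }
            }
            // Swap whole rows so each row's values stay together.
            int[] tmp = rows[i];
            rows[i] = rows[min];
            rows[min] = tmp;
        }
    }

    // Render a row as colon-separated values, e.g. {3, 10} -> "3:10".
    static String rowToString(int[] row) {
        StringBuilder sb = new StringBuilder();
        for (int k = 0; k < row.length; k++) {
            if (k > 0) sb.append(':');
            sb.append(row[k]);
        }
        return sb.toString();
    }
}
```

Sorting `{{12, 9}, {4, 99}, {120, 1}, {3, 10}}` with `sortByFirst` reorders the rows to first elements 3, 4, 12, 120.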
VIS.JS Visualization in Jupyter Notebook
Visualization in Jupyter Notebook using vis.js
What is vis.js?
vis.js is a dynamic, browser-based visualization library. The library is designed to be easy to use, to handle large amounts of dynamic data, and to enable manipulation of and interaction with the data. It allows us to visualize data in a variety of forms and to add control on physics options.
Download page: vis.js
What is Jupyter?
The Jupyter Notebook is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and explanatory text.
Download page: jupyter
What is Anaconda?
Anaconda is the leading open data science platform powered by Python. It bundles Jupyter Notebook and includes over 100 of the most popular Python, R and Scala packages for data science.
Download page: anaconda
Why should I care?
The reason for Jupyter Notebook's success is that it excels at a form of programming called literate programming. Literate programming is a software development style pioneered by Stanford computer scientist Donald Knuth. This type of programming emphasizes a prose-first approach, where exposition in human-friendly text is punctuated with code blocks. It excels at demonstration, research, and teaching objectives, especially in science.
By adding advanced visualization tools in combination with Javascript, we can extend its functionality and allow fine-grained tuning.
Quick Notes
- We will use Anaconda 4.3.1 with the Python 3.6 version.
- vis.js version 4 is used to visualize the graph.
- Javascript and Python knowledge is advised for this tutorial.
- This tutorial doesn't dwell on configuring and installing anaconda.
Installing Required Library
Before we start, we will need Anaconda, which bundles Jupyter Notebook and a bunch of useful libraries. Once it is installed, we will download vis.js and unzip it into the desired location.

We will need to configure the vis.js distribution folder inside .jupyter/jupyter_notebook_config.py by adding the following line (where dist is the location of the distribution folder of the vis.js libraries):

c.NotebookApp.extra_static_paths = ["/Users/isai/Documents/Jupyter/Viz/dist"]
Starting Jupyter Notebook
To start Jupyter Notebook open a terminal and execute:
jupyter notebook
Once the notebook starts, our default browser will open the Jupyter navigator. We will create a new file in the desired location.
Creating the Visualization

There are two options to showcase networks with vis.js:

- Processing the information in Javascript and displaying the results using inline HTML.
- Processing the information in Python, sending it to inline Javascript, and showing it in inline HTML.
Optionally, we can add physics controls in inline HTML.
Inline Javascript
To display Javascript in Jupyter Notebook, we will first create a Python cell in which we import the iPython utilities required to show HTML snippets.
from IPython.core.display import display, HTML
from string import Template
import json
The next step is to create an inline HTML object that will contain the visualization. Here, we create a div element in the notebook.
%%html
<div id="mynetwork"></div>
And finally we add an inline Javascript cell which uses require to specify the vis.js library that we previously added to the Jupyter configuration; we create a network and add the container div element to it.
%%javascript
requirejs.config({
    paths: {
        vis: 'vis'
    }
});
require(['vis'], function(vis){
    // create an array with nodes
    var nodes = [
        {id: 1, label: 'Node 1'},
        {id: 2, label: 'Node 2'},
        {id: 3, label: 'Node 3'},
        {id: 4, label: 'Node 4'},
        {id: 5, label: 'Node 5'}
    ];
    // create an array with edges
    var edges = [
        {from: 1, to: 2},
        {from: 1, to: 3},
        {from: 2, to: 4},
        {from: 2, to: 5}
    ];
    // create a network
    var container = document.getElementById('mynetwork');
    var data = {
        nodes: nodes,
        edges: edges,
    };
    var options = {
        width: '800px',
        height: '400px'
    };
    var network = new vis.Network(container, data, options);
});
Once we run all the cells, we get the following visualization under the HTML cell.
Python to Inline Javascript
Now, we're going to display this graph in the notebook with vis.js receiving the data from Python. The first step is to bring this graph to Javascript. We choose here to export the graph in JSON. Since we want to avoid saving the JSON file to disk, we translate the data to the frontend.

Note that vis.js expects each edge to be an object with a source and a target.

from IPython.core.display import display, HTML
from string import Template

nodes = [
    {'id': 1, 'label': 'Node 1'},
    {'id': 2, 'label': 'Node 2'},
    {'id': 3, 'label': 'Node 3'},
    {'id': 4, 'label': 'Node 4'},
    {'id': 5, 'label': 'Node 5'}
]
edges = [
    {'from': 1, 'to': 2},
    {'from': 1, 'to': 3},
    {'from': 2, 'to': 4},
    {'from': 2, 'to': 5}
]
This one's a bit of a hack. Since the %javascript magic runs client-side, the window object is set, so we bind the data to window to make it globally accessible.
All browsers support the window object. It represents the browser's window. All global JavaScript objects, functions, and variables automatically become members of the window object. Global variables are properties of the window object.
But wait, it gets better: Python's json.dumps transforms the output into a JSON string! The only trick now is managing to execute some JS code that loads the JSON dump.
from IPython.display import Javascript
import json

# Transform the graph into a JSON graph
data = {"nodes": nodes, "edges": edges}
jsonGraph = json.dumps(data, indent=4)

# Send to Javascript
Javascript("""window.jsonGraph={};""".format(jsonGraph))
The next step is to create an inline HTML object that will contain the visualization. Here, we create a div element in the notebook.
%%html
<div id="mynetwork"></div>
And finally, we add an inline Javascript cell which receives the nodes and edges from the global window.jsonGraph variable.
%%javascript
requirejs.config({
    paths: {
        vis: 'vis'
    }
});
require(['vis'], function(vis){
    // create a network
    var container = document.getElementById('mynetwork');
    var options = {
        width: '800px',
        height: '400px'
    };
    // We load the JSON graph we generated from iPython input
    var graph = window.jsonGraph;
    // Display Graph
    var network = new vis.Network(container, graph, options);
});
Once we run all the cells, we get the following visualization under the HTML cell.
Wrapping Things Up
By adding vis.js visualization to Jupyter, we extend its functionality, allowing us to process and retrieve the data in Python while showing its visualization in Javascript. This not only displays the graph but also makes it possible to add physics controls and personalized actions.
Have You Tried The Following?
Now that you know the basics, how about checking out some examples from vis.js and implementing them? How about adding physics controls?

Play around and see what happens. Create your own visualization; the idea is to learn new things. If you like this tutorial, share it with your friends.
| https://www.codementor.io/isaib.cicourel/visjs-visualization-in-jupyter-notebook-phgb3fjv0 | CC-MAIN-2018-05 | refinedweb | 1,105 | 58.38 |
mexopencv
Collection and a development kit of Matlab mex functions for the OpenCV library, suitable also for the development of an original mex function.
Contents
The project tree is organized as follows.
+cv/             directory to put compiled mex files, wrappers, or help files
Doxyfile         config file for doxygen
Makefile         make script
README.markdown  this file
doc/             directory for documentation
include/         header files
lib/             directory for compiled c++ library files
samples/         directory for sample application codes
src/             directory for c++ source files
src/+cv/         directory for mex source files
src/+cv/private  directory for private mex source files
test/            directory for test scripts and resources
utils/           directory for utilities
Compile
Prerequisite:
- Unix: matlab, opencv (>=2.4.0), g++, make, pkg-config
- Windows: matlab, opencv (>=2.4.0), supported compiler
For opencv older than v2.4.0, check out the corresponding v2.x branch.
Unix
First make sure you have OpenCV installed in the system. If not, install the package available in your package manager (e.g., libopencv-dev in Debian/Ubuntu, opencv-devel in Fedora, opencv in Macports), or install the source package from . Make sure the pkg-config command can identify the opencv path. If you have all the prerequisites, go to the mexopencv directory and type:
$ make
This will build and place all mex functions inside
+cv/.
Specify your Matlab directory if Matlab is installed somewhere other than /usr/local/matlab:
$ make MATLABDIR=/Applications/MATLAB_R2012a.app
Optionally you can test the library functionality
$ make test
If Matlab says 'Library not loaded' or reports any other error in the test, it is likely a compatibility issue between a system library and Matlab's internal library. You might be able to fix this issue by preloading the library file. On Linux, set the correct library path in the LD_PRELOAD environment variable. For example, if you see a GLIBCXX_3.4.15 error in mex, use the following to start Matlab.
$ LD_PRELOAD=/usr/lib/libstdc++.so.6 matlab
Note that you need to find the correct path to the shared object, for example /usr/lib64/ instead of /usr/lib/. You can use the locate command to find the location of the shared object.
location of the shared object.
On Mac OS X, this variable is named DYLD_INSERT_LIBRARIES. You can use the ldd command-line tool to check the dependencies of the mex file on Linux; on Mac, you can use the otool -L command.
Developer documentation can be generated with doxygen, if installed:
$ make doc
This will create html and latex files under
doc/.
Windows
Make sure you have OpenCV installed in the system and the Path system variable correctly set up, e.g., c:\opencv\build\x86\vc10\bin. See for the instruction.

Also make sure you install a compiler supported by Matlab. See for the list of supported compilers for different versions of Matlab. Windows 64-bit users need to install the Windows SDK.
Once you satisfy the above requirement, in the matlab shell, type
>> cv.make
to build all mex functions. By default, mexopencv assumes the OpenCV library is
installed in
C:\opencv. If this is not the case, specify the path as an
argument.
>> cv.make('opencv_path', 'c:\your\path\to\opencv')
Note that if you build OpenCV from source, this path specification does not work. You need to replace the dll files in the OpenCV package with the newly built binaries, or modify +cv/make.m to correctly link your mex files with the library.
To remove existing mexopencv binaries, use the following command.
>> cv.make('clean')
Missing stdint.h in Visual Studio 2008
Visual Studio 2008 and earlier do not comply with the C99 standard and lack the stdint.h header file. Luckily, the header file is available on the Web; for example,
Place this file under the include directory in the mexopencv package.
Error: Invalid MEX file or Segmentation fault
The OpenCV Windows package contains C++ binary files compiled with the _SECURE_SCL=1 flag, but the mex command in Matlab does not use this option by default, which results in an Invalid MEX file error or a segmentation fault on execution. The current version of the cv.make script adds the _SECURE_SCL=1 flag to the build command and should have no problem with the distributed binary package.
If you see Invalid MEX file or a segmentation fault with manually built OpenCV dll's, first make sure you compile OpenCV with the same _SECURE_SCL flag as the mex command. The default mex configuration, which is created with the mex -setup command in Matlab, is located in the following path in recent versions of Windows:

C:\Users\(Username)\AppData\Roaming\MathWorks\MATLAB\(version)\mexopts.bat

Open this file and edit the /D_SECURE_SCL option.

If you see the Invalid MEX file error even with a matched _SECURE_SCL flag, it probably indicates some other compatibility issue. Please file a bug report at .
Usage
Once mex functions are compiled, you can add path to the project directory and
call mex functions within matlab using package name
cv.
addpath('/path/to/mexopencv');
result = cv.filter2D(img, kern);  % with package name 'cv'

import cv.*;
result = filter2D(img, kern);     % no need to specify 'cv' after import
Note that some functions, such as cv.imread, overload Matlab's built-in
function of the same name when imported. Use the scoped name when you need to
avoid a name collision. It is also possible to import individual functions.
Check help import in Matlab.
Check the list of available functions with the help command in Matlab.
>> help cv;                 % shows list of functions in package 'cv'

  Contents of cv:

  GaussianBlur   - Smoothes an image using a Gaussian filter
  Laplacian      - Calculates the Laplacian of an image
  VideoCapture   - VideoCapture wrapper class
  ...

>> help cv.VideoCapture;    % shows documentation of VideoCapture

  VIDEOCAPTURE  VideoCapture wrapper class

  Class for video capturing from video files or cameras.
  The class provides Matlab API for capturing video from cameras or
  for reading video files. Here is how the class can be used:
  ...
Look at the
samples/ directory for an example of an application.
mexopencv also includes a simple documentation utility that generates HTML
help files for Matlab. The following command creates user documentation under
the doc/matlab/ directory.
addpath('utils'); MDoc;
Online documentation is available at
You can test the functionality of the compiled files with the UnitTest class
located inside the test directory.
addpath('test'); UnitTest;
Developing a new mex function:
#include "mexopencv.hpp"

void mexFunction( int nlhs, mxArray *plhs[],
                  int nrhs, const mxArray *prhs[] )
{
    // Check arguments
    if (nlhs != 1 || nrhs != 1)
        mexErrMsgIdAndTxt("myfunc:invalidArgs", "Wrong number of arguments");

    // Convert mxArray to cv::Mat
    cv::Mat mat = MxArray(prhs[0]).toMat();

    // Do whatever you want

    // Convert cv::Mat back to mxArray*
    plhs[0] = MxArray(mat);
}
// mxArray to C++
int i            = MxArray(prhs[0]).toInt();
double d         = MxArray(prhs[0]).toDouble();
bool b           = MxArray(prhs[0]).toBool();
std::string s    = MxArray(prhs[0]).toString();
cv::Mat mat      = MxArray(prhs[0]).toMat();       // For pixels
cv::Mat ndmat    = MxArray(prhs[0]).toMatND();     // For N-D array
cv::Point pt     = MxArray(prhs[0]).toPoint();
cv::Size siz     = MxArray(prhs[0]).toSize();
cv::Rect rct     = MxArray(prhs[0]).toRect();
cv::Scalar sc    = MxArray(prhs[0]).toScalar();
cv::SparseMat sp = MxArray(prhs[0]).toSparseMat(); // Only double to float

// C++ to mxArray
plhs[0] = MxArray(i);
plhs[0] = MxArray(d);
plhs[0] = MxArray(b);
plhs[0] = MxArray(s);
plhs[0] = MxArray(mat);
plhs[0] = MxArray(ndmat);
plhs[0] = MxArray(pt);
plhs[0] = MxArray(siz);
plhs[0] = MxArray(rct);
plhs[0] = MxArray(sc);
plhs[0] = MxArray(sp);  // Only 2D float to double
Check MxArray.hpp for the complete list of conversion methods.
Testing
You can optionally write unit tests for your new function by adding a test
class under the test directory:
classdef TestMyFunc
    methods (Static)
        function test_1
            src = imread('/path/to/myimg');
            ref = [1,2,3];                  % reference output
            dst = cv.myfunc(src);           % execute your function
            assert(all(dst(:) == ref(:)));  % check the output
        end

        function test_error_1
            try
                cv.myfunc('foo');           % myfunc should throw an error
                error('UnitTest:Fail','myfunc incorrectly returned');
            catch e
                assert(strcmp(e.identifier,'mexopencv:error'));
            end
        end
    end
end
In Windows, add the path to the test directory and invoke UnitTest to run all
the test routines.
Documenting
You can create Matlab help documentation for a mex function by providing a
file of the same name with a '.m' extension. For example, the help file for
filter2D.mex* would be filter2D.m. The help file should contain only Matlab
comments. An example is shown below:
%MYFUNC brief description about myfunc
%
% Detailed description of function continues
% ...
License
The code may be redistributed under the BSD license.
A Complete Beginner's Guide to React
Ali Spittel
I want to get back into writing more code-heavy content, and React is one of my favorite technologies, so I thought I would create a React intro! This post requires knowledge of HTML and JavaScript -- I am of the firm opinion that you should know these before moving on to libraries like React!
What is React
React is a JavaScript library built in 2013 by the Facebook development team to make user interfaces more modular (or reusable) and easier to maintain. According to React's website, it is used to "Build encapsulated components that manage their own state, then compose them to make complex UIs."
I'm going to use a lot of Facebook examples throughout this post since they wrote React in the first place!
Remember when Facebook moved from just likes to reactions? Instead of just being able to like posts, you can now react with a heart, or a smiley face, or a like to any post. If those reactions were primarily made in HTML, it would be a tremendous amount of work to change all of those likes to reactions and to make sure that they work.
This is where React comes in -- instead of implementing the "separation of concerns" that gets impressed upon developers from day one, we have a different architecture in React that increases modularity based on a component structure instead of separating the different programming languages.
Today, we'll keep the CSS separate, but you can even make that component specific if you want!
React vs. Vanilla JavaScript
When we talk about "vanilla" JavaScript, we are normally talking about writing JavaScript code that doesn't use additional libraries like JQuery, React, Angular, or Vue. If you would like to read more about those and what a framework is, I have a post all about web frameworks!
A couple quick notes before we begin
- To make this tutorial a little more succinct, some code examples have ... before or after them, which means that some code was omitted.
- I use Git diffs in some places to show lines of code that will change, so if you copy and paste, you need to delete the + at the beginning of the line.
- I have full CodePens with the completed versions of each section -- so you can use those to catch-up!
- More advanced concepts that aren't essential for the tutorial are in blockquotes, these are mostly just facts that I think are interesting!
Set Up
If you are creating a production React application, you will want to use a build tool, like Webpack, to bundle your code since React utilizes some patterns that won't work by default in the browser. Create React App is super helpful for these purposes, since it does most of the configuration for you!
For now, since we want to get up and running super quickly so we can write actual React code, we will be using the React CDN, which is only for development purposes! We will also use the Babel CDN so that we can use some non-standard JavaScript features (we'll talk more about that later!).
<script crossorigin src=""></script> <script crossorigin src=""></script> <script src=""></script>
I also made a Codepen template that you can use!
In a full React project, I would split my components into different files, but again, for learning purposes, we will combine our JavaScript into one file for now.
Components
For this tutorial, we will be building a Facebook status widget, since Facebook wrote React in the first place!
Think about how many places the
like widget appears on Facebook -- you can like a status, or a link post, or a video post, or a picture! Or even a page! Every time Facebook tweaks something about the like functionality, they don't want to have to do so in all of those places. So, that's where components come in! All of the reusable pieces of a webpage are abstracted into a component that can be used over and over again, and we will only have to change code in one place to update it.
Let's look at a picture of a Facebook status and break down the different components within it.
The status itself will be a component -- there are lots of statuses within a Facebook timeline, so we definitely want to be able to reuse the status component.
Within that component, we will have subcomponents or components within a parent component. Those will be reusable as well -- so we could have the like button component be a child of the
PhotoStatus component and the
LinkStatus component.
Maybe our subcomponents would look something like this:
We can even have subcomponents within subcomponents! So, the group of like, comment, and share could be its own
ActionBar component with components for liking, commenting, and sharing within it!
There are a bunch of ways you could break down these components and subcomponents depending on where you will reuse the functionality in your application.
Getting Started
I wanted to start off this tutorial with a React "Hello World" -- it is tradition after all! Then we'll move to the slightly more complex status example.
In our HTML file, let's add just one element -- a
div with an id on it. By convention, you will normally see that div have an id "root" on it since it will be the root of our React application!
<div id="root"></div>
If you're writing the code in the CodePen template, you can write this JavaScript directly in the
js section. If you are instead writing this on your computer, you will have to add a script tag with the type
text/jsx, so:
<script type="text/jsx"></script>
Now, let's get to our React code!
class HelloWorld extends React.Component {
  render() {
    // Tells React what HTML code to render
    return <h1>Hello World</h1>
  }
}

// Tells React to attach the HelloWorld component to the 'root' HTML div
ReactDOM.render(<HelloWorld />, document.getElementById("root"))
All that happens is that "Hello World" is displayed as an H1 on the page!
Let's walk through what's going on here.
First, we are using an ES6 class that inherits from the
React.Component class. This is a pattern that we will use for most of our React components.
Next, we have a method in our class -- and it's a special method called
render. React looks for the
render method to decide what to render on the page! The name makes sense.
Whatever is returned from that render method will be rendered by that component.
Finally, we have:
ReactDOM.render(<HelloWorld />, document.getElementById("root"))
We are using the ReactDOM functionality to attach our react component to the DOM.
React utilizes something called the virtual DOM, which is a virtual representation of the DOM that you would normally interact with in Vanilla JavaScript or JQuery. This
ReactDOM.render call renders the virtual DOM to the actual DOM. Behind the scenes, React does a lot of work to efficiently edit and re-render the DOM when something on the interface needs to change.
Our component,
<HelloWorld />, looks like an HTML tag! This syntax is part of JSX which is an extension of JavaScript. You can't natively use it in the browser. Remember how we're using Babel for our JavaScript? Babel will transpile (or convert) our JSX into regular JavaScript so the browser can understand it.
JSX is actually optional in React, but you'll see it used in the vast majority of cases!
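To demystify the transpile step: every JSX tag becomes a plain function call to React.createElement. Below is a rough, React-free sketch of that idea -- the createElement function here is a simplified stand-in I wrote for illustration, not React's actual implementation:

```javascript
// Simplified stand-in for React.createElement, just to show the shape
// of what Babel produces from a JSX tag.
function createElement(type, props, ...children) {
  return { type, props: props || {}, children };
}

// <h1 className="title">Hello World</h1> transpiles to roughly:
var element = createElement('h1', { className: 'title' }, 'Hello World');

console.log(element.type);            // 'h1'
console.log(element.props.className); // 'title'
console.log(element.children[0]);     // 'Hello World'
```

The real element objects carry more bookkeeping (keys, refs, and so on), but the tag-to-function-call translation is the core idea.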
Then, we are using JavaScript's built-in
document.getElementById to grab our root element we created in our HTML.
All in all, in this
ReactDOM.render statement, we are attaching our
HelloWorld component to our
div that we created in our HTML file.
Starter Code
Okay -- now that we've done a "Hello World," we can get started with our Facebook component.
First, I want you to play around with this demo. We'll be working on this throughout the rest of the tutorial. Feel free to look at the code too, but don't worry about not understanding it! That's what the rest of the tutorial is for!
Let's start off by "hard coding" the HTML for the widget:
<div class="content">
  <div class="col-6 offset-3">
    <div class="card">
      <div class="card-block">
        <div class="row">
          <div class="col-2">
            <img src="" class="profile-pic">
          </div>
          <div class="col-10 profile-row">
            <div class="row">
              <a href="#">The Zen of Programming</a>
            </div>
            <div class="row">
              <small class="post-time">10 mins</small>
            </div>
          </div>
        </div>
        <p>Hello World!</p>
        <div>
          <span class="fa-stack fa-sm">
            <i class="fa fa-circle fa-stack-2x blue-icon"></i>
            <i class="fa fa-thumbs-up fa-stack-1x fa-inverse"></i>
          </span>
        </div>
        <div>
          <hr class="remove-margin">
          <div>
            <button type="button" class="btn no-outline btn-secondary">
              <i class="fa fa-thumbs-o-up fa-4 align-middle" aria-hidden="true"></i>
              <span class="align-middle">Like</span>
            </button>
          </div>
        </div>
      </div>
      <div class="card-footer text-muted">
        <textarea class="form-control" placeholder="Write a comment..."></textarea>
        <small>120 Remaining</small>
      </div>
    </div>
  </div>
</div>
With some added CSS, this looks like the following:
Here's a Codepen with the full starter code.
For the sake of this tutorial, we will be creating four components: a
Status component which will be the parent, a
Like component that will encompass the liking logic, and the
Comment component which will contain the logic for typing in a comment. The
Like component will also have a child
LikeIcon that will show up or be hidden when you toggle the like button.
Component Architecture
Let's go ahead and divide the HTML code that we've written into those components.
We'll start with the shell of a component, and we'll render it as well to make sure it's working!
class Status extends React.Component {
  render() {
    return (
      <div className="col-6 offset-3">
        <div className="card">
          <div className="card-block">
            <div className="row">
              <div className="col-10 profile-row">
                <div className="row">
                  <a href="#">The Zen of Programming</a>
                </div>
                <div className="row">
                  <small className="post-time">10 mins</small>
                </div>
              </div>
            </div>
          </div>
          <p>Hello world!</p>
          <div className="card-footer text-muted" />
        </div>
      </div>
    )
  }
}

ReactDOM.render(<Status />, document.getElementById("root"))
One interesting note about the above is that we had to change "class" attributes to "className". Class already means something in JavaScript -- it's the keyword for ES6 classes! Some attributes are named differently in JSX than in HTML.
We can also delete the content of our HTML, leaving just an element with the ID root -- the parent "content" div is just for styling!
<body>
  <div class="content">
    <div id="root"></div>
  </div>
</body>
Here's the HTML that is going to go in the Status component. Notice, some of the original HTML isn't there yet -- it's going to go into our subcomponents instead!
Let's create a second component, and then we'll include it in our
Status component.
class Comment extends React.Component {
  render() {
    return (
      <div>
        <textarea className="form-control" placeholder="Write a comment..." />
        <small>140 Remaining</small>
      </div>
    )
  }
}
Here's the component for our comment. It just has our
textarea to type in, and the text with how many characters we have remaining. Notice that both are wrapped in a
div -- this is because React requires us to wrap all the contents of a component within one HTML tag -- if we didn't have the parent
div we'd be returning a
textarea and a
small tag.
So, now we need to include this component within our
Status component since it will be our subcomponent. We can do so using that same JSX syntax we used to render the> </div> <div className="card-footer text-muted"> + <Comment /> </div> </div> </div> ) } }
Okay, now we just need to do the same for our likes!
class LikeIcon extends React.Component {
  render() {
    return (
      <div>
        <span className="fa-stack fa-sm">
          <i className="fa fa-circle fa-stack-2x blue-icon" />
          <i className="fa fa-thumbs-up fa-stack-1x fa-inverse" />
        </span>
      </div>
    )
  }
}

class Like extends React.Component {
  render() {
    return (
      <div>
        {/* Include the LikeIcon subcomponent within the Like component */}
        <LikeIcon />
        <hr />
        <div>
          <button type="button">
            <i className="fa fa-thumbs-o-up fa-4 align-middle" aria-hidden="true" />
            <span className="align-middle">Like</span>
          </button>
        </div>
      </div>
    )
  }
}
Then we need to include it in our original Status component:

class Status extends React.Component {
  render() {
    return (
      ...
+         <Like />
          </div>
          <div className="card-footer text-muted">
            <Comment />
          </div>
        </div>
      </div>
    )
  }
}
Cool, now we have React-ified our original HTML, but it still doesn't do anything! Let's start fixing that!
All in all, the code from this section will look like this CodePen!
State and Props
We have two different user interactions that we want to implement:
- We want the like icon to show up only if the like button is pressed
- We want the number of characters remaining to decrease as the person types
Let's start working on these!
Props
Imagine that we wanted our comment box to allow for a different number of letters in different places. On a status, for example, we want a user to be allowed to write a 200 letter long response. On a picture, however, we only want them to be able to write a 100 character response.
React allows us to pass props (short for properties) from the
PictureStatus component and the
Status component to specify how many letters we want to allow in our response, rather than having two different comment components.
The syntax for props looks like the following:
<Comment maxLetters={20} />
<Comment text='hello world' />
<Comment show={false} />

var test = 'hello world'
<Comment text={test} />
The props look like HTML attributes! If you are passing a string via props, you don't need the brackets, but any other data type or a variable needs to be within the brackets!
Then, within our component, we can use our props:
console.log(this.props.maxLetters)
They are bundled together in the
props attribute of the instance so they can be accessed with
this.props.myPropName.
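Conceptually, props behave like an object handed to the component when it is created. This plain-JavaScript sketch mimics that bundling -- the Component and Comment classes here are simplified stand-ins for illustration, not React's real classes:

```javascript
// Simplified stand-in for how a component receives its props.
class Component {
  constructor(props) {
    this.props = props; // the passed-in props are stored on the instance
  }
}

class Comment extends Component {
  remaining(used) {
    // Reads the prop just like this.props.maxLetters in React
    return this.props.maxLetters - used;
  }
}

const comment = new Comment({ maxLetters: 140 });
console.log(comment.remaining(20)); // 120
```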
So, let's change the hardcoded 140 characters to be easily changeable outside the component!
First, we'll change where we instantiate the Comment component within the Status component (note some code is omitted!):
class Status extends React.Component {
  ...
        <div className="card-footer text-muted">
+         <Comment maxLetters={280} />
        </div>
      </div>
    </div>
    )
  }
}
Then we'll change the hardcoded 140 character limit in the Comment component.
class Comment extends React.Component {
  ...
    <div>
      <textarea className="form-control" placeholder="Write a comment..." />
+     <small>{this.props.maxLetters} Remaining</small>
    </div>
  ...
}
State
The props that we pass from component to component will never change within the child component -- they can change within the parent but not within the child. But -- a lot of the time we will have attributes that we will want to change within the life of a component. For example, we want to keep a tally of how many characters the user has typed into the textarea, and we want to keep track of whether the status has been "liked" or not. We will store those attributes that we want to change within the component in its state.
You'll notice a lot of immutability within React -- it is highly influenced by the functional paradigm, so side effects are also discouraged.
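The "never mutate, build a new value" habit can be shown in plain JavaScript. A small sketch of the style that setState encourages:

```javascript
// Immutable update: copy the old state into a fresh object,
// overriding only the keys that change.
const state = { characterCount: 0, liked: false };
const next = Object.assign({}, state, { characterCount: 5 });

console.log(state.characterCount); // 0 -- the original is untouched
console.log(next.characterCount);  // 5
console.log(next.liked);           // false -- untouched keys carry over
```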
We want this state to be created whenever we create a new instance of a component, so we will use the ES6 class constructor to create it. If you want a quick refresher on ES6 classes, MDN is a great resource!
State is going to be an object with any key-value pairs that we want to include. In this case, we want a characterCount of how many characters the user has typed. We'll set that to zero for now!
class Comment extends React.Component {
  constructor () {
    super()
    this.state = {
      characterCount: 0
    }
  }
  ...
Now let's subtract that from the
maxLetters prop, so we always know how many characters we have remaining!
<small>{this.props.maxLetters - this.state.characterCount} Remaining</small>
If you increase the
characterCount, the characters remaining display decreases!
But -- nothing happens when you type! We're never changing the value of
characterCount. We need to add an event handler to the
textarea so that we change the
characterCount when the user types!
Event Handlers
When you've written JavaScript in the past, you've probably written event handlers to interact with user input. We are going to do the same in React, the syntax is just going to be a little bit different.
We are going to add a
onChange handler to our
textarea. Inside of it, we will place a reference to an event handling method that will run every time the user types in the
textarea.
<textarea className="form-control" placeholder="Write a comment..." onChange={this.handleChange}/>
Now we need to create a
handleChange method!
class Comment extends React.Component {
  constructor () {
    super()
    this.state = {
      characterCount: 0
    }
  }

  handleChange (event) {
    console.log(event.target.value)
  }
  ...
Right now, we're just
console.log-ing the
event.target.value -- this will work the same way as it does in React-less JavaScript (though if you dive a little deeper, the event object is a little bit different). If you look at the console, we are printing out what we are typing in the textbox!
Now we need to update the
characterCount attribute in state. In React, we never directly modify state, so we can't do something like this:
this.state.characterCount = event.target.value.length. We instead need to use the
this.setState method.
handleChange (event) {
  this.setState({
    characterCount: event.target.value.length
  })
}
But! You get an error -- "Uncaught TypeError: this.setState is not a function". This error is telling us that we need to preserve the context of the ES6 class within the event handler. We can do this by binding
this to the method in the constructor! If you want to read more about this, here's a good article!
class Comment extends React.Component {
  constructor () {
    super()
    this.handleChange = this.handleChange.bind(this)
  ...
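Here is a React-free illustration of why that binding line matters. Counter is a made-up class for demonstration; class methods run in strict mode, so an unbound method called as a standalone function gets this === undefined:

```javascript
// Plain JavaScript (no React needed) showing why constructor
// binding is necessary.
class Counter {
  constructor() {
    this.count = 0;
    // Comment this line out and `handler()` below throws a TypeError,
    // because `this` is undefined inside the unbound method.
    this.increment = this.increment.bind(this);
  }
  increment() {
    this.count += 1;
  }
}

const counter = new Counter();
const handler = counter.increment; // an event system stores the bare function
handler();                         // safe only because we bound `this`
console.log(counter.count);        // 1
```

React stores your handler and calls it later, detached from the instance -- exactly like `handler` above -- which is why the binding has to happen up front.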
Okay! We're almost there! We just need to add the ability to toggle the
like showing up!
We need to add a constructor to our
Like component. In that constructor, we need to instantiate the component's state. The thing that will change within the lifecycle of the component is whether or not the status has been liked.
class Like extends React.Component {
  constructor() {
    super()
    this.state = {
      liked: false
    }
  }
  ...
Now we need to add an event handler to change whether or not the status has been liked!
class Like extends React.Component {
  constructor() {
    super()
    this.state = {
      liked: false
    }
    this.toggleLike = this.toggleLike.bind(this)
  }

  toggleLike () {
    this.setState(previousState => ({
      liked: !previousState.liked
    }))
  }
  ...
The difference here is that the callback function we pass to
this.setState receives a parameter --
previousState. As you can probably guess from the name of the parameter, this is the value of state before
this.setState is called.
setState is asynchronous, so we can't depend on using
this.state.liked within it.
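To see why the updater-function form is safer than reading this.state directly, here is a toy simulation of batched updates -- this is absolutely not React's implementation, just a sketch of the pattern:

```javascript
// A toy setState to illustrate the updater-function pattern.
function makeComponent(initialState) {
  let state = initialState;
  const pending = [];
  return {
    setState(updater) {
      pending.push(updater); // updates are queued, not applied immediately
    },
    flush() {
      for (const updater of pending) {
        state = Object.assign({}, state, updater(state));
      }
      pending.length = 0;
      return state;
    },
  };
}

const component = makeComponent({ liked: false });
// Two queued toggles each receive the latest intermediate state:
component.setState(prev => ({ liked: !prev.liked }));
component.setState(prev => ({ liked: !prev.liked }));
console.log(component.flush().liked); // false -- toggled twice
```

Because each queued updater receives the latest intermediate state, two toggles really do toggle twice instead of both reading the same stale value.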
Now, we need to:
a) call the event handler whenever the user clicks on the like button:
b) only show the LikeIcon when
liked is true
render() {
  return (
    <div>
      {/* Use boolean logic to only render the LikeIcon if liked is true */}
+     {this.state.liked && <LikeIcon />}
      <hr />
      <div>
+       <button type="button" className="btn no-outline btn-secondary" onClick={this.toggleLike}>
          <i className="fa fa-thumbs-o-up fa-4 align-middle" aria-hidden="true" />
          <span className="align-middle">Like</span>
        </button>
      </div>
    </div>
  )
}
Awesome! Now all of our functionality is in place!
Bonus: Functional components
If you feel like you're in over your head already, feel free to skip this part, but I wanted to make one more quick refactor to this project. If we create components that don't have state associated with them (which we call stateless components), we can make our components into functions instead of ES6 classes.
In that case, our
LikeIcon could look something like this:
const LikeIcon = () => {
  return (
    <div>
      <span className="fa-stack fa-sm">
        <i className="fa fa-circle fa-stack-2x blue-icon" />
        <i className="fa fa-thumbs-up fa-stack-1x fa-inverse" />
      </span>
    </div>
  )
}
We just return the UI of the component instead of using the
render method!
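The class-versus-function distinction can be sketched in plain JavaScript, with strings standing in for JSX so it runs without React or Babel (Greeting is a hypothetical example, not the tutorial's component):

```javascript
// A stateful-capable class component returns its UI from render()...
class Greeting {
  render() {
    return 'Hello World';
  }
}

// ...while a stateless functional component just returns the UI directly.
const GreetingFn = () => 'Hello World';

console.log(new Greeting().render()); // 'Hello World'
console.log(GreetingFn());            // 'Hello World'
```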
Here is a CodePen that implements this refactor!
Cheat Sheet
I love cheatsheets, so I made one with the content from this post!
You can also download it as a PDF here!
Next Steps
To recap, we talked about the component architecture, the basic React syntax and JSX, state and props, event handlers, and functional components.
If you would like to view all the CodePens from this tutorial, here is a collection!
If you would like to try to extend the code from this tutorial, I would recommend changing likes to reactions or creating a photo component that reuses some of the components that we made!
Also, here are some other awesome places to learn React:
Keep in touch
If you liked this article and want to read more, I have a weekly newsletter with my favorite links from the week and my latest articles. Also, tweet me about things you want me to write tutorials on, or constructive feedback on how I could make these easier to follow! If you have any questions, my AMA repo is the best place to reach me!
If you're interested in more posts like this one, I have two other beginners guides: one to CSS and one to Vue
This is so much like some of the very early React how-tos. Things had gotten way overly complicated since then. People sort of forgot about the basics, but the original tutorials are out of date.
This is just what the space needed 🙂
Yes, I love it!
Getting rid of all the tooling to understand the concepts.
I also wrote a tutorial on this topic, without tooling or ES2015 features, to get people up to speed!
React From Zero
Ah, thank you so much!!!
Not something I needed to hear preparing for my new job where they use React extensively :O
Agreed, this shows that React doesn't need to get too complicated in order for it to work. Simple, fancy and functional :D
I appreciate the amount of effort that went into putting this article together: codepen, git diff, cheatsheet, react concepts, etc., for us to read in about 10 minutes. More importantly, how you break down and approach the problems and write solutions. You are awesome.
Ah thank you so much!! Glad it was helpful!
Great article!
I've noticed a small thing - today's approach with methods is not to write
in constructor function, but simply define methods with arrow functions like that:
This way our "this" is the method's parent Class.
Please correct me if I'm wrong :)
You can do that -- it's a Babel feature, so it's not implemented in JS itself, and it also has some performance issues. I like the syntax, but it's still perfectly valid to bind in the constructor, which IMO is easier to explain.
Ok, thank You for an answer :)
I will try to check more about performance (I didn't have problems with that), so thanks for suggestion!
Thank you Ali for providing such a great start to React! I noticed that there's a missing closing tag at the end of the browser.js declaration.
Good catch! Thanks!!
@aspittel, thanks for sharing this great article. I noticed that when I load the babel script in the browser, it gives me an error in the console.
The problem was using "babel-core" instead of "babel-standalone". Link
Thank you!! Will fix!
I was (actually, still am, but I'm doing it in parts) working on something similar, a React guide that goes through all the basics. This one is pretty good, congrats, Ali.
Thanks for sharing, Ali. Great intro. To complement this, the Syntax podcast recently did an episode on React for beginners for those interested.
Looking forward to your next post!
Very nice article! Congratulations and thanks for sharing that
Great article. Thanks!
Very in-depth explanation
React, Vue and Angular are all great for building Hello Worlds, the ugliness starts with more complex projects where you'll find yourself reinventing the wheel day-to-day. Probably Angular has it best in this regard.
Good job!
This is perfect, I start doing my react app next week!
Simple, effective and more interesting!! A clean article that walks us through the basic idea of React and JSX. Thanks a lot @aspittel for your efforts
Thank you!!
This is fantastic, I've been looking for a write up like this. Thank you so much!!
thank you for reading!
You're going to have to stop writing these articles Ali, I don't have the time to read them all! Added to the list and I'm looking forward to reading it :)
haha thanks for reading!
I need this so much!
Gooshh! Just what I needed! Thank you so much!
Nice tutorial, just started learning React. Thanks for sharing.
Not bad! I like it
Awesome One
super awesome :). luv it.
@aspittel , I think you meant 'team' here:
right?
Good catch, thanks!
An extraordinary guide for beginners. Love it.
Thanks a zillion.
Thank you! this is exactly what I was looking for ^ | https://practicaldev-herokuapp-com.global.ssl.fastly.net/aspittel/a-complete-beginners-guide-to-react-2cl6 | CC-MAIN-2019-43 | refinedweb | 4,268 | 64 |
Artificial types are types that are not present in the model itself but are defined during runtime. These new persistent types are described in XML configuration entries.
Defining an artificial type:
The syntax for adding these classes is the same as for artificial fields, the only difference being that the class name should not match any class name in the given namespace. For example, if you have a class defined like this in the app.config file:
And an artificial node defined like this:
Then the fields of the artificial node will extend the Address class from the model. If the name of the class in the artificial mapping does not match any of the model classes, then a new artificial type will be created that will be forward mapped to the database.
Referring to Artificial Types from Types defined in the model:
Since the artificial type does not have a direct representation in the model, but does have one in the database, you can easily create properties in the model that reference such a type in the app.config file. For example, if we have an artificial Person type defined as follows:
We can easily extend the Address class that we have in the model by adding these few lines of XML to the app.config file:
This will add a property of type Person to the Address class. When you execute the DDL script for this change, it will create a personID column that references the newly created person table.
Using Artificial Types:
The usage of artificial types is similar to that of artificial fields. To create instances of artificial types, you will have to take advantage of the IPersistentTypeDescriptor interface:
Note that the CreateInstance() method can be called with either a value representing the Key of this entry, or null if an auto key generation mechanism is used. You can set the person instance's properties using the same methods as for setting any other artificial field.
#! /usr/bin/env python
#
# example.py: Using the Python Database API (DB-API)
# with Red Hat Database
#
# This example module utilizes the Python DB-API to
# create and modify data within an RHDB database.
# These examples have been created to display the
# concepts of the Red Hat Database Python DB-API.
# Allow the Python DBI (pgdb.py) to be accessible
import pgdb
import string
# Dump information about the RHDB Python DB-API
def about ():
print "\n******************************************"
print " About the RHDB Python DB-API"
print " DB-API level: %s" % pgdb.apilevel
print " DB-API Thread Safety level: %d" % pgdb.threadsafety
print " DB-API Parameter Formatting: %s" % pgdb.paramstyle
print "******************************************\n"
# Return a connection to the database
def initialize ():
# Connect to the basketball database.
# Use the dsn as the connection parameter.
# The Python DB-API details the other valid connection
# parameters. Notify the user of any raised exceptions.
try:
db = pgdb.connect (dsn = 'localhost:basketball')
except:
print
print "Exception encountered connecting "
print "to database basketball."
print
# Force execution to stop by raising
# the handled exception again
raise
return db
# Close a database connection
def disconnect (db):
db.close ()
# Create and populate the table which the examples feed off of
def cleantable ():
# Connect to the database and open a cursor
db = initialize ()
cursor = db.cursor ()
print "\nDropping Players table..."
# Drop the table if it exists.
# This is done in a try/except block as we may
# be attempting to drop a table that does not exist.
# Handle exceptions as you see fit.
try:
cursor.execute ("drop table players")
except pgdb.DatabaseError:
print " Exception encountered: Table does not exist.
Continuing."
db.rollback ()
print " Creating and seeding Players table..."
# Create and populate the Players table.
# Any raised exceptions will cause the program to stop
# as no handler is declared.
cursor.execute ("create table players
(name varchar(20), team varchar(50))")
cursor.execute ("insert into players values
('Michael Jordan','Washington Wizards')")
cursor.execute ("insert into players values
('Tim Duncan','San Antonio Spurs')")
cursor.execute ("insert into players values
('Vince Carter','Toronto Raptors')")
db.commit ()
cursor.close ()
print " Players table successfully created.\n"
# Display the contents of the Players table and leave
viewtable (db)
disconnect (db)
# Display the contents of the Players table
def viewtable (db):
# Create understandable index variables
playerName, playerTeam = 0, 1
# Issue a full select from the Players table
cursor = db.cursor ()
cursor.execute ("select * from players order by name asc")
# Fetch the first result from the cursor
row = cursor.fetchone ()
# Display a message on an empty result set
if row == None:
print "Empty result set returned on select * from players"
else:
# Iterate through the result set displaying the player's name
# and team
print "Content of Players table:"
print "*************************"
while row != None:
print " %s -- %s" % (row[playerName], row[playerTeam])
row = cursor.fetchone ()
print "\nNumber of players: %d\n" % cursor.rowcount
cursor.close ()
# Update a member of the Players table
def update ():
# Connect to the database
db = initialize ()
cursor = db.cursor ()
selectStmt = "select * from players where name = 'Kobe Bryant'"
updateStmt = "update players set team='LA Lakers'
where name = 'Kobe Bryant'"
# We'll show a raw before/after image of the result sets sent
# back for Kobe Bryant
cursor.execute (selectStmt)
print "\nBefore:", cursor.fetchall ()
# Update Kobe's real team
print "\nUpdating..."
cursor.execute (updateStmt)
# Display the updated row(s) and then the rest of the table
cursor.execute (selectStmt)
print "\nAfter: ", cursor.fetchall ()
print
viewtable (db)
# Cleanup
db.commit ()
cursor.close ()
db.close ()
# Modify table data and commit or rollback the transaction based
# on user input
def transaction (action):
# Use the older versions of string manipulation so that
# the example works with all versions of Python
command = string.lower (action)
# Check parameters
if (command != "commit") and (command != "rollback"):
print "Usage: transaction (action) where"
print " action=\"commit\" or
action=\"rollback\""
return
# Connect to the database
db = initialize ()
# Display the contents of the Players table before we modify
# the data
print "\nBefore:"
viewtable (db)
# Create SQL statements to be consumed by RHDB
insertStmt1 = "insert into players values
('Tracy McGrady','Orlando Magic')"
insertStmt2 = "insert into players values
('Kobe Bryant','NY Knicks')"
deleteStmt = "delete from players where name = 'Michael Jordan'"
print "About to issue the following commands:
\n %s\n %s\n %s" % (insertStmt1, insertStmt2, deleteStmt)
# Modify the table data. If specific exceptions are returned,
# rollback the transaction and leave... any other exceptions
# will cause execution to halt.
cursor = db.cursor ()
try:
cursor.execute (insertStmt1)
cursor.execute (insertStmt2)
cursor.execute (deleteStmt)
except (pgdb.DatabaseError, pgdb.OperationalError,
pgdb.pgOperationalError):
print " Exception encountered while modifying table data."
db.rollback ()
return
print "\nAbout to", command, "the transaction..."
# Commit or rollback the transaction as requested
if command == "commit":
db.commit ()
else:
db.rollback ()
cursor.close ()
print "\nAfter:"
# Display the contents of the Players table
# after the (potential) data modification
viewtable (db)
disconnect (db) | http://www.redhat.com/docs/manuals/database/RHDB-7.1.3-Manual/prog/examplepy.html | crawl-002 | refinedweb | 791 | 50.12 |
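The script above is specific to the pgdb driver, but every call it makes is plain DB-API 2.0. Here is a compact sketch of the same connect/cursor/execute/commit flow against Python 3's built-in sqlite3 driver (data shortened to two rows for brevity):

```python
import sqlite3

# Same DB-API 2.0 shape as the pgdb example: connect -> cursor ->
# execute -> commit -> fetch -> close.
db = sqlite3.connect(":memory:")
cursor = db.cursor()

cursor.execute("create table players (name varchar(20), team varchar(50))")
cursor.execute("insert into players values ('Michael Jordan','Washington Wizards')")
cursor.execute("insert into players values ('Tim Duncan','San Antonio Spurs')")
db.commit()

# Display the contents of the table, sorted by name.
cursor.execute("select * from players order by name asc")
rows = cursor.fetchall()
for name, team in rows:
    print("%s -- %s" % (name, team))

cursor.close()
db.close()
```

Swapping the driver module (pgdb, sqlite3, psycopg2, ...) is the only change needed, which is the point of the DB-API.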
Manuel lets you mix and match traditional doctests with custom test syntax.
Several plug-ins are included that provide new test syntax (see Included Functionality). You can also create your own plug-ins.
For example, if you've ever wanted to include a large chunk of Python in a doctest but were irritated by all the ">>>" and "..." prompts required, you'll like the manuel.codeblock module. It lets you execute code using Sphinx-style ".. code-block:: python" directives. The markup looks like this:
.. code-block:: python

    import foo

    def my_func(bar):
        return foo.baz(bar)
Incidentally, the implementation of manuel.codeblock is only 23 lines of code.
The plug-ins included in Manuel make good examples while being quite useful in their own right. The Manuel documentation makes extensive use of them as well. Follow the “Show Source” link to the left to see the reST source of this document.
For a large example of creating test syntax, take a look at the FIT Table Example or for all the details, Theory of Operation.
To see how to get Manuel wired up see Getting Started.
Manuel includes several plug-ins out of the box; they are described in the sections that follow.
The plug-ins used for a test are composed together using the “+” operator. Let’s say you wanted a test that used doctest syntax as well as footnotes. You would create a Manuel instance to use like this:
import manuel.doctest
import manuel.footnote

m = manuel.doctest.Manuel()
m += manuel.footnote.Manuel()
You would then pass the Manuel instance to a manuel.testing.TestSuite, including the names of documents you want to process:
manuel.testing.TestSuite(m, 'test-one.txt', 'test-two.txt')
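The "+=" composition can be pictured as simple merging of each instance's parser list; a toy illustration of the pattern (not Manuel's real internals):

```python
class Toy:
    """A stand-in for a Manuel instance holding a list of parsers."""

    def __init__(self, parsers):
        self.parsers = list(parsers)

    def __iadd__(self, other):
        # "m += other" extends this instance with the other's parsers,
        # so later processing runs both sets in order.
        self.parsers.extend(other.parsers)
        return self

doctest_like = Toy(["parse-doctests"])
footnote_like = Toy(["parse-footnotes"])
doctest_like += footnote_like
print(doctest_like.parsers)  # ['parse-doctests', 'parse-footnotes']
```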
The simplest way to get started with Manuel is to use unittest to run your tests:
import manuel.codeblock
import manuel.doctest
import manuel.testing
import unittest

def test_suite():
    m = manuel.doctest.Manuel()
    m += manuel.codeblock.Manuel()
    return manuel.testing.TestSuite(m, 'test-one.txt', 'test-two.txt')

if __name__ == '__main__':
    unittest.TextTestRunner().run(test_suite())
If you want to use a more featureful test runner you can use zope.testing’s test runner (usable stand-alone – it isn’t dependent on the Zope application server). Create a file named tests.py with a test_suite() function that returns a test suite.
The suite can be either a manuel.testing.TestSuite object or a unittest.TestSuite as demonstrated below.
import unittest

import manuel.codeblock
import manuel.doctest
import manuel.testing

def test_suite():
    suite = unittest.TestSuite()
    # here you add your other tests to the suite...
    # now you can add the Manuel tests
    m = manuel.doctest.Manuel()
    m += manuel.codeblock.Manuel()
    suite.addTest(manuel.testing.TestSuite(
        m, 'test-one.txt', 'test-two.txt'))
    return suite
If you work out how to make Manuel work with other test runners (nose, py.test, etc.), please send me an email and I'll expand this section.
Manuel has its own manuel.testing.TestCase class that manuel.testing.TestSuite uses. If you want to customize it, you can pass your own class to TestSuite.
import os.path
import manuel.testing

class StripDirsTestCase(manuel.testing.TestCase):
    def shortDescription(self):
        return os.path.basename(str(self))

suite = manuel.testing.TestSuite(
    m, path_to_test, TestCase=StripDirsTestCase)

>>> list(suite)[0].shortDescription()
'bugs.txt'
Manuel is all about making testable documents and well-documented tests. Of course, Python’s doctest module is a long-standing fixture in that space, so it only makes sense for Manuel to support doctest syntax.
Handling doctests is easy:
import manuel.doctest
import manuel.testing

m = manuel.doctest.Manuel()
suite = manuel.testing.TestSuite(m, 'my-doctest.txt')
Of course you can mix in other Manuel syntax plug-ins as well (including ones you write yourself).
import manuel.codeblock
import manuel.doctest
import manuel.testing

m = manuel.doctest.Manuel()
m += manuel.codeblock.Manuel()
suite = manuel.testing.TestSuite(m, 'my-doctest-with-code-blocks.txt')
The manuel.doctest.Manuel constructor also takes optionflags and checker arguments.
m = manuel.doctest.Manuel(optionflags=optionflags, checker=checker)
See the doctest documentation for more information about the available options and output checkers.
Note
zope.testing.renormalizing provides an OutputChecker for smoothing out differences between actual and expected output for things that are hard to control (like memory addresses and time). See the module’s doctests for more information on how it works. Here’s a short example that smoothes over the differences between CPython’s and PyPy’s NameError messages:
import re

import zope.testing.renormalizing

checker = zope.testing.renormalizing.RENormalizing([
    (re.compile(r"NameError: global name '([a-zA-Z0-9_]+)' is not defined"),
     r"NameError: name '\1' is not defined"),
    ])
When writing documentation the need often arises to describe the contents of files or other non-Python information. You may also want to put that information under test. manuel.capture helps with that.
For example, if you were writing the problems for a programming contest, you might want to describe the input and output files for each challenge, but you want to be sure that your examples are correct.
To do that you might write your document like this:
Challenge 1
===========

Write a program that sorts the numbers in a file.

Example
-------

Given this example input file::

    6
    1
    8
    20
    11
    65
    2

.. -> input

Your program should generate this output file::

    1
    2
    6
    8
    11
    20
    65

.. -> output

>>> input_lines = input.splitlines()
>>> correct = '\n'.join(sorted(input_lines[1:], key=int)) + '\n'
>>> output == correct
True
This uses the syntax implemented in manuel.capture to capture a block of text into a variable (the one named after “->”).
Whenever a line of the structure ”.. -> VAR” is detected, the text of the previous block will be stored in the given variable.
Of course, lines that start with ".. " are reST comments, so when the document is rendered with docutils or Sphinx, the tests will disappear and only the intended document contents will remain. Like so:
Challenge 1
===========

Write a program that sorts the numbers in a file.

Example
-------

Given this example input file::

    6
    1
    8
    20
    11
    65
    2

Your program should generate this output file::

    1
    2
    6
    8
    11
    20
    65
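A rough sketch of how such a marker line can be detected and the preceding indented block captured (this is illustrative, not Manuel's actual parser):

```python
import re

def capture_blocks(text):
    """Map each '.. -> VAR' marker to the indented block above it."""
    captured = {}
    lines = text.splitlines()
    for i, line in enumerate(lines):
        m = re.match(r"\.\. -> (\w+)$", line)
        if not m:
            continue
        block = []
        for prev in reversed(lines[:i]):
            if prev.startswith("    "):
                block.insert(0, prev[4:])   # strip the indent
            elif prev.strip() == "":
                if block:                   # blank line ends the block
                    break
            else:
                break                       # hit unindented prose
        captured[m.group(1)] = "\n".join(block) + "\n"
    return captured

doc = """Given this input::

    1
    2

.. -> input
"""
print(capture_blocks(doc))  # {'input': '1\n2\n'}
```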
Sphinx and other docutils extensions provide a “code-block” directive, which allows inlined snippets of code in reST documents.
The manuel.codeblock module provides the ability to execute the contents of Python code-blocks. For example:
.. code-block:: python

    print('hello')
If the code-block generates some sort of error...
.. code-block:: python

    print(does_not_exist)
...that error will be reported:
>>> document.process_with(m, globs={})
Traceback (most recent call last):
...
NameError: name 'does_not_exist' is not defined
If you find that you want to include a code-block in a document but don’t want Manuel to execute it, use manuel.ignore to ignore that particular block.
Sphinx and docutils have different ideas of how code blocks should be spelled. Manuel supports the docutils-style code blocks too.
.. code:: python

    a = 1
Docutils options after the opening of the code block are also allowed:
.. code:: python
    :class: hidden

    a = 1
At times you’ll want to have a block of code that is executed but not displayed in the rendered document (like some setup for later examples).
When using doctest’s native format (“>>>”) that’s easy to do, you just put the code in a reST comment, like so:
.. this is some setup, it is hidden in a reST comment

    >>> a = 5
    >>> b = a + 3
However, if you want to include a relatively large chunk of Python, you’d rather use a code-block, but that means that it will be included in the rendered document. Instead, manuel.codeblock also understands a variant of the code-block directive that is actually a reST comment: ”.. invisible-code-block:: python”:
.. invisible-code-block:: python

    a = 5
    b = a + 3
Note
The "invisible-code-block" directive will work with either one or two colons. The reason is that reST processors (like docutils and Sphinx) will generate an error for unrecognized directives (like invisible-code-block). Therefore you can use a single colon and the line will be interpreted as a comment instead.
The manuel.footnote module provides an implementation of reST footnote handling, but instead of just plain text, the footnotes can contain any syntax Manuel can interpret including doctests.
>>> import manuel.footnote
>>> m = manuel.footnote.Manuel()
Here’s an example of combining footnotes with doctests:
Here we reference a footnote. [1]_

>>> x
42

Here we reference another. [2]_

>>> x
100

.. [1] This is a test footnote definition.

    >>> x = 42

.. [2] This is another test footnote definition.

    >>> x = 100

.. [3] This is a footnote that will never be executed.

    >>> raise RuntimeError('nooooo!')
It is also possible to reference more than one footnote on a single line.
This line has several footnotes on it. [1]_ [2]_ [3]_

>>> z
105

A little prose to separate the examples.

.. [1] Do something

    >>> w = 3

.. [2] Do something

    >>> x = 5

.. [3] Do something

    >>> y = 7
    >>> z = w * x * y
Occasionally the need arises to ignore a block of markup that would otherwise be parsed by a Manuel plug-in.
For example, this document has a code-block that will generate a syntax error:
The following is invalid Python.

.. code-block:: python

    def foo:
        pass
We can see that when executed, the SyntaxError escapes.
>>> import manuel.codeblock
>>> m = manuel.codeblock.Manuel()
>>> document.process_with(m, globs={})
  File "<memory>:4", line 2
    def foo:
           ^
SyntaxError: invalid syntax
The manuel.ignore module provides a way to ignore parts of a document using a directive ”.. ignore-next-block”.
Because Manuel plug-ins are executed in the order they are accumulated, we want manuel.ignore to be the base Manuel object, with any additional plug-ins added to it.
import manuel.codeblock
import manuel.doctest
import manuel.ignore

m = manuel.ignore.Manuel()
m += manuel.codeblock.Manuel()
m += manuel.doctest.Manuel()
If we add an ignore marker to the block we don’t want processed...
The following is invalid Python.

.. ignore-next-block

.. code-block:: python

    def foo:
        pass
...the error goes away.
>>> document.process_with(m, globs={})
>>> print(document.formatted())
Ignoring literal blocks is a little more involved:
Here is some invalid Python:

.. ignore-next-block

::

    >>> lambda: x=1
One of the advantages of unittest over doctest is that the individual tests are isolated from one-another.
In large doctests (like this one) you may want to keep later tests from depending on incidental details of earlier tests, preventing the tests from becoming brittle and harder to change.
Test isolation is one approach to reducing this intra-doctest coupling. The manuel.isolation module provides a plug-in to help.
The ”.. reset-globs” directive resets the globals in the test:
We define a variable.

>>> x = 'hello'

It is still defined.

>>> print(x)
hello

Now we can reset the globals...

.. reset-globs

...and the name binding will be gone:

>>> print(x)
Traceback (most recent call last):
...
NameError: name 'x' is not defined
We can see that after the globals have been reset, the second “print(x)” line raises an error.
Of course, resetting to an empty set of global variables isn’t always what’s wanted. In that case there is a ”.. capture-globs” directive that saves a baseline set of globals that will be restored at each reset.
We define a variable.

>>> x = 'hello'

Then we capture the globals.

.. capture-globs

Now we define another variable...

>>> y = 'goodbye'
>>> print(y)
goodbye

.. reset-globs

...it will disappear after a reset.

>>> print(y)
Traceback (most recent call last):
...
NameError: name 'y' is not defined

But the captured globals will still be defined.

>>> print(x)
hello
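Conceptually, capture-globs and reset-globs are snapshot-and-restore on the test's globals dictionary; a toy sketch of the idea (not Manuel's actual implementation):

```python
globs = {}

# Examples execute and bind names...
globs['x'] = 'hello'

# ".. capture-globs" snapshots the current bindings...
baseline = dict(globs)

# ...later examples add more names...
globs['y'] = 'goodbye'

# ...and ".. reset-globs" restores exactly the captured snapshot.
globs = dict(baseline)

print(sorted(globs))  # ['x']
```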
If you want parts of a document to be individually accessible as test cases (to be able to run just a particular subset of them, for example), a parser can create a region that marks the beginning of a new test case.
Two ways of identifying test cases are included in manuel.testcase: by reST section title and by explicit ".. test-case:" markers.
First Section
=============

Some prose.

>>> print('first test case')

Some more prose.

>>> print('still in the first test case')

Second Section
==============

Even more prose.

>>> print('second test case')
Given the above document, if you’re using zope.testing’s testrunner (located in bin/test), you could run just the tests in the second section with this command:
bin/test -t "file-name.txt:Second Section"
Or, exploiting the fact that -t does a regex search (as opposed to a match):
bin/test -t file-name.txt:Second
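The search-versus-match distinction is easy to demonstrate with Python's re module (the test names below follow the file:testcase pattern shown above):

```python
import re

test_name = "file-name.txt:Second Section"

# re.match anchors at the beginning of the string, so a bare
# "Second" fails against the full test name...
print(bool(re.match("Second", test_name)))  # False

# ...while re.search finds the pattern anywhere, which is what makes
# "-t file-name.txt:Second" behave like a substring filter.
print(bool(re.search("file-name.txt:Second", test_name)))  # True

# Anchored patterns still work with search, e.g. a trailing "$":
print(bool(re.search("-important$", "file-name.txt:setup-important")))  # True
```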
If you would like to identify test cases separately from sections, you can identify them with a marker:
First Section
=============

The following test will be in a test case that is not
individually identifiable.

>>> print('first test case (unidentified)')

Some more prose.

.. test-case: first-named-test-case

>>> print('first identified test case')

Second Section
==============

The test case markers don't have to immediately precede a test.

.. test-case: second-named-test-case

Even more prose.

>>> print('second identified test case')
Again, given the above document and zope.testing, you could run just the second set of tests with this command:
bin/test -t file-name.txt:second-named-test-case
Or, exploiting the fact that -t does a regex search again:
bin/test -t file-name.txt:second
Even though the tests are individually accessible, that doesn't mean they can't all be run at the same time:
bin/test -t file-name.txt
Also, if you create a hierarchy of names, you can run groups of tests at a time. For example, let's say that you append "-important" to all your really important tests; you could then run the important tests for a single document like so:
bin/test -t 'file-name.txt:.*-important$'
or all the “important” tests no matter what file they are in:
bin/test -t '-important$'
You can also combine more than one test case identification method if you want. Here’s an example of building a Manuel stack that has doctests and both flavors of test case identification:
import manuel.doctest
import manuel.testcase

m = manuel.doctest.Manuel()
m += manuel.testcase.SectionManuel()
m += manuel.testcase.MarkerManuel()
Python’s Flask
Flask is a small and powerful web framework for Python. It is easy to learn and simple to use, enabling users to build a web app in a short amount of time. Flask is also easy to get started with as a beginner, because there is little boilerplate code for getting a simple app up and running. Flask supports extensions that can add application features as if they were implemented in Flask itself. Extensions exist for object-relational mappers, form validation, upload handling, and several common framework-related tools. Extensions are updated more regularly than the core Flask program. Flask is also commonly used with MongoDB, which gives it more control over databases and history.
INSTALLING FLASK
Before getting started, the user needs to install Flask. Because systems vary, things can occasionally go wrong during these steps.
INSTALL VIRTUALENV
Here we will be using virtualenv to install Flask. Virtualenv is a suitable tool that creates isolated Python development environments where the user can do all his/her development work. If the user installs it system-wide, there is the risk of messing up other libraries that the user might have installed already. Instead, use virtualenv to create a sandbox, where the user can install and use the library without affecting the rest of the system. The user can keep using sandbox for ongoing development work, or can simply delete it once the user is finished using it. Either way, the system remains organized and clutter-free.
First, check whether virtualenv is already installed:

$ virtualenv --version

If you see a version number, you are good to go and you can skip to the "Install Flask" section. If the command was not found, use easy_install or pip to install virtualenv. If you are running Linux or Mac OS X, one of the following should work:
$ sudo easy_install virtualenv
$ sudo pip install virtualenv
If you are running Windows, follow the “Installation Instructions” on this page to get easy_install up and running on your system.
INSTALL FLASK
After installing virtualenv, the user can create a new isolated development environment, like so:
$ virtualenv flaskapp
Here, virtualenv creates a folder, flaskapp/, and sets up a clean copy of Python inside for the user to use. It also installs the handy package manager, pip.
Enter newly created development environment and activate it so to start working within it.
$ cd flaskapp
$ . bin/activate
Now, the user can safely install Flask:
$ pip install Flask
SETTING UP THE PROJECT STRUCTURE
Let’s create a couple of folders and files within flaskapp/ to keep the web app organized.
Within flaskapp/, create a folder, app/, to contain all of the application's files. Inside app/, create a folder static/; this is where the user has to put the web app's images, CSS, and JavaScript files, so create a folder for each of those. As well, create another folder, templates/, to store the app's web templates. Create an empty Python file routes.py for the application logic, such as URL routing.
And no project is complete without a helpful description, so create a README.md file as well.
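The same skeleton can be scripted; a small Python 3 sketch using only the standard library (the css/img/js folder names are my reading of "create a folder for each of those", not taken verbatim from the article):

```python
import os
import tempfile

root = tempfile.mkdtemp()  # stand-in for the flaskapp/ sandbox

# Static assets and templates live under app/.
for folder in ("app/static/css", "app/static/img",
               "app/static/js", "app/templates"):
    os.makedirs(os.path.join(root, folder))

# Application logic and the project description.
for filename in ("app/routes.py", "README.md"):
    open(os.path.join(root, filename), "w").close()

print(sorted(os.listdir(os.path.join(root, "app"))))
```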
BUILDING A HOME PAGE
While writing a web app with more than a couple of pages, it quickly becomes bothersome to write the same HTML boilerplate over and over again for each page. Also, if the user needs to add a new element to the application, such as a new CSS file, the user would have to go into every single page and add it. This is time consuming and error prone. Wouldn't it be nice if, instead of repeatedly writing the same HTML boilerplate, the user could define the page layout just once, and then use that layout to make new pages with their own content?
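That is exactly what the templates below do with {% extends "layout.html" %}, although this excerpt never shows layout.html itself. Here is a minimal sketch of what such a base template could look like, along with the block substitution Jinja performs at render time (the template's structure is my assumption, not the original file, and the string replace below only mimics what Jinja does):

```python
# A guessed shape for app/templates/layout.html: a shared shell with a
# replaceable "content" block.
layout = """<!DOCTYPE html>
<html>
  <body>
    <header><h1>Flask App</h1></header>
    {% block content %}{% endblock %}
  </body>
</html>"""

# When home.html extends the layout, its content block is slotted into
# the shell.  A plain string replace demonstrates the effect.
home_content = '<div class="jumbo"><h2>Welcome to the Flask app</h2></div>'
rendered = layout.replace("{% block content %}{% endblock %}", home_content)
print(rendered)
```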
APP/TEMPLATES/HOME.HTML
{% extends "layout.html" %}
{% block content %}
<div class="jumbo">
  <h2>Welcome to the Flask app</h2>
  <h3>This is the home page for the Flask app</h3>
</div>
{% endblock %}
BUILDING AN ABOUT PAGE
In the above section, we created the web template home.html. Now, let's repeat that process to create an about page for our web app.
APP/TEMPLATES/ABOUT.HTML
{% extends "layout.html" %}
{% block content %}
<h2>About</h2>
<p>This is an About page for the Intro to Flask article. Don't I look good? Oh stop, you're making me blush.</p>
{% endblock %}
In order to visit this page in the browser, we need to map a URL to it. Open up routes.py and add another mapping:
from flask import Flask, render_template

app = Flask(__name__)

@app.route('/')
def home():
    return render_template('home.html')

@app.route('/about')
def about():
    return render_template('about.html')

if __name__ == '__main__':
    app.run(debug=True)
Postcode Lookup with YQL and Great Maps
One of the best secrets the web keeps hidden is YQL, for those of you who've never heard of this little hidden gem, YQL stands for "Yahoo Query Language" and it's an SQL-like syntax that allows you access to the masses of information available in Yahoo's many different data stores.
In this post, we're going to explore using the "geo.placefinder" lookup to find UK post codes and then show their location on a map in a desktop WPF application, using the "Great Maps" toolkit.
So, What Exactly Is YQL?
There's not really an easy way to define YQL in one sentence, but it's a language that is interpreted by the YQL service and, under the hood, turned into whatever is needed to query the requested resources. However, it's not just a universal database layer for all of Yahoo's data; it can, for example, be used to query your Yahoo Email account (if you have one), and it can be instructed to parse web pages, RSS feeds, and all manner of other resources, and then return the data in either XML- or JSON-based formats.
One of the things that YQL is most used for, in fact, is to query XML-based RSS streams and return the results as a JSON result set, thus allowing a quick and easy way to convert RSS to JSON. If you want to have a play with the YQL service, Yahoo provide a developers console,
and the best thing is, you don't need to have an account or be logged in to use a large chunk of the service.
When you view the console, you'll see a number of data tables down the left hand side, a box to enter your YQL query, and a number of samples to get you started.
We'll be using the 'geo.placefinder' method, so if you enter
select * from geo.placefinder where text="DH1 1HR"
into the YQL statement box on the console page, click the JSON button, and then click test, YQL should then go off and look for the UK postal code 'DH1 1HR' and come back with various bits of information about it.
Figure 1: Retrieving the postal code information
Among the data that you get back you should be able to see that you get a latitude and a longitude value. With these two values, we could then plug them into Google maps, for example, and it would show us where that postal code was located. I would also guess that you could potentially put in postal codes from other countries in here too, but I can't say 100% what would and would not work as I've not tested it.
If you look at the bottom of the console screen, you should see something like this:
Figure 2: Observing the postal code information
This is the URL you need to call from your favourite programming language to get that data back into your application. In particular, if you look closely, you'll see that your postal code text has been plugged in there, so all you need to do to create an application that can search on any given code, is to remove that section of the string and replace it with a new bit of text.
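Stripped of the C# plumbing, that request is just URL construction plus JSON parsing. A rough Python sketch of the same idea (the YQL service has since been retired, so the response below is a canned sample shaped like the console output; the endpoint and the coordinate values are illustrative assumptions, not live data):

```python
import json
from urllib.parse import quote

def build_yql_url(postcode):
    # Swap the searched postcode into the YQL statement, as described above.
    yql = 'select * from geo.placefinder where text="%s"' % postcode
    return ("https://query.yahooapis.com/v1/public/yql?q="
            + quote(yql) + "&format=json")

url = build_yql_url("DH1 1HR")

# Canned stand-in for the JSON the console used to return.
sample = '''{"query": {"count": 1, "results": {"Result": {
    "latitude": "54.7761", "longitude": "-1.5733",
    "city": "Durham", "country": "United Kingdom"}}}}'''

result = json.loads(sample)["query"]["results"]["Result"]
print(result["latitude"], result["longitude"])
```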
With that in mind, fire up Visual Studio, and create a new WPF application project. Use the following base XAML code for your main window
<Window x:Class="yqlpostcodelookup.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Height="600" Width="800">
    <Grid>
        <Grid.ColumnDefinitions>
            <ColumnDefinition Width=".25*"/>
            <ColumnDefinition Width=".75*"/>
        </Grid.ColumnDefinitions>
    </Grid>
</Window>
This will create a base window of 800x600 with a left hand bar that is a quarter of the width of the window, and a main right hand pane that's three quarters of the width.
To continue, we're now going to add a search box, a button to trigger the search, and some controls to display our status to the left hand bar. Add the following XAML just after the end of the grid column definitions in the previous XAML code, to define our left hand bar.
<StackPanel Grid.Column="0">
    <TextBlock Text="Postcode to search for:" Margin="10"/>
    <TextBox x:Name="txtCodeToSearch" Margin="10,0"/>
    <Button x:Name="btnDoSearch" Margin="10">Search</Button>
    <TextBlock Text="Results:" Margin="10,50,10,10" FontWeight="Bold"/>
    <TextBlock Text="Latitude" Margin="10,10,10,0"/>
    <TextBlock x:Name="txtLatVal"/>
    <TextBlock Text="Longitude" Margin="10,10,10,0"/>
    <TextBlock x:Name="txtLongVal"/>
    <TextBlock Text="City" Margin="10,10,10,0"/>
    <TextBlock x:Name="txtCityVal"/>
    <TextBlock Text="County" Margin="10,10,10,0"/>
    <TextBlock x:Name="txtCountyVal"/>
    <TextBlock Text="Country" Margin="10,10,10,0"/>
    <TextBlock x:Name="txtCountryVal"/>
</StackPanel>
By now, your form should look something like the following.
Figure 3: The form is progressing
At this point, we'll now switch to code view and start adding some code to query the YQL service.
Make sure your code behind looks similar to the following:
using System.IO;
using System.Net;
using System.Windows;

namespace yqlpostcodelookup
{
    public partial class MainWindow : Window
    {
        public MainWindow()
        {
            InitializeComponent();
            InitializeResultDefaults();
        }

        private void InitializeResultDefaults()
        {
            txtCityVal.Text = "Unknown";
            txtCountryVal.Text = "Unknown";
            txtCountyVal.Text = "Unknown";
            txtLatVal.Text = "0";
            txtLongVal.Text = "0";
        }

        private void BtnDoSearchClick(object sender, RoutedEventArgs e)
        {
            if (string.IsNullOrWhiteSpace(txtCodeToSearch.Text))
            {
                MessageBox.Show("Please enter a postal code to search for");
                return;
            }

            string jsonString = PerformYqlSeqrch(txtCodeToSearch.Text);
        }

        private string PerformYqlSeqrch(string postalCode)
        {
            string urlString = @"https://query.yahooapis.com/v1/public/yql?q=select * from geo.placefinder where text=""{0}""&format=json&callback=";
            string resultJson = string.Empty;

            try
            {
                WebClient webClient = new WebClient();
                Stream dataStream = webClient.OpenRead(string.Format(urlString, postalCode));
                StreamReader reader = new StreamReader(dataStream);
                resultJson = reader.ReadToEnd();
                if (dataStream != null)
                    dataStream.Close();
                reader.Close();
            }
            catch (WebException)
            {
                // Result is an empty JSON object
                resultJson = "{}";
            }

            return resultJson;
        }
    }
}
And, change your main window XAML for the search button to the following so that it fires the search correctly.
<Button x:Name="btnDoSearch" Click="BtnDoSearchClick">
    Search
</Button>
You'll see from the preceding code that we've copied the string from the bottom of the YQL console and changed it back to a more readable string, making the post code section a parameter that can be filled in using string.Format. If we now run the application using F5, enter a postal code (for example, DH1 1HR) into the search box, and then click search, our app will fire off a request to the YQL service and come back with some JSON. At this point, however, we've not yet wired anything up in the UI.
Because there are no readymade libraries for YQL, we'll need to process the JSON returned ourselves, and to make that easy, we'll use the excellent Newtonsoft JSON library available on NuGet. Use NuGet to install the "Json.Net" NuGet Package into our WPF project.
Figure 4: Installing the "Json.Net" NuGet Package
Newtonsoft JSON makes parsing complex JSON objects child's play. By using the .NET dynamic object services, you can easily take an unknown JSON string and parse it to gain useful information.
Add the following method to your WPF form class to take the JSON data we retrieve from YQL and turn it into an object we can work with.
private LocationInfo ParseYqlJsonData(string jsonData)
{
    dynamic task = JObject.Parse(jsonData);
    var count = task.query.count.Value;
    int objectCount = 0;

    if (count != null)
        objectCount = Convert.ToInt32(count);

    if (objectCount == 0)
        return new LocationInfo();

    JObject resultObject = objectCount > 1
        ? task.query.results.Result[0]
        : task.query.results.Result;

    if (resultObject == null)
    {
        return new LocationInfo();
    }

    LocationInfo result = new LocationInfo();

    var latitude = resultObject["latitude"].ToString();
    if (latitude != null)
    {
        result.Latitude = Convert.ToDouble(latitude);
    }

    var longitude = resultObject["longitude"].ToString();
    if (longitude != null)
    {
        result.Longitude = Convert.ToDouble(longitude);
    }

    var city = resultObject["city"].ToString();
    if (city != null)
    {
        result.City = city;
    }

    var county = resultObject["county"].ToString();
    if (county != null)
    {
        result.County = county;
    }

    var country = resultObject["country"].ToString();
    if (country != null)
    {
        result.Country = country;
    }

    return result;
}
You'll also need to add a new class called 'LocationInfo' that has the following definition:
namespace yqlpostcodelookup
{
    class LocationInfo
    {
        public double Latitude { get; set; }
        public double Longitude { get; set; }
        public string City { get; set; }
        public string County { get; set; }
        public string Country { get; set; }

        public LocationInfo()
        {
            Latitude = 0;
            Longitude = 0;
            City = "Unknown";
            Country = "Unknown";
            County = "Unknown";
        }
    }
}
With these two changes in place, and the following code added to the search button click handler
string jsonString = PerformYqlSeqrch(txtCodeToSearch.Text);
LocationInfo result = ParseYqlJsonData(jsonString);

txtLatVal.Text = result.Latitude.ToString(CultureInfo.InvariantCulture);
txtLongVal.Text = result.Longitude.ToString(CultureInfo.InvariantCulture);
txtCityVal.Text = result.City;
txtCountyVal.Text = result.County;
txtCountryVal.Text = result.Country;
you should now find that you can enter a postal code into the search box, click search, and have the values filled in for you.
Just before anyone starts to write a comment about doing any of this using MVVM, however, please remember that this is just an example of using YQL. Please don't write your own UI update code the same way as I have done here. My code is deliberately simple so you can focus on the important bits; there are much better ways of doing this, especially where getting the data from the YQL URL is concerned.
The last bit we need to look at is adding a map to show the searched location. We'll do this by using a NuGet package called "Great Maps" that will add support for the most common arial and standard street maps available for public consumption on the web. Fire up NuGet and search for and install the package named "GMap.NET.Presentation".
Figure 5: Installing "GMap.NET.Presentation"
Then, we'll be ready to add a map to the right hand side of our UI. To get the map into our XAML, we need to add a new namespace to our window. You can do this by adding:
xmlns:gMap="clr-namespace:GMap.NET.WindowsPresentation; assembly=GMap.NET.WindowsPresentation"
to the XAML window declaration, making the full declaration look something like the following:
<Window x:
Once we've declared the namespace, we then can add some code to the actual XAML markup where we want our map to appear. Add the following XAML just after the closing 'StackPanel' used to create the left hand information section.
<Grid Grid. <gMap:GMapControl </Grid>
This will set up our default map holder and add the appropriate objects to our markup.
Once we have the XAML side of things working, we then need to add a small bootstrap function to our code behind. We do this by using the following method:
private void SetupGmaps() { mainMap.MapProvider = GMapProviders.BingSatelliteMap; mainMap.Position = new PointLatLng(0, 0); mainMap.MinZoom = 1; mainMap.MaxZoom = 25; mainMap.Zoom = 16.0; }
If you press F5 at this point and run your app, you should see your app start up with a blank grey map. (This is normal as your centred at 0,0, which is the Equator.)
Figure 6: The map shows when you run your app
You might (depending on your project configuration) see the following, however:
Figure 7: Your project could, instead, look like this
If you do, you need to set your application to use the legacy version 2 .NET runtime policy. Find the 'Startup' section in your app.config file (it'll look like the following):
<startup> <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" /> </startup>
and change it so that it has the following attribute:
<startup useLegacyV2RuntimeActivationPolicy="true"> <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" /> </startup>
When you now run your application, everything should work as expected.
The final thing we now need to do is to add a line into the search button handler that changes our map's centre point to the co-ordinates received from the YQL JSON data. We can do this by adding the following line:
mainMap.Position = new PointLatLng(result.Longitude, result.Latitude);
just after the code that sets the various strings in our information display. If you now enter a post code into the search box—for example, DH1 1HR—and then click search, your map should update to show the location of that post code.
And that's it. You now have an app that can pretty much search for anything that the YQL system can find and perform a reverse geocode on, and then display a map of that location. There's much more you can do with the app as it stands. For example, you could implement manual scrolling of the map, or a drop-down enabling you to change the map type. You could even try using some of the many geospatial utility libraries that exist on NuGet to provide co-ordinate conversion and lots of other tricks.
If you have any ideas for topics you'd like to see covered in this column, please add a comment below or come and find me lurking around in the Lidnug (Linked .NET users group) in Linked-In and ask me directly. If I can help, I'm more than happy to oblige.
Sr. DeveloperPosted by Mel Pama on 08/19/2015 08:20pm
You should make your images with a little higher resolution so we can read the code. I'm on a 22 inch screen and can barely see the words let alone read them.
Re:Sr. DeveloperPosted by on 08/19/2015 09:19pm
I've asked the author to look into this. We have to balance the file size of images with readability. Sometimes we get the balance off a little bit!Reply | https://www.codeguru.com/columns/dotnet/postcode-lookup-with-yql-and-great-maps.html | CC-MAIN-2019-26 | refinedweb | 2,230 | 62.27 |
CodePlexProject Hosting for Open Source Software
I have been unable to successfully install blogengine.net. I am using DiscountASPNet hosting, and they will not help since it's a third party. I've done everything that I've researched to fix the problem. I am a developer and this should
have been easy!
I keep getting this error:
Compiler Error Message: CS0246: The type or namespace name 'BlogEngine' could not be found (are you missing a using directive or an assembly reference?)
I have this app in a folder of my main app. I have created it as a separate app, and it runs in vs2005 on my local by itself. I've changed web.config files many times to make this work and still no luck. I'm sooo frustrated.
Any advice?
Thanks,
Mari
I use discountasp, so that's not the problem.
To me, it sounds like IIS is looking for your DLLs but can't find them. Since you said you put your files in a subfolder of the root - did you remember to make that subfolder an application?
Are you sure you want to delete this post? You will not be able to recover it later.
Are you sure you want to delete this thread? You will not be able to recover it later. | http://blogengine.codeplex.com/discussions/205263 | CC-MAIN-2017-17 | refinedweb | 218 | 84.37 |
I am trying to write a code where I state the height and radius and then calculates the surface area and volume of a code.
These are the formulas:
Surface = sqrt(r^2 + s^2)
Surface area = pi * r * s + pi * r^2
Volume = (1/3) * pi * r^2 * h
I think I wrote the class correctly, but I get a few errors that say the I am not using the instance variables height and radius. Then when I try to state a variable in the tester, it says that the constructor is undefined.
Here are the classes:
Code :
public class IceCreamCone { private double height; // Says value height is not used. private double radius; // Says value radius is not used. private double area; private double volume; // Receives parameters; calculates surface area and volume. public void IceCreamCone(double h, double r) { height = h; radius = r; double s = Math.sqrt((r * r) + (h * h)); area = (Math.PI) * (r) * (s) * + (Math.PI) * (r * r); volume = (1 / 3) * (Math.PI) * (r * r) * (h); } // Returns calculated area. public double getSurfaceArea() { return area; } // Returns calculated volume. public double getVolume() { return volume; } }
Code :
public class IceCreamtester { public static void main(String[] args) { IceCreamCone jean = new IceCreamCone(2,1); //says constructor is undefined. } }
Thanks! | http://www.javaprogrammingforums.com/%20whats-wrong-my-code/18752-why-cant-i-state-values-constructor-printingthethread.html | CC-MAIN-2016-22 | refinedweb | 206 | 65.01 |
OSM defines itself as a geographic database where tagging is free and unrestricted. One of the motto is "tag what you see on the ground". But many tags present in the database are not visible from the ground and not related to geodata like phone numbers, faxes, websites, operators/brands, opening hours, etc.
My question is to know if OSM is just an open database accepting everything, including yellow page data ?
If not, what kind of information should be accepted or rejected by the community ? Is that not the role of the OSM Foundation to fix the limits ? Do we have a document explaining what OSM is not, like Wikipedia ?
asked
15 Oct '10, 12:31
Pieren
9.7k●20●75●157
accept rate:
15%
edited
29 Aug '13, 11:01
I had rephrased this as a proper question. Can you NOT revert such changes or rephrase this as an actual question? It helps in keyword searching, better indexing by search engines and general readability. Thank you.
The average mapper thinks of the map as containing the same type of information found on their navigation device, such as POIs. Taking the concept a bit further, they envision the additional information as useful for offline searches with a portable smartphone: Give me all the X within Y distance of my location. After expanding the concept, the query becomes "give me all the X within Y distance of my location which are open now". Because the data reside in the same database, results can be nearly instant, even when the smartphone cannot connect to the internet. As the search result pops up, the end user can click on the phone number to call, or click on the web site for further information.
Mappers who collect information often see phone numbers and even web sites posted on the storefront windows of some POIs. They then enter this into the OSM database - which is a single convenient point of entry. POIs and businesses could be placed in a separate database, then combined later into the smartphone device or renderer for ease of use. This would require developing the separate database, possibly requiring a separate step of integrating POI edits into all editors to recognize separate databases, with the only end result to reduce the size of the OSM database by a small amount.
OSM has already set a precedent of including imaginary data with the inclusion of administrative boundaries. While I don't know of a list of what OSM is not, the verifiability page is a step toward filtering out completely unrelated information.
answered
15 Oct '10, 15:21
Mike N
2.7k●2●16●52
accept rate:
17%
So if the list of ingredients of my preferred pizza from my well known pizzeria is "verifiable" (listed in the menu), I can enter it in OSM ? If the fuel oil price is "verifiable" (large display in front of my gas station), I can enter it in OSM ?
The fuel price is "verifiable" but changes so fast that it's not worth recording in OpenStreetMap. The relevant time scale here is the time between planet dump (once a week) plus the time for a planet import (another week). So anything that changes faster then once every couple of weeks is definitely NOT worth having in OpenStreetMap (unless it's periodic).
You're right that if restaurants published their menus as XML exports, that someone would want to upload the XML menus to the map. Currently, the thing that stops people is that the amount of effort to collect and enter menu information versus the possible benefit: very little; if the web site is listed, people should just go there for more information.
Based of what we often read about what OSM is not, here are some points:
(1) airspace mapping thread in OSM mailing list
(2) historical data thread in OSM mailing list
answered
19 Oct '10, 23:47
edited
08 Jun '11, 18:12
A reasonable start, but I have to say it clearly once again: we don't map routes between airports because there are no fixed routes (at least in europe). The only thing where something like routes or roads exist is the final approach to the runway.
Aerial routes are not less "fixed" than sea routes. But many sea routes are present in OSM (and not only for ferries). So why what is accepted for the sea is not for the air ?
it is accepted for sea routes because of 2 reasons:
1. The sea is empty in our renderings and editors. It doesn't distract mappers. Instead of drawing monsters we draw ferry routes to fill the voids.
2. Most maps do it. (Probably because of 1.)
Adding to dieterdreist's answer, I'm quite sure that maps show sea routes also because they have been a very regular route for many many people for so many centuries.
Sea routes also often connect roads, which air routes never do.
also, sea routes tend not to connect every port with every other port, rather the ro/ro or passenger routes as marked on the map tend to be very static and just link to a single other port (e.g. Sconser/Raasay), or at most a handful (e.g. Portsmouth)
The philosophical ones can also be fun.
Mike N has just given some great and practical example of what the "extra" data can do.
Now if you are thinking towards "fixed limits" you would either explain next why missing limits are somethings negative or dangerous. Or you could give practical examples for the benefits of such limits.
Maybe you are concerned about server capacity or finances? In that case, I would suggest a simple appeal to those who love to tag lots: "If you enter lots of data then please donate lots of money for more servers."
I love the fact that such a young system can still function without many rules. In the long run, where many people interact there arises a need for regulation. But maybe in a context like a lovely world-map modern technology can allow for much of our personality quirks and our temperaments.
Just let us know more please, what your concerns are. Or maybe you just enjoy rules? I guess it is beneficial for a community to have some of each flavour. :)
And as a beginner, I actually like your question about a document. This is a precise question that one of the veterans should please answer with a yes and a link or just a no.
answered
16 Oct '10, 03:16
Screentoosmall
66●2●2●8
accept rate:
0%
edited
16 Oct '10, 03:18
Answering just your final question "Do we have a document explaining what OSM is not?"...
As far as I'm aware the closest page we have to this already, is the "Good Practice" page which links to two or three other pages on specific points. I'd like to see that set of pages expanded upon. It's not specifically dealing with the "What OSM is not" kind of limits. I suppose we could call that the "Bad Practice" list, but I think the same page should accommodate both (rename it perhaps) Documenting in terms of things which are not included in the database could be helpful yes.
answered
09 Jun '11, 11:32
Harry Wood
9.3k●24●86●126
accept rate:
13%
Wikipedia is has over 100 million readers and covers many controversial subjects. They adopted the policy to protect their brand. They do however have a close relationship with Wiktionary and the other Wikimedia projects. Some of those projects, like Commons, have very low standards as to what they accept.
So perhaps it's more appropriate to compare OSM with Wikimedia. We could place all the phone numbers into a separate database called 'OpenPhoneBook', or we could just use the existing infrastructure.
In some ways we are however more like Perl (or any programming language) :
Some people want to limit the scope of OSM to the data itself and exclude applications. The first problem is that there exists no approved standard for tagging anything and having reference implementations will fill the gap. The second problem is that many contributors wait for the arrival of applications that support certain tags before they contribute that data. Furthermore many contributors insist on open source applications.
answered
01 Nov '10, 00:32
Nic Roets
583●9●12●19
accept rate:
6%
edited
06 Dec '10, 05:33
Once you sign in you will be able to subscribe for any updates here
Answers
Answers and Comments
Markdown Basics
learn more about Markdown
This is the support site for OpenStreetMap.
Question tags:
import ×174
poi ×162
bestpractice ×32
community ×25
yellowpage ×1
question asked: 15 Oct '10, 12:31
question was seen: 12,161 times
last updated: 29 Aug '13, 11:10
How can I import POIs from a PostGIS database into OpenStreetMap?
Should I use POIs or areas to identify shops?
[closed] What will happen to forum.openstreetmap.org?
What should I do about vandalism?
How to deal with copyright violations in our OSM data?
What is a Mapping Party?
How do I import map data from a .dwg file to OpenStreetMap?
Uploading a small bit of new TIGER data (and 3 other unrelated questions)
How do I agree to the Contributor Terms?
What about forum.openstreetmap.org?
First time here? Check out the FAQ! | https://help.openstreetmap.org/questions/1194/what-osm-is-not | CC-MAIN-2020-16 | refinedweb | 1,577 | 70.84 |
Apache::AuthCookie - Perl Authentication and Authorization via cookies
version 3.24
Make sure your mod_perl is at least 1.24, with StackedHandlers, MethodHandlers, Authen, and Authz compiled in.
# In httpd.conf or .htaccess: PerlModule Sample::Apache::AuthCookieHandler PerlSetVar WhatEverPath / PerlSetVar WhatEverLoginScript /login.pl # use to alter how "require" directives are matched. Can be "Any" or "All". # If its "Any", then you must only match Any of the "require" directives. If # its "All", then you must match All of the require directives. # # Default: All PerlSetVar WhatEverSatisfy Any # The following line is optional - it allows you to set the domain # scope of your cookie. Default is the current domain. PerlSetVar WhatEverDomain .yourdomain.com # Use this to only send over a secure connection PerlSetVar WhatEverSecure 1 # Use this if you want user session cookies to expire if the user # doesn't request a auth-required or recognize_user page for some # time period. If set, a new cookie (with updated expire time) # is set on every request. PerlSetVar WhatEverSessionTimeout +30m # to enable the HttpOnly cookie property, use HttpOnly. # this is an MS extension. See # PerlSetVar WhatEverHttpOnly 1 # Usually documents are uncached - turn off here PerlSetVar WhatEverCache 1 # Use this to make your cookies persistent (+2 hours here) PerlSetVar WhatEverExpires +2h # Use to make AuthCookie send a P3P header with the cookie # see for details about what the value # of this should be PerlSetVar WhatEverP3P "CP=\"...\"" # These documents require user to be logged in. <Location /protected> AuthType Sample::Apache::AuthCookieHandler AuthName WhatEver PerlAuthenHandler Sample::Apache::AuthCookieHandler->authenticate PerlAuthzHandler Sample::Apache::AuthCookieHandler->authorize require valid-user </Location> # These documents don't require logging in, but allow it. <FilesMatch "\.ok$"> AuthType Sample::Apache::AuthCookieHandler AuthName WhatEver PerlFixupHandler Sample::Apache::AuthCookieHandler->recognize_user </FilesMatch> # This is the action of the login.pl script above. 
<Files LOGIN> AuthType Sample::Apache::AuthCookieHandler AuthName WhatEver SetHandler perl-script PerlHandler Sample::Apache::AuthCookieHandler->login </Files>
Apache::AuthCookie allows you to intercept a user's first unauthenticated access to a protected document. The user will be presented with a custom form where they can enter authentication credentials. The credentials are posted to the server where AuthCookie verifies them and returns module that inherits from AuthCookie. Your module is a class which implements two methods:
authen_cred()
Verify the user-supplied credentials and return a session key. The session key can be any string - often you'll use some string containing username, timeout info, and any other information you need to determine access to documents, and append a one-way hash of those values together with some secret key.
authen_ses_key()
Verify the session key (previously generated by
authen_cred(), possibly during a previous request) and return the user ID. This user ID will be fed to
$r->connection->user() to set Apache's idea of who's logged in.
By using AuthCookie versus Apache's built-in AuthBasic you can design your own authentication system. There are several benefits.
WhatEver, you can put the command
PerlSetVar WhatEverDomain .yourhost.com
into your server setup file and your access cookies will span all hosts ending in
.yourhost.com.
CookieNamedirective. For instance, if your AuthName is
WhatEver, you can put the command
PerlSetVar WhatEverCookieName MyCustomName
into your server setup file and your cookies for this AuthCookie realm will be named MyCustomName. Default is AuthType_AuthName.
requiredirectives. If you want authentication to succeed if ANY
requiredirectives are met, use the
Satisfydirective. For instance, if your AuthName is
WhatEver, you can put the command
PerlSetVar WhatEverSatisfy Any
into your server startup file and authentication for this realm will succeed if ANY of the
require directives are met. ) | FORBIDDEN. LOGIN URL, which calls | | | credentials |<------------| AuthCookie->login(). | | \ to / +---------------------------------+ | \authen_cred/ | \ function/ | \ / | \ / | \ / +------------------------------------+ | \ / return | Authen cred returns \ ^ | successive requests | | / session \ | +----------------------------+ | / key to \ return | | +-| authen_ses_key|------------+ V \ / False +-----------------------------------+ \ / | Tell Apache to set Expires). The only requirement is that the authen_ses_key function that you create must be able to determine if this session_key is valid and map it back to the originally authenticated user ID.
Apache::AuthCookie has several methods you should know about. Here is the documentation for each. =)
This method is one you'll use in a server config file (httpd.conf, .htaccess, ...) as a PerlAuthenHandler. If the user provided a session key in a cookie, the
authen_ses_key() method will get called to check whether the key is valid. If not, or if there is no key provided, we redirect to the login form.
This will step through the
require directives you've given for protected documents and make sure the user passes muster. The
require valid-user and
require user joey-jojo directives are handled for you. You can implement custom directives, such as
require species hamster, by defining a method called
species() in your subclass, which will then be called. The method will be called as
$r->species($r, $args), where
$args is everything on your
require line after the word
species. The method should return OK on success and FORBIDDEN on failure.
You must define this method yourself in your subclass of
Apache::AuthCookie. Its job is to create the session key that will be preserved in the user's cookie. The arguments passed to it are:
sub authen_cred ($$\@) { my $self = shift; # Package name (same as AuthName directive) my $r = shift; # Apache request object my @cred = @_; # Credentials from login form ...blah blah blah, create a session key... return $session_key; }
The only limitation on the session key is that you should be able to look at it later and determine the user's username. You are responsible for implementing your own session key format. A typical format is to make a string that contains the username, an expiration time, whatever else you need, and an MD5 hash of all that data together with a secret key. The hash will ensure that the user doesn't tamper with the session key. More info in the Eagle book.
You must define this method yourself in your subclass of Apache::AuthCookie. Its job is to look at a session key and determine whether it is valid. If so, it returns the username of the authenticated user.
sub authen_ses_key ($$$) { my ($self, $r, $session_key) = @_; ...blah blah blah, check whether $session_key is valid... return $ok ? $username : undef; }
Optionally, return an array of 2 or more items that will be passed to method custom_errors. It is the responsibility of this method to return the correct response to the main Apache module.
Note: this interface is experimental.
This method handles the server response when you wish to access the Apache custom_response method. Any suitable response can be used. this is particularly useful when implementing 'by directory' access control using the user authentication information. i.e.
/restricted /one user is allowed access here /two not here /three AND here
The authen_ses_key method would return a normal response when the user attempts to access 'one' or 'three' but return (NOT_FOUND, 'File not found') if an attempt was made to access subdirectory 'two'. Or, in the case of expired credentials, (AUTH_REQUIRED,'Your session has timed out, you must login again').
example 'custom_errors' sub custom_errors { my ($self,$r,$CODE,$msg) = @_; # return custom message else use the server's standard message $r->custom_response($CODE, $msg) if $msg; return($CODE); } where CODE is a valid code from Apache::Constants
This method handles the submission of the login form. It will call the
authen_cred() method, passing it
$r and all the submitted data with names like
"credential_#", where # is a number. These will be passed in a simple array, so the prototype is
$self->authen_cred($r, @credentials). After calling
authen_cred(), we set the user's cookie and redirect to the URL contained in the
"destination" submitted form field.
This method is responsible for displaying the login form. The default implementation will make an internal redirect and display the URL you specified with the
PerlSetVar WhatEverLoginScript configuration directive. You can overwrite this method to provide your own mechanism.
This method returns the HTTP status code that will be returned with the login form response. The default behaviour is to return FORBIDDEN, except for some known browsers which ignore HTML content for FORBIDDEN responses (e.g.: SymbianOS). You can override this method to return custom codes.
Note that FORBIDDEN is the most correct code to return as the given request was not authorized to view the requested page. You should only change this if FORBIDDEN does not work.
This is simply a convenience method that unsets the session key for you. You can call it in your logout scripts. Usually this looks like
$r->auth_type->logout($r);.
By default this method simply sends out the session key you give it. If you need to change the default behavior (perhaps to update a timestamp in the key) you can override this method.
If the user has provided a valid session key but the document isn't protected, this method will set
$r->connection->user anyway. Use it as a PerlFixupHandler, unless you have a better idea.
This method will return the current session key, if any. This can be handy inside a method that implements a
require directive check (like the
species method discussed above) if you put any extra information like clearances or whatever into the session key.
This method returns a modified version of the destination parameter before embedding it into the response header. Per default it escapes CR, LF and TAB characters of the uri to avoid certain types of security attacks. You can override it to more limit the allowed destinations, e.g., only allow relative uris, only special hosts or only limited set of characters.
For an example of how to use Apache::AuthCookie, you may want to check out the test suite, which runs AuthCookie through a few of its paces. The documents are located in t/eg/, and you may want to peruse t/real.t to see the generated httpd.conf file (at the bottom of real.t) and check out what requests it's making of the server (at the top of real.t).
You will need to create a login script (called login.pl above) that generates an HTML form for the user to fill out. You might generate the page using an Apache::Registry script, or an HTML::Mason component, or perhaps even using a static HTML page. It's usually useful to generate it dynamically so that you can define the 'destination' field correctly (see below).
The following fields must be present in the form:
$r->prev->uri. See the login.pl script in t/eg/.
In addition, you might want your login page to be able to tell why the user is being asked to log in. In other words, if the user sent bad credentials, then it might be useful to display an error message saying that the given username or password are invalid. Also, it might be useful to determine the difference between a user that sent an invalid auth cookie, and a user that sent no auth cookie at all. To cope with these situations, AuthCookie will set
$r->subprocess_env('AuthCookieReason') to one of the following values.
The user presented no cookie at all. Typically this means the user is trying to log in for the first time.
The cookie the user presented is invalid. Typically this means that the user is not allowed access to the given page.
The user tried to log in, but the credentials that were passed are invalid.
You can examine this value in your login form by examining
$r->prev->subprocess_env('AuthCookieReason') (because it's a sub-request).
Of course, if you want to give more specific information about why access failed when a cookie is present, your
authen_ses_key() method can set arbitrary entries in
$r->subprocess_env.
If you want to let users log themselves out (something that can't be done using Basic Auth), you need to create a logout script. For an example, see t/htdocs/docs/logout.pl. Logout scripts may want to take advantage of AuthCookie's
logout() method, which will set the proper cookie headers in order to clear the user's cookie. This usually looks like
$r->auth_type->logout($r);.
Note that if you don't necessarily trust your users, you can't count on cookie deletion for logging out. You'll have to expire some server-side login information too. AuthCookie doesn't do this for you, you have to handle it yourself.
Unlike the sample AuthCookieHandler, you have you verify the user's login and password in
authen_cred(), then you do something like:
my $date = localtime; my $ses_key = MD5->hexhash(join(';', $date, $PID, $PAC));
save
$ses_key along.
Originally written by Eric Bartley <bartley@purdue.edu>
versions 2.x were written by Ken Williams <ken@forum.swarthmore.edu>
perl(1), mod_perl(1), Apache(1).. | http://search.cpan.org/dist/Apache-AuthCookie/lib/Apache/AuthCookie.pm | CC-MAIN-2016-26 | refinedweb | 2,121 | 55.95 |
Getting Started with Django and Heroku
I’ve written a few Django applications recently that were made up of basically the same components: a web server, an asynchronous worker, a PostgreSQL database, Amazon S3 storage, Mailgun email delivery, and Stripe credit card billing.
After setting up a Django application once with the above components, it’s extremely easy to do it again. Here, I just wanted to document the process in the hope that it might help some of you all, my billions of readers.
Start a Django Project
Heroku already has some instructions for getting started with a Django application and a virtual environment, but I just wanted to go a step further with some of the additional add-ons you’ll probably want.
Additional Requirements
The standard requirements.txt from the Heroku template is as follows:
Django==1.7
dj-database-url==0.3.0
dj-static==0.0.6
gunicorn==19.1.1
psycopg2==2.5.1
static==0.4
wsgiref==0.1.2
I additionally include the following packages as a base:
django-storages==1.1.8
boto>=2.31.0,<3.0.0
stripe==1.19.0
requests==2.4.1
python-memcached==1.53
celery==2.5.5
- Django Storages will simplify the process of transferring static files to Amazon S3
- Boto is a handy tool to dynamically upload files to S3 which is useful if your end users will end up uploading any sort of files to your web application
- Stripe is pretty much the most developer friendly credit card billing service
- Requests is probably the best high level library for making server-side network calls
- Python-memcached is useful as an easy backend for Django’s caching mechanism. Free add-ons can be found with Heroku to support a minimal memory requirement
- Celery is used to consume tasks that are sent via your message broker (I’m using RabbitMQ). I ended up using Celery 2.5 instead of Celery 3.x because of some wacky configuration problems that I eventually gave up trying to solve (don’t judge, these are weekend/train-home projects)
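None of these packages do anything until they’re wired into Django’s settings. Here’s a rough sketch of what that wiring might look like. Every value below is an assumption about your setup, and all of the environment variable names except CLOUDAMQP_URL (which the CloudAMQP add-on sets for you) are illustrative:

```python
# settings.py (excerpt) -- a minimal sketch; credentials are read from
# environment variables, and the variable names here are illustrative.
import os

# django-storages + boto: keep uploaded and static files on S3
DEFAULT_FILE_STORAGE = "storages.backends.s3boto.S3BotoStorage"
AWS_ACCESS_KEY_ID = os.environ.get("AWS_ACCESS_KEY_ID", "")
AWS_SECRET_ACCESS_KEY = os.environ.get("AWS_SECRET_ACCESS_KEY", "")
AWS_STORAGE_BUCKET_NAME = os.environ.get("AWS_STORAGE_BUCKET_NAME", "my-bucket")

# python-memcached as the backend for Django's cache framework
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.memcached.MemcachedCache",
        "LOCATION": os.environ.get("MEMCACHE_SERVERS", "127.0.0.1:11211"),
    }
}

# Celery: point the worker at the RabbitMQ broker provisioned by CloudAMQP
BROKER_URL = os.environ.get("CLOUDAMQP_URL", "amqp://guest:guest@localhost//")

# Stripe: the secret key should never be committed to the repository
STRIPE_SECRET_KEY = os.environ.get("STRIPE_SECRET_KEY", "")
```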
Download a CSS Template, Convert it to a Django Template
Good HTML and CSS take a long time to write, and if you’re primarily a backend developer, you’re probably not very good at them either. I like to just buy CSS templates from a reputable site and not worry about it. I generally start at.
The next problem is that you’ve been handed raw HTML, and it takes a while to convert everything into a Django template. This will take time, but I have a small script that handles 99% of the cases, converting static image/JavaScript/CSS URLs into Django static URLs:
import sys

# Path prefixes that mark a static asset reference worth rewriting.
strs_to_replace = [
    "images/",
    "img/",
    "css/",
    "js/",
]

end_char = "\""

def clean_string(entire_string, str_to_replace):
    """Wrap every quoted reference starting with str_to_replace in a static tag."""
    start_index = 0
    new_str = ""
    while start_index < len(entire_string):
        try:
            next_instance_index = entire_string.index(str_to_replace, start_index)
        except ValueError:
            # No more occurrences; keep the rest of the document as-is.
            new_str += entire_string[start_index:]
            break
        new_str += entire_string[start_index: next_instance_index]
        # The asset path runs up to the closing double quote.
        end_index = entire_string.index(end_char, next_instance_index)
        str_to_modify = entire_string[next_instance_index: end_index]
        str_to_modify = "%s%s%s" % ("{% static '", str_to_modify, "' %}")
        new_str += str_to_modify
        start_index = end_index
    return new_str

def make_static(target_file):
    with open(target_file, "rb") as read_file:
        new_html = read_file.read()
    for str_to_replace in strs_to_replace:
        new_html = clean_string(new_html, str_to_replace)
    # Write to a copy so the original template is left untouched.
    with open("updated-%s" % target_file, "w+") as write_file:
        write_file.write(new_html)

if __name__ == "__main__":
    target_file = sys.argv[1]
    make_static(target_file)
Run the above script against a file, and Django “{% static %}” tags will be placed in the appropriate locations. Just ensure that the top-level folders for images, javascript, css, etc. are placed in your Django static directory.
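For comparison, the same prefix-to-`{% static %}` rewrite can be expressed with a regular expression. This is a sketch of an alternative approach, not the author’s script, and `staticify` is a name introduced here for illustration:

```python
import re

def staticify(html, prefixes=("images/", "img/", "css/", "js/")):
    """Wrap any double-quoted asset path starting with a known prefix
    in a Django {% static %} tag."""
    pattern = re.compile(r'(?:%s)[^"]*' % "|".join(map(re.escape, prefixes)))
    return pattern.sub(lambda m: "{%% static '%s' %%}" % m.group(0), html)

print(staticify('<img src="images/logo.png">'))
# <img src="{% static 'images/logo.png' %}">
```

The regex stops at the closing double quote, mirroring the `end_char` logic in the original script.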
Then you’ll also need to add this to the top of the file:
{% load staticfiles %}
I generally just pick one page to work with, and from there use Backbone.js and underscore templates for additional rendering rather than go the standard server-side templating route espoused by Django.
Get your Django Application Working Locally
Get your app working locally. From your root directory where you started the project you should be able to:
foreman start web
And load pages from “”
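For `foreman start web` to work, there needs to be a Procfile at the project root defining the processes. A minimal sketch (the process names and module paths here are illustrative, not taken from the post):

```
web: gunicorn yourproject.wsgi
worker: python manage.py celeryd
```

The `worker` line assumes the django-celery style of running Celery 2.5; adjust to however your broker consumer is launched.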
Initialize a Git Repository
Create a repository on Github’s website and follow the directions to associate that repository with your new code. That can be done with:
git init
git add .
git commit -m "base commit"
git remote add origin{{ username }}/{{ repo-name }}
git push origin master
The rest of the tutorial assumes you have a Stripe Account, an Amazon S3 Account, and a MailGun Account. So if you don’t have those, just go sign up, it’s free.
Adjust Heroku Settings
You’ll end up running these commands:
heroku login
heroku create
git push heroku master
heroku ps:scale web=1
heroku apps:rename cool-new-app-name
The last step is not necessary, but eventually you’ll want to rename your app to something meaningful.
Get the essential add-ons
For the project requirements I listed initially, the Heroku add-ons ended up being Cloud AMQP and PostgreSQL database. So:
heroku addons:add cloudamqp
heroku addons:add heroku-postgresql
For the AMQP service, there should be a broker URL that gets output. Add that to your Heroku environment variables (next step).
Set environment variables
Don’t make the same mistake I did and leave any of your API keys in a file that gets pushed to a public git repository. It will get found by some nerds eating hot pockets in a basement, and fraudulent charges will be made, and you’ll have to convince Amazon support that it was not you that spun up 200 virtual machines in a matter of minutes across every possible region.
So keep your environment variables stored in ~/.bash_profile (Mac) or ~/.bashrc (Linux). For dev variables, such as your local database settings, your app will run seamlessly, and for things that should be kept secret such as API keys, you can keep these relatively secure.
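One pattern that works with this setup is to read every secret through `os.environ` with a harmless development fallback. The variable names below mirror the list that follows; the default values are made up:

```python
import os

# Made-up development defaults; real values come from the environment
# (shell profile locally, `heroku config` in production).
DATABASE_NAME = os.environ.get("DATABASE_NAME", "dev_db")
DATABASE_USER = os.environ.get("DATABASE_USER", "dev_user")
SECRET_KEY = os.environ.get("DJANGO_SECRET_KEY", "insecure-dev-only-key")
```

This way nothing secret is ever committed, and a fresh checkout still runs locally.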
I ended up setting the following variables:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
CLOUDAMQP_URL
DATABASE_HOST
DATABASE_NAME
DATABASE_PASSWORD
DATABASE_USER
DJANGO_SECRET_KEY
HEROKU_POSTGRESQL_COBALT_URL
MAILGUN_API_KEY
STRIPE_LIVE_PUBLISHABLE_KEY
STRIPE_LIVE_SECRET_KEY
STRIPE_TEST_PUBLISHABLE_KEY
STRIPE_TEST_SECRET_KEY
You can run through each key with
heroku config:set AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID
or specify the raw value if it’s not common between your dev and prod system. For all of the database settings, you should be able to login to Heroku, view your add-ons for an app, and get the values from there.
Upload Static Files to Amazon
My settings files end up looking something like this:
AWS_ACCESS_KEY_ID = os.environ["AWS_ACCESS_KEY_ID"]
AWS_SECRET_ACCESS_KEY = os.environ["AWS_SECRET_ACCESS_KEY"]
AWS_STORAGE_BUCKET_NAME = "credit-serve-static"

if os.environ.get("I_AM_IN_DEV_ENV"):
    STATIC_URL = '/static/'
else:
    STATICFILES_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
    DEFAULT_FILE_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
    STATIC_URL = 'http://' + AWS_STORAGE_BUCKET_NAME + '.s3.amazonaws.com/'
You’ll also need to create an Amazon bucket (in this case “credit-serve-static”) in Amazon S3 before you can run a script to upload files. Once that’s configured, you can safely upload the static files in your Django directory to Amazon with:
heroku run python manage.py collectstatic --noinput
Update Your DNS Manager to point to Heroku
Depending on where you bought your domain name, you’ll need to go into your domain name manager, and set the @ record to redirect to your www cname. Then set the www record to point to http://{ app-name }.herokuapp.com. Google around for your particular service and you should be able to find directions.
The End
This is only the beginning, but this should get you to a good starting point with reasonably solid infrastructure for a basic web application. From here you have the necessary support to send emails, upload files to Amazon (either for static purposes or dynamically from user-generated content), create asynchronous tasks, create a dynamic page, and save records to a database. Each of those, however, is another tutorial unto itself.
Laser Technology for Astronomical Adaptive Optics
Donald Gavel* UCO/Lick Observatory, University of California Santa Cruz, 1156 High St., Santa Cruz, CA 95064
ABSTRACT
In this paper we review the current status of work in the sodium guidestar laser arena from the perspective of an astronomical AO system developer and user. Sodium beacons provide the highest and most useful guidestars for the 8m and larger class telescopes, but unfortunately sodium lasers are expensive and difficult to build at high output powers. Here we present highlights of recent advancements in the laser technology. Perhaps most dramatic are the recent theoretical and experimental efforts leading to better understanding the physics of coupling the laser light to the upper altitude sodium for best return signal. In addition we will discuss the key issues which affect LGS AO system performance and their technology drivers, including: pulse format, guidestar elongation, crystal and fiber technology, and beam transport. Keywords: laser guide stars, sodium layer, adaptive optics
1. INTRODUCTION
In 2006-7, a series of workshops on Laser Technology and Systems for Astronomy was held under the auspices of the NSF Center for Adaptive Optics. The purpose of the workshops was to exchange information and decide where key investments should be made in sodium laser guidestar technology to meet the needs of advanced laser guide star systems envisioned for the large telescopes. Participants included laser engineers, AO instrument makers, and astronomers worldwide who were using and/or interested in laser guidestar technology. This paper summarizes the status of Sodium guidestar lasers, both in use and under development, as presented by the participants, and gives a synopsis of what we learned at the workshops. Sodium guidestars (where the laser is resonant with the sodium D2 line at 589 nm) are the main focus of attention because the sodium is at high altitude (90 km) and has a high cross section density product. Hence it is the most likely choice for high-Strehl AO systems on existing large and future extremely large telescopes. We discuss the very interesting and mainly unsolved issues regarding the choice of laser pulse and spectral format for best coupling to the mesospheric sodium layer and for optimum return signal. We also mention the practical advantages of certain laser formats that trade off with the design and performance of the overall AO system.
2. LASER GUIDESTAR SYSTEMS
Table 1 contains a list of current sodium guidestars on telescopes throughout the world, along with the relevant properties and performance. This is a snapshot of the list kept on the Twiki web page. The author apologizes in advance if the list presented here is not fully up to date or complete as of this SPIE publication date. Corrections and additions are most welcome. The web page can be edited directly (these are “wiki” like pages) or by contacting the author. The main motivation for the laser technology workshops was the realization that there are large differences in the measured return (per unit area per unit time per unit projected laser power) in present on-sky systems, and, given that laser power at the sodium D2 wavelength (589 nm) is generally very expensive these days, we ought to come to some consensus on the general type of laser that should be implemented in future systems. Table 2 shows a list of new lasers under construction or development.
*
gavel@ucolick.org;
Adaptive Optics Systems, edited by Norbert Hubin, Claire E. Max, Peter L. Wizinowich, Proc. of SPIE Vol. 7015, 70150J, (2008) 0277-786X/08/$18 · doi: 10.1117/12.796637 Proc. of SPIE Vol. 7015 70150J-1
Table 1. Lasers in use at observatories

Lick Observatory (Mt. Hamilton) — Principal investigators: Claire Max, Don Gavel. Laser: LLNL tunable dye. Return (nominal): 10 ph/s/cm2/W. Average power: 12 W. Spot size: 2 arcsec, seeing limited.

Starfire Optical Range — Principal investigators: Bob Fugate, Craig Denman. Laser: SOR solid state, resonant sum-frequency generator [1]. Return: 100 ph/s/cm2/W seasonal average [2]. Average power: 50 W. Spot size: 1.4-3 arcsec (site has r0 = 7.8 cm avg).

W. M. Keck Observatory — Principal investigator: Peter Wizinowich. Laser: LLNL tunable dye. Return: 10 ph/s/cm2/W. Average power: 12-15 W. Spot size: 1.8" x 2.3" (average stacked); as good as 2.3 arcsec FWHM in 1.0 arcsec V-band seeing @ 5.5 W power. Note: currently fixing launch telescope problems.

Palomar — Principal investigators: Richard Dekany, Ed Kibblewhite. Laser: University of Chicago solid state sum-frequency, mode-locked [3][4]. Return: 30-130 ph/s/cm2/W [5]. Average power: 6-8 W.

Subaru — Principal investigators: Masanori Iye, Yutaka Hayano. Laser: solid state sum-frequency. Return: unreported. Average power: 4.7 W. Spot size: 1.3 arcsec.

Gemini North — Principal investigators: Francois Rigaut, Celine D'Orgeville. Laser: Lockheed-Martin Coherent Technologies diode-pumped solid state 1.06 + 1.32 micron sum-frequency laser [6]. Return: 27 photons/cm^2/s/W (laser power projected to the sky, i.e. out of the LLT) with linear polarization (~30% increase with circular polarization); measurement made in May 2005, during season of lowest sodium abundance. Average power: ~12 W at the output of the laser, ~9 W projected to the sky.

Very Large Telescope — Principal investigator: Domenico Bonaccini Calia. Laser: Max Planck Institutes tunable dye [7]. Return: 54 ph/s/cm2/W. Average power: 10 W. Spot size: 1.25 arcsec.
Table 2. New Sodium Laser Development

Lockheed-Martin Coherent Technologies (LMCT) — Principal investigators: Allen Tracy, Allen Hankla. Sponsor: AODP. Laser type: sum-frequency solid state, 1319 nm + 1064 nm into PPSLT, modular pulse format. Progress: 1.5 W / 10 W goal [8].

Lawrence Livermore National Laboratory (LLNL) — Principal investigators: Dee Pennington, Jay Dawson. Sponsor: AODP and CfAO. Laser type: sum-frequency fiber, 1583 nm + 938 nm into PPSLT, modular pulse format, 500 MHz linewidth. Progress: 3.5 W / 5-10 W goal [9].

Lockheed-Martin Coherent Technologies (LMCT) — Principal investigator: Allen Hankla. Sponsor: Keck I and Gemini South Telescopes. Laser type: sum-frequency solid state, 1319 nm + 1064 nm into LBO, 0.7 ns pulse every 12 ns, quasi-CW. Progress: >40 W of 589 nm demonstrated in the lab for GS laser (October 2007) [10].

European Southern Observatory — Principal investigator: Domenico Bonaccini Calia. Sponsor: ESO. Laser type: doubled 1178 nm fiber-Raman, modulated CW (to 1 GHz) or Q-switched micropulse. Progress: demonstrated modulated CW, 4.2 W @ 589 nm [11].
3. SODIUM INTERACTION AND RETURN SIGNAL
Lasers that have been built for astronomical AO have had a variety of pulse and spectral formats. These formats have a critical effect on how bright the guidestar will be for a given laser power and hence should be an important consideration for system design in addition to raw laser power. Unfortunately, the interaction physics is not easy to calculate given the complexity of the sodium atom, and it is a subject of longstanding and current research [12][13][14][15][16][17][18]. Of the sodium lasers that have been tested on-sky so far, the narrow band CW laser, with projected light circularly polarized, seems to be producing the highest photon return per Watt [1][2]. However, some pulsed lasers [20][5] are also producing returns approaching the CW, and there are indications from theoretical analysis that some variant of pulse and spectral format could produce a still much higher return efficiency. Recently, analysis by Paul Hillman [16] of the Starfire Optical Range has led to a possible explanation for why return from a broad spectrum laser, i.e. with power distributed densely over the mesospheric sodium’s Doppler bandwidth of ~1 GHz, does not produce as bright a guidestar as a narrow (10 kHz) line, even though the CW line only interacts with a small fraction of the atoms. Hillman provides an analysis of why each 10 MHz wide Doppler class within the 1 GHz Doppler-broadened sodium line must be treated separately with respect to the entire spectrum of incident laser light, especially when the laser spectrum is broader than 10 MHz. The reason for this is that the sodium atom’s coupling to the photon is complicated by the complex hyperfine splitting of the sodium D2 line. It is possible to achieve resonance with some of the allowed transitions, but this affects the populations, that is, the percentages of atoms within given energetic states.
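The ~1 GHz Doppler bandwidth quoted above can be checked from first principles. The sketch below assumes a mesospheric temperature of about 200 K (an assumption, not a value from the paper):

```python
import math

c = 299_792_458.0        # speed of light, m/s
k_B = 1.380649e-23       # Boltzmann constant, J/K
amu = 1.66053906660e-27  # atomic mass unit, kg
lam = 589e-9             # sodium D2 wavelength, m
m_na = 23 * amu          # sodium-23 mass
T = 200.0                # assumed mesospheric temperature, K

# Doppler FWHM of a thermal line: nu0 * sqrt(8 ln2 kT / (m c^2))
nu0 = c / lam
fwhm = nu0 * math.sqrt(8 * math.log(2) * k_B * T / (m_na * c**2))
print(fwhm / 1e9)  # roughly 1 GHz
```

The result, about 1.1 GHz, is consistent with the "~1 GHz" figure used throughout the discussion.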
With proper laser spectral format it is possible to have a condition known as optical pumping, where atoms are preferentially driven to a non-equilibrium state all resonant to the laser. It is also possible (and perhaps unfortunately easier) to cause the desired transition states to become depleted, causing the return signal to quench. Ed Kibblewhite has suggested chirping the laser in conjunction with optical pumping to follow the change in resonance due to atomic recoil [18]. This will possibly offer an enhancement as recoiled atoms move into adjacent Doppler classes where additional available atoms can also resonate with the chirped laser. The SOR has built a second narrow band CW laser and was, for a short time before this laser had to be delivered to its customer/user, able to perform several interesting on-sky tests. One test was to tune the first laser to the D2a line for optical pumping and tune the second laser to the D2b line for back-pumping the atoms “lost” to the ground state during the pumping process. The back-pumping scheme seemed to work, providing the expected brightness improvement [1]. A second test was to try to measure the population shift due to recoil [21]. The results of this test have low signal to noise and are still being analyzed. A third test was to check whether two narrow lines, separated in frequency enough that in theory they do not interfere with each other’s optical pumping, would produce the optically pumped response of a single narrow line of equivalent laser power (accounting for the density differences of atoms responding to the two frequencies). The initial results of this test are somewhat disappointing in that the lines still seem to be interacting to quench response slightly (~15%). What would cause this is still under investigation. We now describe some of the key issues under investigation that a potential user should remain aware of: Optical Pumping.
Optical pumping occurs when one preferred atomic transition is excited by the frequency of the incident laser and selection rules tend to shift the population so that most of the atoms are in one or the other state of this transition. Optical pumping can occur with linearly polarized incident light (the F2 to F’3 transition, with MF=MF’=0), or with circularly polarized incident light (F2 to F’3 transition with MF=2 and MF’=3). Circular pumping is preferred because the circularly polarized transition has an enhanced return towards the laser source. Spectral Content of the Laser. The laser for various practical reasons may need to have energy spread out in frequency. There would also be a “multiplexing” advantage of being able to use the additional atoms available at other frequencies within the Doppler broadened D2 line. The difficulty is in avoiding the cross-talk that could spoil optical pumping as described earlier. A spectral format that has 9 or 10 lines spaced at >200 MHz across the D2a line is being investigated. This will be discussed in the next section. Pulse Format of the Laser. The width and energy concentration of laser pulses matters to the sodium response. We discuss this further in the next section also.
Competing Processes: Earth’s magnetic field. Competing processes tend to spoil optical pumping because they couple to and thus redistribute the atom population away from the desired resonant states. The geomagnetic field acts to redistribute magnetic sublevel populations in approximately the 1 µs time frame [15], the degree of loss depending on the relative angle of the geomagnetic field to the laser propagation direction. The effect will thus depend on telescope pointing angle as well as telescope site location. A group at ESO has evaluated the geomagnetic field relative angles vs alt-az and earth lat-long [19]. One possible mitigation for geomagnetic loss is to confine the laser power to short (<1 microsec) pulses so all the atoms are pumped before they remix due to magnetic field [18]. Competing Processes: collisions. Collisions of the sodium atom with other atoms in the mesosphere (mostly Nitrogen, possibly Oxygen) cause the energetic state to change randomly. This thermal renormalization is thought to occur over time scales of ~100 microseconds [15]. With a suitably intense incident wave, say a 5-10 Watt laser concentrated in one or a few narrow lines, the optical pumping takes place on a much faster time scale, so there is not much depletion from the resonant states due to collisions. However, collisions can help recover from depletion by putting atoms back into the states addressed by the laser line. Recoil. Each atom, when it absorbs a photon, receives that photon’s linear momentum and thus is accelerated away from the laser. The re-emission is random (symmetric in all directions for linear polarized, symmetric in forward and backward directions for circular polarized) so there is not a net radial (along line from atom to laser) acceleration due to emission. Atomic recoil could possibly be used to advantage in a future laser design with chirped frequency to follow the Doppler shift in resonant frequency. 
This can also possibly pick up additional new atoms in adjacent Doppler bins as the line is swept into them. Recoil dynamics and response are currently under investigation in theoretical and experimental work [18][21].
4. PULSE FORMATTING AND SYSTEMS ISSUES
There are two basic reasons for wanting to pulse a sodium guidestar laser rather than using pure CW illumination. The first is to broaden the laser bandwidth to cover the Doppler broadened D2 line via transform broadening. E.g. a 1 ns pulse has spectral power over 1 GHz. A second reason to pulse the laser is to allow gating of the wavefront sensor detector to avoid Rayleigh backscatter (essentially noise to the wavefront sensor) and/or to mitigate spot elongation by tracking a pulse as it goes through the finite thickness sodium layer. We’ll focus on each of these applications one at a time. A pulse train of 1 ns pulses spaced 5 ns apart would produce a “picket fence” of lines over 1 GHz, spaced 200 MHz apart, which might gain the pumping advantage of a single narrow band line. If the pulse train is long enough, then each line addresses a single Doppler bin (the so called quasi-CW or macro-pulse format). The long-pulse gate-enabling formats are targeted to improving overall AO system performance. A pulse format such as those illustrated in Figure 1 will allow a chopper wheel or gated wavefront sensor camera to blank out the first 20 km of Rayleigh (Nitrogen) and Mie (particulate) backscatter. This might prove essential for eliminating “fratricide” in multiguidestar tomographic sensing, where one guidestar Rayleigh beam crosses into the field of view or another’s sodium beacon wavefront sensor. Spot elongation could be mitigated with a 3 microsecond pulse or pulse burst that is then tracked as it traverses the sodium layer. 3 microseconds corresponds to a pulse length of 1 km, thus a 10:1 compression of elongation is achieved, assuming 10km as the thickness of the sodium layer. This pulse can be either opto-mechanically tracked or tracked on the wavefront sensor CCD camera via chip-clocking. The pulse could be transmitted every round trip time (600 microsec x Sec(zenith angle)) and thus would also enable Rayleigh blanking. 
A tricky variant would be to have more than one pulse in the atmosphere at once. Even several pulses in the sodium layer at once could work with a tracking CCD and a correlation centroiding algorithm providing the compression. It is difficult to specify a format that accommodates both Rayleigh blanking and pulse tracking if there are practical limitations that require the laser’s duty cycle to be greater than a few percent. These restrictions are typically imposed by inability to hold off undesirable spontaneous emission in a pulse amplifier during the “off” period. There is a tremendous amount to be gained from pulse tracking, perhaps a factor of 3-5 in laser power for the ELTs. This is of course to be traded against the added complexity of the laser and pulse tracking wavefront sensor system. The gain from Rayleigh gating is still under investigation, since it enters into the systematics of wavefront measurement error in a complicated way. Nevertheless, it will offer some non-negligible degree of benefit.
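The timing numbers in this section follow from simple light-travel arithmetic. The sketch below assumes zenith pointing, a 90 km sodium altitude, and a 10 km layer thickness, as in the text:

```python
import math

c = 299_792_458.0      # speed of light, m/s
h_na = 90e3            # sodium layer altitude, m
zenith = 0.0           # zenith angle, radians

# round trip to the sodium layer, scaling as sec(zenith angle)
t_round = 2 * h_na / (c * math.cos(zenith))
print(t_round * 1e6)   # ~600 microseconds

# a 3 microsecond pulse spans roughly 1 km in flight...
pulse_extent = c * 3e-6

# ...so tracking it through a 10 km thick layer compresses
# spot elongation by roughly 10:1
compression = 10e3 / pulse_extent
print(pulse_extent, compression)
```

This reproduces the 600 microsecond round-trip time and the roughly 10:1 elongation compression quoted above.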
Figure 1. Pulse formats for Rayleigh gating for a zenith pointing laser. Left: This example shows a single 160 microsecond pulse’s round trip. Receiver blanking for at least 300 microseconds after launch is needed to block Rayleigh from below 20 km. Tracks of the pulse tip and tail are shown. The pulse is received between 570 and 790 microseconds after launch. The repetition frequency is 1.25 kHz Right: Rayleigh blanking format with two pulses in the air at once, carefully timed to allow a 20% laser duty cycle: a 70 microsecond pulse running at 2.8 kHz.
Another important system issue is the means of transporting laser power from the laser to the beacon launch telescope. Typically the laser, because it is big and heavy, is located on a Nasmyth platform or on the dome floor, while the laser launch telescope is placed behind the secondary mirror, at the top end of the telescope structure. Several existing systems use free-space beam transport to direct the beam up the telescope structure. A more convenient transport method is via a fiber optic cable. Only specially designed fibers, the photonic crystal fibers, can handle the high powers needed for guide star lasers today, and these only if the pulse format is suitable: spread out in frequency or (transform-equivalently) short pulse. The Subaru telescope has successfully transported a short-pulse laser during their recent first light tests [22].
5. SODIUM LASER TECHNOLOGY
There have been a number of design approaches taken for generating high-power 589 nm laser light. All of the more recent solid-state designs depend on mixing infrared lasers in a nonlinear crystal to produce a visible output. The nonlinear mixing caused by high flux in crystal material such as KTP, LBO, or PPSLT will act to sum the frequencies of the two mixed wavelengths. Laser designers have often tried to choose the pair of IR frequencies so that at least one of the wavelengths is produced with a commercial or otherwise easily constructed laser. The popular pairs today are 1064 nm (the standard YAG line) and 1319 nm [1][3][6][8][10], 1583 nm (in a fiber optic communications band) and 938 nm [9], and 1178 nm doubled [11]. Successful solid state IR lasers have been realized in Q-switched [3][6][8][10][22] or free-running cavities [1], and in fiber amplifiers [9]. It is likely possible to modulate an inherently CW laser using fast electro-optic switches so as to mimic the pulse formats of the Q-switched lasers. In addition, electro-optic phase modulation can be used in a CW laser to set up sidebands at discrete frequencies other than the fundamental, should this prove to be a preferred format either for addressing additional Doppler bins or to help reduce nonlinearity-induced parasitic oscillations in the laser amplifier. The fiber amplifiers have difficulty with nonlinearity effects at high power density (stimulated Brillouin scattering), thus they are being designed to run with wide bandwidth, or with power spread over many narrow lines within the Doppler profile.
6. CONCLUSIONS
Laser technology and understanding of sodium physical interaction with the laser photon have advanced significantly over the last few years. However, there is much more work to be done and unknown phenomena to be understood before we can definitively converge on a laser type that will be the best or clearly preferred one for sodium guidestar adaptive optics. There is a definite possibility of a great payoff: a bright sodium beacon with minimal laser power, and therefore much reduced expense per laser, as we advance into the era of multi-beacon AO for large telescopes.
ACKNOWLEDGEMENTS
This work was supported in part by the National Science Foundation Science and Technology Center for Adaptive Optics (CfAO), managed by the University of California at Santa Cruz under cooperative agreement AST 98-76783. The author would like to thank especially the participants in the CfAO sponsored workshops on Laser Technology for Astronomical Adaptive Optics in November 2006 and April and November 2007. The insightful discussions and interchange of ideas there have been an inspiration and have been of enormous benefit to the AO community.
REFERENCES
[1] C. Denman, J. Drummond, M. Eickhoff, R. Q. Fugate, P. Hillman, S. Novotny, J. Telle, “Characteristics of sodium guidestars created by the 50-watt FASOR and first closed-loop AO results at the Starfire Optical Range,” SPIE 6272, (2006).
[2] J. Drummond, J. Telle, C. Denman, P. Hillman, A. Tuffli, “Photometry of a Sodium Laser Guide Star at the Starfire Optical Range,” Publications of the Astronomical Society of the Pacific, v116, n817, pp. 278-289, (2004).
[3] V. Velur, E. Kibblewhite, R. Dekany, M. Troy, H. Petrie, R. Thicksten, G. Brack, T. Trin, M. Cheselka, “Implementation of the Chicago sum frequency laser at Palomar laser guide star test bed,” SPIE 5490, (2004).
[4] J. Roberts, A. Bouchez, J. Angione, R. Burruss, J. Cromer, R. Dekany, S. Guiwits, J. Henning, J. Hickey, E. Kibblewhite, D. McKenna, A. Moore, H. Petrie, J. C. Shelton, R. Thicksten, T. Trinh, R. Tripathi, M. Troy, T. Truong, V. Velur, “Facilitizing the Palomar AO Laser Guide Star System,” SPIE 7015, (2008).
[5] A. Bouchez, “The Palomar Laser Guide Star Flux,” Caltech Instrumentation Note #605, (2006).
[6] Allen J. Tracy, Allen K. Hankla, Camilo A. Lopez, David Sadighi, Ken Groff, Céline d’Orgeville, Michael Sheehan, Douglas J. Bamford, Scott J. Sharpe, David J. Cook, “High-power solid-state sodium guidestar laser for the Gemini North Observatory,” SPIE Vol. 6100, 61001H, (2006).
[7] D. Bonaccini Calia, E. Allaert, J. L. Alvarez, C. Araujo Hauck, G. Avila, E. Bendek, B. Buzzoni, M. Comin, M. Cullum, R. Davies, M. Dimmler, I. Guidolin, W. Hackenberg, S. Hippler, S. Kellner, A. van Kesteren, F. Koch, U. Neumann, T. Ott, D. Popovic, F. Pedichini, M. Quattri, J. Quentin, S. Rabien, A. Silber and M. Tapia, “First Light of the ESO Laser Guide Star Facility,” SPIE Vol. 6272, (2006).
[8] Allen J. Tracy, John Hobbs, Iain McKinnie, Camilo Lopez, Munib Jalali, Allen Hankla, Joe Alford, “A Compact Modular Scalable Versatile Laser Guidestar System Architecture for 8-100 m Telescopes,” SPIE Vol. 6272, 62721J, (2006).
[9] J. Dawson, A. Drobshoff, R. Beach, M. Messerly, S. Payne, A. Brown, D. Pennington, D. Bamford, S. Sharpe, D. Cook, “Multi-watt 589nm fiber laser source,” SPIE Vol. 6102, 61021F, (2006).
[10] Allen K. Hankla, Jarett Bartholomew, Ken Groff, Ian Lee, Iain T. McKinnie, Grant Moule, Nathan Rogers, Bruce Tiemann, Allen J. Tracy, Paul VanHoudt, S. Adkins, Céline d’Orgeville, “20 W and 50 W Solid-State Sodium Beacon Guidestar Laser Systems for the Keck I and Gemini South Telescopes,” SPIE Vol. 6272, 62721G, (2006).
[11] D. Bonaccini Calia, W. Hackenberg, S. Chernikov, Y. Feng and L. Taylor, “AFIRE: Fibre Raman Laser for LGS AO,” presented at the 2006 CfAO Fall Workshop on Laser Technology and Systems for Astronomy, (2006).
[12] P. W. Milonni and L. E. Thode, “Theory of mesospheric sodium fluorescence excited by pulse trains,” Applied Optics, v31, n6, p785, (1992).
[13] J. R. Morris, “Efficient excitation of a mesospheric sodium laser guide star by intermediate-duration pulses,” J. Opt. Soc. Am. A, v11, n2, p832, (1994).
[14] Peter W. Milonni, Robert Q. Fugate and John M. Telle, “Analysis of measured photon returns from sodium beacons,” J. Opt. Soc. Am. A, v15, n1, p217, (1998).
[15] P. Milonni, H. Fearn, J. Telle, R. Q. Fugate, “Theory of continuous-wave excitation of the sodium beacon,” J. Opt. Soc. Am. A, v16, n10, p255, (1999).
[16] P. Hillman, “Sodium Guidestar Return From Broad CW Sources,” presented at the 2007 CfAO Spring Workshop on Laser Technology and Systems for Astronomy, (2007).
[17] J. Telle, J. Drummond, P. Hillman, C. Denman, “Simulations of mesospheric sodium guidestar radiance,” SPIE 6878, (2008).
[18] E. Kibblewhite, “Calculation of returns from sodium beacons for different types of laser,” SPIE 7015, (2008).
[19] N. Maussaoui, W. Hackenberg, R. Holzlohner, D. Bonaccini Calia, “The effect of the Geomagnetic field on the intensity of the LGS,” SPIE 7015-263, (2008).
[20] T. H. Jeys, R. M. Heinrichs, K. F. Wall, J. Korn, T. C. Hotaling, E. Kibblewhite, “Observation of optical pumping of mesospheric sodium,” Optics Letters, v17, n16, p1143, (1992).
[21] P. Hillman, “Effects of Atomic Recoil on Photon Return using a CW Single Frequency FASOR,” presented at the 2007 CfAO Fall Workshop on Laser Technology and Systems for Astronomy, (2007).
[22] Y. Hayano, H. Takami, O. Guyon, S. Oya, M. Hattori, Y. Saito, M. Watanabe, N. Murakami, Y. Minowa, M. Ito, S. Colley, M. Eldred, T. Golota, M. Dinkins, N. Kashikawa, M. Iye, “Current status of the laser guide star adaptive optics system for Subaru Telescope,” SPIE 7015-35, (2008).
State preparation with PyTorch¶
In this notebook, we build and optimize a circuit to prepare arbitrary single-qubit states, including mixed states. Along the way, we also show how to:
- Construct compact expressions for circuits composed of many layers.
- Succinctly evaluate expectation values of many observables.
- Estimate expectation values from repeated measurements, as in real hardware.
The most general state of a qubit is represented in terms of a positive semi-definite density matrix \(\rho\) with unit trace. The density matrix can be uniquely described in terms of its three-dimensional Bloch vector \(\vec{a}=(a_x, a_y, a_z)\) as:

\(\rho = \tfrac{1}{2}\left(\mathbb{1} + a_x\sigma_x + a_y\sigma_y + a_z\sigma_z\right),\)
where \(\sigma_x, \sigma_y, \sigma_z\) are the Pauli matrices. Any Bloch vector corresponds to a valid density matrix as long as \(\|\vec{a}\|\leq 1\).
The purity of a state is defined as \(p=\text{Tr}(\rho^2)\), which for a qubit is bounded as \(1/2\leq p\leq 1\). The state is pure if \(p=1\) and maximally mixed if \(p=1/2\). In this example, we select the target state by choosing a random Bloch vector and renormalizing it to have a specified purity.
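Before building the circuit, it helps to verify the Bloch-vector picture numerically. The plain-Python check below (independent of the tutorial code) confirms that \(\text{Tr}(\rho^2) = (1 + \|\vec{a}\|^2)/2\) for a qubit:

```python
# 2x2 Pauli matrices as nested lists; enough for single-qubit checks
SX = [[0, 1], [1, 0]]
SY = [[0, -1j], [1j, 0]]
SZ = [[1, 0], [0, -1]]
I2 = [[1, 0], [0, 1]]

def density_matrix(ax, ay, az):
    """rho = (I + a_x X + a_y Y + a_z Z) / 2"""
    return [[0.5 * (I2[i][j] + ax * SX[i][j] + ay * SY[i][j] + az * SZ[i][j])
             for j in range(2)] for i in range(2)]

def purity(rho):
    """Tr(rho @ rho), real part"""
    return sum(rho[i][k] * rho[k][i] for i in range(2) for k in range(2)).real

rho = density_matrix(0.3, 0.2, 0.4)
print(purity(rho))                         # ~0.645
print((1 + 0.3**2 + 0.2**2 + 0.4**2) / 2)  # same: (1 + |a|^2)/2
```

Note that the zero Bloch vector gives the maximally mixed state with purity exactly 1/2, matching the bound quoted above.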
To start, we import PennyLane, NumPy, and PyTorch for the optimization:
import pennylane as qml
import numpy as np
import torch
from torch.autograd import Variable

np.random.seed(42)

# we generate a three-dimensional random vector by sampling
# each entry from a standard normal distribution
v = np.random.normal(0, 1, 3)

# purity of the target state
purity = 0.66

# create a random Bloch vector with the specified purity
bloch_v = np.sqrt(2 * purity - 1) * v / np.sqrt(np.sum(v ** 2))

# array of Pauli matrices (will be useful later)
Paulis = np.zeros((3, 2, 2), dtype=complex)
Paulis[0] = [[0, 1], [1, 0]]
Paulis[1] = [[0, -1j], [1j, 0]]
Paulis[2] = [[1, 0], [0, -1]]
Unitary operations map pure states to pure states. So how can we prepare mixed states using unitary circuits? The trick is to introduce additional qubits and perform a unitary transformation on this larger system. By “tracing out” the ancilla qubits, we can prepare mixed states in the target register. In this example, we introduce two additional qubits, which suffices to prepare arbitrary states.
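That trick can be illustrated in a few lines of plain NumPy (a standalone sketch, independent of the tutorial's code): entangle a qubit with a partner, trace the partner out, and the first qubit is left in a mixed state.

```python
import numpy as np

# two-qubit pure state (|00> + |11>) / sqrt(2), written as a length-4 vector
psi = np.zeros(4)
psi[0] = psi[3] = 1 / np.sqrt(2)

# reshape into a 2x2 tensor and trace out the second qubit:
# rho_A[i, j] = sum_k psi[i, k] * conj(psi[j, k])
t = psi.reshape(2, 2)
rho_A = t @ t.conj().T

purity = np.real(np.trace(rho_A @ rho_A))
print(rho_A)   # 0.5 * identity
print(purity)  # 0.5 -> maximally mixed
```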
The ansatz circuit is composed of repeated layers, each of which consists of single-qubit rotations along the \(x, y,\) and \(z\) axes, followed by three CNOT gates entangling all qubits. Initial gate parameters are chosen at random from a normal distribution. Importantly, when declaring the layer function, we introduce an input parameter \(j\), which allows us to later call each layer individually.
# number of qubits in the circuit
nr_qubits = 3
# number of layers in the circuit
nr_layers = 2

# randomly initialize parameters from a normal distribution
params = np.random.normal(0, np.pi, (nr_qubits, nr_layers, 3))
params = Variable(torch.tensor(params), requires_grad=True)

# a layer of the circuit ansatz
def layer(params, j):
    for i in range(nr_qubits):
        qml.RX(params[i, j, 0], wires=i)
        qml.RY(params[i, j, 1], wires=i)
        qml.RZ(params[i, j, 2], wires=i)

    qml.CNOT(wires=[0, 1])
    qml.CNOT(wires=[0, 2])
    qml.CNOT(wires=[1, 2])
Here, we use the default.qubit device to perform the optimization, but this can be changed to any other supported device.
dev = qml.device("default.qubit", wires=3)
When defining the QNode, we introduce as input a Hermitian operator \(A\) that specifies the expectation value being evaluated. This choice later allows us to easily evaluate several expectation values without having to define a new QNode each time.
Since we will be optimizing using PyTorch, we configure the QNode to use the PyTorch interface:
@qml.qnode(dev, interface="torch")
def circuit(params, A=None):
    # repeatedly apply each layer in the circuit
    for j in range(nr_layers):
        layer(params, j)

    # returns the expectation of the input matrix A on the first qubit
    return qml.expval(qml.Hermitian(A, wires=0))
Our goal is to prepare a state with the same Bloch vector as the target state. Therefore, we define a simple cost function

\[C = \sum_{i=1}^{3} |a_i - a'_i|,\]

where \(\vec{a}=(a_1, a_2, a_3)\) is the target vector and \(\vec{a}'=(a'_1, a'_2, a'_3)\) is the vector of the state prepared by the circuit. Optimization is carried out using the Adam optimizer. Finally, we compare the Bloch vectors of the target and output state.
# cost function
def cost_fn(params):
    cost = 0
    for k in range(3):
        cost += torch.abs(circuit(params, A=Paulis[k]) - bloch_v[k])

    return cost


# set up the optimizer
opt = torch.optim.Adam([params], lr=0.1)

# number of steps in the optimization routine
steps = 200

# the final stage of optimization isn't always the best, so we keep track of
# the best parameters along the way
best_cost = cost_fn(params)
best_params = np.zeros((nr_qubits, nr_layers, 3))

print("Cost after 0 steps is {:.4f}".format(cost_fn(params)))

# optimization begins
for n in range(steps):
    opt.zero_grad()
    loss = cost_fn(params)
    loss.backward()
    opt.step()

    # keeps track of best parameters
    if loss < best_cost:
        best_params = params

    # Keep track of progress every 10 steps
    if n % 10 == 9 or n == steps - 1:
        print("Cost after {} steps is {:.4f}".format(n + 1, loss))

# calculate the Bloch vector of the output state
output_bloch_v = np.zeros(3)
for l in range(3):
    output_bloch_v[l] = circuit(best_params, A=Paulis[l])

# print results
print("Target Bloch vector = ", bloch_v)
print("Output Bloch vector = ", output_bloch_v)
Out:
Cost after 0 steps is 1.0179
Cost after 10 steps is 0.1467
Cost after 20 steps is 0.0768
Cost after 30 steps is 0.0813
Cost after 40 steps is 0.0807
Cost after 50 steps is 0.0940
Cost after 60 steps is 0.0614
Cost after 70 steps is 0.0932
Cost after 80 steps is 0.0455
Cost after 90 steps is 0.0752
Cost after 100 steps is 0.0301
Cost after 110 steps is 0.0363
Cost after 120 steps is 0.1332
Cost after 130 steps is 0.0687
Cost after 140 steps is 0.0505
Cost after 150 steps is 0.0800
Cost after 160 steps is 0.0644
Cost after 170 steps is 0.0813
Cost after 180 steps is 0.0592
Cost after 190 steps is 0.0502
Cost after 200 steps is 0.0573
Target Bloch vector =  [ 0.33941241 -0.09447812  0.44257553]
Output Bloch vector =  [ 0.3070773  -0.07421859  0.47392787]
Total running time of the script: ( 1 minutes 13.360 seconds)
Hide Submit on Date Select in Past
Hey guys, I need some assistance in developing code that will hide the submit button of a form if a date in the past is selected. The variable for the date box is firstleveldate and the date appears in a MM-DD-YYYY format.
Thanks!
Load the form with the submit button disabled, then place an onBlur event in the firstleveldate field that checks the date value. If it's before the current date, do nothing; if it's AFTER the current date (or is the current date?), then document.forms["formName"].submitButtonName.disabled = false; ^_^
you can use this:Code:
function TestDate(Input, CurDateValid){
    var seg = Input.split('-');
    var Dato = new Date(seg[2], seg[0]-1, seg[1]);
    var Now = new Date();
    var NowReset = new Date(Now.getFullYear(), Now.getMonth(), Now.getDate());
    return ((CurDateValid ? NowReset : Now) <= Dato);
}
If today is a valid date then
Code:
if (TestDate(firstleveldate,1)){
    --show/enable button--
}else{
    --hide/disable button--
}
and if today should not count as valid:
Code:
if (TestDate(firstleveldate,0)){ // or: if (TestDate(firstleveldate)){
    --show/enable button--
}else{
    --hide/disable button--
}
Last edited by Lerura; 06-24-2012 at 12:01 AM. | http://www.codingforums.com/javascript-programming/266094-hide-submit-date-select-past.html | CC-MAIN-2017-26 | refinedweb | 229 | 59.03 |
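For reference, the decision Lerura's TestDate implements - parse the MM-DD-YYYY string, normalise away the time of day, and compare against today - sketched here in Python (the function name is ours, not from the thread):

```python
from datetime import date

def is_not_past(value, allow_today=True):
    """value is 'MM-DD-YYYY'; True if the chosen day is today or later."""
    m, d, y = (int(part) for part in value.split("-"))
    chosen = date(y, m, d)
    today = date.today()
    # with allow_today=False, the chosen day must be strictly in the future
    return today <= chosen if allow_today else today < chosen

print(is_not_past("01-01-1999"))  # False: in the past
print(is_not_past("12-31-2999"))  # True
```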
what gets passed to your KeyboardProc:
Here's some code that shows several hooks, WH_KEYBOARD among them, and the extraction of key info.
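A note on the lParam those hooks receive: for WH_KEYBOARD it packs several fields into one 32-bit value - repeat count in bits 0-15, scan code in bits 16-23, the extended-key flag in bit 24, context code in bit 29, previous key state in bit 30, and transition state in bit 31. The bit arithmetic is the same in any language; a Python sketch of unpacking it:

```python
def decode_keyboard_lparam(lparam):
    """Split the 32-bit WH_KEYBOARD/WM_KEYDOWN lParam into its bit fields."""
    return {
        "repeat_count":  lparam        & 0xFFFF,  # bits 0-15
        "scan_code":    (lparam >> 16) & 0xFF,    # bits 16-23
        "extended":     (lparam >> 24) & 0x1,     # bit 24
        "context_code": (lparam >> 29) & 0x1,     # bit 29 (ALT held)
        "previous":     (lparam >> 30) & 0x1,     # bit 30 (key was already down)
        "transition":   (lparam >> 31) & 0x1,     # bit 31 (0 = press, 1 = release)
    }

# e.g. a release of the A key (scan code 0x1E), repeat count 1:
info = decode_keyboard_lparam(0xC01E0001)
print(info)
```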
i read in the first article that if the Code variable is less than zero, the function returns the value returned by the next hook in line. it appears that this is my problem because i am not assigning anything to Code
what should i assign to it?
cool then plz tell us if u find a way to get the key back
thankz... suma
In any case, be aware that EE frowns heavily on dupe accounts, but my lips are sealed.
I'm in the midst of see what really happens when the call back is to managed code...
It would appear as if there is a way to get call backs to managed code. I'm burned out right now, but will continue tomorrow night.
cheers...
well i've read quite a few articles about using local hooks in c# so it should be able to be done... well my code works so it can be done, its just a question of working out how to get the key back.
does this mean that it might be possible to get a global hook working by creating the dll file in c#
if the only reason that the code needs to be in a dll file is so that the os can insert it into more than 1 running process, then i cant see why this wouldn't work... but im probably missing something obvious as usual...
anywho im not going to worry about that until i get the local hook working
laters...
plz tell me what u think coz im having trouble declaring the delegate separately (it wont let me use the 'this' keyword for some reason) and i dont want to waste time fixing this problem if its not going to pay off

[DllImport("user32.dll")]
public static extern IntPtr SetWindowsHookEx(int HookType, HookProc lpfn, IntPtr hMod, int dwThreadId);

[DllImport("user32.dll")]
public static extern bool UnhookWindowsHookEx(IntPtr hHook);

[DllImport("user32.dll")]
public static extern int CallNextHookEx(IntPtr hHook, int Code, IntPtr wParam, IntPtr lParam);

public const int HookType = 2; // WH_KEYBOARD

private System.Windows.Forms.Button button1;
private System.Windows.Forms.Button button2;

public static IntPtr hHook = IntPtr.Zero;

public delegate int HookProc(int Code, IntPtr wParam, IntPtr lParam);

// declare the delegate
[MarshalAs(UnmanagedType.FunctionPtr)]
HookProc address = new HookProc(Form1.Keypress);

public static int Keypress(int Code, IntPtr wParam, IntPtr lParam)
{
    if (Code > 0) MessageBox.Show(wParam.ToString());
    return CallNextHookEx(hHook, Code, wParam, lParam);
}

public void Install()
{
    hHook = SetWindowsHookEx(HookType, address, IntPtr.Zero, AppDomain.GetCurrentThreadId());
}

public void Uninstall()
{
    bool ok = UnhookWindowsHookEx(hHook);
}
so what do u think of that article?
I had just started massaging the code when I got the notif you had it working.
<<also have u found out a way to get the key back?>>
Recall that I thought the DLL was needed because I didn't think the call back could be made into the C# code. Since you're getting the keystroke info in the C# code, there's no need for a DLL.
but i dont know how to get the "handle" of my application (im not totally sure what it is)
my code does this: every time a key is pressed (local) it fires a messagebox
i use this messagebox so find out what wParam is. it changes for each key but the ascii value is not correct
i.e. if i hit 'a' on the keyboard the wParam is 65, in ascii 'a' is 97. so i must be doing something wrong...
im not going to start on the global hook until i get the local 1 working so can u help me on this as i really dont know what is wrong.
ima go read your article now and see if i can find out what is wrong
cheers
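One aside on the 65-vs-97 puzzle in the post above: with WH_KEYBOARD, wParam carries the virtual-key code of the key, not an ASCII character code. For letter keys the virtual-key code coincides with the uppercase ASCII value, so the A key reports 65 whether or not Shift is down, while 97 is the ASCII code of the lowercase character 'a'. The coincidence is easy to check outside Windows:

```python
# Virtual-key codes for letter keys coincide with the UPPERCASE ASCII codes,
# which is why the hook reports 65 for the A key regardless of Shift state.
VK_A = 0x41

print(VK_A, ord('A'), ord('a'))  # 65 65 97
```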
Although we're not supposed to communicate outside this forum on questions posted here, I think it would be acceptable if you were to mail me a zip of your project directory. That way I'd be sure to have the same project settings as you and we'd be working with the same base.
laters... suma
my next step is to make it global... u say i dont need to put it in a dll so i assume im supposed to get the handle of the app. ppl use the function GetModuleHandle() but i do not know the parameters (and whenever i try it it does not recognise it)?
cookre (0) yahoo periodot komm
heres what i plan to do:
make a dll (the way i showed u) which has functions to set up the hook, remove the hook, and gets the callback
find out a way to get the handle of it and put that in hMod.
set dwThreadId to 0
then this should be a working global hook, am i wrong?
my problems:
>>activate a method in the exe file every time the dll receives a callback (i.e. get the key info)
>>get the handle of the dll
i cant believe how close i am!!
When you call your DLL entry point, let's call it "InitiateKbdMonitor(...), one of the parameters will be that very same address you're now passing to SetWindowsHookEx().
"But," you ask (reasonably, I might add), "when I call SetWindo... in my DLL, how do I pass along the address that was passed to me?"
I see three possibilities:
1) drop into ASM and build the call to SetWin...() dynamically
2) Something like *FuncPtr might work (i.e., indirect operator)
3) Have a function in the DLL that calls the address you passed to it. Pass SetWindo...() the address of the function in the DLL (which will then call the address you passed the DLL.
I'd prefer 2). but would have to fiddle a bit to come up with the precise syntax.
1) is way to advanced for me
2) seems to be the best option
3) good but complicated
this problem would be avoided if i could do as without the dll as mentioned above
do u know how to do this?
At the bottom:
Global Hook Is Not Supported in .NET Framework
You cannot implement global hooks in Microsoft .NET Framework. To install a global hook, a hook must have a native dynamic-link library (DLL) export to inject itself in another process that requires a valid, consistent function to call into. This requires a DLL export, which .NET Framework does not support. Managed code has no concept of a consistent value for a function pointer because these function pointers are proxies that are built dynamically.
#include <stdio.h>

int *FncPtr;

int sbr2()
{
    printf("HI!\n");
    return 2;
}

int sbr1(int x())
{
    x();
    return 1;
}

int main(int argc, char* argv[])
{
    FncPtr = (int *)&sbr2;
    sbr1((int (__cdecl *)(void))FncPtr);
    return 0;
}
In otherwords, the DLL would catch the event then make a call into the EXE.
<Global Hook Is Not Supported in .NET Framework>
yea this is what i expected
cheers
What 3) looks like is:
EXE calls DLL which calls SetWindo... (et al).
EXE has routine Wigit that get's control from DLL, so the address of Wigit is passed to the DLL using the same c# syntax as you used when passing the address of the hook routine to SetWindo...
The DLL will lok something like:
(I'm just going to use int for all the function types. You'll have to change them to whatever is appropriate)
// This is where we save the address in the EXE to call
int * FncPtr;
// This is where we call the EXE
int DoTheCallBackToTheEXE(int x())
{
x();
}
// This gets the call back from the hook
int GetEventFromSWHE()
{
DoTheCallBackToTheEXE((int (__cdecl *)(void))FncPtr);
}
// This is the entry point to the DLL called by the EXE
int DLLEntryPoint(int WhoToCall())
{
FncPtr=(int *)&WhoToCall; // Save EXE call back address
SetWindo...(..,GetEventFromSWHE,..);
}
>>when the exe wants to set a hook it activates a function in the dll, giving one of its own methods as a parmeter
>>the activated function (in the dll) sets the hook so that the callback will activate one of its own methods
>>when the method in the dll is activated, it calls a method in the exe passing the key info
>>the dll can NOT be created using c#, but CAN be created using certain types of c++ projects
once u confirm that the above is correct i will try to:
1) make a non-managed dll
2) call its entry point from an exe
3) make a function (in the dll) that passes a value to the exe
The only things I worry about are:
1) That comment from MS: "Managed code has no concept of a consistent value for a function pointer because these function pointers are proxies that are built dynamically."
2) Timing. If the box is heavily loaded and a lot of key events occur in rapid succession, either key handler (DLL and/or EXE may get called before it's finished processing the previous event. There are two ways to deal with this:
a) when the DLL gets an event, disable the hook. When event processing is complete, re-enable the hook.
b) when either the DLL or EXE gets an event, check an ImBusy flag. If it's set, ignore the event (yes, thereby missing the event):
in EXE:
CallDLL();
EXEBusy=false;
top of handler in EXE:
if (EXEBusy) return;
EXEBusy=true;
...
EXEBusy=false;
return;
in DLL:
SetWindowsHookEx();
DLLBusy=false;
top of handler in DLL:
if (DLLBusy) return;
DLLBusy=true;
...
DLLBusy=false;
return;
unfortunatly i dont know c++ so ima have to ask my dad for a book about it and it will take me ages until i know enuf to make the dll
or could u post/mail me the source code to make the dll and then it would be pretty easy for me to figure out what is going on without me having to learn another language...i know this is a really big ask but im pretty desperate here. let us know either way...
thankz...
suma
The second link above is code for such a DLL:
where exactly do i put the code? heres what im doing:
new c++ win32 project
application settings
set application type to DLL
finish
now i got:
Source Files (Folder)
stdafx.cpp
<projectname>.cpp
Header Files(Folder)
stdafx.h
Resouce Files(Folder)
readme.txt
exactly where do i add the code?
// avvc.cpp : Defines the entry point for the DLL application.
//
#include "stdafx.h"
BOOL APIENTRY DllMain( HANDLE hModule,
DWORD ul_reason_for_call,
LPVOID lpReserved)
{
//1 do i put the code here?
return TRUE;
}
//2 or here?
2 is where other routines would go, e.g., the dll's call back routine from SetWind...
ill post here if i have any problems
later
heres my code:
#include "stdafx.h"
BOOL APIENTRY DllMain( HANDLE hModule,
DWORD ul_reason_for_call,
LPVOID lpReserved
)
{
DLLMIN();
return TRUE;
}
void DLLMIN()
{
}
error C2065: 'DLLMIN' : undeclared identifier
error C2365: 'DLLMIN' : redefinition; previous definition was a 'formerly unknown identifier'
If a function is referenced before it's defined, youi need a declaration.
Some people put the code for internal routines above the point they're first referenced, but with lots of internal routines, that can get a bit messy - not to mention a little more difficult to follow.
The common way to deal with this it to put function declarations after globals but before the first function definition.
So, in this case, add:
void DLLMIN();
above BOOL APIENTRY...
That will get you a little further.
ive actually changed my mind and think that this will be very easy to make... i just need to work out how to transfer back values and callback to the exe.
now im trying to return a value into the exe:
heres my function that returns a value
int DLLMIN()
{
return 1;
}
i can set it up so that when the entry point in the dll is called, DLLMIN() will be called... but how do i get the value of 1 back to the exe?
(also how exactly do i call the entry point from c#???)
cheers!
suma
===suma===
putting the dll in the system32 folder
[DllImport("cbf.dll")//eve
bool e=DllMain();//error: no entry point in cbf.dll called DllMain()
if it aint this then i dunno what it is
post there aiite... ill accept 1 of your comments once u get this message
laters
Also, you need to stop using duplicate IDs. It's contrary to EE policy and the administration has started a campaign to find them. No small number of folks have been banned for their continued use of duplicate accounts.
aiite cool cool well give that mlmcc dude a chance... we can still use this post to talk about whatever he says so i wont accept 1 of yo answers yet...
laters
does the entry point have to be declared as public?? (coz i cant complie it when i declare anything as public)
// SUMA.cpp : Defines the entry point for the DLL application.
//
#include "stdafx.h"
void MethodTest()
{
}
BOOL APIENTRY DllMain( HANDLE hModule,
DWORD ul_reason_for_call,
LPVOID lpReserved)
{
MethodTest();
return TRUE;
}
i then move the dll to c:\windows\system32\
and then call it from a c# windows application:
using System.Runtime.InteropServices;
[DllImport("SUMA.dll")]
public static extern bool DllMain();
Form1_Load
{
DllMain();
}
and it says:
Additional information: Unable to find an entry point named DllMain in DLL SUMA.dll.
heres a thought... maybe i should define the c# function with the same parameters as in the dll:
HANDLE hModule, //no idea what this is
DWORD ul_reason_for_call, //a DWORD is the same as an int but i dont know what value to assign to it
LPVOID lpReserved //no idea
im getting confused...
Aren't we all - that's why we need all the code - the C#, too.
using System.Runtime.InteropServices;
public static extern bool DllMain();
Form1_Load
{
DllMain();
}
#include "stdafx.h"
BOOL APIENTRY DllMain( HANDLE hModule,
DWORD ul_reason_for_call,
LPVOID lpReserved)
{
return TRUE;
}
can u tell me how to call that from c#?
[DllImport("dll file name")]
public static extern bool DllMain();
Error: Cannot Find Entry Point in DllFileName called DllMain()
the only reason i can think of why i might be getting this error is that DllMain() does not take zero parameters
BOOL APIENTRY DllMain( HANDLE hModule,
DWORD ul_reason_for_call,
LPVOID lpReserved)
what do u think?
seeingz as the local hook worked in managed code, i really cant see why the global hook wouldnt work in a managed dll
maybe that crap that microsoft said was just to deter ppl from making spy applications
do u think it would be worth trying?
Sorry I haven't been able to do any test coding on this - I started a new project at work last week that has to be deployed to in 3 weeks. I doubt I'll make much more than quickie comments on EE for a while.
i can now make a non-managed c++ dll and call functions in it from c#!
now the only things i have to wory about it making the callback function (that passes data from the dll to the exe) and finding out what value should be put in hMod to make a global hook
as with all the other problems ive had so far, ill keep trying until i either fix them or find out that it cant be done
good luk wit your project :)
why am i going to all this trouble to pass the data back to the exe, why dont i just do what ever i want wit in the dll??
im going to experiment using the win32 api messagebox function... if it works ill just find a way to save the characters to a file
what do u say... is this the best idea ive had in my life or what :)
HHOOK hhk = SetWindowsHookEx(int idHook, HookProc lfpn, HINSTANCE hMod, DWORD dwThreadId)
HHOOK hhk = SetWindowsHookEx(2, lfpn, ?, 0);
to make a global hook, what value should be given to hMod and where/how do i get this value
i CAN NOT believe how close i am to making this hook!!!!
thanks... suma
i got a global hook working (finally) and all there really is left to do is polish the program adding extra features (such as saving to a text file etc etc)
thanks very much 4 all tha help with everything
cya... suma | https://www.experts-exchange.com/questions/20820024/Windows-Hook-PLZ-HELP.html | CC-MAIN-2019-04 | refinedweb | 2,817 | 67.59 |
Chat and questions about the Scala based SuperCollider client. Newbies welcome. @ me if you need fast replies, otherwise I might not see new posts in a few days. Also checkout for the computer music environment that embeds ScalaCollider.
hello!
got a little issue when running synth.Server
i'm running on the v1.18.1 you just published but i got the same issue on v1.18.0
it's not really an issue since it was working fine yesterday but now i get the "Exception in World_OpenUDP: unable to bind udp socket"
good to know thank you!
anyway thank you for the feedbacks it's very nice to be able to talk to you so easily.
i'll ask you for further questions about UGens on the gitter but just to be clear. if we want to add extra UGens to your ScalaColliderUGens project we have to create a .xml configuration file right? i'm not sure that I understood everything on the readme..
We try to add the VBAPUGens to sc3plugin to work with scalacollider but as i said the processus still looks blurry for me..
So we have the .scx generated by supercollider but it doesn't work (as expected) :)
The code generator takes .xml input and generates .scala source code output. I'd say that is the preferred method to introduce new UGens. The "output" in this case is the .scala files; they are not written by hand but by a code generator. So if at some point I change the API, I don't have to rewrite hundreds of UGen .scala files, but just adapt the generator, perhaps make a few adjustments in the XML. However, for quick testing, you can take those generated .scala output files as a clue on how they look, and you could just copy + paste + edit them for new UGens, if you don't want to go through the process of defining the .xml files. If you come up with xml files for the VBAP UGen plugin, I am happy to include it with the project, so future updates will make the classes available. If you have specific questions about the meaning of the XML fields (they are explained a bit in the readme), I can help with that. Basically one xml file = one plugin (.scx file; collection of related ugens), although that organisation is not mandatory. TJUGens is an example for third-party plugins:
The generated output is not part of the repository. But if you open the published jars, e.g. you open the sources.jar from - you will see all the .scala files thus produced. If you are using an IDE such as IntelliJ or Eclipse, you can probably also just jump from a symbol, say SinOsc, to its source code (if you selected to include library source code in your project). So you get from this:
<ugen name="SinOsc">
  <rate name="audio"/>
  <rate name="control"/>
  <arg name="freq" default="440.0">
    <doc>
      frequency in Hertz
    </doc>
  </arg>
  <arg name="phase" default="0.0">
    <doc>
      phase offset or modulator in radians
    </doc>
  </arg>
  <doc>
    <text>
      A sinusoidal (sine tone) oscillator UGen. This is the same as `Osc` except
      that it uses a built-in interpolating sine table of 8192 entries.
    </text>
    <see>ugen.Osc</see>
    <see>ugen.FSinOsc</see>
  </doc>
</ugen>
to this:
/** A sinusoidal (sine tone) oscillator UGen. This is the same as `Osc` except that
  * it uses a built-in interpolating sine table of 8192 entries.
  *
  * @see [[de.sciss.synth.ugen.Osc$ Osc]]
  * @see [[de.sciss.synth.ugen.FSinOsc$ FSinOsc]]
  */
object SinOsc {
  def kr: SinOsc = kr()

  /** @param freq   frequency in Hertz
    * @param phase  phase offset or modulator in radians
    */
  def kr(freq: GE = 440.0f, phase: GE = 0.0f): SinOsc = new SinOsc(control, freq, phase)

  def ar: SinOsc = ar()

  /** @param freq   frequency in Hertz
    * @param phase  phase offset or modulator in radians
    */
  def ar(freq: GE = 440.0f, phase: GE = 0.0f): SinOsc = new SinOsc(audio, freq, phase)
}

/** A sinusoidal (sine tone) oscillator UGen. This is the same as `Osc` except that
  * it uses a built-in interpolating sine table of 8192 entries.
  *
  * @param freq   frequency in Hertz
  * @param phase  phase offset or modulator in radians
  *
  * @see [[de.sciss.synth.ugen.Osc$ Osc]]
  * @see [[de.sciss.synth.ugen.FSinOsc$ FSinOsc]]
  */
final case class SinOsc(rate: Rate, freq: GE = 440.0f, phase: GE = 0.0f) extends UGenSource.SingleOut {
  protected def makeUGens: UGenInLike = unwrap(Vector(freq.expand, phase.expand))

  protected def makeUGen(_args: Vec[UGenIn]): UGenInLike = UGen.SingleOut(name, rate, _args)
}
ok so i came up with a partial xml file for VBAPUgen.
The VBAPUgen contains 3 classes : VBAP, VBAPSpeaker and VBAPSpeakerArray
VBAPSpeaker and VBAPSpeakerArray only have new() method. what would be the equivalent for the generator to understand on the xml.
<ugens revision="1">
  <ugen name="VBAP">
    <rate name="audio"/>
    <rate name="control"/>
    <arg name="numChans">
      <doc>
        the number of output channels
      </doc>
    </arg>
    <arg name="in">
      <doc>
        the input to be panned
      </doc>
    </arg>
    <arg name="bufnum">
      <doc>
        a buffer or its bufnum containing data calculated by an instance of
        VBAPSpeakerArray; its number of channels must correspond to numChans above
      </doc>
    </arg>
    <arg name="azimuth" default="0">
      <doc>
        +/- 180° from the medium plane
      </doc>
    </arg>
    <arg name="elevation" default="1">
      <doc>
        +/- 90° from the azimuth plane
      </doc>
    </arg>
    <arg name="spread" default="0">
      <doc>
        A value from 0-100. When 0, if the signal is panned exactly to a speaker
        location the signal is only on that speaker. At values higher than 0, the
        signal will always be on more than one speaker. This can smooth the
        panning effect by making localisation blur more constant.
      </doc>
    </arg>
  </ugen>
</ugens>
And also i have issue trying to generate from sbt this way :
$ sbt
$ project scalacolliderugens-gen
my terminal says :
project scalacolliderugens-gen
[error] Not a valid project ID: scalacolliderugens-gen
Klangbeing another one, so I have a hand written
KlangSpec. The XML really just has the UGens. Their arguments are either scalar values, e.g.
Intfor determining the number of channels (
In.ar) or they are graph elements
GE. So if your custom aux types can be converted to
GEthen this approach works. It's not optimal in terms of type-safety, but good enough IMO. Let me have a look at the original VBAP source to refresh my memory
VBAP, like you pasted above. The other classes it appears are only auxiliary classes, for example to calculate speaker angles and such, and are never directly used by the UGen class - if I'm not mistaken. So in order to use the UGen, you just need
VBAP. If you want to use the functions of say
VBAPSpeakerSetavailable from Scala, you'll have to translate that class. It's independent of ScalaCollider. Let me know if you need help with this.
// 8 channel ring
val a = VBAPSetup(2, Seq(0, 45, 90, 135, 180, -135, -90, -45))
val b = Buffer.alloc(s, a.bufferData.size)
b.setn(a.bufferData)

val x = play {
  val azi = "azi".kr(0)
  val ele = "ele".kr(0)
  val spr = "spr".kr(0)
  VBAP.ar(8, PinkNoise.ar(0.2), b.id, azi, ele, spr)
}

// test them out
x.set("azi" -> a.directions(1).azi)
x.set("azi" -> a.directions(2).azi)
x.set("azi" -> a.directions(3).azi)
// ...
x.set("azi" -> a.directions(7).azi)
x.set("azi" -> a.directions(0).azi)

// try the spread
x.set("spr" -> 20)
x.set("spr" -> 100) // all speakers

x.free(); b.free();
if (value == 1) Synth.play(ssa.name)
if (value == 2) {
  ssa release(2)
  println("recu 2")
}
After I created 'transactional' systems based on ScalaCollider, I wanted to remove the plain side-effecting methods from the basic API. The basic API just supports generating the OSC message; when you call a method such as synth.set(...) or synth.free(...), this is facilitated by an additional import de.sciss.synth.Ops._. So when you look at the README, you have this preamble:
import de.sciss.synth._
import ugen._
import Ops._
The last import makes the side-effects available. The docs are here: - so for release, that is NodeOps:
compilation is ok, free() works fine, but release() has no action still :/
server.run(cfg) { serv =>
  var synth = Synth()
  ssa.recv(serv) // previous SynthDef
  receiver.action = {
    case (m @ osc.Message("/test", value: Int), s) =>
      if (value == 1) synth = Synth.play(ssa.name)
      if (value == 2) println("recu 2"); synth.release(2.0)
      // synth.free()
  }
}
val synth = play {}
That's the same as in SuperCollider - using play { } will add an envelope that has a control gate that is used by release.
release is just a convention for synth.set("gate" -> -1 - releaseTime) (or similar). So if you want to add a similar kind of envelope to a SynthDef you either have to create an EnvGen with a gate argument, or you use the pseudo-UGen WrapOut instead (that's the thing that play {} uses):
SynthDef.recv("test") {
  val sig = WhiteNoise.ar(Seq(0.2, 0.2))
  WrapOut(sig) // !
}
val x = Synth.play("test")
x.release(10)
WrapOut creates controls "out" (for bus) and "gate" (for release)
WrapOut, which when expanding adds the envelope.
exception in GraphDef_Recv: UGen 'VBAP' not installed.
val a = VBAPSetup(2, Seq(Polar(-60,0), Polar(60,0), Polar(110,0), Polar(-110,0), 3.35))
val b = Buffer.alloc(serv, a.bufferData.size)
b.setn(a.bufferData)
SynthDef.write("synthTest.txt", MySynths.load(b), 1)
synthDefs = SynthDef.read("synthTest.txt")
val ssa = synthDefs.find(_.name == "soudscape-1").get
ssa.recv(serv)
MySynths.load(Buffer) loads a Seq[SynthDef] to store
write will always write in the standard binary format of SuperCollider. Therefore, this is not a "text file".
val sd = SynthDef("test") { ... }
sd.write(dir = "my-dir")
SynthDef.load(path = "my-dir/test.scsyndef")
// ... synchronise! then
Synth.play("test")
software development. This is precisely why he’s a Technical Fellow here at Microsoft and has numerous projects underway. The one that’s of keen interest to quite a few people (and yours truly included) is a project that goes by the name of Roslyn (named after an old mining town located in Washington). If you’re not familiar with this technology it can best be described as a “compiler as a service.” Here’s a good overview of the components:
With adding just a few references you’ll be surprised at the types of things you can do using this technology.
using Roslyn.Compilers;
using Roslyn.Compilers.Common;
using Roslyn.Compilers.CSharp;
using Roslyn.Compilers.VisualBasic;
Compiling and running code dynamically brings many concepts into reality but the one that I find most practical is the ability to convert C# to VB and vice versa. This is made possible as Roslyn exposes both object models and syntax trees of these languages.
I strongly encourage you to download this CTP and start experimenting with it as I’m in process of doing this myself. More information about Roslyn can be found on the Visual Studio Blog as well as the Download Center via MSDN to get the bits and accompanying documentation (e.g. walkthroughs, sample code, etc.).
Visual Studio guys can use this to bring lambda to immediate window in debug mode 😉
Perhaps this could automatically translate the code examples in VS Documentation (I always struggle to convert all the C# examples to C++/CLI). | https://blogs.msdn.microsoft.com/bryang/2011/11/01/roslyn-ctp-released/ | CC-MAIN-2016-36 | refinedweb | 255 | 52.7 |
> > * A bit of syntactic sugar for defining prototype objects > > wouldn't go amiss. Having to say > > > > Thing = Base() > > with Thing: > > ... > > > > every time I want to define a class (oops, sorry, prototype) > > would drive me batty. I'd much rather write something like > > > > object Thing(Base): > > ... > > In Io (): Thing := Object clone do( # evaluates in context of Thing, i.e., Thing is the locals object of do() slot := "new slot for Thing" ) Thing slot print # <- "new slot for Thing" There's some fancy footwork going on that I won't go into at the moment - its 5am - but suffice to say that namespaces are handled by objects and message forwarding and one of the tricks of this process is that a method has 'self', 'proto', 'sender', and 'locals' slots. In the case of do(), locals == self == sender (I think). Anyway, for Prothon, perhaps you'll want to reserve "do" for blocks, so we'll try "where": Thing = Object.clone() where: attr = "new attribute for Thing" print Thing.attr # <- "new attribute for Thing" I'm not sure how that'd be made to work though - in Io, do() is just another method of Object (there are no keywords). The namespaces are handled by objects idea seems to play out nicely - for one thing, there's no reason to distinguish globals via capitalization. Also, having two forms of assignment (one for creation(binding), and one for update(re-binding)) eliminates alot of the need for the '.' prefix stuff in Prothon - use "self.attr" once on creation to let the namespace know what you're doing and from then on , as long as you don't shadow it by creating a local of the same name, you can just say "attr". If you're only doing updates on "attr", in a given namespace, you never need to use "self.attr" - the lookup mechanism will just resolve it for you. Anyway, that's enough from me, I have a lab exam to study for. Sean | https://mail.python.org/pipermail/python-list/2004-March/263699.html | CC-MAIN-2014-15 | refinedweb | 329 | 78.48 |
How to use custom DLL in SSIS Package
This article teachs you how to consume/use third party DLL or assembly via a SSIS package script task control. This script task control is a SSIS common control to perform functions that are not provided by the standard Integration Services Task. For Example, moving processing logic from external batch or script files into the package, or invoking a third party API as part of the control flow. This is the control where you can write the C# or VB code to execute in the part to SSIS packages.
1. Create Class Library project:
Start Visual Studio to create Class Library Project (
2. Implement your logic:
Write your code in Class.cs/vb file, compile it, sign with string name and place it in GAC.
Registering the DLL
3. Create new SSIS project:
Then create new SSIS project to use the DLL or Assembly into Script task
What is Script Task: It’s a SSIS common control, it performs functions that are not provided by the standard Integration Services Task. For Example, moving processing logic from external batch or script files into the package, or invoking a third party API as part of the control flow.
Steps:
i). File->Add->New Project
ii). Add new Integration Services Project as shown in below snapshot
iii). Then add a new Script task (from SSIS Tool Box)
iv). Than edit the script task & add the reference of new DLL (DatavalidationSample.dll) as shown in next step.
4. Add Reference:
When we click the edit button from the script task properties, it will open as shown Vista Project Editor, then please follow the steps to add the reference in it.
i). Right click on the vista project
ii). Select the Add Reference… option
iii). Select the Browse tab from the Add Reference dialog box
iv). The browse the DLL from your project and click OK
v). The DLL has been added in references.
vi). Please click Save all.
5. Call the custom method in your SSIS package:
Please follow the below steps:
i). Add the namespace (As shown in snapshot)
ii). Create an object for the class
iii). Call the method which you want to execute (see the highlighted part in snapshot).
6. Execute the SSIS package:
On execution of SSIP package, the Message Box “Hello Infy!” is displayed, this text is sent by the DLL.
| http://www.dotnetspider.com/resources/45645-How-to-use-custom-DLL-in-SSIS-Package.aspx | CC-MAIN-2017-09 | refinedweb | 400 | 74.19 |
This tutorial has been a long time coming and for anyone who has been following the other tutorials I apologize.
In this tutorial I will cover one of the most useful topics in game programming, Game States. Game States are extremely useful, lets say you have a game with a menu screen, pause screen, play screen and game over screen.
You could go ahead and do something horribly convoluted and hard to maintain such as this-
if(menuState) { // do this } if(playstate) { // do this } etc....
As you can see eventually this is gonna be extremely hard to maintain and use, so the solution is to create a way to easily switch between states, remember our Game class, everything is nicely encapsulated and set out. We can use this class with our Game State class to make Game States that are essentially like separate games, these games can be easily switched between.
So first we will create a GameState.h modelled partially on our Game.h file, so create GameState.h and start coding...
#ifndef GAMESTATE_H #define GAMESTATE_H #include "Game.h" class GameState { public: virtual void Init() = 0; virtual void Clean() = 0; virtual void Pause() = 0; virtual void Resume() = 0; virtual void HandleEvents(Game* game) = 0; virtual void Update(Game* game) = 0; virtual void Draw(Game* game) = 0; void ChangeState(Game* game, GameState* state) { game->ChangeState(state); } protected: GameState() { } }; #endif
Notice the pure virtual functions (functions denoted = 0;) we use these so that we can have all our game states inherit from this GameState base class, inheritance is a topic that you should research if this concept seems strange to you.
Also notice we are passing in a pointer to a Game object, this is because GameStates may need to access some game functions as they are essentially Games themselves.
Once more thing to look at is the protected constructor, this is used so we can implement the class as a singleton, singletons are used to ensure there is only ever one instance of an object at any time, we use a protected constructor, then use a function that returns a pointer to a static instance of the class.
OK thats it for our GameState.h, since this is a pure virtual base class we will not write a .cpp file as each individual game state will overload these functions.
Now we need to make some changes to our Game.h file so open it up....
#ifndef _GAME_H_ #define _GAME_H_ #include <SDL.h> #include "Sprite.h" #include <vector> class GameState; // make sure this class knows about the GameState class. class Game { public: Game(); void Init(const char* title, int width, int height, int bpp, bool fullscreen); void ChangeState(GameState* state); // new function void PushState(GameState* state); // new function void PopState(); // new function void HandleEvents(); // remove pointer to game class void Update(); void Draw(); void Clean(); bool Running() { return m_bRunning; } void Quit() { m_bRunning = false; } private: // the stack of states std::vector<GameState*> states; SDL_Surface* m_pScreen; bool m_bFullscreen; bool m_bRunning; }; #endif
Ok thats the header file, and now the cpp file......
void Game::HandleEvents() // take pointer out and remove function body { } void Game::Update() // remove any previous code from old tutorials { } void Game::Draw() // remove any previous code from old tutorials { SDL_Flip(m_pScreen); } void Game::Clean() // remove any previous code from old tutorials { }
OK, we removed these function bodies as we now want our individual states to handle their own drawing,updating and event handling.
Now make sure this compiles, it will just be a blank window at the moment and you will have to force it to quit.
Now we need to implement our GameStates into the Game class and also create the new functions that we wrote.
So again in the Game.cpp file
#include "Game.h" #include "GameState.h" // constructor Game::Game() { } m_pScreen = SDL_SetVideoMode(width, height, bpp, flags); m_bFullscreen = fullscreen; m_bRunning = true; printf("Game Initialised Succesfully\n"); } /* Our new functions, ChangeState() takes a pointer to a GameState as a parameter and then pushes that state onto the vector of pointers to GameStates, before that it uses the clean function to remove the old state from the stack. */ void Game::ChangeState(GameState* state) { // cleanup the current state if ( !states.empty() ) { states.back()->Clean(); states.pop_back(); } // store and init the new state states.push_back(state); states.back()->Init(); } /* Whereas ChangeState() pushes a state onto the stack and removes the previous state, PushState() pauses the previous state before pushing a new state onto the stack, this state can then be removed and the previous state resumed. Extrememly useful for pausing. */ void Game::PushState(GameState* state) { // pause current state if ( !states.empty() ) { states.back()->Pause(); } // store and init the new state states.push_back(state); states.back()->Init(); } /* Remove and resume previous state. */ void Game::PopState() { // cleanup the current state if ( !states.empty() ) { states.back()->Clean(); states.pop_back(); } // resume previous state if ( !states.empty() ) { states.back()->Resume(); } } /* These functions have now been changed so that they simply allow the current state to handle things, states.back() refers to the last element on the stack (the current state) */ void Game::HandleEvents() { // let the state handle events states.back()->HandleEvents(this); } void Game::Update() { // let the state update the game states.back()->Update(this); } void Game::Draw() { // let the state draw the screen states.back()->Draw(this); //SDL_Flip(m_pScreen); } void Game::Clean() { while ( !states.empty() ) { states.back()->Clean(); states.pop_back(); } // shutdown SDL SDL_Quit(); }
Right then our state manager is now set up, now onto creating a few states for our game.
I hope it hasn't been too hard going so far and everyone is still with me.
So check that this compiles, again you will have to force quit the program.
Now lets create a state, we will create a few states but each will simply be a different image as that is the scope of the tutorial so far, and it gives us an excuse to use our Sprite class from the previous tutorial.
So create a new file called MenuState.h
#ifndef _MENU_STATE_H_ #define _MENU_STATE_H_ #include "SDL.h" #include "GameState.h" #include "Sprite.h" class MenuState : public GameState { public: void Init(); void Clean(); void Pause(); void Resume(); void HandleEvents(Game* game); void Update(Game* game); void Draw(Game* game); // Implement Singleton Pattern static MenuState* Instance() { return &m_MenuState; } protected: MenuState() {} private: static MenuState m_MenuState; SDL_Surface* menuSprite; }; #endif
So thats our first state, it inherits from our GameState base class, we also created an SDL_Surface so that we can draw something to the screen.
So now create the MenuState.cpp....
#include <stdio.h> #include "SDL.h" #include "Game.h" #include "MenuState.h" MenuState MenuState::m_MenuState; void MenuState::Init() { menuSprite = NULL; // set pointer to NULL; menuSprite = Sprite::Load("menustate.bmp"); // load menu state bitmap printf("MenuState Init Successful\n"); } void MenuState::Clean() { printf("MenuState Clean Successful\n"); } void MenuState::Pause() { printf("MenuState Paused\n"); } void MenuState::Resume() { printf("MenuState Resumed\n"); } void MenuState::HandleEvents(Game* game) //put our exit function back in business // we can now quit with cross in corner. { SDL_Event event; if (SDL_PollEvent(&event)) { switch (event.type) { case SDL_QUIT: game->Quit(); break; } } } void MenuState::Update(Game* game) { } void MenuState::Draw(Game* game) { Sprite::Draw(game->GetScreen(), menuSprite, 0, 0); // we will write // GetScreen() in a second SDL_Flip(game->GetScreen()); }
While writing the tutorial I realised I would have to write a function to return the screen as it is private, it is bad practice to just bluff it and make it public so try not to do it. Here is the GetScreen() function and it couldn't be simpler
SDL_Surface* GetScreen() {return m_pScreen;}
Just put this function in the public: part of the Game.h file.
Ok so we have now written pretty much all we need to test our first state, so lets update our main.cpp and check it out.
#include "Game.h" #include "MenuState.h" #include <iostream> int main(int argc, char* argv[]) { Game game; game.Init("State Manager",640,480,32,false); game.ChangeState(MenuState::Instance()); while(game.Running()) { game.HandleEvents(); game.Update(); game.Draw(); } // cleanup the engine game.Clean(); return 0; }
We added our ChangeState function and set the current game state to an Instance of the MenuState class.
Wow this is a long tutorial
So we will create 2 new states PlayState and PauseState. Here is the code, paste(or type out if you like) into PlayState.h, PlayState.cpp, PauseState.h, PauseState.cpp
PlayState.h
#ifndef _PLAY_STATE_H_ #define _PLAY_STATE_H_ #include "SDL.h" #include "GameState.h" #include "Sprite.h" class PlayState : public GameState { public: void Init(); void Clean(); void Pause(); void Resume(); void HandleEvents(Game* game); void Update(Game* game); void Draw(Game* game); // Implement Singleton Pattern static PlayState* Instance() { return &m_PlayState; } protected: PlayState() {} private: static PlayState m_PlayState; SDL_Surface* playSprite; }; #endif
PlayState.cpp
#include <stdio.h> #include "SDL.h" #include "Game.h" #include "PlayState.h" PlayState PlayState::m_PlayState; void PlayState::Init() { playSprite = NULL; playSprite = Sprite::Load("playstate.bmp"); printf("PlayState Init Successful\n"); } void PlayState::Clean() { printf("PlayState Clean Successful\n"); } void PlayState::Pause() { printf("PlayState Paused\n"); } void PlayState::Resume() { printf("PlayState Resumed\n"); } void PlayState::HandleEvents(Game* game) { SDL_Event event; if (SDL_PollEvent(&event)) { switch (event.type) { case SDL_QUIT: game->Quit(); break; } } } void PlayState::Update(Game* game) { } void PlayState::Draw(Game* game) { Sprite::Draw(game->GetScreen(), playSprite, 0, 0); SDL_Flip(game->GetScreen()); }
PauseState.h
#ifndef _PAUSE_STATE_H_ #define _PAUSE_STATE_H_ #include "SDL.h" #include "GameState.h" #include "Sprite.h" class PauseState : public GameState { public: void Init(); void Clean(); void Pause(); void Resume(); void HandleEvents(Game* game); void Update(Game* game); void Draw(Game* game); // Implement Singleton Pattern static PauseState* Instance() { return &m_PauseState; } protected: PauseState() {} private: static PauseState m_PauseState; SDL_Surface* pauseSprite; }; #endif
PauseState.cpp
#include <stdio.h> #include "SDL.h" #include "Game.h" #include "PauseState.h" PauseState PauseState::m_PauseState; void PauseState::Init() { pauseSprite = NULL; pauseSprite = Sprite::Load("paused.bmp"); printf("PauseState Init Successful\n"); } void PauseState::Clean() { printf("PauseState Clean Successful\n"); } void PauseState::Resume(){} void PauseState::Pause() {} void PauseState::HandleEvents(Game* game) { SDL_Event event; if (SDL_PollEvent(&event)) { switch (event.type) { case SDL_QUIT: game->Quit(); break; } } } void PauseState::Update(Game* game) { } void PauseState::Draw(Game* game) { Sprite::Draw(game->GetScreen(), pauseSprite, 0, 0); SDL_Flip(game->GetScreen()); }
So once you have these files in your project we are going to move between them. So we will start with our MenuState.cpp, so open it up....
#include "PlayState.h" // include PlayState.h // now we will go into the MenuState::HandleEvents(Game* game) void MenuState::HandleEvents(Game* game) { SDL_Event event; if (SDL_PollEvent(&event)) { switch (event.type) { case SDL_QUIT: game->Quit(); break; case SDL_KEYDOWN: switch(event.key.keysym.sym){ case SDLK_SPACE: game->ChangeState(PlayState::Instance()); break; } } } }
And now we will go into PlayState.cpp
#include "PauseState.h" // include the pause state // in handle events void PlayState::HandleEvents(Game* game) { SDL_Event event; if (SDL_PollEvent(&event)) { switch (event.type) { case SDL_QUIT: game->Quit(); break; case SDL_KEYDOWN: switch(event.key.keysym.sym){ case SDLK_SPACE: game->PushState(PauseState::Instance()); break; } } } }
This time we use PushState(); instead of ChangeState();
now open up PauseState.cpp
void PauseState::HandleEvents(Game* game) { SDL_Event event; if (SDL_PollEvent(&event)) { switch (event.type) { case SDL_QUIT: game->Quit(); break; case SDL_KEYDOWN: switch(event.key.keysym.sym){ case SDLK_SPACE: game->PopState(); break; } } } }
OK and we are done, compile and test
You should start with a menu state and then pressing space will take you to the playstate and then pressing space will pause and unpause the playstate. I know this is quite a simple demonstration but the code written here can be used in all of your 2d games, you can create states as individual levels or anything you want. Try making a menu in the menu state and then possibly a moving sprite in the playstate.
Wow that was a long tutorial, now you know why it took so long to appear. Next tutorial will focus on creating a game object base class and deriving some objects from it.
Happy Coding, as usual any questions welcome.
Here are the files I used in the project, sorry they're big
Number of downloads: 858
Number of downloads: 858
Number of downloads: 757
Kakashi is cool
This post has been edited by stayscrisp: 06 December 2012 - 03:46 AM
Reason for edit:: Small typo fixed | http://www.dreamincode.net/forums/topic/120775-beginning-sdl-part-4-state-manager/ | CC-MAIN-2016-22 | refinedweb | 2,026 | 64 |
On Tue, May 11, 2010 at 11.
Sure, bailing out for ridiculously large arguments sounds fine to me.
On a 64-bit machine, there can be at most 2**61 4-byte digits, each
digit giving containing 30 bits of the long. So the maximum
representable long (under the implausible assumption that someone
could actually find 2**63 bytes of storage) would be around
2**(30*2**61). The following quick search gives me a value of around
1.18e18 for the first n such that n! exceeds this value:
from math import log, lgamma
def bisect(f, a, b):
c = (a + b)/2.0
while a != c and b != c:
a, b = (a, c) if f(c) else (c, b)
c = (a + b)/2.0
return c
BOUND = 2**62*15*log(2)
print(bisect(lambda x: lgamma(x) > BOUND, 2.0, 1e30)) | http://bugs.python.org/msg105557 | CC-MAIN-2017-17 | refinedweb | 144 | 85.59 |
stompjrkz400
Newbie Poster
4 posts since Feb 2011
Reputation Points: 0 [?]
Q&As Helped to Solve: 0 [?]
Skill Endorsements: 0 [?]
•Newbie Member
0
ok everyone, i taking my first semester of C++ and i have a little quesiton, the teacher wants up to output and input on the same line, but to send the input through i have to strike enter, which keys to the next line, heres my code, just cant get it to work how i want it
#include <iostream> #include <iomanip> #include <string> using namespace std; const string MY_NAME = "Devon Crampton"; const string COURSE_NUMBER = "Lab 1 CST 113 -03"; const int YEAR = 2008; const int RAWWOOL = 8; const double SHEAR_COST = 2.25; int main (void) { double inputWoolPrice; double year = YEAR; double price1; double price2; double price3; double yearPriceWool; double totalWool; double shearTotal; double rawProfit; int totalSheep; int wool1; int wool2; int wool3; int numberOfSheep1; int numberOfSheep2; int numberOfSheep3; // Output to the screen programmer's name and course number cout.fill('*'); cout << setw(70+1) << " " << endl; cout.fill(' '); cout << endl; cout << MY_NAME << endl; cout << COURSE_NUMBER << endl; cout << endl; cout.fill('*'); cout << setw(70+1) << " " << endl; cout.fill(' '); cout<< endl; // input price of wool cout << " What is the selling price per pound of sheared wool for 2008? : "; cin >> inputWoolPrice; ///---------------------------------------------------------------------------------- /// this is where i am having the problem \/ // input number of sheep in each field cout << " Enter the number of sheep sheared from each field for 2008 : "<< endl; // i have to have these 3 inputs and outputs on the same line cout << " field 1 "; cin >> numberOfSheep1; cout << " field 2 "; cin >> numberOfSheep2; cout << " field 3 "; cin >> numberOfSheep3; ///---------------------------------------------------------------------------------- cout.fill('*'); cout << setw(70+1) << " " << endl; cout.fill(' '); cout<< endl;
right now they output like this
Enter the number of sheep sheared from each field for 2008 :
field 1 259
field 2 147
field 3 369
i need to have it output like this
Enter the number of sheep sheared from each field for 2008 :
field 1: 259 field 2: 147 field 3: 369
so that every time i strike enter or space it displayed the next output and prompts me for the input
any ideas, this one is stumping me, my tutor couldn't even help me..
thanks for the help ! | https://www.daniweb.com/software-development/cpp/threads/349211/c-question-should-be-easy-just-cant-figure-it-out | CC-MAIN-2015-14 | refinedweb | 369 | 69.25 |
The ProblemWhenPython has a “batteries included” philosophy. I have used 2 standard libraries to solve this problem.
import subprocess import shlex
- subprocess - Works with additional processes
- shlex - Lexical analysis of shell-style syntaxes
subprocess.popenToviceTo
13 comments:
Why are you using self.logger? It doesn't look like you're running this from an instance method. Is this just a typo or am I missing something?
@Dan,
You are right, this is a instance method. I have updated the code snippet. Thanks.
Thanks a lot, very useful post, it works great !
There is still multiple "self." variables in this example.
Grabbing stderr at the same time as stdout in a nice way (and keeping the correct order) is still somewhat challenging.
Thanks @William Isaac, Updated the post.
hi there
i tried you method but not working i am using pythin 2.7.1 and iam tring to read from c++ exe to the python as input so far when i put the while loop with readline in just hold and then crash
@ramtha Have you tried by setting `shell=True`. This tells subprocess to use the OS shell to open and run your script.
How do you handle if the subprocess goes into cooked mode? In my case, I have the .C source of the subprocess, and I can see when it goes into cbreak. When it does, I get no output on the screen when running as a subprocess... then at the end of my subprocess, all of the output type comes at once.
Any hints to a solution for that would be awesome.
'ls -l' hangs forever with this solution
In Python 3.x the process will hang because 'output' is a byte array instead of a string.
Make sure you decode readline() into a string thus:
output = process.stdout.readline().decode()
-TT
Hi I managed to print the output message but it's in the command prompt. Do you know how can I display the message in html? | http://blog.endpoint.com/2015/01/getting-realtime-output-using-python.html | CC-MAIN-2017-22 | refinedweb | 331 | 76.42 |
In the examples we’ve
seen so far, we’ve always assumed that the Java
interfaces for the remote objects are available at compile time. But
what happens if they aren’t? You might get a
reference to a CORBA
Object from a Naming
Service, for example, and not know what interface that object
implements, or (more likely) not have that Java interface in the
client JVM. We mentioned earlier that you can use an
org.omg.CORBA.Object reference directly to make
requests and exchange data with its remote object -- now
we’ll briefly look at how the Dynamic Invocation
Interface (DII) makes that possible.
The CORBA standard actually defines two complementary APIs for this purpose. The DII is used by a CORBA client to make remote method requests of a server object, while the Dynamic Skeleton Interface (DSI) can be used by a server-side skeleton to forward method invocations to its server implementation object in cases where it doesn’t have the actual servant interface available. Both of these APIs provide the same essential function: a dynamic interface to an object whose interface is not known at compile time.
The DII and DSI may seem like sidebar topics in the CORBA world, but in reality they are at the heart of CORBA and how it works. When we generate Java stubs and skeletons from IDL interfaces, the generated code.
On a more pragmatic note, there are cases where you might find
yourself needing the DII and/or the DSI. In all the examples
we’ve gone through in this chapter so far,
we’ve assumed that both the server and client
JVM’s had all of the relevant interfaces available
locally. For clients, we assumed that somehow they had acquired the
IDL-generated Java interface for our objects
(
Account, etc.). It is safe to assume you have
access to the client machines directly, or can provide an easy way
for users to download the required classes themselves as part of some
installation process. Since the clients will obviously need to
acquire your client code in the first place, it’s
usually possible to provide the other CORBA client classes as well,
and the DII isn’t needed. But if not (perhaps you
want to minimize class downloads for some reason, for example), then
it may be useful to use the DII to create a client that operates
without the need for the object interfaces themselves. reference to a remote
object (using any of the approaches we’ve already
covered), you can create and issue a method request to the object by
building a parameter list for the method call, making a
NamedValue object to hold the result, making a
Context object and putting any useful
environment values in it, and then using all of these items to create
a
Request object that corresponds to a
particular method on the object. In general, the various
create_XXX( ) methods on the
ORB interface are used to construct all of these
elements of a DII request, except for the
Request
itself. That is created by calling
_create_request( ) on the
Object that is the target of
the remote method call.
Example 4-9 shows a version of our
Account client that uses DII calls to invoke the
deposit( ) or
getBalance( )
methods on a remote
Account object (the case for
the
withdraw( ) method is very similar to the
deposit( ) case, so it’s
omitted here for the sake of brevity.) The client is structured very
much like the client in Example 4-8, except that the
actual calls to the remote
Account object are
constructed as DII calls. For the deposit requests, the method has a
single float argument and no return value, so the call to create the
Request includes a null
NamedValue for the result, and an
NVList containing a single
NamedValue holding the amount to be deposited. In the case of
requests for the
Account balance, the
getBalance( ) method has no arguments and a
single float return value, so the creation of the
Request includes a null
NVList for the arguments, and a
NamedValue containing an
Any object intended to hold a floating-point
value.
Example 4-9. Account Client DII.java
import org.omg.CORBA.*; import org.omg.CosNaming.*; public class AccountClientDII { public static void main(String args[]) { ORB myORB = ORB.init(args, null); try { // The object name passed in on the command line String name = args[0]; org.omg.CORBA.Object acctRef = null; if (name.startsWith("corbaname") || name.startsWith("corbaloc") || name.startsWith("IOR")) { System.out.println("Attempting to lookup " + args[0]); acctRef = myORB.string_to_object(args[0]); } else { System.out.println("Invalid object URL provided: " + args[0]); System.exit(1); } // Make a dynamic call to the doThis method if (acctRef != null) { // We managed to get a reference to the named account, now check the // requested transaction from the command-line String action = args[1]; float amt = 0.0f; if (action.equals("deposit")) { amt = Float.parseFloat(args[2]); } System.out.println("Got account, performing transaction..."); try { // Did user ask to do a deposit? if (action.equals("deposit")) { // The following DII code is equivalent to this: // acct.deposit(amt); // First build the argument list. In this case, there's a single // float argument to the method. NVList argList = myORB.create_list(1); Any arg1 = myORB.create_any( ); // Set the Any to hold a float value, and set the value to the // amount to be deposited. arg1.insert_float(amt); NamedValue nvArg = argList.add_value("amt", arg1, org.omg.CORBA.ARG_IN.value); // Java IDL doesn't implement the get_default_context( ) operation // on the ORB, so we just set the Context to null Context ctx = null; // Create the request to call the deposit( ) method Request depositReq = acctRef._create_request(ctx, "deposit", argList, null); // Invoke the method... depositReq.invoke( ); System.out.println("Deposited " + amt + " to account."); } else { // The following DII code is equivalent to this: // acct.balance( ); // No argument list is needed here, since the getBalance( ) method // has no arguments. 
But we do need a result value to hold the // returned balance Any result = myORB.create_any( ); // Set the Any to hold a float value result.insert_float(0.0f); NamedValue resultVal = myORB.create_named_value("result", result, org.omg.CORBA.ARG_OUT.value); // Java IDL doesn't implement the get_default_context( ) operation // on the ORB, so we just set the Context to null Context ctx = null; // Create the request to call getBalance( ) Request balanceReq = acctRef._create_request(ctx, "getBalance", null, resultVal); // Invoke the method... balanceReq.invoke( ); System.out.println("Current account balance: " + result.extract_float( )); } } catch (Exception e) { System.out.println("Error occurred while performing transaction:"); e.printStackTrace( ); } } else { System.out.println("Null account returned."); System.exit(1); } } catch (Exception e) { e.printStackTrace( ); } } }
Again, note that in most situations you will actually have the Java
interface for the remote object available in your client along with
its helper class, so you’ll be able to narrow the
Object reference to a specific type and call its
methods directly..
Get Java Enterprise in a Nutshell, Second Edition now with O’Reilly online learning.
O’Reilly members experience live online training, plus books, videos, and digital content from 200+ publishers. | https://www.oreilly.com/library/view/java-enterprise-in/0596001525/ch04s06.html | CC-MAIN-2021-17 | refinedweb | 1,181 | 54.93 |
Coding Introduction
Evennia allows for a lot of freedom when designing your game - but to code efficiently you still need to adopt some best practices as well as find a good place to start learning.
Here are some pointers to get you going.
Explore Evennia interactively
When new to Evennia it can be hard to find things or figure out what is available. Evennia offers a special interactive python shell that allows you to experiment and try out things. It’s recommended to use ipython for this since the vanilla python prompt is very limited. Here are some simple commands to get started:
```
# [open a new console/terminal]
# [activate your evennia virtualenv in this console/terminal]
pip install ipython   # [only needed the first time]
cd mygame
evennia shell
```
This will open an Evennia-aware python shell (using ipython). From within this shell, try
import evennia evennia.<TAB>
That is, enter
evennia. and press the
<TAB> key. This will show
you all the resources made available at the top level of Evennia’s “flat
API”. See the flat API page for more info on how to explore it
efficiently.
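The same kind of interactive exploration works on any Python module, so you can practice the technique even without Evennia installed - here using the standard library's json module as a stand-in:

```python
# Evennia itself may not be installed here, so the standard library's
# json module stands in; the exploration technique is identical.
import json

# dir() lists a module's attributes, much like pressing <TAB> in ipython
public_names = [name for name in dir(json) if not name.startswith("_")]
print(public_names)

# Reading a docstring is the quickest way to learn an entry point
print(json.loads.__doc__.splitlines()[0])
```

Inside `evennia shell` you would do the same with `import evennia` and `dir(evennia)`.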
You can complement your exploration by peeking at the sections of the much more detailed Developer Central. The Tutorials section also contains a growing collection of system- or implementation-specific help.
Use a python syntax checker¶
Evennia works by importing your own modules and running them as part of the server. While Evennia should gracefully tell you what errors it finds, it can nevertheless be a good idea to check your code for simple syntax errors before you load it into the running server. There are many Python syntax checkers out there. A fast and easy one is pyflakes; a more verbose one is pylint. You can also check that your code style is up to snuff using pep8. Even with a syntax checker you will not be able to catch every possible problem - some bugs or problems will only appear when you actually run the code. But using such a checker is a good start for weeding out the simple problems.
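If no separate checker is installed, Python itself can catch pure syntax errors before the server imports a module - a small sketch using the standard library's ast module (the module source here is just an illustration):

```python
import ast

# A module with a deliberate mistake: the def line is missing its colon
source = """
def at_look(self)
    return "you see nothing special"
"""

try:
    ast.parse(source)
    result = "no syntax errors found"
except SyntaxError as err:
    result = "syntax error on line %d: %s" % (err.lineno, err.msg)

print(result)
```

This catches only syntax errors, not runtime problems, which is the same limitation the checkers above have.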
Plan before you code¶
Before you start coding away at your dream game, take a look at our Game Planning page. It might hopefully help you avoid some common pitfalls and time sinks.
Code in your game folder, not in the evennia/ repository¶
As part of the Evennia setup you will create a game folder to host your
game code. This is your home. You should never need to modify anything
in the
evennia library (anything you download from us, really). You
import useful functionality from here and if you see code you like,
copy&paste it out into your game folder and edit it there.
If you find that Evennia doesn’t support some functionality you need, make a Feature Request about it. Same goes for bugs. If you add features or fix bugs yourself, please consider Contributing your changes upstream!
Learn to read tracebacks¶
Python is very good at reporting when and where things go wrong. A traceback shows everything you need to know about crashing code. The text can be pretty long, but you usually are only interested in the last bit, where it says what the error is and at which module and line number it happened - armed with this info you can resolve most problems.
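A small sketch of what to look for - the final line of a traceback names the exception, and the lines above it show the chain of calls that led there, so read from the bottom up (the get_health function and its dict argument are just illustrations, not Evennia API):

```python
import traceback

def get_health(character):
    # 'character' is a plain dict standing in for a game object
    return character["health"]

try:
    get_health({})          # no 'health' key, so this raises KeyError
except KeyError:
    tb_text = traceback.format_exc()
    print(tb_text)
```

The last line printed reads `KeyError: 'health'`, and the line numbers above it point at exactly where to look in your code.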
Evennia will usually not show the full traceback in-game though. Instead
the server outputs errors to the terminal/console from which you started
Evennia in the first place. If you want more to show in-game you can add
IN_GAME_ERRORS = True to your settings file. This will echo most
(but not all) tracebacks both in-game as well as to the
terminal/console. This is a potential security problem though, so don’t
keep this active when your game goes into production.
A common confusing error is finding that objects in-game are suddenly of the type
DefaultObject rather than your custom typeclass. This happens when you introduce a critical syntax error to the module holding your custom class. Since such a module is not valid Python, Evennia can't load it at all. Instead of crashing, Evennia will then print the full traceback to the terminal/console and temporarily fall back to the safe
DefaultObject until you fix the problem and reload.
Docs are here to help you¶
Some people find reading documentation extremely dull and shun it out of principle. That’s your call, but reading docs really does help you, promise! Evennia’s documentation is pretty thorough and knowing what is possible can often give you a lot of new cool game ideas. That said, if you can’t find the answer in the docs, don’t be shy to ask questions! The discussion group and the irc chat are also there for you. | http://evennia.readthedocs.io/en/latest/Coding-Introduction.html | CC-MAIN-2018-13 | refinedweb | 811 | 70.94 |
gnutls_x509_crt_get_issuer_unique_id(3)
gnutls_x509_crt_get_issuer_unique_id - API function
#include <gnutls/x509.h>

int gnutls_x509_crt_get_issuer_unique_id(gnutls_x509_crt_t crt, char * buf,
                                         size_t * buf_size);
gnutls_x509_crt_t crt
        Holds the certificate
char * buf
        user allocated memory buffer, will hold the unique id
size_t * buf_size
        size of user allocated memory buffer (on input), will hold the
        actual size of the unique ID on return.
This function will extract the issuerUniqueID value (if present) for the given certificate. If the user allocated memory buffer is not large enough to hold the full issuerUniqueID, then a GNUTLS_E_SHORT_MEMORY_BUFFER error will be returned, and buf_size will be set to the actual length. Prior to 3.4.8 this function had a bug that prevented passing a NULL buf to discover the required buf_size. To use this function safely with the older versions, buf must be a valid buffer that can hold at least a single byte if buf_size is zero.
GNUTLS_E_SUCCESS on success, otherwise a negative error code.
The Initialize Method in Ruby
The initialize method is useful when we want to set up some instance variables at the time of object creation. The initialize method is part of the object-creation process in Ruby and it allows us to set the initial values for an object.
Below are some points about initialize:
- We can define default arguments.
- It always returns a new object, so the return keyword is not used inside the initialize method.
- Defining an initialize method is not necessary if our class doesn't require any arguments.
- If we try to pass arguments into new without having defined initialize, we will get an error.
Syntax:
def initialize(argument1, argument2, .....)
Without Initialize variable –
Example :
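For instance (Person is a hypothetical class name; the method takes a single name argument, as described below):

```ruby
class Person
  # initialize runs automatically whenever Person.new is called
  def initialize(name)
    @name = name
  end
end

# In IRB, defining a method echoes its name as a symbol,
# which is why the output below is :initialize
geek = Person.new("Sam")
```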
Output :
=> :initialize
In the above example, we add a method called initialize to the class; the method has a single argument, name. Calling new on the class invokes initialize, which sets up the new object.
With Initialize Variable –
Example :
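For instance (a hypothetical Rectangle class whose initialize stores two coordinates, consistent with the output shown):

```ruby
class Rectangle
  # x and y are given default values, so Rectangle.new also works bare
  def initialize(x = 10, y = 20)
    @x = x
    @y = y
  end
end

# p shows the object together with its instance variables
rect = Rectangle.new(10, 20)
p rect
```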
Output :
#<Rectangle:0x0000555e7b1ba0b0 @x=10, @y=20>
In the above example, the initialized instance variables are accessed using the @ prefix within the class; to access them outside of the class we would use public (accessor) methods.
Simpleform association with images!
Hi Chris,
I'm wondering if you could help. I'm currently building a form that'll send out a SMS with twilio, and i'm looking to send details of a vehicle out at the same time, specifically the images of that vehicle. I have the form working perfectly except selecting which images are to be sent. If i run what i've put below I get 'can't write unknown attribute
message_id'
<%= simple_form_for [@vehicle, Message.new] do |f| %>
  <%= f.error_notification %>
  <div class="form-inputs">
    <%= f.input :to %>
    <%= f.input :company %>
    <%= f.input :body %>
    <%= f.association :vehicle_images, :as => :check_boxes,
          item_wrapper_tag: :div,
          item_wrapper_class: "image-select",
          :collection => @vehicle.vehicle_images.order(:id),
          :label => false,
          :inline_label => false,
          :include_blank => false,
          :input_html => { multiple: true },
          :label_method => lambda { |b| image_tag(b.image.url(:thumb)) },
          value_method: :id %>
  </div>
  <div class="form-actions">
    <%= f.button :submit %>
  </div>
<% end %>
def create
  @vehicle = Vehicle.find(params[:vehicle_id])
  @message = Message.new(message_params)
  @message.vehicle_id = @vehicle.id

  respond_to do |format|
    if @message.save
      @bucket = Bucket.new
      @bucket.message_id = @message.id
      @bucket.vehicle_id = @vehicle.id
      @bucket.vehicle_images = params[:message][:vehicle_image_ids]
      @bucket.save!
      # byebug
      format.html { redirect_to vehicle_messages_path(@vehicle), notice: 'Message was successfully created.' }
      format.json { render :show, status: :created, location: @message }
    else
      format.html { render :new }
      format.json { render json: @message.errors, status: :unprocessable_entity }
    end
  end
end
Is there anything silly I'm missing from what you can see here? I'm rushing out, but if you need more info to give any feedback, just let me know.
Thanks very much, it's appreciated!
Hey Martin,
Could you post the full error and a bit of the stacktrace as well? I'm guessing it has something to do with your associations.
I haven't seen the
can't write unknown attribute error message recently so I'm not quite sure what it refers to. Googling it leads me to believe maybe your Bucket doesn't have a message_id column or your associations are backwards maybe?
Thanks for taking a look Chris.
I do have message_id on Buckets.
class CreateBuckets < ActiveRecord::Migration def change create_table :buckets do |t| t.integer :message_id t.datetime :expires_at t.integer :vehicle_id t.integer :vehicle_images t.timestamps null: false end end end
Here are the various bits 'n bobs in a gist
Thanks again!
Hmm, when I don't add any images I get 'VehicleImage(#70203987369520) expected, got String(#47143662444640)'
It was just me being an idiot Chris, I didn't create the join table for messages and vehicle_images. Schoolboy error.
Thanks again.
Doh. I looked at your code and didn't have any good answers. Didn't cross my mind that the table might be missing. 😜
It's been a long week! :) Thanks for taking the time anyway, it's very much appreciated!
You also have a weird ability to create videos which contain stuff I'm working with at the time! You think you could point me in the right direction with this? | https://gorails.com/forum/simpleform-association-with-images | CC-MAIN-2021-43 | refinedweb | 488 | 53.37 |
Mark Burgess
October 3, 2001
Contents
1. What is an operating system?
o 1.1 Key concepts
1.1.1 Hierarchies and black boxes
1.1.2 Resources and sharing
1.1.3 Communication, protocols, data types
1.1.4 System overhead
1.1.5 Caching
o 1.2 Hardware
1.2.1 The CPU
1.2.2 Memory
1.2.3 Devices
1.2.4 Interrupts, traps, exceptions
o 1.3 Software
1.3.1 Resource management
1.3.2 Spooling
1.3.3 System calls
1.3.4 Basic command language
1.3.5 Filesystem
1.3.6 Multiple windows and screens
o Note
o Exercises
2. Single-task OS
o 2.1 Memory map and registers
o 2.2 Stack
o 2.3 Input/Output
2.3.1 Interrupts
2.3.2 Buffers
2.3.3 Synchronous and asynchronous I/O
2.3.4 DMA - Direct Memory Access
o Exercises
3. Multi-tasking and multi-user OS
o 3.1 Competition for resources
3.1.1 Users - authentication
3.1.2 Privileges and security
3.1.3 Tasks - two-mode operation
3.1.4 I/O and Memory protection
3.1.5 Time sharing
o 3.2 Memory map
o 3.3 Kernel and shells - layers of software
o 3.4 Services: daemons
o 3.5 Multiprocessors - parallelism
o Exercises
4. Processes and Thread
o 4.1 Key concepts
4.1.1 Naming conventions
4.1.2 Scheduling
4.1.3 Scheduling hierarchy
4.1.4 Runs levels - priority
4.1.5 Context switching
4.1.6 Interprocess communication
o 4.2 Creation and scheduling
4.2.1 Creating processes
4.2.2 Process hierarchy: children and parent processes
4.2.3 Unix: fork() and wait()
4.2.4 Process states
4.2.5 Queue scheduling
4.2.6 Round-robin scheduling
4.2.7 CPU quotas and accounting
o 4.3 Threads
4.3.1 Heavy and lightweight processes
4.3.2 Why use threads?
4.3.3 Levels of threads
4.3.4 Symmetric and asymmetric multiprocessing
4.3.5 Example: POSIX pthreads
4.3.6 Example: LWPs in Solaris 1
o 4.4 Synchronization of processes and threads
4.4.1 Problems with sharing for processes
4.4.2 Serialization
4.4.3 Mutexes: mutual exclusion
4.4.4 User synchronization: file locks
4.4.5 Exclusive and non-exclusive locks
4.4.6 Critical sections: the mutex solution
4.4.7 Flags and semaphores
4.4.8 Monitors
o 4.5 Deadlock
4.5.1 Cause
4.5.2 Prevention
4.5.3 Detection
4.5.4 Recovery
o 4.6 Summary
o Exercises
o Project
5. Memory and storage
o 5.1 Logical and Physical Memory
5.1.1 Physical Address space
5.1.2 Word size
5.1.3 Paged RAM/ROM
5.1.4 Address binding - coexistence in memory
5.1.5 Shared libraries
5.1.6 Runtime binding
5.1.7 Segmentation - sharing
5.1.8 The malloc() function
5.1.9 Page size, fragmentation and alignment
5.1.10 Reclaiming fragmented memory (Tetris!)
o 5.2 Virtual Memory
5.2.1 Paging and Swapping
5.2.2 Demand Paging - Lazy evaluation
5.2.3 Swapping and paging algorithms
5.2.4 Thrashing
o 5.3 Disks: secondary storage
5.3.1 Physical structure
5.3.2 Device drivers and IDs
5.3.3 Checking data consistency and formatting
5.3.4 Scheduling
5.3.5 Partitions
5.3.6 Stripes
o 5.4 Disk Filesystems
5.4.1 Hierachical filesystems and links
5.4.2 File types and device nodes
5.4.3 Permissions and access
5.4.4 File system protocols
5.4.5 Filesystem implementation and storage
5.4.6 The UNIX ufs filesystem
o Exercises
o Project
6. Networks: Services and protocols
o 6.1 Services: the client-server model
o 6.2 Communication and protocol
o 6.3 Services and Ports
o 6.4 UNIX client-server implementation
6.4.1 Socket based communication
6.4.2 RPC services
o 6.5 The telnet command
o 6.6 X11
o 6.7 html: hypertext markup language
o Exercises
o Project
7. TCP/IP Networks
o 7.1 The protocol hierarchy
 7.1.1 The OSI model
 7.1.2 Data encapsulation
o 7.2 The internet protocol family
 7.2.1 udp
 7.2.2 tcp
o 7.3 The physical layer
 7.3.1 Network connectivity
 7.3.2 Ethernet addresses
o 7.4 Internet Addresses and Routing
 7.4.1 IP addresses, networks and domain names
 7.4.2 Netmask and broadcast address
 7.4.3 Routers and gateways
o 7.5 Network Naming services
 7.5.1 The Domain Name Service
 7.5.2 Network Information Service
o 7.6 Distributed Filesystems
 7.6.1 NFS - the network filesystem
 7.6.2 AFS - the andrew filesystem
 7.6.3 DCE - the distributed computing environment
8. Security: design considerations
o 8.1 Who is responsible?
o 8.2 Passwords and encryption
 8.2.1 UNIX passwords
 8.2.2 Bad passwords
o 8.3 Super-user, or system administrator
 8.3.1 Network administration
o 8.4 Backups
o 8.5 Intruders: Worms and Viruses
 8.5.1 Back doors
 8.5.2 Setuid programs in unix
o 8.6 Firewall
o 8.7 Public and Private Keys
Where next?
o Glossary
Index
About this document ...
You can think of it as being the software which is already installed on a machine, before you add anything of your own. There is no universal definition of what an operating system consists of. Normally the operating system has a number of key elements: (i) a technical layer of software for driving the hardware of the computer, like disk drives, the keyboard and the screen; (ii) a filesystem which provides a way of organizing files logically; and (iii) a simple command language which enables users to run their own programs and to manipulate their files in a simple way. Some operating systems also provide text editors, compilers, debuggers and a variety of other tools. Since the operating system (OS) is in charge of a computer, all requests to use its resources and devices need to go through the OS. An OS therefore provides (iv) legal entry points into its code for performing basic operations like writing to devices. These are system calls which write to the screen or to disk etc.

Operating systems may be classified both by how many tasks they can perform `simultaneously' and by how many users can be using the system `simultaneously'. That is: single-user or multi-user and single-task or multi-tasking. A multi-user system must clearly be multi-tasking. The table below shows some examples.

   OS                    Users  Tasks  Processors
   MS/PC DOS             S      S      1
   Windows 3x            S      QM     1
   Macintosh System 7.*  S      QM     1
   Windows 9x            S      M*     1
   AmigaDOS              S      M      1
   -----------------------------------------------
   MTS                   M      M      1
   UNIX                  M      M      n
   VMS                   M      M      n
   NT                    S/M    M      n
   Windows 2000          M      M      n
   BeOS (Hamlet?)        S      M      n

The first of these (MS/PC DOS/Windows 3x) are single user, single-task systems which build on a ROM based library of basic functions called the BIOS.
UNIX split into two camps early on: BSD (Berkeley software distribution) and system 5 (AT&T license). The MacIntosh not a true multitasking system in the sense that. Windows NT added a proper kernel with memory protection. single-user system. keeping only the most important features of the BSD system. originally written for the DEC/Vax. With time these two versions have been merged back together and most systems are now a mixture of both worlds. This might be due to the lack of proper memory protection. A window manager can simulate the appearance of several programs running simultaneously. A standardization committee for Unix called POSIX. MTS (Michigan timesharing system) was the first time-sharing multi-user system1. and one which we shall frequently refer to below.1. Originally designed at AT&T. Windows 2000 thus has comparable functionality to Unix in this respect. but only a single user system. It supports only simple single-screen terminal based input/output and has no hierarchical file system. It is based on the UNIX model and is a fully multi-tasking. Later versions of Windows NT and Windows 2000 (a security and kernel enhanced version of NT) allow multiple logins also through a terminal server. Only a single user application could be open at any time. Windows 95 replaced the old coroutine approach of quasi-multitasking with a true context switching approach. AmigaDOS is an operating system for the Commodore Amiga computer. The BSD version was developed as a research project at the university of Berkeley. That means that it is possible to use several user applications simultaneously. This has been a major limitation on multi-tasking operating systems in the past. Unix is arguably the most important operating system today.which write to the screen or to disk etc. but this relies on each program obeying specific rules in order to achieve the illusion. Windows is purported to be preemptive multitasking but most program crashes also crash the entire system. 
Although all the operating systems can service interrupts. Here are some common versions of UNIX. and therefore simulate the appearance of multitasking in some situations. formed by the major vendors. The operating system includes a window environment which means that each independent program has a `screen' of its own and does not therefore have to compete for the screen with other programs. Many of the networking and user-friendly features originate from these modifications. Several programs may be actively running at any time. Unix Manufacturer Mainly BSD / Sys 5 . The claim is somewhat confusing. The Macintosh system 7 can be classified as single-user quasi-multitasking1. without proper memory protection. the older PC environments cannot be thought of as a multi-tasking systems in any sense. The trend during the last three years by Sun Microsystems and Hewlett-Packard amongst others has been to move towards system 5. Historically BSD Unix has been most prevalent in universities. based on the VMS system. the whole system crashes. California. developed by different manufacturers. while system 5 has been dominant in business environments. if one program crashes.2. attempts to bring compatibility to the Unix world. It comes in many forms.
Initially it reinvented many existing systems.multiple logins by different users is not possible). The Be operating system. NT is a `new' operating system from Microsoft based on the old VAX/VMS kernel from the Digital Equipment Corporation (VMS's inventor moved to Microsoft) and the Windows32 API. 1. NT has a built in object model and security framework which is amongst the most modern in use. a POSIX programming interface and about 150 Unix commands (including Perl). Unix is generally regarded as the most portable and powerful operating system available today by impartial judges.1 Key concepts Before discussing more of the details. you will do well to keep them in . BeOS has proper memory protection but allows direct access to video memory (required for fast video games). It is optimized for multimedia and is now saleable software developed by Be.BSD Berkeley BSD SunOS (solaris 1) Sun Microsystems BSD/sys 5 Solaris 2 Sun Microsystems Sys 5 Ultrix DEC/Compaq BSD OSF 1/Digital Unix DEC/Compaq BSD/sys 5 Hewlett-Packard Sys 5 AIX IBM Sys 5 / BSD IRIX Silicon Graphics Sys 5 Public Domain Posix (Sys V/BSD) Novell Sys 5 HPUX GNU/Linux SCO unix Note that the original BSD source code is now in the public domain. It is fully multitasking. let's review some key ideas which lie behind the whole OS idea. It also has virtual memory. originally developed for a new multimedia computer called the BeBox. Most Unix types support symmetric multithreaded processing and all support simultaneous logins by multiple users. is also new and is a fully multitasking OS. Is shares little with Unix except for a Bash shell. Although these ideas may seem simple. Unix runs on everything from laptop computers to CRAY mainframes.Com after the new computer concept failed due to lack of financial backing. It is particularly good at managing large database applications and can run on systems with hundreds of processors. but NT is improving quickly. 
is preemptive multitasking and is based on a microkernel design. It has virtual memory and multithreaded support for several processors. and can support multiple users (but only one at a time-. but it is gradually being forced to adopt many open standards from the Unix world.
the disk drives and the memory.2 Resources and sharing A computer is not just a box which adds numbers together. If only a single keyboard is connected then competing programs must wait for the resources to become free. This is the single most important concept in computing! It is used repeatedly to organize complex problems. 1.mind later. Simple ideas often get lost amongst distracting details. 1. but it is important to remember that the ideas are simple. down in the guts of things. . in which large high-level problems are gradually broken up into manageable low-level problems. The key to making large computer programs and to solving difficult problems is to create a hierarchical structure. If the system has two keyboards (or terminals) connected to it. whereas low-level implies a lot of detail. Figure 1. In a multi-tasking system there may be several programs which need to receive input or write output simultaneously and thus the operating system may have to share these resources between several running programs. which branches from the highest level to the lowest.g. since each high-level object is composed of several lower-level objects.1: The hierarchy is the most important concept in computing.1. Each level works by using a series of `black boxes' (e. This allows us to hide details and remain sane as the complexity builds up.1.1 Hierarchies and black boxes A hierarchy is a way of organizing information using levels of detail. A hierarchy usually has the form of a tree. then the OS can allocate both to different programs. The phrase highlevel implies few details. It has resources like the keyboard and the screen. subroutines) whose inner details are not directly visible.
That is. The agreement may say. This is a simple example of a protocol. This is a convention or agreement between the operating systems of two machines on what messages may contain. it doesn't automatically know what they mean. 1. then work a while on the next program.but the result would have been nonsense. Suppose computer A sends a message to computer B reporting on the names of all the users and how long they have been working.. terminated by a zero. More generally. The next thirty-two bits are a special number telling the OS which protocol to use in order to interpret the data. When computer B receives a stream of bits. or a mixture of all of them. These different types of data are all stored as binary information the only difference between them is the way one chooses to interpret them. If the wrong protocol is diagnosed. when passing parameters to functions in a computer program. data types The exchange of information is an essential part of computing. a protocol is an agreed sequence of behaviour which must be followed. computer B might not be able to read the data and a protocol error would arise.1.. 1. and so on. If the first program was left unfinished. It is important to understand that all computers have to agree on the way in which the data are sent in advance. The OS can then look up this protocol and discover that the rest of the data are arranged according to a pattern of <name><time><name><time>. then a string of characters could easily be converted into a floating point number . protocols. where the name is a string of bytes. Similarly. there are rules about how the parameter should be declared and in which order they are sent.1. it must then return to work more on that. for instance. To do this it sends a stream of bits across a network. it must work for a time on one program. Computer B now knows enough to be able to extract the information from the stream of bits. 
The way an OS decides to share its time between different tasks is called scheduling. in a systematic way. integers or floating point numbers. For example. Protocols are an important part of communication and data typing and they will appear in many forms during our discussion of operating systems.3 Communication.4 System overhead . if computer A had sent the information incorrectly. that the first thirty-two bits are four integers which give the address of the machine which sent the message. The resolution to this problem is to define a protocol. and the time is a four byte digit containing the time in hours. An multi-tasking operating system must therefore share cpu-time between programs.Most multi-tasking systems have only a single central processor unit and yet this is the most precious resource a computer has. It must decide if the bits represent numbers or characters.
usually identifiable by being the largest chip. they are slowed down by the OS1. the CPU is still logically separate from the memory and devices. there may be several CPUs which can work in parallel. it is just one microprocessor with lots of pins to connect is to memory and devices . It therefore requires its own share of a computer's resources. The memory area used to do this is called a cache. In the UNIX C-shell (csh) environment. Traditionally. 1. Usually the CPU can read data much faster from memory than it can from a disk or network connection. 1. where the OS is running all the time along side users' programs. The CPU is driven by a `clock' or pulse generator. On modern machines. The time spent by the OS servicing user requests is called the system overhead. so it would like to keep an up-to-date copy of frequently used information in memory. Also VLSI or very large scale integration technology has made it possible to put very many separate processors and memory into a single package. Nevertheless.An operating system is itself a computer program which must be executed. so the physical distinction between the CPU and its support chips is getting blurred. This is the part which does the work of executing machine instructions.5 Caching Caching is a technique used to speed up communication with slow devices. 1. Sometimes caching is used more generally to mean `keeping a local copy of data for convenience'. Since user programs have to wait for the OS to perform certain services.3.1. but all other programs which are queuing up for resources. such as UNIX. On a multi-user system one would like this overhead to be kept to a minimum. where a single instruction takes one or more clock cycles to complete. such as allocating resources. This is especially true on multitasking systems. it is possible to find out the exact fraction of time spent by the OS working on a program's behalf by using the time function.2 Hardware Here we list the main hardware concepts. 
Traditionally CPUs are based on CISC (Complex Instruction Set Computing) architecture.2. or central processor unit is the heart and soul of every computer. Each instruction completes in a certain number of `clock cycles'. You can think of the whole of the primary memory as being a cache for the secondary memory (disk).1 The CPU The CPU. A new trend is to build RISC (Reduced Instruction Set Computing) processors . since programs which make many requests of the OS slow not only themselves down.
exceptions Interrupts are hardware signals which are sent to the CPU by the devices it is connected to. When writing to a printer.2. sometimes with several instructions per clock cycle. place the key value into a buffer for later reading.read only memory. traps. and ROM . Then the CPU must stop what it is doing and read the keyboard. Since CPUs are only made with instructions for reading and writing to memory. When a user writes to a logical device. the OS must control the movement of the read-write heads. and there is the logical device which is a name given by the OS to a legal entry point for talking to a hardware-device.2. There is the hardware unit which is connected to the machine. before it knows about disks etc. the screen. For example. 1. Some common logical devices are: the system disks. 1. the keyboard.2. and return to what it was doing. the OS invokes a device driver which performs the physical operations of controlling the hardware. ROM is normally used for storing those most fundamental parts of the operating system which are required the instant a computer is switched on. which never loses its contents unless destroyed. There are two types of memory: RAM . when writing to a disk. These signals literally interrupt the CPU from what it is doing and demand that it spend a few clock cycles servicing a request.4 Interrupts. the printer and the audio device.2 Memory The primary memory is the most important resource a computer has. interrupts may come from the keyboard because a user pressed a key. or read/write memory. Disks and tapes are often called secondary memory or secondary storage. the OS places the information in a queue and services the request when the printer becomes free.which aim to be more efficient for a subset of instructions by using redundancy.3 Devices The concepts of a device really has two parts. . 
disk devices generate interrupts when they have finished an I/O task and interrupts can be used to allow computers to monitor sensors and detectors. no programs would be able to run without it. 1. For example.random access memory. which loses its contents when the machine is switched off. These are often called traps or exceptions on some systems. These have simpler instructions but can execute much more quickly. Other `events' generate interrupts: the system clock sends interrupts at periodic intervals. User programs can also generate `software interrupts' in order to handle special situations like a `division by zero' error.
Interrupts are graded in levels. Low level interrupts have a low priority, whereas high level interrupts have a high priority. A high level interrupt can interrupt a low level interrupt, so that the CPU must be able to recover from several `layers' of interruption and end up doing what it was originally doing. This is accomplished by means of a stack or heap. Moreover, programs can often choose whether or not they wish to be interrupted by setting an interrupt mask which masks out the interrupts they do not want to hear about. Masking interrupts can be dangerous, since data can be lost. All systems therefore have non-maskable interrupts for the most crucial operations.

1.3 Software

1.3.1 Resource management

In order to keep track of how the system resources are being used, an OS must keep tables or lists telling it what is free and what is not. For example, data cannot be stored neatly on a disk: as files become deleted, holes appear and the data become scattered randomly over the disk surface.

1.3.2 Spooling

Spooling is a way of processing data serially. Print jobs are spooled to the printer, because they must be printed in the right order (it would not help the user if the lines of his/her file were liberally mixed together with parts of someone else's file). During a spooling operation, only one job is performed at a time and other jobs wait in a queue to be processed. Spooling is a form of batch processing. The name comes from the need to copy data onto a spool of tape for storage; it has since been dubbed Simultaneous Peripheral Operation On-Line, which is a pretty lousy attempt to make something more meaningful out of the word `spool'!

1.3.3 System calls

An important task of an operating system is to provide black-box functions for the most frequently needed operations, so that users do not have to waste their time programming very low level code which is irrelevant to their purpose. For example, controlling devices requires very careful and complex programming; users should not have to write code to position the head of the disk drive at the right place just to save a file to the disk. This is a very basic operation which everyone requires and thus it becomes the responsibility of the OS. Another example is mathematical functions or graphics primitives. These ready-made functions comprise frequently used code and are called system calls.

System calls can be thought of as a very simple protocol - an agreed way of asking the OS to perform a service. Some typical OS calls are: read, write (to screen, disk, printer etc.), stat (get the status of a file: its size and type) and malloc (request for memory allocation). On older microcomputers, where high level languages are uncommon, system calls are often available only through assembler or machine code. On modern systems and integrated systems like UNIX, they are available as functions in a high level language like C.

1.3.4 Basic command language

Commands like

  dir             list files (DOS)
  ls              list files (UNIX)
  cd              change directory
  copy file prn   copy file to printer
  myprog          execute program `myprog'

constitute a basic command language. Every computer must have such a language (except perhaps the Macintosh - yawn!). The command language deals typically with: file management, process management and text editing. In microcomputer operating systems the command language is often built into the system code, whereas on larger systems (UNIX) the commands are just executable programs, like the last example above.

1.3.5 Filesystem

In creating a system to store files we must answer some basic questions.

Should the filesystem distinguish between types of files, e.g. text files, executable files, scripts? If so, how? One way is to use file extensions, or a naming convention to identify files, like myprog.exe, file.txt, SCRIPT.BAT. The problem with this is that the names can be abused by users. If one tries to execute a file which is not meant to be executed, the result would be nonsense and might even be dangerous to the point of crashing the system. One way around this problem is to introduce a protocol or standard format for executable files, so that when the OS opens a file for execution it first checks to see whether the file obeys the protocol. This method is used for binary files in UNIX, for instance.

Protection. If several users will be storing files together on the same disk, should each user's files be exclusive to him or her? Is a mechanism required for sharing files between several users?
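The protocol check just described can be sketched in a few lines. This is an illustrative sketch, not the actual UNIX loader: it uses the magic number of the modern ELF executable format (0x7F 'E' 'L' 'F') as a concrete example, and the function name is invented.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <vector>

// Sketch: before executing a file, check that it begins with the
// agreed magic number. A file with the wrong magic is refused, no
// matter what its name or extension claims.
bool has_elf_magic(const std::vector<std::uint8_t>& file)
{
    static const std::array<std::uint8_t, 4> magic = {0x7F, 'E', 'L', 'F'};
    if (file.size() < magic.size())
        return false;                 // too short to be a valid executable
    for (std::size_t i = 0; i < magic.size(); ++i)
        if (file[i] != magic[i])
            return false;             // wrong magic: do not execute
    return true;
}
```

A loader would read the first bytes of the file into such a buffer before deciding whether to run it.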
A hierarchical filesystem is a good starting point for organizing files, but it can be too restrictive. Sometimes it is useful to have a file appear in several places at one time. This can be accomplished with links. A link is not a copy of a file, but a pointer to where a file really is. By making links to other places in a hierarchical filesystem, its flexibility is increased considerably. The operating system must provide primitives for doing this.

1.3.6 Multiple windows and screens

Multitasking cannot be fully exploited if each user has only one output terminal (screen). Each interactive program needs its own screen and keyboard. There are three solutions to this problem:

1. Several physical screens can be attached to the computer. This is expensive and probably wasteful.
2. Toggling between `logical screens', which are separately maintained in memory. By pressing a key on the keyboard the user can switch between two different images.
3. Window system. The technology for this solution has only been available for a few years. While it is clearly the best of the three (and can be combined with the second), it requires a considerable amount of memory and CPU power to implement. All of the graphics must be drawn and redrawn continuously, and the problem of overlapping windows requires there to be a manager which controls the sharing of space on the screen.

We shall not consider windowing further in this text, but it is worth bearing in mind that the principles are very similar to those of operating systems. Sharing and management are the key concepts.

Note. Before proceeding, you should note that the design of operating systems is an active area of research. There are no universal solutions to the issues that we shall discuss; rather, OS design must be thought of as a study of compromises. Hopefully you will get a feel for this during the course of the tutorial.

Exercises

1. What are the key ingredients of an operating system?
2. What is the usefulness of system calls?
3. What is the difference between primary and secondary storage?
4. What is a logical device?
5. How do hardware devices send signals to the CPU?
6. Should different users be able to change one another's data? If so, under what circumstances?

2. Single-task OS

Before tackling the complexities of multi-tasking, it is useful to think about the operation of a single-task OS without all the clutter that multi-tasking entails. In a multi-task OS the features we shall discuss below have to be reproduced many times over and then augmented by extra control structures.

2.1 Memory map and registers

Roughly speaking, at the hardware level a computer consists of a CPU, memory and a number of peripheral devices. The CPU reads machine code instructions, one at a time, from the memory and executes them forever without stopping. The CPU can store information only in the memory it can address and in the registers of other microprocessors it is connected to. The CPU contains registers or `internal variables' which control its operation. Here is a brief summary of the types of register a CPU has; some microprocessors have several of each type.

  Register                      Purpose
  Accumulator                   Holds the data currently being worked on.
  Program counter               Holds the address of the next instruction to be executed.
  Index (addressing) registers  Used to specify the address of data to be loaded into or saved
                                from the accumulator, or operated on in some way.
  Stack pointer                 Points to the top of the CPU's own hardware controlled stack.
  Status register               Contains status information after each instruction, which can be
                                tested for to detect errors etc.

The key elements of a single-task computer are shown in figure 2.1. The memory, as seen by the CPU, is a large string of bytes starting with address 0 and increasing up to the maximum address. Physically, like a jigsaw puzzle, it is made up of many memory chips and control chips, mapped into the diagram shown. Note that this figure is very simplified. It does not show, for instance, special memory which might be located inside the devices or CPU; such memory is often used for caching. Also it does not show how the various components are connected together by means of a high speed data bus.

Normally, not all of the memory is available to the user of the machine; some of it is required for the operation of the CPU. The roughly distinguished areas in figure 2.1 are:

Zero page: The first `page' of the memory is often reserved for a special purpose. It is often faster to write to the zero page because, due to the hardware design of the CPU, you don't have to code the leading zero for the address - special instructions for the zero page can leave the `zero' implicit.

Stack: Every CPU needs a stack for executing subroutines. The stack is explained in more detail below.

Screen memory: What you see on the screen of a computer is the image of an area of memory, converted into colours and positions by a hardware video-controller. The screen memory is the area of memory needed to define the colour of every `point' or `unit' on the screen. Depending on what kind of visual system a computer uses, this might be one byte per character or four bytes per pixel!

Memory mapped I/O: Hardware devices like disks and video controllers contain smaller microprocessors of their own. The CPU gives them instructions by placing numbers into their registers. To make this process simpler, these device registers (only a few bytes per device, perhaps) are `wired' into the main memory map, so that writing to the device is the same as writing to the rest of the memory.

Operating system: The operating system itself is a large program which often takes up a large part of the available memory.

User programs: Space the user programs can `grow into'.
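The memory mapped I/O idea can be sketched in C++. Everything below is hypothetical - the register layout, the address 0xD000 and the command code are invented for illustration - but it shows the essential point: writing to the device looks exactly like writing to memory. The `volatile` qualifier keeps the compiler from optimizing away accesses that must really reach the device.

```cpp
#include <cstdint>

// Hypothetical register block of a disk controller, as it would appear
// in the memory map.
struct DiskController {
    volatile std::uint8_t  command;  // what the device should do
    volatile std::uint8_t  status;   // set by the device when done
    volatile std::uint32_t block;    // which disk block to transfer
};

// On real hardware the "registers" live at a fixed address, e.g.:
//   auto* disk = reinterpret_cast<DiskController*>(0xD000);
// For this sketch we place them in ordinary memory so the code can run
// anywhere.
static DiskController fake_device;

void start_read(DiskController* disk, std::uint32_t block_no)
{
    disk->block   = block_no;  // writing to memory == writing to the device
    disk->command = 0x01;      // hypothetical READ command code
}
```

The CPU then either polls the status register or waits for the device's completion interrupt.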
Figure 2.1: A simple schematic memory map of a microcomputer. The order of the different segments of memory can vary depending on the system.

2.2 Stack

A stack is a so-called last-in first-out (LIFO) data structure. That is to say, the last thing to be placed on top of a stack, when making it, is the first item which gets removed when un-making it. Stacks are used by the CPU to store the current position within a program before jumping to subroutines, so that it remembers where to return to after the subroutine is finished. Because of the nature of the stack, the CPU can simply deposit the
address of the next instruction to be executed (after the subroutine is finished) on top of the stack. When the subroutine is finished, the CPU pulls the first address it finds off the top of the stack and jumps to that location. Notice that the stack mechanism will continue to work even if the subroutine itself calls another subroutine, since the second subroutine causes another stack frame to be saved on the top of the stack. When that is finished, it returns to the first subroutine and then to the original program in the correct order. On many older microcomputers and in many operating systems the stack is allocated with a fixed size in advance. If too many levels of nested subroutines are called, the stack can overflow.

Consider the following example code for a stack.

//*********************************************************************
//
// A simple stack handler.                           stack.C
//
// Use the commands "push" and "pop" to push onto the stack and to pop
// "out" of the stack, e.g. input
//
//   push 23
//   push 4
//   pop
//   push 678
//   quit
//
// In a real stack handler the numbers would be the address of the next
// instruction to return to after completing a subroutine.
//
// The allocated stacksize is very small, so that an overflow can occur
// if you push too far!!
//
// The program is compiled with
//
//   g++ stack.C
//
// MB 1994 (lightly modernized here: the standard <iostream>/<sstream>
// headers replace the pre-standard <iostream.h>/<strstream.h>)
//
//*********************************************************************

#include <iostream>
#include <sstream>
#include <string>

using namespace std;

//**********************************************************************
// Constants
//**********************************************************************

const int forever   = 1;
const int stacksize = 10;

//**********************************************************************
// Class Stack
//**********************************************************************

class Stack
{
public:
  Stack();
  void Push(int);
  int  Pop();
  void ShowStack();

private:
  int stackpointer;
  int stack[stacksize];
};

//**********************************************************************
// Level 0
//**********************************************************************

int main()
{
  string input, command;
  int number = 0, newnumber;
  Stack s;

  cout << "Stack demo\n\n";

  while (forever)
     {
     cout << "Enter command: ";

     // Extract command
     if (!getline(cin, input))
        { break; }
     command = "";
     istringstream(input) >> command >> number;

     // Interpret command
     if (command == "push")
        { s.Push(number); }
     else if (command == "pop")
        { newnumber = s.Pop(); }
     else if (command == "quit")
        { break; }
     else
        { cout << "Bad command\n\n"; }

     s.ShowStack();
     }

  return 0;
}

//**********************************************************************
// Class Stack
//**********************************************************************

Stack::Stack()
{
  for (int i = 0; i < stacksize; i++)
     { stack[i] = 0; }

  stackpointer = 0;
}

//**********************************************************************

void Stack::Push(int n)
{
  cout << "Pushing " << n << " on the stack\n";

  if (stackpointer >= stacksize)
     {
     cerr << "Stack overflow!\n";
     return;
     }

  stack[stackpointer] = n;
  stackpointer++;
}

//**********************************************************************

int Stack::Pop()
{
  if (stackpointer == 0)
     {
     cerr << "Stack underflow!\n";
     return 0;
     }

  stackpointer--;
  cout << "Popped " << stack[stackpointer] << " from stack\n";
  return stack[stackpointer];
}

//**********************************************************************

void Stack::ShowStack()
{
  for (int i = stacksize - 1; i >= 0; i--)
     {
     cout << "stack[" << i << "] = " << stack[i];

     if (i == stackpointer)
        { cout << "  <<-- Pointer\n"; }
     else
        { cout << endl; }
     }
}

In this example, only numbers are stored. At the hardware level, this kind of stack is used by the CPU to store addresses and registers during machine-code subroutine jumps. Operating systems also use software controlled stacks during the execution of users' programs. High level language subroutines can have local variables which are also copied to the stack, as one large stack frame, during the execution of subroutines.

2.3 Input/Output

Input arrives at the computer at unpredictable intervals. The system must be able to detect its arrival and respond to it.

2.3.1 Interrupts

Interrupts are hardware triggered signals which cause the CPU to stop what it is doing and jump to a special subroutine. Interrupts normally arrive from hardware devices, such as when the user presses a key on the keyboard, or the disk device has fetched some data from the disk. They can also be generated in software by errors like division by zero or illegal memory address. There is no logical difference between what happens during the execution of an interrupt routine and a subroutine. The difference is that interrupt routines are triggered by events, whereas software subroutines follow a prearranged plan.

An important area is the interrupt vector. This is a region of memory reserved by the hardware for servicing of interrupts. Each interrupt has a number from zero to the maximum number of interrupts supported on the CPU; for each interrupt, the interrupt vector must be programmed with the address of a routine which is to be executed when the interrupt occurs. When the CPU receives an interrupt, it saves the contents of its registers on the hardware stack, examines the address in the interrupt vector for that interrupt and jumps to that location - a special routine which will determine the cause of the interrupt and respond to it appropriately. The routine exits when it meets an RTI (return from interrupt) instruction.

Interrupts occur at different levels. Low level interrupts can be interrupted by high level interrupts. Interrupt handling routines have to work quickly, or the computer will be drowned in the business of servicing interrupts. For certain critical operations, low level interrupts can be ignored by setting a mask (see also the generalization of this for multiuser systems in chapter 4).

2.3.2 Buffers

The CPU and the devices attached to it do not work at the same speed. Buffers are therefore needed to store incoming or outgoing information temporarily, while it is waiting to be picked up by the other party. A buffer is simply an area of memory which works as a waiting area. It is a first-in first-out (FIFO) data structure or queue.
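The FIFO buffer just described can be sketched as a small ring buffer. This is a minimal illustration (the names and the buffer size are invented); a real driver would additionally disable interrupts, or use atomic operations, around the index updates, since `put' runs in the interrupt handler while `get' runs in the program.

```cpp
#include <cstddef>

const std::size_t BUFSIZE = 8;

// A FIFO: characters go in at `head' and come out at `tail', oldest first.
struct Fifo {
    char        data[BUFSIZE];
    std::size_t head  = 0;   // next slot to write
    std::size_t tail  = 0;   // next slot to read
    std::size_t count = 0;   // number of stored items
};

bool fifo_put(Fifo& f, char c)        // called by the interrupt handler
{
    if (f.count == BUFSIZE)
        return false;                 // buffer full: the input is lost
    f.data[f.head] = c;
    f.head = (f.head + 1) % BUFSIZE;  // wrap around at the end
    ++f.count;
    return true;
}

bool fifo_get(Fifo& f, char& c)       // called later by the program
{
    if (f.count == 0)
        return false;                 // nothing waiting
    c = f.data[f.tail];
    f.tail = (f.tail + 1) % BUFSIZE;
    --f.count;
    return true;
}
```

Characters come out in exactly the order they went in, which is the defining property of a queue.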
2.3.3 Synchronous and asynchronous I/O

To start an I/O operation, the CPU writes appropriate values into the registers of the device controller. The device controller acts on the values it finds in its registers. For example, if the operation is to read from a disk, the device controller fetches data from the disk and places it in its local buffer. It then signals the CPU by generating an interrupt.

While the CPU is waiting for the I/O to complete it may do one of two things: it can do nothing or idle until the device returns with the data (synchronous I/O), or it can continue doing something else until the completion interrupt arrives (asynchronous I/O). The second of these possibilities is clearly much more efficient.

2.3.4 DMA - Direct Memory Access

Very high speed devices could place heavy demands on the CPU for I/O servicing if they relied on the CPU to copy data word by word. The DMA controller is a device which copies blocks of data at a time from one place to the other, without the intervention of the CPU. To use it, its registers must be loaded with the information about what it should copy and where it should copy to. Once this is done, it generates an interrupt to signal the completion of the task. The advantage of the DMA is that it transfers large amounts of data before generating an interrupt. Without it, the CPU would have to copy the data one register-full at a time, using up hundreds or even thousands of interrupts and possibly bringing a halt to the machine!

Exercises

1. What is the program counter?
2. What is memory mapped I/O?
3. Explain why a stack is used to store local variables.
4. What is a stack-frame?
5. Write a program to create a stack (LIFO) which can store any number of local variables for each subroutine. Hint: use a linked list for the stack and for the variables.
6. Write a program to implement a buffer (FIFO).
7. When a computer is first switched on, it executes a program called a bootstrap program. (This comes from the expression `to lift oneself by one's own bootstraps'.) The computer must begin to execute instructions and `get going'. Find out for yourself, or speculate on, how this takes place.

3. Multi-tasking and multi-user OS

To make a multi-tasking OS we need loosely to reproduce all of the features discussed in the last chapter for each task or process which runs. It is not necessary for each task to have its own set of devices: the basic hardware resources of the system are shared between the tasks. How can this be achieved? The operating system must have a `manager' which shares resources at all times. This manager is called the `kernel' and it constitutes the main difference between single and multitasking operating systems. A separate stack is also needed for each process. Some microprocessors (68000/Intel 386 upward) support multitasking internally.

3.1 Competition for resources
3.1.1 Users - authentication

If a system supports several users, then each user must have his or her own place on the system disk, where files can be stored. Since each user's files may be private, the file system should record the owner of each file. For this to be possible, all users must have a user identity or login name and must supply a password which prevents others from impersonating them. Passwords are stored in a cryptographic (coded) form. When a user logs in, the OS encrypts the typed password and compares it to the stored version. Stored passwords are never decrypted for comparison.

3.1.2 Privileges and security

On a multi-user system it is important that one user should not be able to interfere with another user's activities, either purposefully or accidentally. For example: normal users should never be able to halt the system; nor should they be able to control the devices connected to the computer, or write directly into memory without making a formal request of the OS. Certain commands and system calls are therefore not available to normal users directly. Protection mechanisms are needed to deal with this problem. The way this is normally done is to make the operating system all-powerful and allow no user to access the system resources without going via the OS. The super-user is a privileged user (normally the system operator) who has permission to do anything, but normal users have restrictions placed on them in the interest of system safety.

3.1.3 Tasks - two-mode operation

It is crucial for the security of the system that different tasks, working side by side, should not be allowed to interfere with one another (although this occasionally happens in microcomputer operating systems, like the Macintosh, which allow several programs to be resident in memory simultaneously). One of the tasks of the OS is to prevent collisions between users. To prevent users from tricking the OS, multiuser systems are based on hardware which supports two-mode operation: privileged mode for executing OS instructions and user mode for working on user programs. Other names for privileged mode are monitor mode or supervisor mode.

When running in user mode a task has no special privileges and must ask the OS for resources through system calls. When I/O or resource management is performed, the OS takes over and switches to privileged mode. The OS switches between these modes personally, so provided it starts off in control of the system, it will always remain in control. At boot-time, the system starts in privileged mode. During user execution, it is switched to user mode; when interrupts occur, the OS takes over and it is switched back to privileged mode.

If the user could modify the OS program, then it would clearly be possible to gain control of the entire system in privileged mode. All a user would have to do would be to change the addresses in the interrupt vector to point to a routine of their own making. This routine would then be executed, in privileged mode, when an interrupt was received.

3.1.4 I/O and memory protection

To prevent users from gaining control of devices by tricking the OS, a mechanism is required to prevent them from writing to an arbitrary address in the memory. The solution to this problem is to let the OS define a segment of memory for each user process and to check, when running in user mode, every address that the user program refers to. If the user attempts to read or write outside this allowed segment, a segmentation fault is generated and control returns to the OS. This checking is normally hard-wired into the hardware of the computer so that it cannot be switched off. No checking is required in privileged mode.

//******************************************************************
//
// Example of a segmentation fault in user mode
//
//******************************************************************
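The one-way password check described above can be sketched as follows. This is purely illustrative: std::hash stands in for a real one-way function such as crypt(3) or a modern password hash - it is NOT cryptographically secure - and the function names are invented.

```cpp
#include <cstddef>
#include <functional>
#include <string>

// The system stores only the hash of the password, never the password
// itself.
std::size_t hash_password(const std::string& plain)
{
    return std::hash<std::string>{}(plain);
}

// At login, the typed password is hashed and the hashes are compared.
// The stored value is never "decrypted".
bool check_password(std::size_t stored_hash, const std::string& typed)
{
    return hash_password(typed) == stored_hash;
}
```

Even someone who reads the password file learns only the hashes, not the passwords.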
int main()
{
  // When we start, we are by definition in user mode.
  int *ptr;

  ptr = 0;         // An address guaranteed to NOT be in our segment.
  cout << *ptr;    // Reading through it generates a segmentation fault.
}

3.1.5 Time sharing

There is always the problem in a multi-tasking system that a user program will go into an infinite loop, so that control never returns to the OS and the whole system stops. We have to make sure that the OS always remains in control by some method. Here are two possibilities:

The operating system fetches each instruction from the user program and executes it personally, never giving it directly to the CPU. The OS software switches between different processes by fetching the instructions it decides to execute. This is a kind of software emulation. This method works, but it is extremely inefficient, because the OS and the user program are always running together and the full speed of the CPU is not realized. It is often used to make simulators and debuggers.

A more common method is to switch off the OS while the user program is executing and switch off the user process while the OS is executing. The switching is achieved by hardware rather than software, as follows. When handing control to a user program, the OS uses a hardware timer to ensure that control will return after a certain time. The OS loads a fixed time interval into the timer's control registers and gives control to the user process. The timer then counts down to zero and when it reaches zero it generates a non-maskable interrupt, whereupon control returns to the OS.

3.2 Memory map

We can represent a multi-tasking system schematically as in figure 3.1. It looks like the figures in the previous chapter. Clearly the memory map of a computer does not look like this figure; the point of the diagram is only that it shows the elements required by each process executing on the system. Each program must have a memory area to work in and a stack to keep track of subroutine calls and local variables.

Each program must also have its own input/output sources. These cannot be the actual resources of the system: instead, each program has a virtual I/O stream. The operating system arranges things so that the virtual I/O looks, to the user program, as though it is just normal I/O. In reality, the OS controls all the I/O itself and arranges the sharing of resources transparently. The virtual output stream for a program might be a window on the real screen, for instance. The virtual printer is really a print-queue. The keyboard is only `connected' to one task at a time, but the OS can share this too; in a window environment, for example, this happens when a user clicks in a particular window.

Figure 3.1: Schematic diagram of a multitasking system.

3.3 Kernel and shells - layers of software

So far we have talked about the OS almost as though it were a living thing. In a multitasking, multi-user OS like UNIX this is not a bad approximation to the truth! In what follows we make use of UNIX terminology, and all of the examples we shall cover later will refer to versions of the UNIX operating system.

The part of the OS which handles all of the details of sharing and device handling is called the kernel or core. The kernel is not something which can be used directly, although its services can be accessed through system calls. What is needed is a user interface or command line interface (CLI) which allows users to log onto the machine and manipulate files, compile programs and execute them using simple commands. Since this is a layer of software which wraps the kernel in more acceptable clothes, it is called a shell around the kernel. The idea of layers and hierarchies returns again and again. It is only by making layers of software, in a hierarchy, that very complex programs can be written and maintained.
3.4 Services: daemons
mountd: Deals with requests for `mounting' this machine's disks on other
machines - i.e. requests to access the disk on this machine from another machine
on the network.
rlogind: Handles requests to login from remote terminals.
keyserv: A server which stores public and private keys. Part of a network security
system.
syslogd: Records information about important events in a log file.
named: Converts machine names into their network addresses and vice versa.
3.5 Multiprocessors - parallelism.
Exercises
1. Write a program to manage an array of many stacks.
2. Describe the difference between the kernel and daemons in UNIX. What is the
point of making this distinction?
3. What is two-mode operation?
4. What is the difference between an emulator or simulator and true multi-tasking?
5. To prepare for the project suggestion in the next chapter, write a program which
reads fictitious commands in from a file. The commands should be of the form:
   operator operand

   load  12
   add   23
   store 1334
   jsr   5678
   wait  1
   fork  0
etc. Read in the commands and print out a log of what the commands are, in the
form "Executing (operator) on (operand)". You should be able to recognize the
commands `wait' and `fork' specially, but the other commands may be anything
you like. The aim is to simulate the type of commands a real program has to
execute.
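One possible starting point for this exercise is sketched below. It is not a definitive solution: the log wording and the function name are our own, and reading from a generic input stream (rather than opening a named file) keeps the sketch self-contained.

```cpp
#include <iostream>
#include <sstream>
#include <string>

// Read "operator operand" pairs and log what would be executed,
// treating `wait' and `fork' specially as the exercise asks.
std::string run_commands(std::istream& in)
{
    std::ostringstream log;
    std::string op;
    long operand;

    while (in >> op >> operand) {
        if (op == "wait")
            log << "Waiting for event " << operand << "\n";
        else if (op == "fork")
            log << "Forking a new process\n";
        else
            log << "Executing " << op << " on " << operand << "\n";
    }
    return log.str();
}
```

To use it on a real command file, pass an opened std::ifstream instead of a string stream.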
4. Processes and Threads
4.1 Key concepts
Multitasking and multi-user systems need to distinguish between the different programs being executed by the system. This is accomplished
with the concept of a process.
4.1.1 Naming conventions.
Process: This is a general term for a program which is being executed. All work
done by the CPU contributes to the execution of processes. Each process has a
descriptive information structure associated with it (normally held by the kernel)
called a process control block which keeps track of how far the execution has
progressed and what resources the process holds.
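As an illustration, a toy process control block might look like the following. All the field names here are invented for the sketch; real PCBs, such as the Mach and UNIX structures shown later in this chapter, hold far more, but the essentials are the same: identity, state, where execution stopped, and what the process holds.

```cpp
#include <string>
#include <vector>

enum class ProcState { Ready, Running, Sleeping, Zombie };

// A minimal process control block (illustrative only).
struct ProcessControlBlock {
    int pid;                        // process identity
    ProcState state;                // how far execution has progressed
    unsigned long program_counter;  // where to resume execution
    int priority;                   // scheduling priority
    std::vector<int> open_files;    // resources the process holds
    std::string owner;              // user the process belongs to
};
```

The kernel keeps one such record per process and updates it at every context switch.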
Task: On some systems processes are called tasks.
Job: Some systems distinguish between batch execution and interactive
execution. Batch (or queued) processes are often called jobs. They are like
production line processes which start, do something and quit, without stopping to
ask for input from a user. They are non-interactive processes.
Thread: (sometimes called a lightweight process) is different from process or task
in that a thread is not enough to get a whole program executed. A thread is a kind
of stripped down process - it is just one `active hand' in a program - something
which the CPU is doing on behalf of a program, but not enough to be called a
complete process. Threads remember what they have done separately, but they
share the information about what resources a program is using, and what state the
program is in. A thread is only a CPU assignment. Several threads can contribute
to a single task. When this happens, the information about one process or task is
used by many threads. Each task must have at least one thread in order to do any
work.
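The idea that several threads contribute to one task, each remembering its own progress while sharing the program's state, can be sketched with standard C++ threads. The counter below stands in for shared process state, and std::atomic stands in for whatever locking a real program would use.

```cpp
#include <atomic>
#include <thread>

// One piece of state belonging to the *process*, shared by all threads.
std::atomic<int> shared_work{0};

// Each thread is a separate "active hand" working on the same data.
void worker(int chunks)
{
    for (int i = 0; i < chunks; ++i)
        ++shared_work;           // all threads update the same state
}

int run_two_threads()
{
    std::thread a(worker, 1000); // two threads, one address space
    std::thread b(worker, 1000);
    a.join();
    b.join();
    return shared_work.load();
}
```

Both threads see and modify the same variable, which is exactly what distinguishes threads from separate processes with separate address spaces.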
CPU burst: A period of uninterrupted CPU activity.
I/O burst: A period of uninterrupted input/output activity.
4.1.2 Scheduling:
Queueing. This is appropriate for serial or batch jobs like print spooling and
requests from a server. There are two main ways of giving priority to the jobs in a
queue. One is a first-come first-served (FCFS) basis, also referred to as first-in
first-out (FIFO); the other is to process the shortest job first (SJF).
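The difference between FCFS and SJF can be made concrete with a small calculation: in a queue, each job waits for the total burst time of the jobs scheduled before it. The sketch below (function name ours) shows that sorting by burst length reduces the average wait.

```cpp
#include <algorithm>
#include <vector>

// Average waiting time of a batch of jobs with the given burst times.
// With shortest_first == true the jobs are reordered as SJF would;
// otherwise they run in arrival order (FCFS).
double average_wait(std::vector<int> bursts, bool shortest_first)
{
    if (shortest_first)
        std::sort(bursts.begin(), bursts.end());   // SJF ordering

    long total_wait = 0, elapsed = 0;
    for (int b : bursts) {
        total_wait += elapsed;   // this job waited for all earlier ones
        elapsed += b;
    }
    return static_cast<double>(total_wait) / bursts.size();
}
```

For bursts of 6, 3 and 1 units, FCFS gives waits of 0, 6 and 9 (average 5), while SJF runs them as 1, 3, 6 for waits of 0, 1 and 4.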
Round-robin. This is the time-sharing approach in which several tasks can
coexist. The scheduler gives a short time-slice to each job, before moving on to
the next job, polling each task round and round. This way, all the tasks advance,
little by little, on a controlled basis.
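A toy version of round-robin makes the "little by little" behaviour visible. This is only a sketch with invented names: a real scheduler is driven by timer interrupts and context switches, not a loop, but the queue discipline is the same.

```cpp
#include <queue>
#include <string>
#include <utility>
#include <vector>

// Each task is (name, remaining work). A task runs for one time-slice,
// then goes to the back of the queue if it still has work left.
std::vector<std::string> round_robin(
    std::vector<std::pair<std::string, int>> tasks, int slice)
{
    std::queue<std::pair<std::string, int>> ready;
    for (auto& t : tasks)
        ready.push(t);

    std::vector<std::string> order;   // which task ran in each slice
    while (!ready.empty()) {
        auto [name, left] = ready.front();
        ready.pop();
        order.push_back(name);        // this task gets one slice
        left -= slice;
        if (left > 0)
            ready.push({name, left}); // not finished: back of the queue
    }
    return order;
}
```

With tasks A (3 units), B (1) and C (2) and a slice of 1, the slices are handed out as A, B, C, A, C, A: every task advances a little on each round.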
These two categories are also referred to as non-preemptive and preemptive respectively, but there is a grey area.
Strictly non-preemptive Each program continues executing until it has finished,
or until it must wait for an event (e.g. I/O or another task). This is like Windows
95 and Macintosh System 7.
Strictly preemptive The system decides how time is to be shared between the
tasks, and interrupts each process after its time-slice whether it likes it or not. It
then executes another program for a fixed time and stops, then the next...etc.
Politely-preemptive?? The system decides how time is to be shared, but it will
not interrupt a program if it is in a critical section. Certain sections of a program
may be so important that they must be allowed to execute from start to finish
without being interrupted. This is like UNIX and Windows NT.
To choose an algorithm for scheduling tasks we have to understand what it is we are trying to achieve, i.e. what are the criteria for scheduling?
We want to maximize the efficiency of the machine. i.e. we would like all the
resources of the machine to be doing useful work all of the time - i.e. not be idling
during one process, when another process could be using them. The key to
organizing the resources is to get the CPU time-sharing right, since this is the
central `organ' in any computer, through which almost everything must happen.
But this cannot be achieved without also thinking about how the I/O devices must
be shared, since the I/O devices communicate by interrupting the CPU from what
it is doing. (Most workstations spend most of their time idling. There are
enormous amounts of untapped CPU power going to waste all over the world
each day.)
We would like as many jobs to get finished as quickly as possible.
Interactive users get irritated if the performance of the machine seems slow; we would like the machine to appear fast for interactive users, i.e. to have a fast response time.

Some of these criteria cannot be met simultaneously and we must make compromises. In particular, what is good for batch jobs is often not good for interactive processes and vice-versa, as we remark under Run levels - priority below.

4.1.3 Scheduling hierarchy

Complex scheduling algorithms distinguish between short-term and long-term scheduling. This helps to deal with tasks which fall into two kinds: those which are active continuously and must therefore be serviced regularly, and those which sleep for long periods. For example, in UNIX the long term scheduler moves processes which have been sleeping for more than a certain time out of memory and onto disk, to make space for those which are active. Sleeping jobs are moved back into memory only when they wake up (for whatever reason). This is called swapping.

Figure 4.1: Multi-level scheduling.

4.1.4 Run levels - priority

Rather than giving all programs equal shares of CPU time, most systems have priorities. Processes with higher priorities are either serviced more often than processes with lower priorities, or they get longer time-slices of the CPU. Priorities are not normally fixed but vary according to the performance of the system and the amount of CPU time a process has already used up in the recent past. For example, processes which have used a lot of CPU time in the recent past often have their priority reduced. This tends to favour interactive processes, which wait often for I/O, and makes the response time of the system seem faster for interactive users. The most complex systems have several levels of scheduling and exercise different scheduling policies for processes with different priorities. Jobs can even move from level to level if the circumstances change.
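The decaying-usage idea can be sketched with illustrative numbers. The constants below are not real kernel values; compare the p_cpu and p_nice fields of the UNIX process structure shown later in this chapter, which serve exactly this purpose.

```cpp
// Dynamic priority sketch: recent CPU usage raises the priority value,
// which in UNIX convention means a *lower* scheduling preference, and
// the usage decays over time so that sleeping processes recover.
struct SchedInfo {
    int recent_cpu = 0;  // decayed CPU usage (like p_cpu in struct proc)
    int nice = 0;        // user-supplied politeness (like p_nice)
};

// Called once per clock tick for every process.
void tick(SchedInfo& p, bool ran_this_tick)
{
    if (ran_this_tick)
        p.recent_cpu += 10;           // charge the process that ran
    p.recent_cpu = p.recent_cpu / 2;  // decay: old usage is forgotten
}

// Higher number = less preferred by the scheduler.
int priority(const SchedInfo& p)
{
    const int base = 50;
    return base + p.recent_cpu / 4 + p.nice;
}
```

After a few ticks, a process that ran continuously ends up with a worse (higher) priority value than one that slept, so the interactive, I/O-bound process gets the CPU first.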
In addition, processes may be reduced in priority if their total accumulated CPU usage becomes very large. (This occurs, for example, in UNIX.) The wisdom of this approach is arguable, since programs which take a long time to complete tend to be penalized. Indeed, they take much longer to complete because their priority is reduced. If the priority continued to be lowered, long jobs would never get finished. This is called process starvation and must be avoided.

Scheduling algorithms have to work without knowing how long processes will take. Often the best judge of how demanding a program will be is the user who started the program. UNIX allows users to reduce the priority of a program themselves using the nice command. `Nice' users are supposed to sacrifice their own self-interest for the good of others. Only the system manager can increase the priority of a process. Another possibility, which is often not considered, is that of increasing the priority of resource-gobbling programs in order to get them out of the way as fast as possible. This is very difficult for an algorithm to judge, so it must be done manually by the system administrator.

4.1.5 Context switching

Switching from one running process to another running process incurs a cost to the system. The values of all the registers must be saved in the present state, the status of all open files must be recorded and the present position in the program must be recorded. Then the contents of the MMU must be stored for the process (see next chapter). Then all those things must be read in for the next process, so that the state of the system is exactly as it was when the scheduler last interrupted the process. This is called a context switch. Context switching is a system overhead: it costs real time and CPU cycles, so we don't want to context switch too often, or a lot of time will be wasted.

The state of each process is saved to a data structure in the kernel called a process control block (PCB). Here is an example PCB from the Mach OS:

typedef struct machpcb {
  char            mpcb_frame[REGOFF];
  struct regs     mpcb_regs;          /* user's saved registers */
  struct rwindow  mpcb_wbuf[MAXWIN];  /* user window save buffer */
  char           *mpcb_spbuf[MAXWIN]; /* sp's for each wbuf */
  int             mpcb_wbcnt;         /* number of saved windows in pcb_wbuf */
  struct v9_fpu  *mpcb_fpu;           /* fpu state */
  struct fq       mpcb_fpu_q[MAXFPQ]; /* fpu exception queue */
  int             mpcb_flags;         /* various state flags */
  int             mpcb_wocnt;         /* window overflow count */
  int             mpcb_wucnt;         /* window underflow count */
  kthread_t      *mpcb_thread;        /* associated thread */
} machpcb_t;

Below is a kernel process structure for a UNIX system.

struct proc {
  struct proc     *p_link;       /* linked list of running processes */
  struct proc     *p_rlink;
  struct proc     *p_nxt;        /* linked list of allocated proc slots */
  struct proc    **p_prev;       /* also zombies, and free procs */
  struct as       *p_as;         /* address space description */
  struct seguser  *p_segu;       /* "u" segment */
  caddr_t          p_stack;      /* kernel stack top for this process */
  struct user     *p_uarea;      /* u area for this process */
  char             p_usrpri;     /* user-priority based on p_cpu and p_nice */
  char             p_pri;        /* priority, negative is high */
  char             p_cpu;        /* (decayed) cpu usage solely for scheduling */
  char             p_stat;
  char             p_time;       /* seconds resident (for scheduling) */
  char             p_nice;       /* nice for cpu usage */
  char             p_slptime;    /* seconds since last block (sleep) */
  char             p_cursig;
  int              p_sig;        /* signals pending to this process */
  int              p_sigmask;    /* current signal mask */
  int              p_sigignore;  /* signals being ignored */
  int              p_sigcatch;   /* signals being caught by user */
  uid_t            p_uid;        /* user id, used to direct tty signals */
  uid_t            p_suid;       /* saved (effective) user id from exec */
  gid_t            p_sgid;       /* saved (effective) group id from exec */
  short            p_pgrp;       /* name of process group leader */
  short            p_pid;        /* unique process id */
  short            p_ppid;       /* process id of parent */
  u_short          p_xstat;      /* Exit status for wait */
  short            p_cpticks;    /* ticks of cpu time, used for p_pctcpu */
  struct ucred    *p_cred;       /* Process credentials */
  struct rusage   *p_ru;         /* mbuf holding exit information */
  int              p_tsize;      /* size of text (clicks) */
  int              p_dsize;      /* size of data space (clicks) */
  int              p_ssize;      /* copy of stack size (clicks) */
  int              p_rssize;     /* current resident set size in clicks */
  int              p_maxrss;     /* copy of u.u_limit[MAXRSS] */
  int              p_swrss;      /* resident set size before last swap */
  caddr_t          p_wchan;      /* event process is awaiting */
  long             p_pctcpu;     /* (decayed) %cpu for this process */
  struct proc     *p_pptr;       /* pointer to process structure of parent */
  struct proc     *p_cptr;       /* pointer to youngest living child */
  struct proc     *p_osptr;      /* pointer to older sibling processes */
  struct proc     *p_ysptr;      /* pointer to younger siblings */
  struct proc     *p_tptr;       /* pointer to process structure of tracer */
  struct itimerval p_realtimer;
  struct sess     *p_sessp;      /* pointer to session info */
  struct proc     *p_pglnk;      /* list of pgrps in same hash bucket */
  short            p_idhash;     /* hashed based on p_pid for kill+exit+... */
  short            p_swlocks;    /* number of swap vnode locks held */
  struct aiodone  *p_aio_forw;   /* (front)list of completed asynch IO's */
  struct aiodone  *p_aio_back;   /* (rear)list of completed asynch IO's */
  int              p_aio_count;  /* number of pending asynch IO's */
  int              p_threadcnt;  /* ref count of number of threads using proc */
  int              p_cpuid;      /* processor this process is running on */
  int              p_pam;        /* processor affinity mask */
};
UNIX also uses a `user' structure to keep auxiliary information which is only needed when jobs are not `swapped out' (see next chapter).

4.1.6 Interprocess communication

One of the benefits of multitasking is that several processes can be made to cooperate in order to achieve their ends. To do this, they must do one of the following.

Communicate. Interprocess communication (IPC) involves sending information from one process to another. This can be achieved using a `mailbox' system, a socket (Berkeley) which behaves like a virtual communications network (loopback), or through the use of `pipes'. Pipes are a system construction which enables one process to open another process as if it were a file for writing or reading.

Share data. A segment of memory must be available to both processes. (Most memory is locked to a single process.)

Waiting. Some processes wait for other processes to give a signal before continuing. This is an issue of synchronization.

As soon as we open the door to co-operation there is a problem of how to synchronize cooperating processes. For example, suppose two processes modify the same file. If both processes tried to write simultaneously the result would be a nonsensical mixture. We must have a way of synchronizing processes, so that even concurrent processes must stand in line to access shared data serially. Synchronization is a tricky problem in multiprocessor systems, but it can be achieved with the help of critical sections and semaphores/locks. We shall return to these below.

4.2 Creation and scheduling

4.2.1 Creating processes

The creation of a process requires the following steps. The order in which they are carried out is not necessarily the same in all cases.

1. Name. The name of the program which is to run as the new process must be known.

2. Process ID and Process Control Block. The system creates a new process control block, or locates an unused block in an array. This block is used to follow the execution of the program through its course, keeping track of its resources and priority. Each process control block is labelled by its PID or process identifier.

3. Locate the program to be executed on disk and allocate memory for the code segment in RAM.

4. Load the program into the code segment and initialize the registers of the PCB with the start address of the program and appropriate starting values for resources.

5. Priority. A priority must be computed for the process, using a default for the type of process and any value which the user specified as a `nice' value (see Run levels - priority above).

6. Schedule the process for execution.

4.2.2 Process hierarchy: children and parent processes

In a democratic system anyone can choose to start a new process, but it is never users which create processes, only other processes! That is because anyone using the system must already be running a shell or command interpreter in order to be able to talk to the system, and the command interpreter is itself a process. When a user creates a process using the command interpreter, the new process becomes a child of the command interpreter; the command interpreter process becomes the parent of the child. Processes therefore form a hierarchy.

Figure 4.2: Process hierarchies.

The processes are linked by a tree structure. If a parent is signalled or killed, usually all its children receive the same signal or are destroyed with the parent. This doesn't have to be the case--it is possible to detach children from their parents--but in many cases it is useful for processes to be linked in this way.

When a child is created it may do one of two things: duplicate the parent process, or load a completely new program. Similarly the parent may do one of two things: continue executing alongside its children, or wait for some or all of its children to finish before proceeding.

4.2.3 Unix: fork() and wait()

As an example of process creation, we shall consider UNIX. The syntax of fork is

  returncode = fork();

When this instruction is executed, the process concerned splits into two and both continue to execute independently from after the instruction. If fork is successful, it returns 0 to the child process and the process identifier or pid of the child process to the parent. If, for some reason, a new process cannot be created, it returns a value of -1 to the parent. The following example program is written in C++ and makes use of the standard library function fork(). The example does not check for errors if fork fails.
//**************************************************************
//*
//* A brief demo of the UNIX process duplicator fork().
//*
//* g++ unix.C to compile this.
//*
//**************************************************************

#include <iostream.h>

extern "C" void sleep();
extern "C" int  fork();
extern "C" int  getpid();
extern "C" void wait();
extern "C" void exit();

void ChildProcess();

//***************************************************************

main ()

{ int pid, cid;

pid = getpid();

cout << "Fork demo! I am the parent (pid = " << pid << ")\n";

if (! fork())
   {
   cid = getpid();
   cout << "I am the child (cid=" << cid << ") of (pid = " << pid << ")\n";
   ChildProcess();
   exit(0);
   }

cout << "Parent waiting here for the child...\n";

wait(NULL);

cout << "Child finished, parent quitting too!\n";
}

//**************************************************************

void ChildProcess()

{ int i;

for (i = 0; i < 10; i++)
   {
   cout << i << "...\n";
   sleep(1);
   }
}
Here is the output from the program in a test run. Note that the parent and child processes share the same output stream, so we see how they are synchronized from the order in which the output is mixed.

Fork demo! I am the parent (pid = 2196)
I am the child (cid=2197) of (pid = 2196)
0...
Parent waiting here for the child...
1...
2...
3...
4...
5...
6...
7...
8...
9...
Child finished, parent quitting too!

Note that the child has time to execute its first instruction before the parent has time to call wait(), so the zero appears before the message from the parent. When the child goes to sleep for one second, the parent catches up.

4.2.4 Process states

In order to know when to execute a program and when not to execute a program, it is convenient for the scheduler to label programs with a `state' variable. This is just an integer value which saves the scheduler time in deciding what to do with a process. Broadly speaking, the state of a process may be one of the following.

1. New.
2. Ready (in line to be executed).
3. Running (active).
4. Waiting (sleeping, suspended).
5. Terminated (defunct).

When time-sharing, the scheduler only needs to consider the processes which are in the `ready' state. Changes of state are made by the system and follow the pattern in the diagram below.
Figure 4.3: Process state diagram.

  From state | Event                         | To state
  New        | Accepted                      | Ready
  Ready      | Scheduled / Dispatch          | Running
  Running    | Need I/O                      | Waiting
  Running    | Scheduler timeout             | Ready
  Running    | Completion / Error / Killed   | Terminated
  Waiting    | I/O completed or wakeup event | Ready

The transitions between different states normally happen on interrupts.

4.2.5 Queue scheduling

The basis of all scheduling is the queue structure. Queue scheduling is primarily used for serial execution; a round-robin scheduler also uses a queue, but moves cyclically through the queue at its own speed, instead of waiting for each task in the queue to complete. There are two main types of queue. The first is the first-come first-served (FCFS) queue, also called first-in first-out (FIFO).
The second is the sorted queue, in which the elements are regularly ordered according to some rule. The most prevalent example of this is the shortest job first (SJF) rule.

The FCFS queue is the simplest and incurs almost no system overhead. The SJF scheme can cost quite a lot in system overhead, since each task in the queue must be evaluated to determine which is shortest. The SJF strategy is often used for print schedulers, since it is quite inexpensive to determine the size of a file to be printed (the file size is usually stored in the file itself).

The efficiency of the two schemes is subjective: long jobs have to wait longer if short jobs are moved in front of them, but if the distribution of jobs is random then we can show that the average waiting time of any one job is shorter in the SJF scheme, because the greatest number of jobs will always be executed in the shortest possible time. Of course this argument is rather stupid, since it is only the system which cares about the average waiting time per job, for its own prestige. Users who print only long jobs do not share the same clinical viewpoint. Moreover, if only short jobs arrive after one long job, it is possible that the long job will never get printed. This is an example of starvation. A fairer solution is required (see exercises below).

Queue scheduling can be used for CPU scheduling, but it is quite inefficient. To understand why simple queue scheduling is not desirable, we can begin by looking at a diagram which shows how the CPU and the devices are being used when a FCFS queue is used. We label the processes in the queue in order. A blank space indicates that the CPU or I/O devices are in an idle state (waiting for a customer).

[Figure: FCFS timeline with rows labelled Time, CPU and devices; the individual process labels were lost in reproduction.]

This diagram shows that the first process starts out with a CPU burst. At some point it needs input (say from a disk) and sends a request to the device. While the device is busy servicing the request, the CPU is idle, waiting for the result, and the next process in the queue must also wait, since it is scheduled strictly behind the first. When the result returns, another CPU burst takes place, but now the device is idle. When the first process finishes, the next one takes over the CPU, since it is next in the queue, and goes through the same kind of cycle. The processes are always scheduled in queue order until each of them is finished, and towards the end, when only one process is left, the gaps of idle time get bigger.

There are many blank spaces in the diagram, where the devices and the CPU are idle. Why couldn't the device be fetching the I/O for one process while the CPU was busy with another, and vice versa?

We can improve the picture by introducing a new rule: every time one process needs to wait for a device, it gets put to the back of the queue.

[Figure: timeline for the improved rule, with rows labelled Time, CPU and devices.]

Now the first process starts out as before with a CPU burst, but when it has to wait for the device, the next process takes over the CPU, and when that one has to wait, the first takes over again. A process which did not need the device still leaves it idle, but the CPU is kept much busier.
In the beginning, this second scheme looked pretty good - both the CPU and the devices were busy most of the time (few gaps in the diagram). As processes finished, the efficiency got worse, but on a real system someone will always be starting new processes, so this might not be a problem. Let us ask: how can we improve this scheme? The resource utilization is not too bad, but the scheme assumes that every program goes in a kind of CPU burst - I/O burst cycle. If one program spoils this cycle by performing a lot of CPU intensive work, or by waiting for dozens of I/O requests, then the whole scheme goes to pieces. It does not prevent certain jobs from hogging the CPU, and it does not provide any easy way of giving some processes priority over others. In the worst case, if one process went into an infinite loop, the whole system would stop dead.

4.2.6 Round-robin scheduling

The use of the I/O - CPU burst cycle to requeue jobs improves the resource utilization considerably, but a better solution still is to ration the CPU time, by introducing time-slices. This means that 1. no process can hold onto the CPU forever; 2. processes which get requeued often (because they spend a lot of time waiting for devices) come around faster, i.e. we don't have to wait for CPU intensive processes; and 3. the length of the time-slices can be varied so as to give priority to particular processes. The basic queue is the FCFS/FIFO queue: new processes are added to the end, as are processes which are requeued after waiting.

The time-sharing is implemented by a hardware timer. On each context switch, the system loads the timer with the duration of the time-slice and hands control over to the new process. When the timer times out, it interrupts the CPU, which then steps in and switches to the next process.

The success or failure of round-robin (RR) scheduling depends on the length of the time-slice or time-quantum. If the slices are too short, the cost of context switching becomes high in comparison to the time spent doing useful work. If they become too long, processes which are waiting spend too much time doing nothing - and in the worst case, everything reverts back to FCFS. A rule of thumb is to make the time-slices large enough so that only, say, twenty percent of all context switches are due to timeouts; the remainder occur freely because of waiting for requested I/O.
4.2.7 CPU quotas and accounting

Many multiuser systems allow restrictions to be placed on user activity. For example, it is possible to limit the CPU time used by any one job. If a job exceeds the limit, it is terminated by the kernel. In order to make such a decision, the kernel has to keep detailed information about the cumulative use of resources for each process. This is called accounting and it can be a considerable system overhead. Most system administrators would prefer not to use accounting - though unfortunately many are driven to it by thoughtless or hostile users.

4.3 Threads
one executable file. and if each of those pieces could (at least in principle) run in parallel. or be made to wait for one another. until they need to communicate. the two independent pieces must be synchronized. with less overhead. . Why introduce another new concept? Why do we need threads? The point is that threads are cheaper than normal processes. We say that a task is multithreaded if it is composed of several independent subprocesses which do work on common data.there is only one program. In order to work. They could communicate using normal interprocesses communication. All of these resources are shared by all threads within the process. which might run in parallel or one after the other.3. Since there are more appliances than power points.a higher priority thread will get put to the front of the queue.4. In a sense. When one part of the program needs to send data to the other part. The power sockets are like kernel threads or CPUs. They allow certain functions to be executed in parallel with others. Figure 4. It has no open file lists or resource lists. A job like making a cake or tidying up might involve several threads (powered machinery). But what is the point of this? We can always run independent procedures in a program as separate programs. and have the pieces run independently of one another.1 Heavy and lightweight processes Threads. using the process mechanisms we have already introduced. until they need to communicate. then each electrical appliance which contributes to the work of the kitchen is like a thread. and that they can be scheduled for execution in a user-dependent way. no accounting structures to update. On a truly parallel computer (several CPUs) we might imagine parts of a program (different subroutines) running on quite different processors.4: System and user level threads: suppose we think of a household kitchen as being a process. since each thread has only a stack and some registers to manage. 
Threads are cheaper than a whole process because they do not have a full set of resources each. sometimes called lightweight processes (LWPs) are indepedendently scheduled parts of a single program. a thread needs power. Threads simply enable us to split up that program into logically separate pieces. we have to schedule the time each appliance gets power so as to share between all of them. Threads can be assigned priorities . Whereas the process control block for a heavyweight process is large and costly to context switch. If we write a program which uses threads . one task in the normal sense. threads are a further level of object orientation for multitasking systems. the PCBs for threads are much smaller.
In other words, threads are processes within processes! Threads can only run inside a normal process. Let's define heavy and lightweight processes with the help of a table.

  Object             | Resources
  Thread (LWP)       | stack, set of CPU registers, CPU time
  Task (HWP)         | 1 thread, process control block, program code, memory segment etc.
  Multithreaded task | n threads, process control block, program code, memory segment etc.

4.3.2 Why use threads?

From our discussion of scheduling, we can see that the sharing of resources could have been made more effective if the scheduler had known exactly what each program was going to do in advance. Of course, the scheduling algorithm can never know this - but the programmer who wrote the program does know. Using threads it is possible to organize the execution of a program in such a way that something is always being done, whenever the scheduler gives the heavyweight process CPU time.

Threads allow a programmer to switch between lightweight processes when it is best for the program. (The programmer has control.) A process which uses threads does not get more CPU time than an ordinary process - but the CPU time it gets is used to do work on the threads, and it is possible to write a more efficient program by making use of threads. Inside a heavyweight process, threads are scheduled on a FCFS basis, unless the program decides to force certain threads to wait for other threads. If there is only one CPU, then only one thread can be running at a time. Threads context switch without any need to involve the kernel - the switching is performed by a user level library, so time is saved because the kernel doesn't need to know about the threads.

4.3.3 Levels of threads

In modern operating systems, there are two levels at which threads operate: system or kernel threads and user level threads. If the kernel itself is multithreaded, the scheduler assigns CPU time on a thread basis rather than on a process basis. A kernel level thread behaves like a virtual CPU, or a power-point to which user-processes can connect in order to get computing power. The kernel has as many system level threads as it has CPUs, and each of these must be shared between all of the user-threads on the system. The maximum number of user level threads which can be active at any one time is equal to the number of system level threads, which in turn is equal to the number of CPUs on the system.
Since threads work ``inside'' a single task, the normal process scheduler cannot normally tell which thread to run and which not to run - that is up to the program. When the kernel schedules a process for execution, it must then find out from that process which is the next thread it must execute. If the program is lucky enough to have more than one processor available, then several threads can be scheduled at the same time.

Some important implementations of threads are:

  The Mach System / OSF1 (user and system level)
  Solaris 1 (user level)
  Solaris 2 (user and system level)
  OS/2 (system level only)
  NT threads (user and system level)
  IRIX threads
  POSIX standardized user threads interface

4.3.4 Symmetric and asymmetric multiprocessing

Threads are of obvious importance in connection with parallel processing. There are two approaches to scheduling on a multiprocessor machine:

Asymmetric: one CPU does the work of the system, while the other CPUs service user requests. The asymmetric variant is potentially more wasteful, since it is rare that the system requires a whole CPU just to itself. This approach is more common on very large machines with many processors, where the jobs the system has to do are quite difficult and warrant a CPU to themselves.

Symmetric: all processors can be used by the system and users alike. No CPU is special.

4.3.5 Example: POSIX pthreads

The POSIX standardization organization has developed a standard set of function calls for use of user-level threads. This library is called the pthread interface. Let's look at an example program which counts the number of lines in a list of files. We shall first present the program without threads, and then rewrite it, starting a new thread for each file. The threaded version of the program has the possibility of reading several of the files in parallel and is in principle more efficient, whereas the non-threaded version must read the files sequentially. This program will serve as an example for the remainder of this chapter. The non-threaded version of the program looks like this:
////////////////////////////////////////////////////////////////////////
//
// Count the number of lines in a number of files, non threaded version.
//
////////////////////////////////////////////////////////////////////////

#include <iostream.h>
#include <fstream.h>

const int bufsize = 100;

void ParseFile(char *);

int LINECOUNT = 0;

/**********************************************************************/

main ()

{
cout << "Single threaded parent...\n";

ParseFile("proc1");
ParseFile("proc2");
ParseFile("proc3");
ParseFile("proc4");

cout << "Number of lines = " << LINECOUNT << endl;
}

/**********************************************************************/

void ParseFile(char *filename)

{ fstream file;
  char buffer[bufsize];

cout << "Trying to open " << filename << endl;

file.open(filename, ios::in);

if (! file)
   {
   cerr << "Couldn't open file\n";
   return;
   }

while (!file.eof())
   {
   file.getline(buffer,bufsize);
   cout << filename << ":" << buffer << endl;
   LINECOUNT++;
   }

file.close();
}

This program calls the function ParseFile() several times to open and count the number of lines in a series of files. The number of lines is held in a global variable called LINECOUNT. A global variable is, by definition, shared data. This will cause a problem when we try to parallelize the program using threads. Here is the threaded version:
///////////////////////////////////////////////////////////////////////
//
// Count the number of lines in a number of files.
//
// Illustrates use of multithreading. Note: run this program
// several times to see how the threads get scheduled on the system.
// Scheduling will be different each time since the system has lots
// of threads running, which we do not see, and these will affect the
// scheduling of our program.
//
// Note that, on a multiprocessor system, this program has a potential
// race condition to update the shared variable LINECOUNT, so we
// must use a mutex to make a short critical section whenever accessing
// this shared variable.
//
// This program uses POSIX threads (pthreads)
//
///////////////////////////////////////////////////////////////////////

#include <iostream.h>
#include <fstream.h>
#include <pthread.h>
#include <sched.h>

const int bufsize = 100;
const int maxfiles = 4;

void *ParseFile(char *);   // Must be void *, defined in pthread.h !

int LINECOUNT = 0;

pthread_mutex_t MUTEX = PTHREAD_MUTEX_INITIALIZER;

/**********************************************************************/

main ()

{ pthread_t tid[maxfiles];
  int i, ret;

// Create a thread for each file

ret = pthread_create(&(tid[0]), NULL, ParseFile, "proc1");
ret = pthread_create(&(tid[1]), NULL, ParseFile, "proc2");
ret = pthread_create(&(tid[2]), NULL, ParseFile, "proc3");
ret = pthread_create(&(tid[3]), NULL, ParseFile, "proc4");

cout << "Parent thread waiting...\n";

// If we don't wait for the threads, they will be killed
// before they can start...

for (i = 0; i < maxfiles; i++)
   {
   ret = pthread_join(tid[i], (void **)NULL);
   }

cout << "Parent thread continuing\n";
cout << "Number of lines = " << LINECOUNT << endl;
}

/**********************************************************************/

void *ParseFile(char *filename)

{ fstream file;
  char buffer[bufsize];
  int ret;

cout << "Trying to open " << filename << endl;

file.open(filename, ios::in);

if (! file)
   {
   cerr << "Couldn't open file\n";
   return NULL;
   }

while (!file.eof())
   {
   file.getline(buffer,bufsize);
   cout << filename << ":" << buffer << endl;

   // Critical section

   ret = pthread_mutex_lock(&MUTEX);
   LINECOUNT++;
   ret = pthread_mutex_unlock(&MUTEX);

   // Yield the process, to allow next thread to be run
   // Try uncommenting this...
   // sched_yield();
   }

file.close();
return NULL;
}

In this version of the program, a separate thread is spawned for each file. First we call the function pthread_create() for each file we encounter. A new thread is spawned with a pointer to the function the thread should execute (in this case the same function for all threads), called ParseFile(), which reads lines from the respective file and increments the global variable LINECOUNT. The main program is itself a thread. It is essential that we tell the main program to wait for the additional threads to join it before exiting, otherwise the main program will exit and kill all of the child threads immediately. Thread join-semantics are like wait-semantics for normal processes.

Several things are important here. Each of the threads updates the same global variable. Suppose now that two threads are running on different CPUs. It is possible that both threads would try to alter the value of the variable LINECOUNT simultaneously. This is called a race condition and can lead to unpredictable results. For this reason we use a mutex to lock the variable while it is being updated. We shall discuss this more in the next section.

A final point to note is the commented out line in the ParseFile() function. The call sched_yield() tells a running thread to give itself up to the scheduler, so that the next thread to be scheduled can run instead. This function can be used to switch between several threads. By calling this function after each line is read from the files, we can spread the CPU time evenly between each thread; if we yield after every instruction, it has the effect of simulating round-robin scheduling. On a single CPU system, threads are usually scheduled FCFS in a queue. Because the threads in our program are only a small number compared to the total number of threads waiting to be scheduled by the system, it is difficult to predict precisely which threads will be scheduled and when. The interaction with disk I/O can also have a complicated effect on the scheduling.
4.3.6 Example: LWPs in Solaris 1

Early Solaris systems had user-level threads only, which were called light weight processes. Since the kernel was single threaded, only one user-level thread could run at any given time. To create a threaded process in Solaris 1, one simply has to execute an LWP system call. The `lightweight processes library' then converts the normal process into a process descriptor plus a thread. Here is the simplest example:

/********************************************************************/
/*                                                                  */
/* Creating a light weight process in SunOS 4.1.3 (Solaris 1)       */
/*                                                                  */
/********************************************************************/

#include <lwp/lwp.h>
#include <lwp/stackdep.h>

#define MINSTACKSZ  1024
#define STACKSIZE   1000 + MINSTACKSZ
#define MAXPRIORITY 10

/*********************************************************************/

stkalign_t stack[STACKSIZE];

int task();

/*********************************************************************/
/* Zone 0                                                            */
/*********************************************************************/

main ()

{ thread_t tid;                        /* This becomes a lwp here */

pod_setmaxpri(MAXPRIORITY);

lwp_create(&tid,task,MAXPRIORITY,0,STKTOP(stack),0);

printf("Done! Now other threads can run...\n");
}

/*********************************************************************/
/* Zone 1                                                            */
/*********************************************************************/

task ()

{
printf("Task: next thread after main()!\n");
}

Here is an example program containing several threads which wait for each other:
/*                                                                  */
/********************************************************************/

#include <lwp/lwp.h>
#include <lwp/stackdep.h>

#define MINSTACKSZ   1024
#define STACKCACHE   1000
#define STACKSIZE    STACKCACHE + MINSTACKSZ
#define MAXPRIORITY  10
#define MINPRIORITY  1

/*********************************************************************/

stkalign_t stack[STACKSIZE];

int prog1(), prog2();

/*********************************************************************/
/* Zone 0                                                            */
/*********************************************************************/

main ()

{ thread_t tid_main;
  thread_t tid_prog1;
  thread_t tid_prog2;

 lwp_self(&tid_main);                    /* Get main's tid */

 lwp_setstkcache(STACKCACHE,3);          /* Make a cache for each prog */

 lwp_create(&tid_prog1,prog1,MINPRIORITY,0,lwp_newstk(),0);
 lwp_create(&tid_prog2,prog2,MINPRIORITY,0,lwp_newstk(),0);

 printf("One ");
 lwp_yield(THREADNULL);

 printf("Four ");

 if (lwp_yield(tid_prog2) < 0)
    {
    lwp_perror("Bad yield");
    return;
    }

 printf("Six ");
 lwp_yield(THREADNULL);

 printf("Seven \n");
 exit(0);
}

/*********************************************************************/
/* Zone 1, 2 ...                                                     */
/*********************************************************************/

prog1 ()

{
 printf("Two ");
 lwp_yield(THREADNULL);
 printf("Five ");
}

/*********************************************************************/

prog2 ()

{
 printf("Three ");
 lwp_yield(THREADNULL);
}

4.4 Synchronization of processes and threads

When two or more processes work on the same data simultaneously, strange things can happen. We have already seen one example in the threaded file reader in the previous section: when two parallel threads attempt to update the same variable simultaneously, the result is unpredictable. The value of the variable afterwards depends on which of the two threads was the last one to change the value. This is called a race condition: the value depends on which of the threads wins the race to update the variable.

4.4.1 Problems with sharing for processes

It is not only threads which need to be synchronized. Suppose one user is running a script program and editing the program simultaneously. The script is read in line by line. During the execution of the script, the user adds four lines to the beginning of the file and saves the file, without the process executing the script knowing about it. When the next line of the executing script gets read, the pointer to the next line points to the wrong location and it reads in the same line it already read in four lines ago! Everything in the program is suddenly shifted by four lines. This example (which can actually happen in the UNIX shell) may or may not turn out to be serious; clearly, in general, it can be quite catastrophic. It is a problem of synchronization on the part of the user and the filesystem.

When do we need to prevent programs from accessing data simultaneously? We must be clear about whether such collisions can be avoided,
or whether they are a necessary part of a program.

1. If there are 100 processes which want to read from a file, there is no reason to wait: this will cause no problems, because the data themselves are not changed by a read operation.

2. A problem only arises if more than one of the parties wants to modify the data. Is it even sensible for two programs to want to modify data simultaneously, or is it simply a stupid thing to do? In some cases it might be logically incorrect: if two processes try to change one numerical value then one of them has to win -- which one? If two unrelated processes want to write a log of their activities to the same file, it is probably not sensible: a better solution would be to use two separate files. On the other hand, if two independent processes want to add entries to a database, or to add something to a list such as a print queue, that makes sense -- but we have to be sure that they do not write their data on top of each other. The writing must happen serially, not in parallel.

3. How should we handle a collision between processes? Should we signal an error, or try to make the processes wait in turn? There is no universal answer to this question: the OS cannot impose any restrictions on silly behaviour -- it can only provide tools and mechanisms to assist the solution of the problem.
4.4.2 Serialization

The key idea in process synchronization is serialization. This means that we have to go to some pains to undo the work we have put into making an operating system perform several tasks in parallel: where shared data are concerned, certain operations must be made to happen one after the other, not in parallel. Synchronization is a large and difficult topic, so we shall only undertake to describe the problem and some of the principles involved here. There are essentially two strategies to serializing processes in a multitasking environment:

1. The scheduler can be disabled for a short period of time, to prevent control being given to another process during a critical action like modifying shared data. This method is very inefficient on multiprocessor machines, since all other processors have to be halted every time one wishes to execute a critical section.

2. A protocol can be introduced which all programs sharing data must obey. The protocol ensures that processes have to queue up to gain access to shared data. Processes which ignore the protocol ignore it at their own peril (and the peril of the remainder of the system!). This method works on multiprocessor machines also.

The responsibility of serializing important operations falls on programmers. The OS cannot impose any restrictions on silly behaviour -- it can only provide tools and mechanisms to assist the solution of the problem.

4.4.3 Mutexes: mutual exclusion

Another way of talking about serialization is to use the concept of mutual exclusion. We are interested in allowing only one process or thread access to shared data at any given time. Suppose two processes A and B are trying to access shared data; then: if A is modifying the data, B must be excluded from doing so; and if B is modifying the data, A must be excluded from doing so. This is called mutual exclusion.

Mutual exclusion can be achieved by a system of locks. A mutual exclusion lock is colloquially called a mutex. You can see an example of mutex locking in the multithreaded file reader in the previous section. The idea is for each thread or process to try to obtain locked-access to shared data:

   Get_Mutex(m);

   // Update shared data

   Release_Mutex(m);

This protocol is meant to ensure that only one process at a time can get past the function Get_Mutex. All other processes or threads are made to wait at the function Get_Mutex until that one process calls Release_Mutex to release the lock. The mutex variable is shared by all parties (e.g. a global variable). A method for implementing this is discussed below. Mutexes are a central part of multithreaded programming.
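The Get_Mutex/Release_Mutex protocol above maps directly onto POSIX mutexes. This is a minimal sketch, assuming a pthreads platform; the wrapper names are taken from the text, and shared_data stands in for whatever the critical section protects.

```c
#include <pthread.h>

/* The mutex variable shared by all parties (here a global). */
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static int shared_data = 0;

/* Only one caller at a time gets past Get_Mutex; everyone else
   waits here until the holder calls Release_Mutex. */
void Get_Mutex(pthread_mutex_t *mu)     { pthread_mutex_lock(mu); }
void Release_Mutex(pthread_mutex_t *mu) { pthread_mutex_unlock(mu); }

int update_shared(void)
{
    Get_Mutex(&m);
    shared_data++;              /* update shared data */
    int seen = shared_data;
    Release_Mutex(&m);
    return seen;
}
```

Every call to update_shared() runs its critical section to completion before any other caller can enter it.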
4.4.4 User synchronization: file locks

A simple example of a protocol solution to the locking problem, at the user level, is the so-called file lock in UNIX. When write-access is required to a file, we try to obtain a lock by creating a lock-file with a special name. In most cases a lock is simply a text file. If we wanted to edit a file blurb, the lock might be called blurb.lock and contain the user identifier of the user currently editing the file. If other users then try to access the file, they find that the lock file exists and are denied access. If another user or process has already obtained a lock, then the file is already in use and we are denied permission to edit the file. When the user has finished with the file, the lock is removed, allowing others to use the file; the next lock created indicates that the file now belongs to a new user.

This is often used in mail programs such as the ELM mailer in UNIX, since it would be unwise to try to read and delete incoming mail with two instances of the mail program at the same time. The same method of locks can also be used to prevent two instances of a program from starting up simultaneously.

Here is an example from UNIX in which the lock file contains the process identifier. This is useful because if something goes wrong and the program crashes, the lock will not be removed. It is then possible to see that the process the lock referred to no longer exists, and the lock can be safely removed.

//*********************************************************************
//
// Example of a program which uses a file lock to ensure
// that no one starts more than one copy of it.
//
//*********************************************************************

#include <iostream.h>
#include <fstream.h>

//**********************************************************************
// Include file
//**********************************************************************

extern "C" int getpid();
extern "C" void unlink(char *);

int Locked();
void RemoveLock();

const int true = 1;
const int false = 0;
const int exitstatus = 1;

//**********************************************************************
// Main program
//**********************************************************************

main ()

{
if (Locked())
   {
   cout << "This program is already running!\n";
   return exitstatus;
   }

// Program here

RemoveLock();
}
//**********************************************************************
// Toolkit: locks
//**********************************************************************

Locked ()

{ ifstream lockfile;
  ofstream newlock;
  int pid;

lockfile.open("/tmp/lockfile",ios::in);

if (lockfile)                  // lock already exists: someone else runs
   {
   return true;
   }

lockfile.close();

newlock.open("/tmp/lockfile",ios::out);

if (! newlock)
   {
   cerr << "Cannot secure a lock!\n";
   return true;
   }

pid = getpid();
newlock << pid;               // record the owner's process identifier
newlock.close();
return false;
}

//**********************************************************************

void RemoveLock()

{
unlink("/tmp/lockfile");
}
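The same idea can be expressed in plain C. One subtlety the C++ listing glosses over is atomicity: testing for the lock and creating it in two steps leaves a window in which two processes can both "win". The open() flags O_CREAT|O_EXCL close that window, since exactly one process can create the file. This is a sketch; the lock path is an arbitrary example name.

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* A lock is "held" while the lock file exists.  O_CREAT|O_EXCL makes
   creation atomic: the call fails if the file is already there. */
int acquire_lock(const char *lockfile)
{
    int fd = open(lockfile, O_CREAT | O_EXCL | O_WRONLY, 0644);
    if (fd < 0)
        return -1;                          /* someone else holds it */

    char buf[32];
    int n = snprintf(buf, sizeof buf, "%d\n", (int)getpid());
    write(fd, buf, n);                      /* record the owner's pid */
    close(fd);
    return 0;
}

void release_lock(const char *lockfile)
{
    unlink(lockfile);
}
```

A second call to acquire_lock() fails until the first holder releases, exactly like the Locked()/RemoveLock() pair above.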
4.4.5 Exclusive and non-exclusive locks

To control both read and write access to files, we can use a system of exclusive and non-exclusive locks. If a user wishes to read a file, a non-exclusive lock is used. Other users can also get non-exclusive locks to read the file simultaneously, but when a non-exclusive lock is placed on a file, no user may write to it. To write to a file, we must get an exclusive lock. When an exclusive lock is obtained, no other users can read or write to the file.

4.4.6 Critical sections: the mutex solution

A critical section is a part of a program in which it is necessary to have exclusive access to shared data. Only one process or thread may be in a critical section at any one time. In the past it was possible to implement this by generalizing the idea of interrupt masks, as mentioned in chapter 2. By switching off interrupts (or more appropriately, by switching off the scheduler) a process can guarantee itself uninterrupted access to shared data. This method has drawbacks:

i) Masking interrupts can be dangerous -- there is always the possibility that important interrupts will be missed.

ii) It is not general enough in a multiprocessor environment, since interrupts will continue to be serviced by other processors -- so all processors would have to be switched off.

iii) It is too harsh. We only need to prevent two programs from being in their critical sections simultaneously if they share the same data. Programs A and B might share different data to programs C and D, so why should they wait for C and D?

The modern way of implementing a critical section is to use mutexes as we have described above. In 1981 G.L. Peterson discovered a simple algorithm for achieving mutual exclusion between two processes with PID equal to 0 or 1. The code goes like this:

int turn;
int interested[2];

void Get_Mutex (int pid)

{ int other;

other = 1 - pid;
interested[pid] = true;
turn = pid;

while (turn == pid && interested[other])    // Loop until no one
   {                                        // else is interested
   }
}

void Release_Mutex (int pid)

{
interested[pid] = false;
}

The key to serialization here is that, if a second process tries to obtain the mutex when another already has it, it will get caught in a loop which does not terminate until the other process has released the mutex. This solution is said to involve busy waiting -- i.e. the program actively executes an empty loop, wasting CPU cycles, rather than moving the process out of the scheduling queue. This is also called a spin lock, since the system `spins' on the loop while waiting. Where more processes are involved, some modifications are necessary to this algorithm.
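Here is the algorithm above made self-contained and exercised from two real threads. One caveat not visible in the pseudocode: modern CPUs may reorder the store to turn past the reads in the spin loop, which breaks the algorithm, so this sketch inserts full memory barriers with __sync_synchronize() (a GCC/Clang builtin). The worker/counter scaffolding is invented for the demonstration.

```c
#include <pthread.h>

static volatile int turn;
static volatile int interested[2];
static long counter = 0;            /* protected by the Peterson lock */

static void peterson_lock(int pid)
{
    int other = 1 - pid;
    interested[pid] = 1;
    turn = pid;
    __sync_synchronize();           /* order stores before the loads below */
    while (turn == pid && interested[other])
        ;                           /* busy wait: the spin lock */
    __sync_synchronize();
}

static void peterson_unlock(int pid)
{
    __sync_synchronize();
    interested[pid] = 0;
}

static void *worker(void *arg)
{
    int pid = *(int *)arg;
    for (int i = 0; i < 100000; i++) {
        peterson_lock(pid);
        counter++;                  /* the critical section */
        peterson_unlock(pid);
    }
    return NULL;
}

long run_peterson_demo(void)
{
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    counter = 0;
    pthread_create(&t0, NULL, worker, &id0);
    pthread_create(&t1, NULL, worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    return counter;
}
```

If mutual exclusion held only "most of the time", some increments would be lost; a correct run always totals exactly 200000.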
4.4.7 Flags and semaphores

Flags are similar in concept to locks. The idea is that two cooperating processes can synchronize their execution by sending very simple messages to each other. A typical behaviour is that one process decides to stop and wait until another process signals that it has arrived at a certain place. For example, suppose we want to ensure that procedure1() in process 1 gets executed before procedure2() in process 2:

   // Process 1            // Process 2

   procedure1();           wait(mysignal);
   signal(mysignal);       procedure2();

The value of the flag variable dictates whether a program will wait or continue. A semaphore is a flag which can have a more general value than just true or false. A semaphore is an integer counting variable, used to solve problems where there is competition between processes: one part of a program tends to increment the semaphore while another part tends to decrement it. A simple example is reading and writing via buffers, where we count how many items are in the buffer. When the buffer becomes full, the process which is filling it must be made to wait until space in the buffer is made available. There are many uses for semaphores and we shall not go into them here. These operations are a special case of interprocess communication.
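The buffer example can be sketched with POSIX counting semaphores (sem_init and friends; unnamed semaphores are not available on every platform, e.g. macOS). One semaphore counts free slots so the producer waits when the buffer is full, the other counts filled slots so the consumer waits when it is empty; a mutex guards the buffer indices themselves. All names here (BUFSLOTS, producer, consumer, ...) are invented for the sketch.

```c
#include <pthread.h>
#include <semaphore.h>

#define BUFSLOTS 4
#define NITEMS   100

static int buffer[BUFSLOTS];
static int in = 0, out = 0;
static sem_t slots, items;
static pthread_mutex_t buflock = PTHREAD_MUTEX_INITIALIZER;
static long consumed_sum = 0;

static void *producer(void *arg)
{
    (void)arg;
    for (int i = 1; i <= NITEMS; i++) {
        sem_wait(&slots);                /* wait for space in the buffer */
        pthread_mutex_lock(&buflock);
        buffer[in] = i;
        in = (in + 1) % BUFSLOTS;
        pthread_mutex_unlock(&buflock);
        sem_post(&items);                /* signal: one more item */
    }
    return NULL;
}

static void *consumer(void *arg)
{
    (void)arg;
    for (int i = 0; i < NITEMS; i++) {
        sem_wait(&items);                /* wait for data */
        pthread_mutex_lock(&buflock);
        consumed_sum += buffer[out];
        out = (out + 1) % BUFSLOTS;
        pthread_mutex_unlock(&buflock);
        sem_post(&slots);                /* signal: one more free slot */
    }
    return NULL;
}

long run_buffer_demo(void)
{
    pthread_t p, c;
    consumed_sum = 0;
    in = out = 0;
    sem_init(&slots, 0, BUFSLOTS);       /* buffer starts empty */
    sem_init(&items, 0, 0);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    sem_destroy(&slots);
    sem_destroy(&items);
    return consumed_sum;
}
```

The consumer receives every item exactly once, regardless of how the two threads are scheduled: the sum of 1..100 is always 5050.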
4.4.8 Monitors

Some languages (like Modula) have special language class-environments for dealing with mutual exclusion. Such an environment is called a monitor. A monitor is a language-device which removes some of the pain from synchronization: users don't need to code the details themselves, they only have to create a monitor. A procedure or function defined under the umbrella of a monitor can only access those shared memory locations declared within that monitor and vice-versa. Only one process can be `inside' a monitor at a time. Wait and signal operations can be defined to wait for specific condition variables: a process can thus wait until another process sends a signal or semaphore which changes the condition variable.

4.5 Deadlock

Waiting and synchronization is not all sweetness and roses. Processes usually wait for a good reason; but if a process has to wait for a signal which can never arrive, something special will occur. This situation is called deadlock. It is the stale-mate of the operating system world. Consider the European road rule which says: on minor roads one should always wait for traffic coming from the right. If four cars arrive simultaneously at a crossroads (see figure) then, according to the rule, all of them must wait for each other and none of them can ever move.
Figure 4.5: Deadlock in the European suburbs.

4.5.1 Cause

Deadlock occurs when a number of processes are waiting for an event which can only be caused by another of the waiting processes. These are the essential requirements for a deadlock:

1. Non-sharable resources. It is not possible to share the resources or signals which are being waited for. If the resource can be shared, there is no reason to wait.

2. No preemption. The processes can not be forced to give up the resources they are holding.

3. Circular waiting. There must be a set of processes P1, P2, ..., Pn, where P1 is waiting for a resource or signal from P2, P2 is waiting for P3, and so on, until Pn is waiting for P1.

There are likewise three methods for handling deadlock situations:

1. Prevention. We can try to design a protocol which ensures that deadlock never occurs.

2. Recovery. We can allow the system to enter a deadlock state and then recover.

3. Ostrich method. We can pretend that deadlocks will never occur and live happily in our ignorance. This is the method used by most operating systems: user programs are expected to behave properly, and the system does not interfere. This is understandable: it is very hard to make general rules for every situation which might arise.
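The circular-wait condition suggests one practical prevention protocol: impose a single global order on resource acquisition. This sketch (the worker/counter names are invented) has two threads take two mutexes, but always in the same order, A before B, so the cycle of condition 3 can never form. If one thread took B first and the other A first, the program could deadlock.

```c
#include <pthread.h>

static pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;
static int work_done = 0;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000; i++) {
        pthread_mutex_lock(&A);   /* global order: A first ...        */
        pthread_mutex_lock(&B);   /* ... then B, in every thread      */
        work_done++;              /* use both shared resources        */
        pthread_mutex_unlock(&B); /* release in reverse order         */
        pthread_mutex_unlock(&A);
    }
    return NULL;
}

int run_ordered_workers(void)
{
    pthread_t t1, t2;
    work_done = 0;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return work_done;
}
```

Because no thread ever holds B while waiting for A, circular waiting is impossible and both workers always run to completion.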
4.5.2 Prevention

Deadlock prevention requires a system overhead. The simplest possibility for the avoidance of deadlock is to introduce an extra layer of software for requesting resources, in addition to a certain amount of accounting. One might demand that all programs declare in advance what resources they will need; the same applies for wait conditions. Each time a new request is made, the system analyses the allocation of resources before granting or refusing the resource. The system could then analyse (and re-analyse each time a new process arrives) the resource allocation and pinpoint possible problems. The problem with this approach is that, if a process is not permitted to wait for another process, what should it do instead? At best the system would have to reject or terminate programs which could enter deadlock, returning an error condition. The cause of deadlock waiting is often a resource which cannot be shared.

4.5.3 Detection

The detection of deadlock conditions is also a system overhead. At regular intervals the system is required to examine the state of all processes and determine the interrelations between them. If a process is waiting for a condition which can never arise until it has finished waiting, then a deadlock has been found. Since this is quite a performance burden, it is not surprising that most systems ignore deadlocks and expect users to write careful programs.

4.5.4 Recovery

To recover from a deadlock, the system must either terminate one of the participants (and go on terminating them until the deadlock is cured), or repossess the resources which are causing the deadlock from some processes until the deadlock is cured. Termination is the safer alternative. The repossession method is somewhat dangerous, since it can lead to incorrect program execution: processes usually hold resources for a good reason, and any interruption of that reasoning could lead to incorrect execution.
4.6 Summary

In this chapter we have considered the creation and scheduling of processes. The scheduling of processes takes place by a variety of methods; the aim is to maximize the use of CPU time and spread the load for the devices. Each process may be described by

- A process identifier.
- A process control block, which contains status information about the scheduled processes.
- A private stack for that process.

Processes can be synchronized using semaphores or flags. Protocol constructions such as critical sections and monitors guarantee that shared data are not modified by more than one process at a time. Most operating systems do not try to prevent deadlocks, but leave the problem to user programs.
Exercises

1. Explain the difference between a light weight process and a normal process.

2. What is meant by the critical section of a program?

3. What is meant by deadlock?

4. Explain why round-robin scheduling would not be appropriate for managing a print-queue.

5. Devise a combination of first-come-first-serve (FCFS) and shortest-job-first (SJF) scheduling which would be the `fairest' solution to scheduling a print queue.

Project

You can learn a lot by solving the following problem. The idea is to make a time-sharing system of your own.

1. Make a fake kernel simulator which, instead of executing processes in memory, reads instructions from a number of files. The `command language' you are reading in contains instructions like `abcd 3', `wait 4' etc., i.e. four letters followed by a number. You should aim to share the time spent reading each `process' equally between all tasks. You should give each process a process identifier (pid). Keep a record of how long each process takes to complete and print status information when each process finishes. This is like counting `fake CPU cycles': you can either call the real system clock to do this, or increment a counter each time an instruction is read. The output of your kernel should show clearly what is being executed and when. Try to make your program as structured as possible. The aim is to write the clearest program, rather than the most efficient one.

2. Add process priorities to each task. You can decide how these are assigned yourself. Explain what you have chosen to do in your solution.

3. Some of the input files contain `fork' instructions. Modify your code so that when such an instruction is detected, the current process spawns a new copy of itself which begins executing from the instruction after the fork command. The new process should have a different pid and should have the same priority as the old one.

4. The input files contain `wait <number>' instructions. Modify your program so that when one of the tasks reads an instruction `wait 5', it waits for process number 5 to finish before it continues. Hint: use a status variable which indicates whether the process is `ready' or `waiting'. Decide for yourself how you wish to handle the situation where the awaited process does not exist.

5. Copy and modify the input files so that a deadlock can occur; for example, make two processes wait for each other. Explain carefully how it occurs. Add to your kernel a simple test to detect such deadlock situations. The output of the kernel should show this clearly.

When presenting your results, give a listing of the output of each part and explain the main features briefly.

5. Memory and storage
Together with the CPU, the physical memory (RAM) is the most important resource a computer has. The CPU has instructions to manipulate data only directly in memory, so all arithmetic and logic operations must take place in RAM.

5.1 Logical and Physical Memory

5.1.1 Physical Address space

Every byte in the memory has an address which ranges from zero up to a limit which is determined by the hardware (see below). The physical address space consists of every possible address to which memory chips are connected. Not every address is necessarily wired up to a memory chip: some addresses may be reserved for memory mapped I/O -- individual registers belonging to other chips and hardware devices.

Usually the interrupt vector and sometimes the processor stack occupy fixed locations. The interrupt vector takes up only a small amount of memory, normally just a few bytes, and the CPU itself requires some workspace. The operating system itself takes up a fair chunk of memory. On most microcomputers this is located in ROM; on multi-user systems, where upgrades are much more frequent, it is always loaded from disk into RAM.

5.1.2 Word size

The size of a word on any system is defined by the size of the registers in the CPU. This determines both the amount of memory a system can address and the way in which memory is used. Up to about 1985, all CPUs had eight bit (1 byte) registers, except for the program counter and address registers, which were 16 bits. (This is why bytes have a special status.) The largest address which can be represented in a 16 bit number is 2^16 = 65536 bytes (64kB), and so these machines could not handle more memory than this. Moreover, since the accumulator and index registers were all 8 bits wide, no more than one byte could be manipulated at a time. After that came a number of 16 bit processors with larger program counters. Nowadays most CPUs have 32 bit registers. The DEC alpha machines, together with the OSF/1 operating system, are based on 64 bit technology; 64 bit versions of other versions of UNIX and NT are also starting to appear. The possible address range and internal number representations are enormous.

5.1.3 Paged RAM/ROM

The size of the physical address space is limited by the size of the address registers in the CPU. On early machines this memory was soon exceeded and it was necessary to resort to tricks to add more memory. Since it was not possible to address any more than the limit, these machines temporarily switched out one bank of memory with another. The new memory bank used the same addresses as the old, but only one could be accessed at a time: a special hardware paging chip, containing a register which could choose between banks of memory, was used to switch between them. This operation is called paging. Paging has obvious disadvantages -- not all memory can be used at once -- and the method is seldom used nowadays, since modern CPUs can address much larger memory spaces. Instead of switching between hardware banks of memory, modern multi-user systems use paging to disk: as we shall see later, they copy the old contents to disk and reuse the memory which is already there for something else.
5.1.4 Address binding - coexistence in memory

When a high level language program is compiled, it gets converted into machine code. In machine code there are no procedure names or variable names: all references to data or program code are made by specifying the address at which they are to be found. This immediately begs the question: how do we know what the addresses will be? How do we know where the program is going to be located in memory?

On microcomputers, this is very straightforward. A program is compiled to run starting from some fixed address. The system defines a certain range of addresses which can be used by user programs (see figure 2.1), and whenever the program is loaded from disk, it is loaded into the memory at the same address, so that all of the addresses referred to in the program are correct every time. A problem arises if the system supports several programs resident in memory simultaneously: then it is possible that the addresses coded into one program will already be in use by another. In that case there are three possible options:

1. Demand that programs which can coexist be compiled to run at different addresses. (This means that every program which is to be able to coexist must know about every other!)

2. Use relative addressing. Machine code uses addresses relative to the start address at which the program was loaded; the CPU must then add the start address to every relative address to get the true address. This incurs a performance penalty. Also, on some microprocessors (e.g. the MOS 6502), the relative addressing instructions available are limited to fairly small relative ranges, due to the size of the CPU registers.

3. Use address binding. Here the idea is that ``dummy'' addresses are used when code is generated, since it is not known where the program will end up in the physical address space. When the program is loaded in, the true addresses are computed relative to the start of the program and replaced before execution begins. This requires a special program called a loader.

Again there is a choice: when should this conversion take place?

1. Once and for all, when the program is loaded into memory?

2. While the program is being executed?

Initially it would seem that 1. is the better alternative, since 2. incurs a runtime overhead. In fact 2. is the more flexible option, for reasons which will become more apparent when we consider paging to disk. By performing the conversion at runtime, we have the freedom to completely reorganize the use of physical memory dynamically at any time. This freedom is very important in a multitasking operating system where memory has to be shared continually. It introduces an important distinction between logical and physical addresses: a user program writes only to logical addresses, and the addresses are converted to physical addresses automatically. Needless to say, it is the last of these methods which is used in modern systems.
5.1.5 Shared libraries

The concept of shared libraries lies somewhere in the grey zone between the compiling and linking of programs and memory binding. We introduce it here for want of a better place; the advantages of shared libraries should be clearly apparent by the end of this section. On Windows systems, shared libraries are called dynamically loaded libraries, or DLLs.

On older systems, when you compile a program, the linker attaches a copy of the standard libraries to each program. Because of the nature of the linker, the whole library has to be copied even though perhaps only one function is required. Thus a simple program to print ``hello'' could be hundreds or thousands of kilobytes long! This wastes a considerable amount of disk space, copying the same code for every program; and when the program is loaded into memory, the whole library is loaded too, so it is also a waste of RAM.

The solution is to use a run-time linker, which only loads the shared library into RAM when one of the functions in the library is needed. Thus there is only one copy of the shared library on the system. The advantages and disadvantages of this scheme are the following:

1. Considerable savings in disk space are made, because the standard library code is never joined to the executable file which is stored on disk.

2. A saving of RAM can also be made, since the library, once loaded into RAM, can often be shared by several programs. See under segmentation below.

3. A performance penalty is transferred from load-time to run-time, the first time a function is accessed: the library must be loaded from disk during the execution of the program. In the long run, this might be outweighed by the time it would otherwise have taken to load the library for other programs, which now can share it. Also, the amount of RAM needed to support programs is now considerably less.

Figure 5.1: If a program hard codes addresses, there will be collisions when we try to load a second program into memory. It is therefore important to have a way of allocating addresses dynamically.

Figure 5.2: Statically linked files append the entire library to each compiled program. With shared libraries we can save disk and memory by linking a program dynamically with a single copy of the library.

5.1.6 Runtime binding

Keeping physical and logical addresses completely separate introduces a new level of abstraction to the memory concept. User programs know only about logical addresses. Logical addresses are mapped into real physical addresses, at some location which is completely transparent to the user, by means of a conversion table. The conversion can be assisted by hardware processors which are specially designed to deal with address mapping; this is much faster than a purely software solution (in which the CPU itself must do the conversion work). The conversion is, at any rate, performed by the system, and the user need know nothing about it.

The part of the system which performs the conversion (be it hardware or software) is called the memory management unit (MMU). The conversion table of addresses is kept for each process in its process control block (PCB) and must be downloaded into the MMU during context switching (this is one reason why context switching is expensive!). Each logical address sent to the MMU is checked in the following way:
1. Does the logical address belong to the process? If not, generate an ownership error (often called a segmentation fault, as we shall see below).

2. Translate the logical address into a physical address.

One more question must be added to the above: are the data we want to access actually in the physical memory? As we shall see later in this chapter, many systems (the most immediate example of which is UNIX) allow paging to disk.

The ownership checking is performed at the logical level rather than the physical level because we want to be able to use the physical memory in the most general possible way. If we bind physical addresses to a special user, it means that we cannot later reorganize the physical memory, and part of the point of the exercise is lost. On the other hand, if users are only bound to logical addresses, we can fiddle as much as we like with the physical memory and the user will never know. The mapping from logical to physical is only visible to the designer of the system.

The conversion of logical addresses into physical addresses is familiar in many programming languages and is achieved by the use of pointers. Instead of referring to data directly, one uses a pointer variable which holds the true address at which the data are kept. In machine language, the same scheme is called ``indirect addressing''. The difference between logical addresses and pointers is that all pointers are user objects: pointers only point from one place in logical memory to another place in logical memory, whereas the logical-to-physical mapping is hidden from the user.

How is the translation performed in practice? To make the translation of logical to physical addresses practical, it is necessary to coarse grain the memory. If every single byte-address were independently converted, then two full addresses would be required for each byte-address in the table, and the storage space for the conversion table would be seven times bigger than the memory of the system! To get around this problem, we have to break up the memory into chunks of a certain size. Then we only need to map the start address of each block, which is much cheaper if the blocks are big enough. There are two schemes for coarse graining the memory in this way:

1. Give each process/task a fixed amount of workspace (a fixed size vector) which is estimated to be large enough to meet its needs. Only the base address of the workspace and the size need to be stored: the whole vector in logical memory is mapped into a corresponding vector in physical memory. We don't know where it lies in the physical memory, but the mapping is one-to-one. The disadvantage of this scheme is that either too much or too little memory might be allocated for the tasks. Moreover, if only a small part of the program is actually required in practice, then a large amount of memory is wasted and cannot be reused.

2. Coarse grain or ``quantize'' the memory in smallish pieces, called pages. Each page is chosen to have the same fixed size (generally 2-4kB on modern systems), given by some power of 2 (this varies from system to system). The base address of each page is then stored in the conversion table (the length is known, since it is fixed).

The second of these possibilities is an attractive proposition for a number of reasons. We shall return to this in the next section.
The second of these possibilities is an attractive proposition for a number of reasons. A unit of logical memory is called a page, whereas a unit of physical memory is called a frame. Apart from the difference in names, they must of course have the same size. The tables which map logical to physical memory are called the page table and the frame table; these are stored per process and loaded as a part of context switching. An important consequence of the mapping of pages is that what appears to the user as sequential memory may in reality be spread in some random order just about anywhere in physical memory. Also, if two programs use the same code, they can share pages, so two logical pages map into the same physical frame. This is advantageous for shared libraries. Large programs need not be entirely in memory if they are not needed.

Page numbers and addresses

Page addressing is a simple matter if the size of one page is a power of 2: page numbers can be assigned by simply throwing away the lower bits from every address. It is analogous to counting in blocks of a thousand in regular base 10. To number blocks of size 1000 in base 10, one simply has to drop the lowest three digits; two addresses which differ only in their last three digits lie in the same block. Without pages, the conversion table would need an entry for every single address; with paging, one entry per page suffices.

5.1.7 Segmentation

From the point of view of the system, it is highly convenient to view the memory for different processes as being segmented. A segment is a convenient block of logical memory which is assigned to a process when it is executed. The memory given to any process is divided up into one or more segments which then belong to that process. The purpose of segments is to help the system administrate the needs of all processes according to a simple paradigm: sharing, process management and efficiency. It is therefore convenient to use separate segments for logically separate parts of a program/process:

Code segment - program code
Data segment - the program stack and dynamically allocated data

Arrays can conveniently be placed in a segment of their own; that way, array bound-checking will be performed automatically by the hardware of the system.
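To make the power-of-2 trick concrete, here is a minimal sketch in C of splitting a logical address into a page number and an offset, and looking the page up in a toy page table. The 4kB page size and the function names are illustrative assumptions of this sketch, not any real MMU's interface:

```c
#include <stdint.h>

#define PAGE_SHIFT 12u                   /* assumed 4kB pages: 2^12 bytes */
#define PAGE_SIZE  (1u << PAGE_SHIFT)
#define PAGE_MASK  (PAGE_SIZE - 1u)

/* Throw away the lower bits to get the page number. */
static uint32_t page_number(uint32_t logical) { return logical >> PAGE_SHIFT; }

/* The lower bits are the offset within the page. */
static uint32_t page_offset(uint32_t logical) { return logical & PAGE_MASK; }

/* Toy translation: page_table[page] holds the frame number. */
static uint32_t translate(const uint32_t *page_table, uint32_t logical)
{
    return (page_table[page_number(logical)] << PAGE_SHIFT)
           | page_offset(logical);
}
```

Only the page-to-frame entry is stored; the offset passes through unchanged, which is why the table needs one entry per page rather than one per byte.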
Each segment of memory is administrated separately, and all of the checks on valid addressing are made on each segment. By breaking up the memory into smaller pieces, we also gain the possibility of reorganizing (reusing) each piece separately.
The segment idea can be built on top of the page/frame concept above by demanding that segments be a whole number of pages. That way, we retain the advantages of the page system. Segmentation is an additional overhead which relates to the sharing of logical memory between processes; the page overhead relates to the mapping of logical to physical addresses. Memory addressing with segments is like plotting points in a plane with coordinates: addresses are written (segment, offset).

Figure 5.3: The UNIX process model, showing the various segments used by each process. The stack contains all local (automatic) variables and the heap is allocated by malloc().

5.1.8 The malloc() function

The operator new, which dynamically allocates memory, is a wrapper for the C library function malloc(): when we use new, the compiler translates this into a call to malloc(). As an example, let's ask what happens when we call the function malloc().
The function is used to obtain a pointer to (the address of) a block of memory n bytes long:

pointer = malloc(n);

malloc is part of the standard C library on any system, but we shall only be concerned with how it is implemented in BSD UNIX. malloc obtains logical memory for the caller; the acquisition of physical memory is taken care of by the system on behalf of malloc, by deeper level kernel commands.

Figure 5.4: Levels of mapping from user allocation to physical memory.

When malloc is called, it checks to see if the data segment of the current process has sufficient free bytes. If the space already exists within the pages already allocated to the process, malloc uses this space and updates the free-memory list. If there is not sufficient space, malloc makes a call to the brk() function, which tries to extend the size of the data segment. Since the smallest unit of memory is a page, malloc must normally acquire too much memory; the remainder of the page is then free and is added to the free-memory list. The next time malloc is called, it first tries to use the remainder of the last allocated page. Since the segment always consists of a whole number of pages, there is no conflict with the page mapping algorithm, and the fact that malloc divides up pages of logical memory is of no consequence to the memory management system, since each process maintains its own free memory list for the data segment.

5.1.9 Page size, fragmentation and alignment

The process of allocating memory is really only half the story of memory management: we must also be able to de-allocate or free memory. When memory is freed from a segment, it leaves a hole of a certain size, which is added to the free memory list. Eventually the number of these holes grows quite large and the memory is said to become fragmented. We would clearly like to re-use freed memory as far as possible, but if the holes are not big enough to fit the data we need to allocate then this is not possible. Fragmentation can lead to wasted resources.

Another technical problem which leads to fragmentation and wastage is alignment. Alignment is a technical problem associated with the word-size and design of the CPU. Certain memory objects (variables) have to be stored starting from a particular (usually even) address. This is because the multiple-byte registers of the CPU need to align their ``footprints'' to the addresses of the memory: the CPU regards addresses as being effectively multiples of the word-size. In order to meet this requirement, memory sometimes has to be `padded' out with empty bytes, which are therefore wasted. The most extreme example would be the allocation of one char variable (one single byte), for which a whole aligned word may effectively be reserved.
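The padding effect is easy to demonstrate in C: the compiler itself inserts the wasted bytes so that the double begins on an aligned address. The struct here is purely illustrative:

```c
#include <stddef.h>

/* A char takes one byte, but the double that follows it must start on
   an aligned boundary, so the compiler pads the gap with unused bytes.
   sizeof(struct padded) is therefore larger than the sum of the
   members' sizes on virtually all machines. */
struct padded {
    char   c;    /* 1 byte of data                   */
    double d;    /* typically needs 8-byte alignment */
};

/* Bytes lost to padding inside the structure. */
static size_t wasted_bytes(void)
{
    return sizeof(struct padded) - (sizeof(char) + sizeof(double));
}
```

On a machine where double is 8-byte aligned, 7 bytes are wasted after the char; this is exactly the padding described above.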
Fragmentation occurs at two levels:

Internal fragmentation. This is space wasted by malloc in trying to fit data into a segment (logical memory). Internal fragmentation happens inside segments of logical memory when programs like malloc divide up the segment space.

External fragmentation. This is space lying between segments in the physical memory. External fragmentation occurs in the mapping of logical segments to physical segments when there are gaps between the segments in physical memory. (There are never holes between segments in logical memory, since we can always just renumber the logical addresses to remove them - they are not real anyway.) See the figure below.

Figure 5.5: Fragmentation occurs because we allocate memory in blocks of different sizes and then free the blocks. Fragments are shown as the white gaps between allocated objects.

Note that external fragmentation is formally eliminated by the page concept: external fragmentation is cured by only mapping pages, as in figure 5.4. With pages, every object in physical memory is always the size of a page or frame, and every hole must also be the size of a page, and thus one is guaranteed to be able to fit a page block into a page hole. To some extent this is a cheat though, because the problem is only transferred from external to internal fragmentation - but such is the nature of definitions. Internal fragmentation can be minimized by choosing a smaller page size for the system: on average, fewer bytes will be wasted per page. Of course, the system overhead grows larger as the page size is reduced, so as usual the size of pages is a tradeoff between two competing requirements.

At the user level, it is possible to avoid some of the fragmentation problem when writing programs. If a program only allocates memory in fixed size structures (like C's struct and union variable types), then every hole will be the same size as every new object created and (as with pages) it will always be possible to fit new data into old holes. Unions were designed for precisely this kind of purpose. If, on the other hand, a program allocates and frees memory objects of random sizes, it will be a random issue whether or not the holes left over can be used again. This is a program design consideration.

5.1.10 Reclaiming fragmented memory (Tetris!)

There are two strategies for reclaiming fragmented memory:

1. Try to fit data into the holes that already exist.
2. Reorganize the data so that all the holes are regrouped into one large hole.

The second alternative clearly represents a large system overhead and is seldom used. The first method can be implemented in one of three ways. Given a free-list of available holes, one may choose a space on the basis of:

First fit. Choose the first hole which will do.
Best fit. Choose the smallest hole that will do.
Worst fit. Choose the largest hole (which in some screwy sense leaves the biggest remainder - for what it's worth).

The criteria are i) minimization of fragmentation, and ii) minimization of the allocation overhead. The first two are preferable, but neither works best in all cases. First fit is perhaps preferable, since it is fastest, sacrificing utilization of space for speed.
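The three placement policies are easy to compare in code. A hedged sketch in C, with the free-list reduced to an array of holes (real allocators thread a linked list through the free memory itself):

```c
#include <stddef.h>

/* A hole in the free list: start address and size, in bytes. */
struct hole { size_t start; size_t size; };

/* First fit: return the index of the first hole big enough, or -1. */
static int first_fit(const struct hole *h, int n, size_t request)
{
    for (int i = 0; i < n; i++)
        if (h[i].size >= request)
            return i;
    return -1;
}

/* Best fit: return the index of the smallest hole big enough, or -1. */
static int best_fit(const struct hole *h, int n, size_t request)
{
    int best = -1;
    for (int i = 0; i < n; i++)
        if (h[i].size >= request && (best < 0 || h[i].size < h[best].size))
            best = i;
    return best;
}

/* Worst fit: return the index of the largest hole big enough, or -1. */
static int worst_fit(const struct hole *h, int n, size_t request)
{
    int worst = -1;
    for (int i = 0; i < n; i++)
        if (h[i].size >= request && (worst < 0 || h[i].size > h[worst].size))
            worst = i;
    return worst;
}
```

first_fit stops at the first match, which is why it is the fastest of the three; best_fit and worst_fit must always scan the whole list.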
5.2 Virtual Memory

5.2.1 Paging and Swapping

Virtual memory is a way of making the physical memory of a computer system effectively larger than it really is. Rather than using mirrors, the system does this by determining which parts of its memory are often sitting idle, and then makes a command decision to empty their contents onto a disk, thereby freeing up useful RAM. As we noted earlier, it is quite seldom that every byte of every program is in use all of the time. More often, programs are large and contain sections of code which are visited rarely, if ever at all, by the majority of users - so if they are not used, why keep them in RAM? Of course, the simplest way to clear a space in RAM is to terminate some processes, but virtual memory is more subtle than that. The idea is to free RAM only temporarily, with the intention of copying the data back again later. All of this should happen in such a way that the users of the system do not realize that it is happening. Virtual memory uses two methods to free up RAM when needed:

Swapping. An entire process, including code segment and data segments, is expunged from the system memory.

Paging. Only single pages are swapped out.

Early versions of UNIX used swapping exclusively when RAM was in short supply. Since BSD 4.3, all systems which have learned something from the BSD project use paging as their main method of virtual memory implementation. In the system 5 based HPUX operating system, both methods are available.

Swapping and paging dump the system memory in special disk caches. Normally these disk areas are not part of the usual file system structure, since the overhead of maintaining a file system is inappropriate when only the system needs to use the disk; instead, the system stores swap files in large contiguous blocks. In UNIX, normally a whole disk partition (see next section) is reserved for swapping and paging by the kernel. (This is called the swap partition for historical reasons.) On BSD systems, the normal swap area is invisible to the user. Some systems also allow swapping to a special file in the normal filesystem, which has a reserved size. If this fails to provide enough space, under SunOS the system administrator can either add other partitions, or use the mkfile command to create a swap file on a normal filesystem, in a part of the file system where there is sufficient space. If the system goes short, additional swap space can simply be grabbed from some part of the filesystem. Eventually this can lead to a paradoxical situation in which the user sees nothing on the disk, but the OS declares that the disk is full!

5.2.2 Demand Paging - Lazy evaluation

You might ask: if a program has a lot of pages which do not get used, what is the purpose of loading them in the first place and then swapping them out? One could simply make a rule that no page should be brought into memory until it were needed. A lazy pager never brings a page back into memory until it has to, i.e. until someone wants to use it. Another name for this is demand paging, since it occurs only on demand from user processes; it is an example of what is called lazy evaluation. Moreover, if we discover that large parts of a program are never used, we can page them out and never bother to page them in again - this can save a considerable amount of I/O time.

Such a scheme is possible, but few systems allow a program to run if it cannot be loaded fully into memory on start-up. One argument against this extreme form of paging is that it could be dangerous to start a program which was unable to complete because it was too large to run on the system under the conditions of the moment. If it started to run and then crashed or exited, it could compromise important data. On the other hand, if a program can be loaded in, it is most likely safe. (The BSD UNIX system allocates sufficient space in its swap area to swap or page out each entire process as it begins, so that none of them will ever run out of swap during execution.)
The most important aspect of paging is that pages can still be accessed even though they are physically in secondary storage (the disk). It is now easy to see how the paging concept goes hand in hand with the logical memory concept: each time the system pages out a frame of physical memory, it sets a flag in the page table next to the logical page that was removed. If a process attempts to read from that page of logical memory, the system first examines the flag to see if the page is available and, if it is not, a page fault occurs.

A page fault is a hardware or software interrupt (depending on implementation) which passes control to the operating system. The OS proceeds to locate the missing page in the swap area and move it back into a free frame of physical memory. It then binds the addresses by updating the paging table and, when control returns to the waiting process, the missing page is automagically restored, as if it had never been gone. Notice that the location of the physical frame is completely irrelevant to the user process: a frame does not have to be moved back into the same place that it was removed from, because the runtime binding of addresses takes care of its relocation.

Before paging was introduced, the only way that memory segments could increase their size was to

1. Try to look for free memory at the end of the current segment and add it to the current segment.
2. Try to allocate a new, larger segment, copy the data to the new segment and deallocate the old one.
3. Swap out the process, reallocate and swap in again.

From this it is clear that a process might be swapped out while it is waiting for a suitable hole to appear in the memory. This might take a long time and it might be immediate.

In modernized versions of UNIX, such as the Solaris systems by Sun Microsystems, swapping is optimized. Read-only pages from the code segment are thrown away when they are selected for swap out, and are read in from the filesystem if needed again. By copying read-only segments to the swap area at load time, the running overhead of paging out read-only data is removed, since the data are always where we need them in swap space and never change. Also, data pages are only allocated swap space when they are forced out of physical memory. These optimizations reflect the fact that modern systems have more physical memory than previously; also, disks are getting faster.

5.2.3 Swapping and paging algorithms

How does the system decide what pages or processes to swap out? This is another problem in scheduling: a multitude of schemes is available, and here we shall only consider some examples. The success or failure of virtual memory rests on its ability to make page replacement decisions. Certain facts might influence these algorithms. For instance, if a process is receiving I/O from a device, it would be foolish to page it out, so it would probably be I/O-locked into RAM. Another case for swapping out a job is if it has been idle (sleeping) for a long time.

Consider the UNIX system a moment. On a BSD-like UNIX system, the first three processes to be started are the swapper, init and the pagedaemon. It is the pagedaemon which makes the paging decisions: it is responsible for examining the state of the page-table and deciding which pages are to be moved to disk. Normally the swapper will not swap out processes unless they have been sleeping for a long time, because the pager will first strip them of their inactive pages. It will begin to swap out processes, however, if the average load on the system is very high. (The load average is a number based on the kernel's own internal accounting and is supposed to reflect the state of system activity.) This gives `cheap' processes a chance to establish themselves.

Let us now look more generally at how paging decisions are made. We can load in pages until the physical memory is full; thereafter, we have to move out pages. Suppose a page fault occurs and there are no free frames into which the relevant data can be loaded. Then the OS must select a victim: it must choose a frame and free it so that the new faulted page can be read in. This is called (obviously) page replacement. Here are some viable alternatives for page replacement.

5.2.3.1 FIFO - first in first out

Consider the figure below. Here we see the frames in the physical memory of a paging system. The memory is rather small so that we can illustrate the principles of contention for pages most clearly. The simplest way of replacing frames is to keep track of their age (by storing their age in the frame table). This could either be the date, as recorded by the system clock, or a sequential counter. When a new page fault occurs, the page which has been in memory longest is selected as the first to go.

Figure 5.6: Illustration of the FIFO page replacement scheme.
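The FIFO scheme is simple enough to simulate in a few lines of C. The sketch below counts the page faults produced by a reference string; the 16-frame cap is an assumption of this sketch only:

```c
/* Count page faults under FIFO replacement for a reference string. */
static int fifo_faults(const int *refs, int nrefs, int nframes)
{
    int frames[16];               /* sketch only: assumes nframes <= 16 */
    int oldest = 0;               /* index of the longest-resident page */
    int used = 0, faults = 0;

    for (int r = 0; r < nrefs; r++) {
        int hit = 0;
        for (int i = 0; i < used; i++)
            if (frames[i] == refs[r]) { hit = 1; break; }
        if (hit)
            continue;             /* page already resident: no fault */
        faults++;
        if (used < nframes) {
            frames[used++] = refs[r];        /* a frame is still free */
        } else {
            frames[oldest] = refs[r];        /* evict the oldest page */
            oldest = (oldest + 1) % nframes;
        }
    }
    return faults;
}
```

Running it on the classic reference string 1,2,3,4,1,2,5,1,2,3,4,5 gives 9 faults with three frames but 10 with four: adding memory can make FIFO worse (Belady's anomaly), which is one well-known weakness of the scheme.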
This algorithm has the advantage of being very straightforward, but its performance can suffer if a page is in heavy use for a long period of time: such a page would be selected even though it was still in heavy use.

5.2.3.2 Second chance

A simple optimization we can add to the FIFO algorithm is the following. Suppose we keep a reference bit for each page in the page table. Every time the memory management unit accesses a page, it sets that bit to 1. When a page fault occurs, the page replacement algorithm looks at that bit and, if it is set to 1, sets it to 0 but jumps over the page and looks for another. The idea is that pages which are frequently used will have their bits set often and will therefore not get paged out. As with all good ideas, this one has its price: the testing incurs an overhead. In the extreme case that all pages are in heavy use, the page algorithm must cycle through all the pages setting their bits to zero before finding the original page again. Even then it might not find a page to replace, if a bit was set again while it was looking through the others. In such a case, the algorithm uses FIFO. The UNIX pagedaemon uses this approach.

5.2.3.3 LRU - least recently used

The best possible solution to paging would be to replace the page that will not be used for the longest period of time - but unfortunately, the system has no way of knowing what that is. A kind of compromise solution is to replace the page which has not been used for the longest period (see the figure below). Unlike the FIFO scheme above, this does not require a crystal ball, but it costs the system quite a lot to implement. Two possibilities for such an implementation are the following:

1. We record the time at which each page was last referenced. This means that we have to update the time-stamp every single time memory is referenced, instead of only each time a page is replaced. If the updating operation takes, say, five CPU instructions (jump to update routine, locate page table entry, load system clock time, store system clock time, return), this means, roughly speaking, that the system is slowed down by a factor of around five. Compared to memory speeds, this is an unacceptable loss, so unless the memory management unit can do something fancy in hardware, this scheme is not worth the system's time.

2. We keep a stack of page addresses, so that the page number of the most recently accessed page is always on the top of the stack. The page replacement algorithm then never has to search for a replacement - it just looks on top of the stack - but it still results in a large system overhead to maintain the stack: we must update a data structure which requires process synchronization, and therefore waiting. Again, without special hardware, this is not economical.

Figure 5.7: LRU page replacement algorithm.

In practice, many systems use something like the second-chance algorithm above.

5.2.4 Thrashing

Swapping and paging can lead to quite a large system overhead. Compared to memory speeds, disk access is quite slow, and, in spite of optimized disk access for the swap area, these operations delay the system markedly. In the extreme case the system spends all of its time shuffling pages to and from the disk, and the paging system simply fails to deliver. Consider the sequence of events which takes place when a page fault occurs:
1. Interrupt / trap and pass control to the system interrupt handler.
2. Save the process control block.
3. Determine the cause of the interrupt - a page fault.
4. Consult the MMU - is the logical address given inside the process' segment, i.e. legal?
5. Look for a free frame in the frame table. If none is found, free one.
6. Schedule the disk operation to copy the required page and put the process into the waiting state.
7. Interrupt from disk signals end of waiting.
8. Update the page table and schedule the process for running.
9. (On scheduling) restore the process control block and resume executing the instruction that was interrupted.
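Step 5 above is where page replacement happens. The second-chance scheme described earlier can be sketched in C as a circular scan over the reference bits (in real hardware the MMU sets the bits; the array and names here are illustrative only):

```c
#define NFRAMES 4                 /* tiny memory, for illustration */

static int ref_bit[NFRAMES];      /* set on access; cleared by the scan */
static int hand;                  /* next candidate frame for eviction  */

/* Scan frames in FIFO order.  A frame whose bit is set gets a second
   chance: its bit is cleared and the scan moves on.  If every bit is
   set, one full cycle clears them all and the scan degenerates to
   plain FIFO, so the loop always terminates. */
static int choose_victim(void)
{
    for (;;) {
        if (ref_bit[hand] == 0) {
            int victim = hand;
            hand = (hand + 1) % NFRAMES;
            return victim;
        }
        ref_bit[hand] = 0;        /* second chance used up */
        hand = (hand + 1) % NFRAMES;
    }
}
```

Frames in recent use survive the first pass; a frame is only evicted once the hand comes round and finds its bit still clear.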
5.3 Disks: secondary storage.
5.3.1 Physical structure.
Figure 5.8: Hard disks and floppy disks.
All the tracks that lie under the read/write heads at one arm position form a cylinder, which can be read without any head movement. A disk with several surfaces, and a head for each surface, can therefore read, per arm position, as many sectors as (number of surfaces x sectors-per-track).
5.3.2 Device drivers and IDs.
5.3.3 Checking data consistency and formatting
During formatting, each sector is:
1. (if necessary) created by setting out `signposts' along the tracks, and
2. labelled with an address, so that the disk controller knows when it has found the correct sector.
5.3.4 Scheduling.
5.3.4.1 FCFS.
5.3.4.2 SSTF - Shortest seek time first.
5.3.4.3 SCAN, C-SCAN and LOOK.
Figure 5.9: Scanning disk scheduling algorithms.
5.3.4.4 Which method?
The choice of scheduling algorithm depends on the nature of disk usage. For heavily used disks the SCAN / LOOK algorithms are well suited, because they take care of the hardware and access requests in a reasonable order. There is no real danger of starvation, especially in C-SCAN.
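For comparison with SCAN, shortest-seek-time-first is straightforward to sketch in C: repeatedly serve the pending request nearest the current head position. The 64-request cap is an assumption of this sketch only:

```c
#include <stdlib.h>   /* abs() */

/* Serve requests in SSTF order.  `order` receives the cylinder numbers
   in the order served; the return value is the total head movement. */
static int sstf(int head, const int *req, int n, int *order)
{
    int served[64] = {0};         /* sketch only: assumes n <= 64 */
    int total = 0;

    for (int k = 0; k < n; k++) {
        int best = -1;
        for (int i = 0; i < n; i++)
            if (!served[i] &&
                (best < 0 || abs(req[i] - head) < abs(req[best] - head)))
                best = i;
        total += abs(req[best] - head);
        head = req[best];         /* move the head to the chosen request */
        order[k] = req[best];
        served[best] = 1;
    }
    return total;
}
```

Note the starvation risk this makes visible: a steady stream of nearby requests can postpone a distant one indefinitely, which is precisely what the scanning algorithms avoid.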
5.3.5 Partitions
For the purposes of isolating special areas of the disk, most operating systems allow the disk surface to be divided into partitions. A partition (also called a cylinder group) is just that: a group of cylinders which lie next to each other. By defining partitions we divide up the storage of data to special areas, for convenience. For instance, it is quite normal to keep the system software in one partition and user data in another partition. That way, when one makes a back-up of the disk, user data can easily be kept separate from system data; the separation becomes a hardware matter. Partitions are supported on MS-DOS, Macintosh, BSD UNIX, AmigaDOS etc. Remarkably, there are versions of system 5 UNIX which do not support partitions. BSD UNIX partitions are a good example, and since we are focussing on UNIX we shall discuss its partitions in more detail.

BSD UNIX uses a special convention for the partitions on each disk. Each disk may have up to eight logical partitions which are labelled from a to h:

Partition  Usage
a          root and boot partition
b          swap partition
c          the whole disk
d          anything
e          anything
f          anything
g          anything
h          anything

Each partition is assigned a separate logical device, and each device can only write to the cylinders which are defined as being its own. Since partitions are defined for convenience, it does not matter that they overlap: they are just limits. Thus, if we read from logical device c, which is defined as the whole disk, we could in principle read from the whole disk, whereas if we use logical device b we may only read from the swap partition. What is important is that the filesystems on two partitions do not overlap! This is extremely important: if two filesystems overlap, they will destroy each other!

To use a partition we have to create a filesystem on it. In UNIX, the newfs command is used to create a filesystem. This involves reserving workspace for the operating system and suitable markers for navigating over the surface of the disk. Once a partition has been created, it has to be mounted in order to be reachable from the directory structure of the filesystem; the mount action is analogous to the opening of a file. A prerequisite for mounting a UNIX partition is that the partition must contain a filesystem. In UNIX a partition is mounted using the command mount. For example, a command like

mount /dev/sd0g /user-data

would mount partition g on disk number zero onto the directory /user-data. The result would be that all files on that partition would appear under the directory /user-data.

In each case, a new partition becomes a logical device and is given a logical device name which identifies the disk. In Sun's SunOS and Solaris operating systems, a special command format is used to make partitions. Under AmigaDOS, partitions are created by editing a table which is downloaded into the device driver. On the Macintosh and Amiga operating systems, new disks are immediately sensed by the system and are mounted. In the Macintosh case (which has only a pictorial graphic user interface), new partitions or disks are mounted on the desktop at the root level. On the Amiga, if the Workbench (graphical user interface) is running, the disks appear together with their device names on the workbench in the same way as on the Macintosh; otherwise they appear in the mountlist.

5.3.6 Stripes

In recent years some UNIX systems (particularly Hewlett Packard) have experimented with disk striping. Disk striping is a way of increasing the disk transfer rate by splitting files across several disks: instead of saving all the data from a given file on one disk, the file is split across many. Since the heads can now search independently, the speed of transfer is, in principle, increased manifold.

Figure 5.10: Disk striping: files are spread in parallel over several disks.
The disadvantage with disk striping is that, if one of the disks becomes damaged, then the data on all of the disks is lost. Thus striping needs to be combined with a reliable form of backup in order to be successful.

5.4 Disk Filesystems

A filesystem is a high level interface to the disk, which allows users of a system to give names to files, organize files in directories and separate off special areas using partitions. A filesystem is said to be created on a disk by running a special program. On many systems this is identified with formatting the disk, and involves writing address data to the surface as well as reserving system workspace on the disk.

5.4.1 Hierarchical filesystems and links

The most popular type of filesystem interface is the hierarchical one. The hierarchical file structure is a very convenient way of organizing data in directories, sub-directories and so on. Earlier operating systems like MTS did not have a directory structure: each user had a separate login area, but the login area was not able to hold subdirectories.

But this rigid preoccupation with a hierarchical ordering is not always the most appropriate one. Suppose we are in the directory /usr/local/mysoftware, which contains a complete package of software that we have obtained, in all of its sub-directories. Since the package is a unit, we would like to keep all of its files together and preserve that unity - but it might also be necessary for some of the files in the package to be installed in special places, elsewhere in the file tree. For example, the executable binaries might have to be placed in /usr/local/bin, and some configuration files for the system might have to be placed in a special directory where the operating system can find them. The conflict of interest can be solved by introducing links.

Links are objects which appear in the file system and look just like files. In fact they are pointers to other files which are elsewhere in the strict hierarchy. A link is not a copy of a file; it is an alias for the true route to the file through the hierarchical system, but for all intents and purposes it looks like another instantiation of the file. Links enable one file to appear to exist at two or more places at the same time. Look at the diagram below: /usr/local/bin/prog.exe is a link to /usr/local/mysoftware/prog.exe, and /local is a link to /usr/local. When links jump across different branches of the file tree, the directory structure is sometimes called an acyclic graph.

Figure 5.11: Links - deviations from a strict hierarchical filesystem.

The UNIX file system makes a distinction between hard links and symbolic links. A symbolic link is literally just a small file which contains the name of the true file. We can create a symbolic link to a file which does not exist, and delete the file to which a symbolic link points. A hard link is more permanent, however: in order to delete a file with hard links, all of the hard links must be removed. This requires a list of links to be associated with each file. The special files `.' and `..' are hard links to their parent directories. The Macintosh filesystem refers to such links as `aliases'.

5.4.2 File types and device nodes

Extremely elaborate filesystem interfaces can be made, which distinguish between different types of file and which permit or disallow certain operations on the basis of file type. MS-DOS distinguishes file types by using filename extensions like .EXE, .COM and .TXT for executable files, relocatable executables and textfiles. The Macintosh operating system records whether files are executable or text files: clicking on an executable file loads and runs the program, whilst clicking on an application's file loads the application which created it and then tells the program to load that file.
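The rule that a file survives until its last hard link is removed can be modelled with a simple reference count, which is essentially the bookkeeping the filesystem does in the file's index node. A toy sketch, not real kernel code:

```c
/* Toy inode: the link count stands in for the list of hard links
   that the filesystem associates with each file. */
struct toy_inode {
    int nlink;    /* number of hard links naming this file */
    int alive;    /* 1 while the data is still reachable   */
};

static void add_link(struct toy_inode *ino)
{
    ino->nlink++;                 /* a new name for the same data */
}

static void remove_link(struct toy_inode *ino)
{
    if (--ino->nlink == 0)
        ino->alive = 0;           /* last name gone: data reclaimed */
}
```

A symbolic link would not touch nlink at all, which is why it can point at a file that no longer exists.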
The UNIX system does not make any particular distinction on the basis of filenames; a filename suffix may follow a convention, but this is not used by the system. Instead it keeps a flag to say whether a file is executable or not. If a file is marked executable, UNIX will try to execute the file. Executable binary files must conform to a certain protocol structure which identifies them to the operating system as being fit for execution. If the file is not a valid binary program, UNIX will try to pass the lines of the file to the command line interpreter or shell. If a file is not marked as executable, UNIX will not try to run it, and an attempt to execute it will fail. For user convenience, the file command attempts to guess the contents of UNIX files.

Certain files in the UNIX operating system are not really files at all but `handles' to devices. They are called device nodes. A device node is a way `into' a device through a filesystem interface. It is convenient to be able to use normal filing commands to access devices. Not all devices can be accessed in this way, but the interface is useful for those that can. In the Solaris 2 operating system, the kernel process list is represented as such a directory of pseudo-files.

5.4.3 Permissions and access

On multi-user systems we must have a mechanism to prevent one user from freely modifying the files of another user - while at the same time keeping enough flexibility to enable groups of users to share certain files. Files must be readable and/or writable only to very specific users. It is also advantageous to be able to lock files so that they cannot be deleted, even by their owner. This is normally done by giving files permissions or protection bits. Here we shall briefly sketch out the simple system used by UNIX as an example.

Each file has one owner and belongs to one group. When a new file is created by a given user, that user is automatically the owner of the file. The group ownership is determined differently for BSD and system 5 UNIX. In BSD, the file inherits the group ownership from the directory it is created in. In system 5, the group is normally set to a default group for that user, called the login group. (This can also be arranged in BSD by setting the `sticky bit'.) The owner of the file is the only one (apart from the system administrator) who can decide whether others can read or write to the file and to which group it belongs. If the owner wishes, he or she may open the file for reading or writing to i) the other members of the group to which the file belongs, or ii) anyone. Since only the system administrator can add users to a group, the file is secure, provided the user sets the correct protection bits.

More modern UNIX systems and other operating systems now provide access control lists or ACLs, consisting of lists of users who are allowed or disallowed access to a file. This generalizes the notion of file owner and group by allowing a file to be accessible to a named list of users and a named list of groups, rather than just a single user or a single group. ACLs were first introduced in the DOMAIN operating system by Apollo and were later adopted by HPUX and then Solaris. Novell systems (based on Apollo NCS) also provide ACLs. Modern filesystems like NFS 3, AFS and DFS also provide ACL support, but there is currently no standard implementation and the different systems are not compatible.

5.4.4 File system protocols

To read or write to a file, all operating systems require that users formally open the file. When finished, they must close the file. This formal procedure has several purposes. It allows us to

1. see whether the file is inaccessible because we do not have permission to open it,
2. see whether the file is inaccessible because it is being used by another user,
3. obtain pointers to where the file exists physically within the secondary storage, and set up a data structure called a filehandle which the system will use to describe the state of the file as we use it,
4. set up any cached data which might be used by the OS.

When we open a file for writing, a lock is placed on the file to prevent others from writing to it simultaneously. This lock is removed by the close operation.

Once a file is open, the system must present the user with a consistent picture of the filesystem. When a user program reads lines from a file, a pointer should be advanced so that every line is read exactly once. An end of file condition should be signalled when the file has been read to the end (this is usually achieved by storing an EOF character at the end of the file) etc. These are all aspects of an agreed protocol defined by the filesystem.

A more complex situation is the following. Suppose one user is reading a file and another user wants to write to it.
1. Should the user be allowed to write to the file while someone else is reading it?
2. If so, should the reader be able to see the changes made to the file before the writer has closed it?

There are two possibilities: either all users see changes immediately, or only users opening the file after the changes were made see the changes. Both versions of this are in use by different filesystem implementations. In the latter case, the OS has to keep several copies of the file until all file handles are released and everyone agrees about the contents of the file. It is difficult to say that one or the other type of behaviour is more correct; this is largely a subjective issue. What is important is that the filesystem defines its behaviour and sticks to it consistently. The behaviour of the filesystem is often called filesystem semantics.

5.4.5 Filesystem implementation and storage

Although a sector is the smallest unit of allocation for the physical disk, most filesystems create logical structures on top of sectors in order to optimize disk access. These are called blocks. A block can in principle be any size; usually they are from 512 bytes (the same size as a sector) up to 8kB. The larger the block size, the more efficient file transfers will be, since we can read or write a file in fewer operations if the block size is large. On the other hand, if we want to save a file which is only three bytes long, we normally have to allocate a whole block, and the remainder of the block is wasted. So we know that large blocks are more efficient for transfer, but we have a much better chance of being able to fit files into the spaces on a disk if we can allocate space in small blocks.

Some systems, notably BSD UNIX's ufs filesystem, solve this problem by using two block sizes: major blocks and fragments. A file is allocated in large blocks, except for the last one, which is allocated as a fragment. Typical sizes for large blocks are 4kB to 8kB, and a typical size for fragments is from 512 bytes to 1kB (eighths). (Figure 5.12: Blocks and fragments.)

We must now address the issue of how the blocks are allocated. To use the space on a disk, we must make a choice about whether we wish files to be stored contiguously, or whether we wish to use a scheme of logical and physical addresses, as we did in primary memory, and allow files to be spread liberally anywhere on the disk. This is the analogous problem to that of memory allocation in RAM. The principal difference is that disk memory is considerably larger than primary memory, so problems can be encountered in addressing all of the blocks on the disk. The problem with contiguous allocation is, of course, fragmentation, and contiguous allocation is seldom used (except in the swap area) for filesystems because of it. Instead, files are divided up into blocks, and each file consists of a list of blocks which may be scattered anywhere on the disk. Our problem is then to identify files amongst all of the blocks, so a way of storing the pointers must be found. We shall briefly mention the general strategies below, and then look more closely at the UNIX ufs filesystem. There are three ways of doing this:

1. Linked lists. Each block of data includes a pointer to the next block of data in a linked list. The difficulty with this method is that each block must be read in the correct order, and the blocks might be spread randomly over the disk. Thus the retrieval of a file could require a lot of disk head movement, which is slow.

2. Linked table. A linked list of blocks for each file is stored in a file allocation table. All of the pointers for every file are collected together in one place. This table could also be cached in RAM for faster access. This method is used by MS-DOS and a number of other microcomputer operating systems.

3. Indexing. Each file has an index containing a list of blocks which contain the file itself. This index might be stored anywhere in principle; space for it is normally allocated on the disk itself, partly inside reserved disk blocks, and partly inside an index table which is built when the filesystem is created. The index blocks are grouped in one place for convenient access. This system is used in UNIX (from release 4). If the list is small and is held in a filesystem block, then most of the block will be wasted. This is a drawback of the index method, but the main advantage of this method is that it has few limitations.
5.4.6 The UNIX ufs filesystem

A file system under UNIX is created using the newfs command. A separate filesystem must be created on each separate partition of the disk. (Partition `a' on disk zero is special: this is the default boot device. On power up, the boot program in ROM looks to the first few sectors of this partition for a boot block.) To define a filesystem we have to define the blocksize and numerous other parameters. Each system has its own defaults, which inexperienced users - and most often experienced users too - are wise to use.

Two structures are created when a file system is created: inodes and superblocks. Both of these objects are information structures, in the sense of the C language, and they are defined under the /usr/include/ufs directory in the files fs.h and inode.h. It is instructive to look at these files. This is where the default blocksize etc. will be defined on your system!

The first sectors contain the boot-block; sector 16 marks the start of the superblock. A superblock contains the information on the boundaries of the partition (cylinder groups) and information about where the inode table is and where data blocks start. If the superblock is lost or damaged, the whole filesystem would be unreadable. It is so important that, when a file system is created, superblock backups are made at regular intervals throughout a partition. Thus if one block is destroyed, another can be used to repair the damage. The UNIX filesystem check program fsck can do this. (UNIX uses buffered and cached I/O, so data are not always written to the filesystem immediately; the program sync is run by the OS at regular intervals in order to synchronize the disk structure with the present state of the cache.) If the system crashes or goes down without synchronizing the filesystems, the superblock will be invalid and will have to be repaired, so fsck is run automatically on every boot of the system in case the system went down uncleanly.

An inode, or index node, is the data structure which holds the specific data about a particular file. Inodes are the most important objects in the filesystem. Regardless of how large a file is, there is exactly one inode per file. When a file system is created, it creates a fixed number of inodes. It is not possible to create more files on the system than the number of inodes, so a limit is built into each file system. Usually the limit is no problem in practice - and anyway, it can always be changed by changing the parameters given to newfs. Inodes which are not in use are kept in a doubly linked list called the free-list.

Each inode contains a plethora of information about the file: the device on which the file resides, the user id and group id of the owner, the type of file and its protection bits, timestamps indicating the last time the file was written to etc., the size of the file and, of course, pointers to the actual blocks of data. The elements of an inode are drawn in the figure (Figure 5.13: UNIX inodes).

Data blocks are (of course) addressed by indexing. The blocksize is variable, but a minimum block size of 4096 bytes, i.e. 4kB, is stipulated so that the system can address about 2^32 bytes without using three-level indirection (see below). As an attempt at optimizing the index, inodes use three separate ways of addressing data on the disk (in fact four different ways are built in to the inodes, but only three are used).

Direct addressing: the inode contains a list of twelve 32-bit pointers to blocks on the disk - the pointers to the real data. Each pointer can point to 4096 bytes (4kB), thus we have space for 48kB per file. For small files this would be enough. For larger files, a system of indirect addressing is used. There are three levels of indirect addressing, though only two are used currently.

In single-indirect addressing, the inode has a pointer which points to a file block (not another inode). This file block has room for 4kB at least. Those 4kB are used to store a sequential array of 32-bit pointers to other data blocks which contain the true data. Since the minimum blocksize is 4kB, we have space for 1024 four-byte pointers in the address block, and these pointers can address up to 1024 x 4kB, i.e. 4MB. This must then be added to the 48kB of direct pointers.

In double-indirect addressing, the inode pointer points to a block of pointers (as before), but now these pointers point to blocks which also contain pointers, since every fourth byte of the single-indirect memory above now forms a pointer to a block of 4kB. The total space accessible per file is thereby multiplied by 1024. This should, again, be added to the single-indirect and direct memory above. The total size is then about 2^32 bytes, which is roughly 4 gigabytes.

Also, as noted above, the last block of a file can be allocated as a fragment of a block, whose size is recorded in the inode. (It might be a half, a quarter or an eighth of a block.)

Filenames are stored in a directory structure, not in the inodes themselves, with pointers to the appropriate inodes for the start of each file.
Although the inodes could in principle span an address space which is larger than 2^32 bytes, internal pointers in the file structures are still 32 bit (except in 64-bit operating systems like OSF/1), and so a limitation of 2^32 bytes is imposed by the word size of the system hardware.

Exercises

1. What are the advantages and disadvantages of shared libraries?

2. Write programs to code the page replacement algorithms discussed above.

3. Write a device-driver program which moves the head of the disk according to the LOOK scheme. You can choose yourself whether you base it upon SCAN or CSCAN. Plot the head movement of your disk on the screen, using track number for the horizontal axis against time vertically. A disk drive contains a stepper motor which pushes the head one track at a time; time is measured in units of one head movement, one click of the stepper motor. The output should look something like the following:

   Track -> 1 2 3 4 5 6 7 8 9 ...
   [rows of asterisks, one per time step, tracing the head position; in the original plot the head sweeps over tracks 7 to 19]

Project

Write a program to model the behaviour of a hard disk. You can model the tracks and sectors of the disk as an array:

   const int tracks = 20;
   const int sectors_per_track = 20;
   const int heads = 2;
   const int bytes_per_sector = 64;

   char harddisk[tracks][sectors_per_track][heads][bytes_per_sector];

Why is this array not exactly like a disk? (Hint: think geometry, in view of what you have learned about logical, physical and virtual memory.) The disk array is just an array of characters. Note that you will have to design a `protocol' for saving the data into the array: if you want to save a file, you need to know which data correspond to which file. Hint: you might want to limit the filename size to, say, eight characters, like in DOS, to make the problem easier.

Suppose you have three files of data, two short and one long. Design a simple filesystem so that you can do the following:

1. Save the two short files onto your `virtual disk' from the real disk.
2. Delete the first file.
3. Save the longer file now, using the space that was freed when you deleted the shorter file.
4. Retrieve the files again, by name, as many times as you like. Make sure that you can retrieve the files by name.

Explain carefully how you locate files on your disk, and what scheme your filesystem uses to recover files in the correct order. Show how the head moves when you save and retrieve your files. Hint: use separate output files to print the result of the head movement and the result of retrieving a file. Go back and think about shared libraries.

6. Networks: Services and protocols

In this section we shall consider how to use the concepts we have considered so far to make the task of implementing network communication as straightforward as possible.

Consider a large company or a university with thousands of users, many of whom have workstations or terminals on their desks, all of whom are connected to a network. In this situation it is natural to share certain resources so that they can be accessed from anywhere on the network, without having to be reproduced on every machine:

 - The printer.
 - Disks holding user data.
 - User authentication data (password database).
 - Addresses and telephone numbers.
 - A reference clock which can be used to set the local clocks on all systems.

On the other hand, everyone wants their own private CPU. To some extent, this idea of sharing was the idea behind multi-user systems. Users demand more and more CPU power every day, and since computing power has generally increased, software has grown to absorb that power. What big multi-user mainframe machines have taught us, however, is that a single monolithic computer with terminals is not a good solution. A large machine with a hundred keyboards attached to it can quickly become overwhelmed by keyboard I/O: more and more programs are interactive, and the I/O overhead is much larger since mice and windows came along. Interactive I/O places a big load, proportional to the number of users, and a multi-user system, even if efficient on paper, can be spoiled in practice for most users by a few greedy users. The solution which is popular at present is to give everyone a smaller machine with their own CPU, keyboard and screen, linked together by a network: we spread the interactive I/O load over all the machines. Although perhaps wasteful in theoretical terms, in practice a private CPU is one of those little luxuries which improves the quality of life for those who have it, like owning a big car - and software soon grows to absorb the extra power, so it is not wasted for long. Not everyone can afford their own of everything, however - so we share.
6.1 Services: the client-server model

To share public resources on a network, we introduce the concept of a service. A service is simply a job done by one part of a system on behalf of another. The service is provided by a server, on behalf of a client. This is what is known as the client-server model (Figure 6.1: The client server model). The key idea is that there are always two elements: clients and servers. The client and the server need not be on the same machine when there is a network present; indeed, we would like to arrange things so that the server and the client might be anywhere - on the same machine or on different machines.

We have already encountered this kind of behaviour before, in connection with system calls. The system kernel is a kind of server, which provides I/O services for all the processes on a system. Also daemons, in the UNIX terminology, are servers which answer requests or perform some house-keeping on behalf of other processes.

We would like a flexible system for sending out requests for services into a network and getting an answer without having to know too much about where the services are coming from. We just want to be able to call some function DoService(myservice) and have the result performed by some invisible part of the system. To achieve these aims, we need:

 - Interprocess communication which works across machine boundaries.
 - Naming conventions for requesting services: services need to have names or numbers which identify them uniquely.
 - A way to allow machines to have public and private resources.
 - Multi-threaded network services, since several clients might request services simultaneously; we don't want to keep clients waiting.

These needs introduce a new overhead: the network software.

6.2 Communication and protocol

There are two ways of making a client-server pair. One is to use Berkeley sockets directly, and the other is to use RPC - a Remote Procedure Call software package. A `socket' is a communications link from one process to another. Opening a socket to another machine is like opening a file to read from or write to. Sockets work over a network using the internet protocol set (see next chapter). Data are transferred as streams or packets of raw, non-interpreted bytes; the interpretation of the data once they arrive at their destination is a problem for the user to deal with.

RPC, on the other hand, is a high level software package which works on top of sockets and allows programs to send typed data using a protocol known as XDR - external data representation. It also has high level tools called protocol compilers which help programmers to write the code to interpret data at both ends of a client-server connection. There are two main implementations of this software: Sun Microsystems' RPC and Apollo's NCS system. Most of the software was developed for the UNIX-like operating systems, but it has since been adapted to all the popular systems in use. All of the software runs on top of the TCP/IP network protocols, which we shall discuss in the next chapter.

6.3 Services and Ports

Services are a high level concept. To obtain a service, a system of `handles' is used. When we ask for a service, we do not request a file handle but a port. A port is a software concept - it should not be confused with the hardware connector which couples your machine to the network (which is also called a port on some systems). It is a number which an operating system uses to figure out which service a client wants. We say that a particular service lives at port xxx. Every computer, all over the world, has to agree on the port numbers for different services. Here is some important terminology:

Well-known ports. A well-known port is a port number which is reserved for a well-known service like ftp or telnet. It has been registered in a world-wide register.

RPC program numbers. The system of calling RPC services is different to normal services - it uses program numbers first, and works out port numbers for itself. Historically, we distinguish between services and RPC, although the effect of the two is the same.

6.4 UNIX client-server implementation

It is useful to describe how UNIX deals with services, since this is the model which has been adapted for other systems (though the terminology elsewhere is often different).

6.4.1 Socket based communication

To send data to a server using sockets, we need to know the port number at which the server lives. It is rather like opening a file - but now we want to open a service. Port numbers are listed in the file /etc/services, which looks like this:
   #
   # Network services, Internet style
   # This file is never consulted when the NIS are running
   #
   tcpmux          1/tcp                   # rfc-1078
   echo            7/tcp
   echo            7/udp
   ...
   ftp             21/tcp
   telnet          23/tcp
   smtp            25/tcp          mail
   time            37/tcp          timserver
   time            37/udp          timserver
   name            42/udp          nameserver
   whois           43/tcp          nicname         # usually to sri-nic
   domain          53/udp
   domain          53/tcp
   hostnames       101/tcp         hostname        # usually to sri-nic
   sunrpc          111/udp         rpc
   sunrpc          111/tcp         rpc
   login           513/tcp
   shell           514/tcp         cmd             # no passwords used
   printer         515/tcp         spooler         # line printer spooler
   courier         530/tcp         rpc             # experimental
   uucp            540/tcp         uucpd           # uucp daemon
   biff            512/udp         comsat
   who             513/udp         whod
   syslog          514/udp
   talk            517/udp
   route           520/udp         router routed
   ingreslock      1524/tcp
   bootpc          68/udp                          # boot program client
   bootp           67/udp          bootps          # boot program server

The file maps named services into port numbers and protocol types. The protocol type is also an agreed standard, which is defined in the file /etc/protocols:

   #
   # Internet (IP) protocols
   # This file is never consulted when the NIS are running
   #

In order to open a socket, we must know the name of the host on which the server lives as well as the port number. If we don't know this information in advance, we can send a broadcast request to all hosts, hoping that one of them will reply with the correct address (see next chapter).

Also, when a message arrives at a host which runs the server process, there are two possibilities:

1. The server process is always running.
2. The server process gets started when the request arrives.

If a server is expected to receive a lot of requests, it should be running all the time; if it spends long periods sleeping, it should probably be started only when a request arrives. Both methods are used in practice. The second of these possibilities is handled by yet another server called the internet daemon, inetd. This is a kind of public server which works on behalf of any service. inetd reads a configuration file called /etc/inetd.conf; see inetd.conf(5). Here are a few typical lines from this file:
   #
   # Internet services syntax:
   #  <service_name> <socket_type> <proto> <flags> <user> <server_pathname> <args>
   #
   # Ftp and telnet are standard Internet services.
   #
   ftp     stream  tcp  nowait  root  /usr/etc/in.ftpd     in.ftpd
   telnet  stream  tcp  nowait  root  /usr/etc/in.telnetd  in.telnetd
   #
   # Shell, login, exec, comsat and talk are BSD protocols.
   #
   shell   stream  tcp  nowait  root  /usr/etc/in.rshd     in.rshd
   login   stream  tcp  nowait  root  /usr/etc/in.rlogind  in.rlogind
   exec    stream  tcp  nowait  root  /usr/etc/in.rexecd   in.rexecd
   comsat  dgram   udp  wait    root  /usr/etc/in.comsat   in.comsat
   talk    dgram   udp  wait    root  /usr/etc/in.talkd    in.talkd

inetd listens on the network for service requests for all of the daemons which are in its configuration file. If such a request arrives, it starts the server for the duration of the request. Notice the field `wait' or `nowait'. This tells inetd what to do if another request arrives while one request is being processed: should it wait for the first request to finish (single threaded), or should it start several processes (multi-threaded) to handle the requests?

6.4.2 RPC services

In the RPC way of doing things, we call a service based on a program number. Each type of server program must have a unique program number (which must be obtained from Sun Microsystems), a procedure number and a version number. The program numbers are stored in /etc/rpc.

When an RPC server starts up on its host, it registers itself with the portmapper - yet another common server which must be consulted - telling it which port it is listening to and what program number it is using. The advantage of this scheme is that RPC applications do not have to run on well-known ports: a suitable free port can be found at start-up. When an RPC client wants a service, it sends a request to the portmapper on the server host, asking for a server which can deal with program number (service) xxx. The portmapper replies by giving the port on which the RPC server is listening. There is thus an extra step in the chain of events, compared with well-known ports.

The real benefit of the RPC packages is the high level concepts which they handle on behalf of the programmer. The protocol compilers and XDR protocols provide a set of `frequently needed subroutines' which enhance the system of communication across a network.

6.5 The telnet command

The telnet command does not only contact the well-known telnet port: it can also be used to send a message to any port. For example, instead of the command

   finger mark@mymachine

to get information on user mark from the finger database, we could contact the well-known finger port on the host directly as follows:

   anyon% telnet anyon finger
   Trying 129.240.22.14 ...
   Connected to anyon.
   Escape character is '^]'.
   mark
   Login name: mark              In real life: Mark Burgess
   Directory: /mn/anyon/u2/mark          Shell: /local/bin/tcsh
   On since Aug 14 11:59:39 on ttyp1 from :0.0
   17 minutes Idle Time
   Mail last read Sun Aug 14 14:27:02 1994
   No Plan.

Had finger not been in /etc/services, we could have written

   telnet hostname 79

Not all services accept textual input in this way, but telnet will try to contact their ports nevertheless.

6.6 X11

The X11 window system, used by Unix, is a client-server based application. A user's workstation runs a server process called X, whose job it is to display windows on the user's display. Each application the user wishes to run is a client, which must contact the server in order to have its output displayed on the X-display (Figure 6.2: The X windowing system). Since the client and the server are independent processes, it makes no difference whether applications are running on the same host as the X-display, or whether they are running over the network. By making this simple client-server abstraction, X uses its own system of protocols, layered on top of socket communication. Strangely, X and Sun's variant NeWS are the only window systems which have understood the point of networking; all other window systems require you to run programs on the computer at which you are sitting.

6.7 html: hypertext markup language

A further example of a protocol is the world wide web hypertext markup (formatting) language, html. This insists upon simple rules for formatting pages and references.

Exercises

1. Explain what `protocol' means.

2. Describe briefly the client-server model.

3. What role do daemons play with respect to the unix kernel? Why are servers daemons?

Project

Make a simple client-server model which communicates via unix files. The server should loop around and around, waiting for multiple requests, while the client sends only one request and exits when it gets a reply. The server should be sent an arithmetic problem to solve, for example:

The client should send this request to the server, and the server should send back the answer. You will need to think of the following:

1. What filenames should you use to send messages from the client to the server and from the server to the client?

2. You need to find a way of discovering when the client and the server have finished writing their replies, so that you don't read only half of the answer by mistake. (Hint: you could use the `sleep' command to wait for the server to reply.)

3. The client must be able to exit gracefully if the server does not answer for any reason.
7. TCP/IP Networks

In the last chapter we looked at some of the high level considerations for enabling transparent communication over a network. The next thing to look at is how such a scheme of protocols is achieved in practice.

7.1 The protocol hierarchy

7.1.1 The OSI model

We begin by returning to the `most important idea in computing', namely hierarchies. As we have noted before, the most practical way of solving complicated problems is to create layers of detail: at any level in a hierarchy, the details of the lower levels are invisible, so we never see the irrelevant pieces of the computing puzzle we are trying to solve.

The International Standards Organization (ISO) has defined a standard model for describing communications across a network, called the OSI model, for Open Systems Interconnect (reference model). The OSI model is a seven layered monster. It does not have to be taken literally - it might not be natural to separate all of these parts in every single program - but it is useful as a way of discussing the logically distinct parts of network communication.

   7  Application layer    user program which sends data
   6  Presentation layer   XDR or user routines
   5  Session layer        RPC / sockets
   4  Transport layer      tcp or udp
   3  Network layer        IP internet protocol
   2  Data link layer      ethernet (protocols)
   1  Physical layer       ethernet (electronics)

The layers are described as follows.

1. Physical layer. This is the problem of sending a signal along a wire. At the lowest level, the sending of data between two machines takes place by manipulating voltages along wires. This means we need a device driver for the signaller, and something to receive the data at the other end - a way of converting the signals into bytes. If the type of cable changes (we might want to reflect signals off a satellite or use fibre optics), we need to convert one kind of signal into another, amplifying it if it gets weak, removing noise etc.

2. Data link layer. This is a layer of checking which makes sure that what was sent from one end of a cable to the other actually arrived. This is sometimes called handshaking. It establishes connections and handles the delivery of data by manipulating the physical layer.

3. Network layer. If data are to arrive where they are going, we need a way of structuring them so that they make sense: where the data are going, what the information is for, must be encoded, since data might flow along many cables and connections to arrive at their destination. Because many machines could be talking on the same network all at the same time, data are broken up into short `bursts'. This is analogous to the sharing of CPU time by use of time-slices.

4. Transport layer. The transport layer builds `packets' or `datagrams', so that the network layer knows what is data and how to get the data to their destination. We shall concentrate on this layer for much of what follows.

5. Session layer. This is the part of a host's operating system which helps a user program to set up a connection. This is typically done with sockets or the RPC.

6. Presentation layer. Each type of transmission might have its own accepted ways of sending data (i.e. protocols). The form of the data might be a sequence of messages, with the length of the message and so on encoded in an agreed format.

7. Application layer. The user program which sends the data.

The internet protocol family, by contrast, is divided into four layers which correspond roughly to a coarser version of the OSI model. Most of these layers are quite static - only the physical layer is changing appreciably. We could change any of the lower layers without doing serious damage to the upper layers; thus, as new technology arrives, we can improve network communication without having to rewrite software.

7.2 Data encapsulation

Each time we introduce a new layer of protocol into network transport, we need to `package in' the information in some agreed format. As always, each layer of abstraction we introduce requires a small overhead, the header, which gets added to the data so that the receiver can make sense of them. For example, data might be divided up into numbered packets, with the header recording the length of the packet, where it is going, and which piece of the total information the current packet represents. This is the essence of implementing a protocol in practice.
Only one machine can talk over a cable at a time so we must have sharing. This is information which includes.7 . Layers to are those which involve the transport of data across a network. each `packet' (to use the word loosely) is given a few bytes of `header information'. We might find that the data are structured at a higher level. How are the data to be sent by the sender and interpreted by the receiver.3. so that there is no doubt about their contents? This is the role played by the external data representation (XDR) in the RPC system.1. each of which has a header of its own containing a port number or RPC program number of the receiver application program. Suppose now that we were to `unpack' these data.2 The internet protocol family The set of protocols currently used on most networks is called the internet protocol family. 6. 7. Notice the parallels between this and the system of segments and pages in the virtual memory concept of chapter 5. The program which wants to send data. This is the layer of software which remembers which machines are talking to other machines. the advantage of using a layered structure is that we can change the details of the lower layers without having to change the higher layers. removing their headers and reassembling the data. This is called data encapsulation. It is easy to share if the signals are sent in short bursts. At the level of the network layer. each of which contain the address of the sender and receiver. Transport layer. The network layer needs to know something about addresses .
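Before looking at the internet family in detail, the layered-header idea of data encapsulation can be sketched in a few lines of code. The layer names and the header format below are illustrative choices, not part of any real protocol stack.

```python
# Toy data encapsulation: each layer wraps the payload with its own
# small header on the way down, and strips it again on the way up.

def encapsulate(payload, layers):
    """Wrap payload with one header per layer, innermost layer first."""
    for name in layers:
        header = f"[{name} len={len(payload)}]"  # toy header: layer + length
        payload = header + payload
    return payload

def decapsulate(packet, layers):
    """Strip the headers again, outermost layer first."""
    for name in reversed(layers):
        header_end = packet.index("]") + 1
        assert packet[:header_end].startswith(f"[{name}")
        packet = packet[header_end:]
    return packet

stack = ["transport", "network", "link"]  # innermost to outermost
wire = encapsulate("hello", stack)
print(wire)  # [link len=38][network len=22][transport len=5]hello
print(decapsulate(wire, stack))  # hello
```

Each layer's header carries just enough for its peer at the other end to make sense of the data, which is the essence of the scheme described above.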
i. we have the IP or internet protocol which includes a specification of addresses and basic units of data transmission. On the other hand. based on the efficiency and speed of the network. On a slow network. since it requires no handshaking by the system. the overhead becomes a significant portion of the total packet size and the transfer is inefficient.2. where no particular reply is required. The above header is then reproduced in each fragment together with a `fragment offset' which determines the order in which the fragments should be reconstructed at their final destination.2: udp packet header Notice that this header contains no ordering information . At the next level (the transport layer). it will divide datagrams up into smaller datagrams called fragments.e. Each datagram consists of a number of 32 bit words. it would be used to send print jobs to a printer queue across a network.2 tcp A single message of the transmission control protocol is called a segment. there is no guarantee that data will arrive at the destination and no confirmation of receipt is sent by the receiver. If a router transmits datagrams from one physical network to another. The client knows that its question arrived if it gets an answer from the server. . and wants every single line of data to arrive in the correct order without having to worry. A single `message' of udp encapsulated datagrams is officially called a packet and is given a small header as shown in the figure below. It is useful for applications which either need or want to provide their own form of handshaking. The ordering of each message implies a concept of two machines being continual contact with one another. Only the integrity of the data are checked. This is chosen when the physical layer is designed. a small packet size would be used so that the multiuser sharing of network time is more equitable. 7.1 udp The user datagram protocol is called unreliable because when an application chooses this protocol. 
For example. the best it can do is to `throw its data out to the wind' and hope that it will arrive.1: IP datagram format The size of datagrams may be altered by the transport agents during the process of being sent. The tcp protocol is called reliable or connection-oriented because sufficient handshaking is provided to guarantee the arrival and the ordering of the segments at their destination. The official name for the lowest level data `packages' in the internet protocol is datagrams. it would be natural to use the udp protocol for a `question-answer' type of client-server system. The first six of these words consists of the IP header. Figure 7. Udp is the simpler of the two transport layer protocols. They are sometimes called connection-oriented and connectionless protocols respectively. The packet size on different physical networks is part of the low-level protocol definition. using a checksum. and the second network uses a smaller packet size. The sender receives no reply from the print spooler. This is like a telephone conversation: both parties are in contact all the time.so the order in which the packets arrive at their destination is not guaranteed by the protocol itself. or reliable and unreliable. We shall explain these names below. if the packet size is too small. It is called connectionless because the messages are sent one by one.5 2 Internet layer lower level datagram transport 2 1 Physical layer Network 1 At the internet layer. Figure 7. TCP connections are useful for sending data to servers. so asking the network protocols to guarantee it would be a waste of time.2.3 Host to host transport higher level data encapsulation 3. 7. This is like sending a letter in the post. there are two standard protocol types provided. These are called tcp for transmission control protocol and udp for user datagram protocol. For example. When we use udp transport. a greater number of packets per unit time can be sent if the packets are smaller. 
without any concept of there being an on-going connection between the sender and receiver.4.
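The smallness of the udp header can be made concrete with Python's struct module. A real udp header is four 16-bit fields: source port, destination port, length (header plus data) and checksum; the port numbers below are arbitrary examples, and a zero checksum means `not computed'.

```python
import struct

def make_udp_packet(src_port, dst_port, data, checksum=0):
    """Build a udp packet: an 8-byte header followed by the data.

    The header is four 16-bit big-endian fields: source port,
    destination port, total length and checksum (0 = not computed).
    """
    length = 8 + len(data)  # header plus payload
    header = struct.pack("!HHHH", src_port, dst_port, length, checksum)
    return header + data

pkt = make_udp_packet(5000, 7, b"ping")
print(len(pkt))  # 12 bytes: 8-byte header + 4 bytes of data
src, dst, length, csum = struct.unpack("!HHHH", pkt[:8])
print(src, dst, length)  # 5000 7 12
```

There is nothing in these eight bytes about ordering or acknowledgement, which is why the protocol itself cannot guarantee either.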
Each tcp segment has a header as shown in the figure below.

Figure 7.3: TCP segment header

7.3 The physical layer

As an example of a physical layer, we can take a brief look at the ethernet.

7.3.1 Network connectivity

To send messages from one computer to another, we have to connect computers together. One way of doing this would be to connect every machine to every other machine in some bizarre `cat's cradle' of wiring. This would require N-1 network connections per machine if there were N machines. It's pretty clear that this is not a good solution.

Another solution is to chain machines together (see figure below) or put them in a ring. This requires only two connections per machine. The disadvantage with this scheme is that each machine has to send signals forward to the next one, until they arrive at the correct machine, which costs time and resources. FDDI fibre optic transmission works like this: it is called a token ring, and each machine waits its turn to transmit data.

Figure 7.4: Chains and rings.

Modern ethernet uses neither method. Instead it uses a combination of two solutions. A basic ethernet network consists of a single cable or bus. Every machine listens into the same cable with one interface connector (see figure). Each host flashes its signals to all the hosts on the cable, like sending Morse code with a torch. Every host sees every message, but only the host with the destination address bothers to accept the message. Since all machines share the same cable, only one machine can be talking at once.

Figure 7.5: Ethernet

Ethernet is one form of cabling which is in common use. Ethernet comes in three flavours: thick ethernet, a fat yellow cable with black markings, which is used over long stretches of cable; thin ethernet, a coaxial (usually black) cable a few millimetres thick; and twisted pair ethernet, 10BaseT or ISDN. The latter comes out of an ISDN telephone connector, whereas the older types use coaxial and D-pin connectors. The twisted pair solution is the most modern of these.

Twisted pair ethernet is usually structured in star formation. That is, at strategic places on a master cable (usually thick ethernet) a `hub' is attached. This is a device which converts one connection into many. From the hub there is one twisted pair wire to each machine. If there are many machines, we require many hubs, since the number of connections per hub is limited to ten or so.

Figure 7.6: Star base networks

A similar arrangement can be achieved with thin ethernet using a multiport repeater, rather than a hub. A repeater is simply an amplifier; a multiport repeater combines amplification with dividing up a thin ethernet cable into branches.

7.3.2 Ethernet addresses

An ethernet address is a number which is wired into every ethernet interface card. It is unique for every machine in the world. The first few hexadecimal digits of the ethernet address identify the manufacturer of the interface.
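Picking the manufacturer prefix out of an ethernet address is purely mechanical, as this sketch shows. The tiny vendor table is an illustrative sample (08:00:20 is the classic Sun Microsystems prefix; the remaining digits here are made up).

```python
# Split a 48-bit ethernet (MAC) address into manufacturer and card parts.
KNOWN_VENDORS = {"08:00:20": "Sun Microsystems"}  # tiny sample table

def vendor_of(mac):
    """Return (vendor, serial) for a colon-separated ethernet address."""
    parts = mac.lower().split(":")
    oui = ":".join(parts[:3])      # first three bytes: the manufacturer
    serial = ":".join(parts[3:])   # last three bytes: the individual card
    return KNOWN_VENDORS.get(oui, "unknown"), serial

vendor, serial = vendor_of("08:00:20:1A:2B:3C")
print(vendor)  # Sun Microsystems
print(serial)  # 1a:2b:3c
```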
The ethernet address is the only piece of information a machine has before it is configured. It is only used by diskless machines and some X terminals as part of an ARP/RARP ((Reverse) Address Resolution Protocol) request to get an IP address.

7.4 Internet Addresses and Routing

7.4.1 IP addresses, networks and domain names

The internet is, by now, a world-wide network. Every host on the internet has to have a unique address so that network communication is unambiguous. As of today, it is version 4 (IPV4) of the internet protocol which is in common use.

Each host has a numerical address and a textual name. The numerical address is given by a 4-byte word of the form xxx.yyy.zzz.mmm, where each part is a number from 0 to 255 (certain addresses are reserved). In the textual form, each host has a name and each machine belongs to a logical domain which also has a name; the address takes the form hostname.domain-name. For example, the following are valid internet addresses and the names they correspond to:

129.240.22.14   anyon.uio.no
128.39.89.10    nexus.iu.hioslo.no
192.48.96.9     ftp.uu.net

Thus in the above examples, `anyon', `nexus' and `ftp' are host names, and `uio.no', `iu.hioslo.no' and `uu.net' are domain names.

There is a subtle difference between the two forms. The numerical form is strictly speaking a combination of a network address and a host address. The textual form is a combination of a hostname and a domain name. While the hostname is unique, the remainder of the address (the domain name) is usually a generic name for a group of networks, subnets and hosts. A logical domain, like the above examples, can encompass any number of different networks: the domain name `uio.no' encompasses all of the subnets under the address 129.240.*.*, where the * is unknown. To arrive correctly at its destination, an IP packet has to know exactly which network and subnet a host is connected to. This information is correctly coded into the numerical address, but is not contained directly in the textual name form, because the text name only says that the data should go to 129.240.*.*, and we don't know how to choose the right subnet.

Given a numerical IP address, datagrams can find their way precisely to the correct network and machine. To complete this information, we need a database which maps internet domain names to internet addresses. This mapping is performed by the Domain Name Service (DNS) or Berkeley Internet Name Domain (BIND), which we shall discuss below.

7.4.2 Netmask and broadcast address

Each address consists of two parts: a network address and a host address. A system variable called the netmask decides how IP addresses are interpreted locally. The netmask decides the boundary between how many bits of the IP address will be kept for hosts and how many will be kept for the network location name. There is thus a trade off between the number of allowed domains and the number of hosts which can be coupled to each subnet. Subnets are usually separated by routers, so the question is how many machines do we want on one side of a router?

The netmask only has a meaning as a binary number. When you look at the netmask, you have to ask yourself: which bits are ones and which are zeroes? The bits which are ones decide which bits can be used to specify the domain and the subnets within the domain. The bits which are zeroes decide which are hostnames on each subnet. The local network administrator decides how the netmask is to be used.

Figure 7.7: The netmask sets the division between network address and host address in the 4-byte IP address.

The most common situation is that the first three numbers xxx.yyy.zzz represent the domain and the last number mmm represents the machine. In this case the netmask is 255.255.255.0. It is only possible to have 254 different machines in the domain xxx.yyy.zzz with this netmask; thus a total of 254 hosts could use the same domain name. If we wanted more, we would have to introduce a different domain name for the extra hosts! If we wanted more machines on each subnet, we would have to change the netmask and the definitions of the domain address: by making the netmask 255.255.248.0, for example, we add extra bits to the host part.

One address is always reserved by the internet protocol, namely the broadcast address. This is an address which is used like a wildcard, to refer to all machines in a given domain simultaneously. Another address is reserved as an address for the network itself. Usually xxx.yyy.zzz.0 is the network address and xxx.yyy.zzz.255 is the broadcast address, but on older networks the address xxx.yyy.zzz.0 was used for both of these.

7.4.3 Routers and gateways

A router is a device which connects two physically different segments of network. A router can be an ordinary workstation, or it can be a dedicated piece of machinery. If a router joins different networks, it has different network interfaces and forwards datagrams between them. The router must be able to understand internet addresses in order to do this, since it must know where packets want to go. A gateway is another name for a router, though some authors distinguish between gateways, which forward packets between different network protocols, and routers, which just isolate different segments of network of the same type. Roughly speaking, the network on the short end of a router is called a local area network (LAN) and the greater network on the long end is a wide area network (WAN), though these names are normally used as it suits.

7.5 Network Naming services

7.5.1 The Domain Name Service

Although the system of textual internet addresses is very convenient from a user point of view, it creates a problem. Users, on the one hand, would like to use names rather than numbers to talk about network hosts, but the name form is not sufficient in itself as an exact specification of the network and host addresses. The solution to this problem is the domain name service, or DNS. This is a service which takes a textual internet address of the form host.domain and returns the numerical form of the IP address for that host. This is called resolving the name. The DNS also performs the reverse service, converting numbers into names, and stores extra information about which hosts are mail-exchangers etc.

The domain name service is a daemon, called a nameserver, which runs on some chosen host (usually a UNIX machine, since the software was written for UNIX) and looks up names in a database. The host on which the nameserver runs is often called a nameserver too. Each server covers only the list of hosts in its local domain, not those of other domains, but it has a knowledge of other nameservers which can answer queries in other domains. If a nameserver receives a request to resolve a name which is not in its own domain, it forwards the request to the official nameserver for that domain. Notice that no two machines in the same domain may have the same name, otherwise the DNS would not be able to resolve the IP address from the textual form. The UNIX program nslookup can be used to browse in the Domain Name Service.
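The netmask arithmetic described above is just a bitwise AND. A sketch, using the address 129.240.22.14 from the earlier examples together with the common netmask 255.255.255.0:

```python
def to_int(dotted):
    """Convert a dotted-quad address into a 32-bit integer."""
    a, b, c, d = (int(x) for x in dotted.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def to_dotted(n):
    """Convert a 32-bit integer back into dotted-quad form."""
    return ".".join(str((n >> s) & 0xFF) for s in (24, 16, 8, 0))

ip = to_int("129.240.22.14")
mask = to_int("255.255.255.0")

network = ip & mask                          # one-bits: the network part
broadcast = network | (~mask & 0xFFFFFFFF)   # zero-bits set: all hosts
hosts = (~mask & 0xFFFFFFFF) - 1             # minus network and broadcast

print(to_dotted(network))    # 129.240.22.0
print(to_dotted(broadcast))  # 129.240.22.255
print(hosts)                 # 254 usable host addresses
```

The one-bits of the mask select the network and subnet; the zero-bits leave 254 usable addresses once the network and broadcast addresses are reserved.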
Each new network which is set up on the internet has to register its nameserver centrally so that this information is complete. Nameservers update each other's information constantly about what the official nameserver addresses are, so that the data are always up to date. Every host on a network must know the name of its local nameserver in order to send requests for name resolution. The DNS software which is in most widespread use is the Berkeley BIND software (Berkeley Internet Name Domains).

7.5.2 Network Information Service

The DNS is not the only database service which is in use on the internet. The Network Information Service (NIS), written by Sun Microsystems, is another service which provides information on the network. NIS was formerly called the Yellow Pages, until Sun Microsystems were politely informed that Yellow Pages was a trademark of British Telecom; many people still refer to NIS as YP. NIS was designed for the UNIX operating system, but it is nevertheless used by DOS and Macintosh machines which run software to communicate with UNIX servers on TCP/IP networks. Microsoft have their own implementation called WINS (Windows internet nameservice) as their own commercial solution, but since it lacks adequate functionality and security this will soon be abandoned in favour of DNS.

The NIS is simply a way of sharing the information which would otherwise have to be typed in separately to each machine. The data it stores are commonly required configuration files for UNIX. For example, the user registration database is contained in NIS, as is the list of all hosts on the local network. A number of other databases are held in NIS, such as network-wide mail aliases and information about groups of users. The hosts information actually reproduces the information which is stored in the DNS, but the information is not complete, since only host names are mapped to IP addresses; no domain names are included. The advantage of NIS is that each user on a network can have the same login name and password on all of the machines which use the network information service, because they all read the same database.

Whereas each host must know the name of its nameserver, no host has to know the name of the local NIS server. That is because NIS uses the broadcast system. The software which connects clients to the server sends out a request to the broadcast address. The message is received by every host on the network that is listening. When a NIS server receives the message, it replies to the sender with its IP address, so that the sender knows which host to query for NIS information. The client will continue to use that address for a while (even though the server may crash in the mean time) and then it broadcasts its query again. Most networks have backup servers in case one should fail; that way, if one doesn't answer, hopefully the other one will. The advantage of this system is that a client always ends up asking the server which can answer quickest, and which, presumably, has the least to do, so the load of answering the service is spread around. If no servers are available, a client may never get its information!

7.6 Distributed Filesystems

Probably the first thing we are interested in doing with a network is making our files available to all hosts, so that no matter where in our corporate empire we happen to be sitting, we always have access to our files. The concept of a distributed filesystem is about sharing disks across a network. Many operating systems have such a system. There are three main contenders for such a system in the UNIX world; only one of these is in widespread use.

7.6.1 NFS - the network filesystem

NFS was historically the first distributed filesystem to be implemented, by Sun Microsystems. All manufacturers now support Sun's NFS, which is based on Sun's own RPC system (Remote Procedure Call). NFS works by implementing a number of servers which run on UNIX machines. The idea behind NFS is to imitate UNIX filesystem semantics as closely as possible from across a network. In fact this is almost impossible to achieve in practice.

One problem with a network file system is what to do about machine crashes. Suppose we are in the middle of writing or retrieving a file and the server machine supplying the file crashes. We need some way of remembering where we were, so that when the machine comes back up, the operation can continue where it left off. NFS's solution works in many cases, but not in all.
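Returning for a moment to name resolution: the forwarding behaviour of nameservers described in the naming-services section can be sketched with a toy resolver. Each server answers only for its own zone and forwards anything else to the server it knows is responsible. The class and its methods are invented for illustration; the zone data are the example addresses from this chapter.

```python
class NameServer:
    """Toy nameserver: answers for its own domain, forwards otherwise."""

    def __init__(self, domain, zone):
        self.domain = domain  # e.g. "uio.no"
        self.zone = zone      # host name -> IP address, local domain only
        self.others = {}      # domain -> the NameServer responsible for it

    def register(self, server):
        self.others[server.domain] = server

    def resolve(self, fqdn):
        host, _, domain = fqdn.partition(".")
        if domain == self.domain:
            return self.zone[host]                 # authoritative answer
        return self.others[domain].resolve(fqdn)   # forward the request

uio = NameServer("uio.no", {"anyon": "129.240.22.14"})
uu = NameServer("uu.net", {"ftp": "192.48.96.9"})
uio.register(uu)

print(uio.resolve("anyon.uio.no"))  # local zone: 129.240.22.14
print(uio.resolve("ftp.uu.net"))    # forwarded to uu.net's server
```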
In the UNIX filesystem, a user must obtain a lock on a file in order to read or write to it. In NFS, the same system applies. A lock is obtained from a lock server on the host where the real disk filesystem lies, and the state of the filesystem is communicated by a state server. The state of the filesystem is maintained on the server which owns the filesystem.

NFS is sometimes called a stateless protocol, but this is a misleading title. NFS is called stateless because the server does not record the requests which the client makes (except for locks), or what it has done in previous requests. The server processes requests without caring about which client it is serving; it doesn't know how much of a file the client has read. In other words, it is the client's responsibility to keep track of what requests it sends to the server and whether or not it gets a reply.

If there is a crash, the server tries to reestablish the locks it held before the crash. If this is not possible, because the filesystem has changed in the meantime or because of unfortunate timing, the result is a `stale NFS filehandle', an unrecoverable error. The state information has to be cleared and restarted.

NFS version 3 is now in use by some vendors and includes a number of improvements (and a few new bugs) over NFS. These include better caching, access control lists (ACLs) etc.

7.6.2 AFS - the andrew filesystem

Another filesystem which is increasingly discussed is the Andrew file system. Whereas NFS tries to reproduce UNIX-like file semantics across a network, AFS is a different filesystem altogether. It is in many ways superior to NFS, but whereas NFS is free software, AFS is a commercial product maintained by Transarc and is therefore not in widespread use. The CERN high energy physics (HEP) group use the AFS as a global filesystem, and many other institutions are starting to follow suit.

A problem in sharing files between different sites around the world is that usernames and passwords are local to each site. It is possible (though perhaps unlikely) that very different users around the world might have the same user ID and login name, and even the same password. Thus AFS has to take into account the username problem: AFS solves the problem of user authentication between different sites. AFS also has more advanced caching features to speed up file access, and access control lists (ACLs). An improved version of AFS, called DFS, has been incorporated into Digital's Distributed Computing Environment; DFS/DCE is also now licensed by Transarc.

7.6.3 DCE - the distributed computing environment

The Digital Equipment Corporation's Distributed Computing Environment is, in fact, a complete substitute for Sun's NFS system, from RPC up. Instead of using Sun's RPC software, DCE uses software originally developed for the Apollo Domain operating system, called NCS. DCE works on top of Domain sockets. The open software foundation (OSF) has adopted DCE as its official network solution, though its own operating system OSF/1 still supports NFS.

One of the features of the DCE system is the concept of multiple backups of files. This requires several servers to have disk-copies of the same files. If one server fails, DCE allows another server to take over. Maintaining these copies requires complex algorithms and a time-consuming copying overhead: when a write is made to such a filesystem, it must be made synchronously to all the disk copies. This system is efficient on a read-mostly filesystem.

8 Security: design considerations

System security can mean several things. To have system security we need to protect the system from corruption and we need to protect the data on the system. There are many reasons why these need not be secure:

1. Malicious users may try to hack into the system to destroy it.
2. A badly designed system may allow a user to accidentally destroy important data.
3. A system may not be able to function any longer because one user fills up the entire disk with garbage.

Although discussions of security usually concentrate on the first of these possibilities, the latter two can be equally damaging to the system in practice. Power failure might also bring the system down. One can protect against power failure by using un-interruptable power supplies (UPS). These are units which detect quickly when the power falls below a certain threshold and switch to a battery. Although the battery does not last forever, the UPS gives a system administrator a chance to halt the system by the proper route.
The final point can be controlled by enforcing quotas on how much disk each user is allowed to use. The problem of malicious users has been heightened in recent years by the growth of international networks: anyone connected to a network can try to log on to almost any machine, and if a machine is very insecure, they may succeed. In other words, we are not only looking at our local environment anymore; rather, we must consider potential threats to system security to come from any source.

8.1 Who is responsible?

System security lies with

1. The user.
2. The system designer.
3. The system administrator.

Many would prefer to write this list upside down, but we must be practical. Usually we are not in a position to ring the system designer and say `Hey, that system module you wrote is not secure, fix it!', and the response would at any rate take some time. We have to learn to take the system as it comes (pending improvements in later releases) and make the best of it. All users of the system should be aware of security issues. Ideally, if all users were friendly and thoughtful, everyone would think about the welfare of the system and try to behave in a system-friendly way. Unfortunately some users are not friendly, and accidents can happen even to friendly users.

8.2 Passwords and encryption

The first barrier to malicious users is the password. Every user on a multiuser system must have a password in order to log on. Passwords are stored in a coded or encrypted form so that other users cannot read them directly.

8.2.1 UNIX passwords

In most UNIX systems, passwords and login information are stored in the file /etc/passwd. This file looks something like this: one line per user, with colon-separated fields for the login name, the coded password, the user and group IDs, the user's full name, the home directory and the login shell. The encrypted password is readable as the second field, and on very many systems the file itself is readable to all users. Moreover, the algorithm which encrypts passwords is usable by all users: the UNIX standard library command crypt() converts a text string into the coded form. This means that anyone can try to crack the passwords by guessing.
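The colon-separated format can be parsed directly. The line below is a made-up example entry, not a real account; the field layout is the standard seven-field one just described.

```python
# Parse one (hypothetical) /etc/passwd-style line into its seven fields.
line = "sue:aX7qTzkB29LhE:101:10:Sue Smith:/home/sue:/bin/sh"

fields = line.split(":")
login, coded_pw, uid, gid, fullname, home, shell = fields

print(login)               # sue
print(coded_pw)            # the encrypted password is the second field
print(int(uid), int(gid))  # 101 10 (numeric user and group ids)
```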
When a user types in his or her password, the system does not try to decrypt the stored password, but rather encrypts the typed password and compares the coded forms. The reason for this is that there is no (publicly) known algorithm for decoding passwords encrypted using crypt(); crypt was designed this way. Just to reverse the process would take hundreds of thousands of years of CPU time.

To encrypt a password, crypt takes the password string and a random number, known as a salt:

  code_passwd = crypt (passwd_string, salt);

The salt ends up being the first two characters of the encrypted form of the password. (If we didn't know the salt, it would be impossible to compute the same encrypted form more than once!) To try to guess passwords automatically, all we have to do is to send a whole list of guesses as passwd_string, take the first two characters of the encrypted password as the salt, and compare the result of the crypt function with the encrypted form from the password file.

Elaborate programs have been written to try to crack passwords in this way. Such programs are useful tools for the system administrator, who should keep an eye on which users have poor passwords; it is better that the system administrator finds a bad password before a malicious user does. Most commercial operating systems don't care whether users have no passwords at all. Some enhanced systems take the view that users should not be able to choose an insecure password, and prevent them from doing so. On newer UNIX systems, passwords are stored in a shadow password file, which is not /etc/passwd but a different non-readable file. Since normal users cannot read this file, they cannot compare an encrypted list of their own to the password file; they can only try to log onto other users' accounts by trial and error.

8.2.2 Bad passwords

Surveys of user passwords show that very many users choose extremely simple passwords. Passwords should not be

1. Your name or anyone else's name (your dog's name!)
2. Your login name!!
3. Your birthday.
4. Your phone number.
5. Your car registration plate.
6. Any personal information which is easily obtained.
7. Any word in an English or foreign dictionary.
8. Names from books, place names or the name of your computer.
9. Names of famous people like Einstein, Marx.
10. A keyboard sequence like qwerty.
11. Any of the above spelled backwards.

Passwords should be a combination of large and small letters, numbers and special symbols like !@#$%^&* etc.

8.3 Super-user, or system administrator

The super-user is a trusted user. The super-user has unlimited access to files on a system. He/she is the only user who can halt the system and is the only user who can make backups of system data. Clearly such a user is required, to maintain the system and deal with special circumstances which arise.
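The guessing attack of section 8.2.1 can be sketched without the real crypt() routine. Here a stand-in `encryption' based on hashlib (an assumption for illustration; real UNIX uses crypt()) shows the essential point: the attacker never decrypts anything, but encrypts each guess with the stored salt and compares the coded forms.

```python
import hashlib

def toy_crypt(passwd_string, salt):
    """Stand-in for crypt(): a salted one-way hash, prefixed by its salt."""
    digest = hashlib.sha256((salt + passwd_string).encode()).hexdigest()
    return salt + digest

# The password file stores only the coded form.
stored = toy_crypt("secret", "Qz")

def try_guesses(stored, guesses):
    salt = stored[:2]  # first two characters are the salt
    for guess in guesses:
        if toy_crypt(guess, salt) == stored:
            return guess  # coded forms match: password found
    return None

print(try_guesses(stored, ["qwerty", "einstein", "secret"]))  # secret
```

This is exactly why readable coded passwords plus a public encryption routine are enough for a dictionary attack, and why shadow password files help.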
In practice. but only the group id is set to that of the owner of the file. All of the commands in the program are executed by the owner and not by the user-id of the person who ran the program. Normally the X-server only allows the machine on which it is running to access the display. Often system administrators end up doing a lot more than this. As another example of network security . the superuser is called root. and so the xhost problem is still widespread.1 Network administration Networks make it possible to link computer systems in an unprecedented way. When such a program is executed by a user. Many users do not understand the X system (which is quite complex) and simply disable access control by calling xhost +. When giving away one's rights to another user (especially those of root) one is tempting hackers. 8.2 Setuid programs in unix The superuser root is the only privileged user in UNIX. To make backups of the system. All other users have only restricted access to the system. he sees the files as though they were a part of his system. all security was based on the xhost program. Setuid programs must be secure.or lack of it . this is wrong . When the Oslo administrator mounts the filesystem on his machine (without needing to give a password). X works by connecting to a server. What is important to understand is that the superuser has a highly responsible position which must not be compromised. The designer of an operating system must be acutely aware of the need to preserve the security of privileged access. Such a user wants all the windows to appear on his or her workstation. Often the effect is the same. This allows any host in the world to connect to the user's server. This means that the superuser has rights only on the local machine.3.To create and destroy new and old users. . A set-uid program is a program which has its setuid-bit set. but sometimes it is a nuisance. 
Suppose the administrator of one machine in Oslo gets permission from a system in California to access a filesystem on the Californian machine.let us consider also the X-windows system. This was host based meaning that anyone using a named host could open windows on the server. If the owner of the setuid program id root then the commands in the program are run with root privileges! Setuid programs are clearly a touchy security issue. it is run as though that user were the owner of the program. To get rights on another machine.or the user must be able to log onto the machine by knowing the root password.3. since root has the rights to all files. A setgid program is almost the same. either special permission must be given by the remote machine .the superuser of a machine in Oslo cannot be regarded as a trusted user for a system in California! UNIX gets around this problem by mapping the user root (which has user id and all rights) to the user nobody (which has user id and no rights) across a network. across a network. Usually this is desirable. Many programs have not adopted the xauthority system which is user based. X is a windowing system which is designed to work transparently over a network. it might seem natural that he would be able to read and modify the files of all users in California. 8. but in a network situation it is not unusual to find users logged in on several different machines. this means that anyone in the world can view the picture on such a user's screen. We must ask: what is the role of the superuser in a networked environment? Consider the following. We can `mount' (see chapter 5) filesystems from one computer onto another computer across a network and log in to systems all around the world (if we have an account!). Under the UNIX system. The password is of crucial importance. anywhere on the network. But surely. Before the introduction of the xauthority mechanism. To install software and system upgrades. 
so the X server allows certain other named hosts to open windows on its display. The administrator's account must not be used authorized users. Now.
made a setuid root program which executed every command every user gave to it . Naturally. Next.then suddenly everybody who accessed this file over the network would have root access on their local machine! Clearly careless setuid programs can be a security risk. it allows ordinary users to be able to read as much as the writers of ps thought fit. we have no control over what goes into the file. It is therefore important to backup user data regularly. it is useful to be able to take backups centrally. Backup software is usually intelligent enough to be able to extract only files which have been modified since the last backup. New disks can be bought.but once user data are gone. In BSD UNIX. Less important data might only be worth backing up once a week. In order to do this it needs permission to access the private data structures in the kernel. Users delete files without meaning to.anything else can be replaced. power failure leads to disk corruption. Tape machines are becoming more intelligent and often include compression software in their device drivers which packs more information into the same space on the tape. By making ps setgid root. The cheaper alternative is to use tape. Every week. Suppose then a stupid system administrator. this can be done using the rdump command. Depending on how often users actually use the system. How long you keep backup tapes depends on how long you value your data. we have the problem of what to do with setuid programs which are read across the network. Backing up data is expensive . only the superuser can make a file setuid or setgid root.even though this can be loaded in again from source.An example of a setuid program is the ps program. 8. so network-based filesystems give the option of disallowing setuid programs. they are gone.and of course someone might actually steal your computer! User data are the most important part of a computer system . Tape comes in many forms. Every month. 
A year is not an unreasoable length of time.4 Backups Accidents happen even to the most careful users. . An EXABYTE video tape with normal compression can hold up to 5GB of data. For convenience you might want to record the setup of your system software once in a while . Some systems use secondary disks to keep backups of important data. If we mount a filesystem across a network. How long should you keep a backup? It might take some time to discover that a file is gone. The most important data should be backed up at least as often as significant changes are made. EXABYTE 8mm (video tape!) DAT (Digital audio tape) Larger systems may also use half-inch tape. it is worth considering making backups Every night. so daily backups need not copy every file every night. From a network vantage point. software bugs can delete files. but device drivers are not easy to come by. Newer drives support 10GB.both in terms of man-hours and in the cost of storage media. angry at the world and dying for revenge. ps lists all of the processes running in the kernel. system administrators can make mistakes . software can be loaded in afresh . but no more. the most common in use today are Standard -inch tape cartidges.
1145 ESTABLISHED tcp 0 0 saga.there are many ways to hide intruders. It gives a listing as follows: Active Internet connections Proto Recv-Q Send-Q Local Address tcp 0 0 saga.1138 ESTABLISHED tcp 0 0 saga. so that .1120 FIN_WAIT_1 tcp 0 0 saga. which simply copy themselves in order to overwhelm the system and logic bombs which go off when some condition is met.24.6000 xantos-2. Most intruders enter via a network. the command netstat shows a list of machines which are connected to the system. Many backdoors are setuid root programs which contain bugs that can be exploited by clever users.89.89. so that these users can gain privileged access. which devious programs can use to gain access to the system from outside.6000 xantos-2.no.1 Back doors Back doors or Trojan horses are faults in the system software. A virus is a piece of code which attaches itself to another program in order to get executed surreptitiously.login (state) . they can also enter on disk.6000 xantos-7.39. A worm is a program which propagates from computer to computer.6000 njal. sendmail is one. Multiuser systems are generally harder to affect with intruders than microcomputers since the operating system exercises a much greater level of control. 8.5. In most cases these are network based programs.6000 128.6000 xantos-7.5 Intruders: Worms and Viruses Worms and viruses are intruder programs which enter a system illegally and take hold of the system in some way.39. programs which masqeuerade as something they are not.6000 xantos-7.uio.1022 ESTABLISHED Foreign Address xantos-7. Some program have become well-known backdoors in the UNIX world.6000 anyon.1130 ESTABLISHED tcp 0 0 saga.1132 ESTABLISHED tcp 0 0 saga.1143 ESTABLISHED tcp 0 0 saga.if one is not specificaly thinking about the possibility of threats.6000 njal. but on small computers which use floppy disks or diskettes.1141 ESTABLISHED tcp 0 0 saga. How do we know when an intruder is on the system? This is an extremely difficult problem . 
without necessarily changing anything.6000 njal.1147 ESTABLISHED tcp 0 0 saga.1125 FIN_WAIT_1 tcp 0 4 saga.e.1146 ESTABLISHED tcp 0 0 saga.1144 ESTABLISHED tcp 0 0 saga. bacteria.6000 128. Other kinds of intruders are Trojan horses i.8.24. it is easy to miss them. On UNIX inspired systems.
arranging it so that it appeared that a network request came from a trusted machine. so another thing to do is to monitor all the connections made to your machine continuously and dump the result to a file. On the other hand. Each user also has a public key. both parties combine their private keys with the others' public keys and end up with a conversation key which they both agree on. 8. The ingeneous part is that. In order to exchange information. we cannot live in a perpetual state of paranoia. That is .login 0 saga.1094 xantos-7. both the sender and the receiver need to have the correct key. Each host has a private key which is a large number which is encrypted with the user's password and stored in a database.and to whom.6000 This gives an indication of who is currently connected. The problem is then to agree on a key. . Because the key encryption is quite time consuming and difficult. This requires a considerable amount of storage and some skill in interpreting the data. The firewall is the only machine which is connected to a wide area network. Sun have their own version called etherfind. Both the sender and the receiver need to know the key .1023 anyon. A balance must be struck by taking all reasonable precautions and being aware of the problem.tcp 0 ESTABLISHED tcp 0 ESTABLISHED tcp 0 ESTABLISHED tcp 0 ESTABLISHED 0 saga. where they can be shared on the local network but not by the external network. Of course. thinking that everyone is out to get us. Thus sensitive data can be hidden behind the firewall.no.uio. 8. Two parties wish to communicate with one another in private.1086 xantos-4. it is only used to establish an initial connection and conversation key. To try to prevent such problems from occurring. 2. we can use a system of data encryption (coding). This can be achieved using public and private keys. when in fact it came from the intruder's machine. 
intruders could connect when you are not watching.6000 0 saga.the encryption and decryption algorithms are publicly known. 3. The coding algorithm is based on some inspired mathematics of modulo arithmetic. The message sent to A was encrypted using the conversation key. but it does not forward packets to the local network and vice versa.1080 xantos-4.a barrier to stop the spread of network threats. The program tcpdump will do this. Moreover. Once the conversation key is known. The idea is to isolate important machines by placing another highly secure machine between the outside world and the local network. The only way that B could generate the conversation key would be by knowing A's public key and B's private key. which anyone can look up in a database. Finally. This key encryption scheme is the basis of secure communication links like SSL (Secure socket layer) and PGP (Pretty Good Privacy). one could easily make a device which collected all the information which was sent over a network and analyzed it to find out what was being said . B's password is needed.6000 0 saga. the super-user should never install software which is of suspicious or unknown origin. so they encrypt the data they send over the network. To know B's private key.7 Public and Private Keys A clever intruder could always behave as an imposter . It is also connected to the local area network.6 Firewall One way of designing a network to protect it from attack is to use a machine as a ``firewall''. normal crypt() type encryption is used for passing data. The idea is to encode all data using a special key. To decode a message they only need the conversation key and their own private key. Party A knows that party B is who he claims to be because 1.
joined together by pointers. because the system can perform time-sharing. IPC: Inter-process communication. data would alway be stored in contiguous blocks.hioslo. CPU: Central processor unit.no/~mark/lectures Glossary Assembler: An assembler is a program which converts assembly language into machine code.Where next? There are many topics which have only been covered superficially in this introduction. are called concurrent processes. leaving holes in the data which must then be filled up by the OS.iu. 1's and 0's. Compiler: A program which converts a high level language into machine code. A mechanism by which unix processes can exchange data with one another. Ideally. Parallel computers have several CPUs. Host: A machine. Processes which have the appreance of being executed simultaneously. each instruction tool several cycles of the system clock. kernel: The core of a multitasking operating system which deals with the basic system resources. computer. The kernel drives physical devices and handles I/O. Handshaking: A system of signals between two processes/computers which allows them to tell each other when to start and stop sending information. The clock works like a pump. Bits: Binary-digits.: This is distinct from parallel. A buffer is used to synchronize the arrival of data from a device with the eventual reading of the data by a program. A name or number which refers to something . This is the chip which adds numbers together and moves data around in the memory. On earlt microprocessors. ID: Identifier. A deeper understanding of networks and system administration can be found in. but in practice files may be deleted. driving the CPU to execute intstructions. and some can even perform several instructions per clock cycle. . memory management and process scheduling. Fragmentation: Data are said to be fragmented when parts of the data exist in very different locations. Buffer: Waiting area for data. See also RPC. I/O: Input/output. 
Assembly language is a mnemonic (symbolic) form of the numerical machine code. Each instruction corresponds to one machine code instruction.often a process or a user. Fragmentation occurs because the OS must find a free space whereever it can. Concurrent. by ingeneous design of the hardware. Newer RISC processors can execute a whole instruction per clock cycle. Clock cycles: The system clock is an inmportant part of the hardware of a computer.
the OS handles timesharing transparently . The protocols for talking to the loopback device are the same as those for the physical network. The RPC protocol makes use of the XDR (external data representation) protocol for passing data. Loopback: The loopback device is a pseudo network device in the UNIX operating system which. Primitive: A very low level function or routine. RISC: Reduced instruction set chip. For example. This is one `grain' . regardless of whether the processes are on the same machine. Transparently: This word is often used to mean that something happens without anyone needing to know the details about how it happens. sends packets straight back into the system. The logic of operation of the system. Vector: An array of data or a segment of memory. Pixel: A single dot on the screen.e. Semantics: This term is used to describe the `method of operation' of a particular system. without users needing to know about how it happens. Primary memory: RAM internal memory (see secondary memory). Single-user system: A system in which only one user can use the system at a time. Virtual: Almost. Usually each instruction completes in a single clock cycle. Machine code: The basic numerical language of codes which the CPU understands. rather than sending data out onto a physical network. Multi-user system: An operating system where several users can use the system simultaneously. Compilers and assemblers convert programs into machine code. This can occur is there are errors or deadlocks in scheduling. OS: Operating system. . or on different machines. Starvation: A process is said to starve if it never gets a share of the CPU. a likeness of. This is part of a new philosophy to make microprocessors faster by giving them fewer (less complicated) instructions which are optimized to run very fast. This is a mechanism for executing tasks on remote machines across the network.i.or the object of maximum resolution. 
so programs employing interprocess communication have only to hold to a single standard. Multiplex: To switch between several activities or devices. The prescribed way in which a particular system is supposed to behave. simulated. RPC: Remote Procedure Call. It is a relatively high level interface between networked machines. Parallel: Parallel processes are not merely timeshared (concurrent) but actually run simultaneously on different CPUs. A basic element in a library of functions. Secondary memory: Disk or tape storage.
Index crypt() fork() malloc() wait() Access control lists Accounting Accumulator ACL Address Address binding Address resolution AFS Alignment AmigaDOS Andrew file system Application layer ARP ASMP Asymmetric multiprocessing Asynchronous I/O AT&T Authentication Back door Backups Batch Be Box BIND BIOS Black boxes Block allocation Blocks British Telecom Broadcast address BSD unix Buffer Busy waiting C-SCAN algorithm C-shell resources Caching CISC CLI Client server model Clock cycles Command language Communication .
Connection oriented socket Connectionless socket Context switching Core CPU CPU burst CRC Critical sections Cyclic redundancy check Cylinder Cylinder groups Daemons Data links layer Data types Datagram Deadlock Deadlock prevention Defect list Demand paging Device driver Device ID's Devices DFS Direct memory access Disk blocks Disk scheduling Disk surface Distributed file system (DFS) Distributed file systems DLLs DMA DNS Domain name service DOS Encapsulation of data Encryption Entry points to OS code Exceptions FCFS FDDI FIFO File locks File permissions File types Filename extensions Filesystem .
Firewall First come first served First in first out Floppy disks Formatting Fragmentation Fragments (network) Frame table Ftp Gateway Handshaking Hewlett Packard Hierarchical filesystems High level I/O burst I/O sharing Index register Internet protocol Interprocess communication Interrupt vector Interrupts IP address IPC IPV4 ISDN ISO Job Kernel Last in first out Lazy evaluation Least recently used algorithm LIFO Lightweight process Lightweight processes Linux Loader Locks Logical device Logical memory LOOK algorithm Low level LRU Mach MacIntosh Memory management unit Memory map .
Memory mapped I/O MMU Monitor mode Monitors MTS Multi tasking OS Multi user mode Multiple processors Multitasking system Mutex Mutual exclusion Name server Netmask Network filesystem Network information service Network layer NFS NIS NT Object orientation Operating system OSI Overhead Page fault Page table Pages Paging algorithms Paging memory banks Paging to disk Parallelism Partitions Passords PCB Permissions and access on files Physical layer Physical memory Port number Ports POSIX threads Preemptive multitasking Presentation layer Primary memory Priorities Private key Privileged user Process .
Process control block Process creation Process hierarchy Process states Protocol Protocols Pthreads Public key queue Queue scheduling Quotas RAM RARP Register Relative addressing Reliable protocol Resolving addresses Resources Response time Rings ROM Round robin scheduling Round-robin Router RPC RR SCAN algorithm Scheduing criterea Scheduling Screens SCSI Seamphores Second chance algorithm Secondary memory Secondary storage Sectors Security Segmentation Segmentation fault Serialization Services Session layer Setuid programs Shared data Shared libraries Shortest job first .
Shortest seek time first algorithm Signals Single task system Single user mode SJF SMP Socket Sockets Spin lock Spooling SSL SSTF Stack frame Stack overflow Stack pointer Starvation Status register Sun Microsystems Super user Supervisor Supervisor mode Swapping Syhcronization Symmetric multiprocessing Synchronization Synchronous I/O System 5/System V System calls System overhead Task TCP TCP/IP Telnet Tetris algorithm Thrashing Thread Thread levels Threads Time sharing Track Transarc Transport layer Traps Trojan horse Two mode operation UDP .
Macquarie University. University of Leeds. 1995. Nikos Drakos. Sydney. A short introduction to operating systems This document was generated using the LaTeX2HTML translator Version 99.2beta6 (1. 1999.. Computer Based Learning Unit. 1994. 1998. Copyright © 1997.UFS filesystem UID Unreliable protocol User ID User mode Users Virtual memory Virus VLSI Waiting Well known ports Window system Windows Word size Worm X windows X11 XDR About this document .tex The translation was initiated by Mark Burgess on 2001-10-03 . Mathematics Department.42) Copyright © 1993. Ross Moore. 1996. The command line arguments were: latex2html -show_section_numbers -split 1 os.. | https://www.scribd.com/document/257553611/A-Short-Introduction-to-Operating-Systems | CC-MAIN-2018-47 | refinedweb | 41,662 | 68.87 |
Posted 31 Jan 2011
Link to this post
Posted 02 Feb 2011
Link to this post
you have hit a limitation that we currently have: In order to leverage the relational server we're trying to push as much as possible to it, and therefore we attempt to do that with your ToCustomer method, which is purely client side.
What you can do is to use an expression of form
return (from empl in ctx.Employees select empl).ToList().Select(x => ToCustomer(x)).ToArray();
The .ToList() will finish and trigger the server side query, and then the resulting list is further been used as input to the Select extension method that can then call your ToCustomer in memory. This has nothing to do with public/private.
As for your second question: The string that is passed as the argument to the context is tried as the name of a connection string in the connection string settings element in your app.config/web.config. If there is no matching entry, it is tried directly as a connection string. If you do not want to change the web.config, you can hardcode it in your app, but this is usually not something I would do. I would prefer the connection string in the web.config as this gives you more flexibility in the deployment.
Posted 07 Feb 2011
Link to this post
Posted 08 Feb 2011
Link to this post
no, there is currently no way but to change the web.config as text. I think a real administrator should not have a problem with it. If you want to make your app more mouse-friendly, you could always add such a dialog by yourself. | http://www.telerik.com/forums/not-implemented-on-server-tocustomer-empl | CC-MAIN-2017-34 | refinedweb | 283 | 70.23 |
This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.
On Thu, Aug 20, 2015 at 11:43 PM, Andrew Pinski <pinskia@gmail.com> wrote: > On Thu, Aug 20, 2015 at 10:24 PM, Wilco Dijkstra <wdijkstr@arm.com> wrote: >> Enable _STRING_ARCH_unaligned on AArch64. >> >> 2015-08-20 Wilco Dijkstra <wdijkstr@arm.com> >> >> * sysdeps/aarch64/bits/string.h: New file. >> (_STRING_ARCH_unaligned): Define. >> >> --- >> sysdeps/aarch64/bits/string.h | 24 ++++++++++++++++++++++++ >> 1 file changed, 24 insertions(+) >> create mode 100644 sysdeps/aarch64/bits/string.h >> >> diff --git a/sysdeps/aarch64/bits/string.h b/sysdeps/aarch64/bits/string.h >> new file mode 100644 >> index 0000000..5221e69 >> --- /dev/null >> +++ b/sysdeps/aarch64/bits/string.h >> @@ -0,0 +1,24 @@ >> +/* Optimized, inlined string functions. AArch64 version. >> +_H >> +# error "Never use <bits/string.h> directly; include <string.h> instead." >> +#endif >> + >> +/* AArch64 implementations support efficient unaligned access. */ >> +#define _STRING_ARCH_unaligned 1 > > I don't think this is 100% true. On ThunderX, an unaligned store or > load takes an extra 8 cycles (a full pipeline flush) as all unaligned > load/stores have to be replayed. > I think we should also benchmark there to find out if this is a win > because I doubt it is a win but I could be proved wrong. Are there benchmarks for each of the uses of _STRING_ARCH_unaligned so I can do the benchmarking on ThunderX? Also I don't see any benchmark results even for any of the other AARCH64 processors. Thanks, Andrew > > Thanks, > Andrew Pinski > >> -- >> 1.9.1 >> >> | http://sourceware.org/ml/libc-alpha/2015-08/msg00860.html | CC-MAIN-2019-30 | refinedweb | 251 | 70.29 |
NAOqi Vision - Overview | API | Tutorial
Namespace : AL
#include <alproxies/alvideorecorderproxy.h>
As any module, this module inherits methods from ALModule API. It also has the following specific methods:
Returns the current color space.
Returns the current video framerate.
Returns the current frame resolution.
Returns the current video format used to encode video. See ALVideoRecorderProxy::setVideoFormat() for the list of possible return values.
Sets the color space used.
Sets the number of frames per second (FPS).
Note
MJPG format requires a frame rate greater than 2 FPS.
Sets the frame resolution. It can be either VGA, QVGA or QQVGA.
Sets the codec name used to encode the video.
There are two overloads of this function:
Starts recording a video with the current parameters until ALVideoRecorderProxy::stopRecording() is called. If the destination file already exists, it will be overwritten.
Note
Only one record at a time can be made.
Starts recording a video with the current parameters until ALVideoRecorderProxy::stopRecording() is called.
Note
Only one record at a time can be made.
Stops a video record that was launched with ALVideoRecorderProxy::startRecording(). The function returns the number of frames that were recorded, as well as the video absolute path.
Note
This is a blocking method that can take several seconds to complete depending on the image resolution, the framerate, the video length and the chosen video format. MJPG format gives better performance than other formats. | http://doc.aldebaran.com/1-14/naoqi/vision/alvideorecorder-api.html | CC-MAIN-2019-13 | refinedweb | 233 | 61.02 |
How to custom the ok button in the edit dialogleon825 Mar 13, 2013 8:19 PM
Hi Team:
I want to log down the user's action when user after edit the attribute in the edit dialog,how to do that?
any suggestions/pointers to solve this problem would be appreciable.
1. Re: How to custom the ok button in the edit dialogrush_pawan Mar 13, 2013 8:32 PM (in response to leon825)
Hello,
Please provide more detail about your requirement. Do you want to log user actions like click "ok" or "cancel" button after authring the data in edit dialog or something else?
Thanks,
Pawan
2. Re: How to custom the ok button in the edit dialogleon825 Mar 13, 2013 10:28 PM (in response to rush_pawan)
3. Re: How to custom the ok button in the edit dialogMatheusOliveira Mar 14, 2013 7:19 AM (in response to leon825)
Maybe you can try to do something progamatically, in the Widgets configuration?
I'm pretty sure that there is some code lines in the JSP's of the component where you can work with the buttons of the dialog.
leon825 wrote:
yes,for example: when i input the data of the "Publish Date" i click the "ok" button, before the screen refresh i want to log user's action info like which fields has been modified and the modified date and user account,including but not limited to the "Publish Date",thank you!
4. Re: How to custom the ok button in the edit dialogrush_pawan Mar 14, 2013 9:07 PM (in response to leon825)
Hello,
In that case you have to add cq:editConfig node to your component and define the cq:listners node with what ever the event you want. Please refer below link
Now here you have many events and based on your requirement either you can use client side event (means java script events to track the information for example before/after <action> event from or
OR
you can write your own handler which will help you to do you task, the benefit here is that you will have all the CQ API available to do your job mainly fetching user information from current session.
but its your choice which best suits to your requirement.
Also if you select client side even handler then ecma script will help you to fetch user information for example take a look at /etc/workflow/scripts/activitystreams/dummy-activity.ecma
I hope above information will help you to proceed. Let me know for more information.
Thanks,
Pawan
5. Re: How to custom the ok button in the edit dialogleon825 Mar 14, 2013 11:12 PM (in response to rush_pawan)
Thank you for your answer,I have a sample:
<?xml version="1.0" encoding="UTF-8"?>
<jcr:root
xmlns:cq=""
xmlns:jcr=""
jcr:
<listeners jcr:primaryType="nt:unstructured"
beforesubmit="function(dialog) {
var url = '/bin/member.json';
var result = CQ.HTTP.eval(url);
}" />
<items jcr:
<tab1 jcr:
<items jcr:
...
</items>
</tab1>
<tab2 jcr:
<items jcr:
...
</items>
</tab2>
</items>
</jcr:root>
java servlet:
@Component(immediate = true)
@Service
@Properties({
@Property(name = "service.description", value = "Group Member servlet checks if a user is a member of a group."),
@Property(name = "sling.servlet.paths", value = "/bin/member")
})
public class myServlet extends SlingAllMethodsServlet {
.......
}
my questions as following:
1.dose this way accords with my requirement?
2.should i need to declare the myServlet to any configuration file?
3.where should i place the myServlet.java to?
4.how can i know which field has been changed?
6. Re: How to custom the ok button in the edit dialogrush_pawan Mar 16, 2013 3:27 PM (in response to leon825)
Hi,
I have never come to this type of requirement but based on experience i can suggest as below.
Answer1: I think you can use your servlet to get user information back for your track pursponse.
Answer2,3 : You can deploy your servlet as part of OSGI bundle using normal build process. As a best practice we create the file inside project src folder.
Answer4: Now i want to some of the thing here
a. You already have got user information from your servlet.
b. Now use the client listner function "beforestatesave" (i hope this will help you more than to other or search similar type of method which suits your requirement.) where you will have the object of component through which you can get the path of this component on page. So now because the data has not been saved to component you can use the path of component to fetch already stored data in respository and compare the values of properties that you want to track with current passed "this" object.
I also recommend you to look into an example /apps/geometrixx/components/productlist/cq:editConfig/cq:listeners and go thourgh some of the public property or public methods of panel/tabpanel/component xtypes at here
Let me know if you need more information.
Thanks,
Pawan | https://forums.adobe.com/thread/1170252 | CC-MAIN-2017-39 | refinedweb | 829 | 60.04 |
I am using a Raspberry Pi 2/3 credit-card-size computer. How do I find out the GPU or ARM CPU temperature from the Linux command line?
You can easily find out the GPU and ARM CPU temperature using the following commands.
Show Raspberry Pi GPU temperature
Open the Terminal application and type the following command to view GPU (Graphics Processing Unit) temperature:
vcgencmd measure_temp
OR
/opt/vc/bin/vcgencmd measure_temp
Sample output:
Display Raspberry Pi ARM CPU temperature
Type the following cat command:
cat /sys/class/thermal/thermal_zone0/temp
Divide it by 1000 to get the ARM CPU temperature in more human readable format:
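For instance, with an illustrative raw value of 44400 (not output from an actual Pi), the division can be done with awk:

```shell
# Illustrative only: convert a raw millidegree reading to degrees C.
raw=44400
# awk performs the floating-point division by 1000
awk -v t="$raw" 'BEGIN { printf "CPU temp: %.1f C\n", t/1000 }'
# prints: CPU temp: 44.4 C
```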
Sample outputs:
Putting it all together
Create a simple bash script called my-pi-temp.sh to see both ARM CPU and GPU temperature of Raspberry Pi. Type the following command:
nano my-pi-temp.sh
OR
vi my-pi-temp.sh
Append the following code:
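The script body is not reproduced above; here is a reconstruction consistent with the sample output shown further down (hedged: the author's exact script may differ, and the optional sensor-path argument is my addition so the functions can be exercised on machines without the sensor):

```shell
#!/bin/bash
# my-pi-temp.sh - print date/host plus GPU and ARM CPU temperature

# Convert a raw millidegree reading to whole degrees C
to_celsius() {
    echo "$(( $1 / 1000 ))'C"
}

show_temps() {
    local cpu_file="${1:-/sys/class/thermal/thermal_zone0/temp}"
    echo "$(date) @ $(hostname)"
    echo "-------------------------------------------"
    # GPU temperature (only when the Broadcom tool is present)
    if command -v vcgencmd >/dev/null 2>&1; then
        echo "GPU => $(vcgencmd measure_temp)"
    fi
    # CPU temperature: the kernel reports millidegrees
    if [ -r "$cpu_file" ]; then
        echo "CPU => $(to_celsius "$(cat "$cpu_file")")"
    fi
}

show_temps "$@"
```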
Save and close the file. Set permission:
chmod +x my-pi-temp.sh
Run it as follows:
./my-pi-temp.sh
Sample outputs:
Thu 10 Mar 01:02:19 IST 2016 @ raspberrypi ------------------------------------------- GPU => temp=44.4'C CPU => 44'C
The temperature sensor for the SoC is on the same silicon. You're reading the same sensor for the CPU as when you say you access the GPU. Propagation (delay) is what you are seeing when accessing the temperature for the CPU after assigning it for the GPU temperature. Same sensor: the CPU temperature access is correct, but your GPU does not have an independent sensor. Only the CPU temperature can truly be accessed. Leave out the GPU temp and you will be correct. Same metric for temperatures.
You are right, no need for measuring CPU and GPU separately for SoC.
best way to see thermal temps and arm_freq via terminal command:
while true; do vcgencmd measure_clock arm; vcgencmd measure_temp; sleep 1; done
To check it under 100% CPU load:
sudo apt-get install sysbench
while true; do vcgencmd measure_clock arm; vcgencmd measure_temp; sleep 1; done & sysbench --num-threads=8 --test=cpu --cpu-max-prime=10000000000 run
For optimizing, benching, and overclocking, see:
to see all cores and load % (hit ‘1’), enter in terminal:
top
In one line:
echo -e "CPU => $(echo "scale=1; $(cat /sys/class/thermal/thermal_zone0/temp)/1000" | bc)'C\nGPU => $(/opt/vc/bin/vcgencmd measure_temp | sed "s/^.....//g")"
Thank you, great post!
Just a simple addition to your ‘my-pi-temp.sh’ script:
echo "GPU => $(/opt/vc/bin/vcgencmd measure_temp | cut -d = -f2)"
Makes the output a little cleaner ;-)
Below is the script I use for displaying RPi temperature and frequencies. It’s a hack, but usable.
Larry, very useful script! I hope you don’t mind a few changes that I made.
Somehow, I lost some spacing in the script even though my copy has them. Aargh!
You need to wrap script between <pre> and </pre> . I edited out to add <pre> and </pre> in your comment. Cheers.
Vivek – thanks (belated!).
Thanks mate, this is really helpful !
enola
worked nicely.
My question is: is there such a nice code to do it in python, too?
def get_temp():
with open('/sys/class/thermal/thermal_zone0/temp', 'r') as infile:
return float(infile.read()) * 1e-3 | https://www.cyberciti.biz/faq/linux-find-out-raspberry-pi-gpu-and-arm-cpu-temperature-command/ | CC-MAIN-2017-17 | refinedweb | 537 | 65.83 |
StickC - SH200Q Acceleration Range
How to init the acceleration range to 16g for Sh200q on StickC with the MicroPython code?
I tried the code below, but I don't know whether it is correct.
from hardware import sh200q
imu = sh200q.Sh200q(accel_fs=sh200q.ACCEL_FS_SEL_16G)
@zhufu86 said in StickC - SH200Q Acceleration Range:
imu = sh200q.Sh200q(accel_fs=sh200q.ACCEL_FS_SEL_16G)
I tried the codes in the StickC and got the acceleration value four times as big as the value before. So I suppose SH200Q is initiated with 4g range if lack of accel_fs setting.
As a temporary solution I divide the returned acceleration value by 4.
x = imu.acceleration[0]/4
y = imu.acceleration[1]/4
z = imu.acceleration[2]/4
I also did some tests as below. No clue...
>>> imu = sh200q.Sh200q()
>>> imu._accel_so
8192
>>> imu.acceleration[2]
0.995
>>> imu = sh200q.Sh200q(accel_fs=sh200q.ACCEL_FS_SEL_16G)
>>> imu._accel_so
2048
>>> imu.acceleration[2]
4.058
>>> imu.acceleration[2]/4
0.9985
>>> imu._accel_so=8192
>>> imu.acceleration[2]
0.999
>>>
@zhufu86 said in StickC - SH200Q Acceleration Range:
imu._accel_so=8192
Update...
No matter how I initiate SH200Q, I can not get the range larger than 4g.
Need more investigation.
Finally, I had it solved.
It seems I need to write the correct configuration into the SH200Q register manually.
Here is how to....
MicroPython ESP32_LoBo_v3.2.24 - 2018-09-06 on M5Stack with ESP32
Type "help()" for more information.
>>> from hardware import sh200q
>>> imu = sh200q.Sh200q(accel_fs=sh200q.ACCEL_FS_SEL_16G)
>>> imu.acceleration[0]
4.332
>>> imu.i2c.readfrom_mem(108,0x16,1)
b'\x10'
>>> imu.i2c.writeto_mem(108,0x16,'\x12')
1
>>> imu.acceleration[0]
1.086
>>>
Is it a bug in module "hardware.sh200q"?
@property
def acceleration(self):
    so = self._accel_so
    data = self._regThreeShort(0x00)
    return tuple([round(value / so, 3) for value in data])
The output data is not raw data; ACCEL_FS_SEL_16G only sets the measuring range and does not affect the value of acceleration.
I know, it's a bug... will fix it
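For reference, the scale factors seen in this thread follow from the sensor's signed 16-bit output: counts-per-g = 32768 / full-scale range. A quick sketch (derived from the values shown above; the function names are mine, not part of the driver):

```python
def counts_per_g(full_scale_g):
    # Accelerometer output is a signed 16-bit value, so +/-32768 counts
    # span the +/-full_scale_g range.
    return 32768 // full_scale_g

def to_g(raw_counts, full_scale_g):
    # Convert raw counts to g, rounded like the driver's acceleration property
    return round(raw_counts / counts_per_g(full_scale_g), 3)

print(counts_per_g(4))    # 8192 -- matches imu._accel_so for the 4g default
print(counts_per_g(16))   # 2048 -- matches the 16g setting
print(to_g(8151, 4))      # 0.995
```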
Hey,
I'm working on quite a large program, and I would like to store all my error messages as string variables inside a dedicated .cs file so they are easy to update, bearing in mind these need to be accessible over the entire project. What's the best way of doing it? I realise global variables are against the idea of OOP, but there must be a suitable method for doing this :confused:
I assume they need to be inside the same namespace, and also inside a class. Could someone give me an example .cs file that I can include? Also, can this be done so I could use the strings without making a new instance of the class every time I need to use one?
Thanks for any tips.
Jack | http://cboard.cprogramming.com/csharp-programming/78231-whats-best-way-do-printable-thread.html | CC-MAIN-2016-18 | refinedweb | 129 | 75.03 |
Revision history for Perl extension Search::HiLiter.

1.003 08 June 2015
 - swapped namespace::autoclean in, namespace::sweep out. See

1.002 18 Aug 2014
 - warn and skip undefined transliteration mappings

1.001 23 July 2014
 - zap use of File::Slurp in tests
 - small optimization to to_utf8() to privilege is_ascii() over other encoding tests.

1.000_01 21 July 2014
 - test 'use locale' (and its absence) against cpantesters

1.000 18 April 2014
 - the official Moo release

0.999_04 11 April 2014
 - make namespace::sweep dependency explicit in Makefile.PL. Same issue as 0.999_03.

0.999_03 11 April 2014
 - make Moo dependency explicit as cpantesters does not seem to pull it in via Search::Query

0.999_01 08 April 2014
 - Drop Rose::ObjectX::CAF object system in favor of Moo + Class::XSAccessor. Moo is required by Search::Query (dependency) and Class::XSAccessor was already being used by Rose::Object if present.

0.99 02 March 2014
 - Snipper doc fixes
 - change !$query to !defined $query check in @_ unrolling

0.98 14 Nov 2013
 - add new method as_sentences() to TokenListUtils
 - fix perl_to_xml for blessed objects

0.97 4 Oct 2013
 - fix_cp1252_codepoints_in_utf8 now operates on bytes internally in regex substitution.

0.96 13 June 2013
 - force blessed references to stringify in xml conversion

0.95 7 June 2013
 - make POD tests optional with PERL_AUTHOR_TEST

0.94 31 May 2013
 - quiet regex whitespace warning in Perl >= 5.18.0

0.93 19 March 2013
 - (more) fix off-by-one memory bug regression introduced in 0.91

0.92 19 March 2013
 - QueryParser->stemmer will now coerce return values through to_utf8() (RT #83771)
 - fix off-by-one memory bug regression introduced in 0.91

0.91 4 March 2013
 - XML->escape() now converts single quote to &#39; instead of &apos; in order to conform with both the HTML and XML specs.

0.90 14 Feb 2013
 - fix bug in refactor of perl_to_xml where 2nd arg is hashref representing root element.

0.89 14 Feb 2013
 - fix bug in refactor of perl_to_xml to preserve markup escaping for old syntax.

0.88 13 Feb 2013
 - fix bug in Snipper when strip_markup=>1 and show=>1 and length of text less than max_chars
 - XML->perl_to_xml now supports named key/value hashref as argument instead of C-style method signature.

0.87 12 Feb 2013
 - XML->tag_safe() now catches edge case where double colons (as in Perl package names) are properly escaped.
 - add Snipper->strip_markup feature

0.86 04 Jan 2013
 - switch from " to ' marks in HiLiter tag attributes. This allows for compat with hiliting within JSON blobs.

0.85 03 Dec 2012
 - fix failing test t/30-perl-to-xml.t from assuming predictable hash key order, which in Perl >= 5.17.6 is random. See.

0.84 25 Oct 2012
 - internal HeatMap refactor to relax sanity checks around stemmed phrase matching.

0.83 12 Oct 2012
 - UTF8::is_sane_utf8() now runs through entire string instead of stopping at first suspect sequence.
 - add Query->unique_terms, ->num_unique_terms, ->phrases, and ->non_phrases methods in aid to HeatMap, which needed a refactor to fix a bug affecting duplicate terms in phrases when stemming was on.

0.82 28 Sept 2012
 - fix off-by-one bug in HeatMap proximity counting for phrases

0.81 6 Sept 2012
 - refactor sanity check for HeatMap matches against phrases, to try and avoid false positives when stemmer is used.
 - HeatMap weight now includes term proximity when sorting likely snippets

0.80 3 Sept 2012
 - fix Query->matches_* stemming support to work with phrases.

0.79 22 Aug 2012
 - allow XML->perl_to_xml to support root_element as a hashref with tag and attrs

0.78 21 Aug 2012
 - optimizations to HeatMap and Snipper sentence detection, which has the nice side effect of avoiding breaking HTML entities in snipped HTML. To take advantage, use as_sentences => 1.

0.77 15 Aug 2012
 - add stemming support for Query->matches_html and Query->matches_text
 - add HiLiter->html_stemmer with passthrough to plain_stemmer until failing test cases materialize.
 - some fixes for stemming support, mostly turning off optimizations based on regular expressions.

0.76 7 Aug 2012
 - finally(!) add real stemming tests and support to Snipper and HiLiter

0.75 6 Aug 2012
 - add some tests for Perl 5.17.x test failures
 - fix edge case where short snip generated spurious ellipses

0.74 21 May 2012
 - yank some meta data from a test doc to avoid security scan problems on CPAN

0.73 13 May 2012 (Happy Mothers Day)
 - fix edge case with snipping phrases that contain non-word characters other than spaces.

0.72 30 April 2012
 - more fixes, similar to 0.71 (for now missing Keywords class)

0.71 28 Feb 2012
 - fix failing tests due to removed classes in 0.70

0.70 23 Feb 2012
 - refactor XML->escape for some performance gain
 - remove long-deprecated Keywords classes

0.69 22 Feb 2012
 - fix XML->escape() to preserve UTF-8 flag on the returned SV*

0.68 15 Jan 2012
 - add missing dTHX macro per

0.67 12 Jan 2012
 - bolster Tokenizer sentence detection, adding list of abbreviations from Lingua::EN::Tagger.
 - fix missing 'lang' param for SpellCheck
 - fix placement of dSP macro in tokenize() C func to properly scope stack variables.
 - add slurp() method to Search::Tools

0.66 05 Dec 2011
 - undo 0.65 change, since HTML entities are case sensitive ()

0.65 02 Dec 2011
 - lowercase named entity matches. patch from Adam Lesperance.

0.64 02 Dec 2011
 - optimizations to regex matching in Query->matches and HiLiter
 - according to Unicode spec \xfeff (BOM) is deprecated as whitespace character in favor of \x2060. HTML whitespace definition changed accordingly.
 - fix edge case in HiLiter where match on single letter could cause infinite loop.
 - add Query->fields method to see the fields searched for.
 - fix XML->unescape_named to support entities with \d in them, and case-insensitive.

0.63 06 Oct 2011
 - change __func__ macro to use __FUNCTION__ instead since Perl core implements that portable macro.

0.62 26 Aug 2011
 - remove ';' as sentence boundary character (it was marked as TODO in search-tools.c) because character entities use it (e.g. &amp;).

0.61 29 July 2011
 - add term_min_length option to QueryParser, to ignore terms unless they are N chars or longer. Useful for skipping single-character words when Snipping or HiLiting. For backwards compatibility the default is 1.

0.60 13 July 2011
 - fix whitespace def to include (broke HTML::HiLiter)

0.59 19 June 2011
 - add normalize_whitespace feature to XML->no_html() method.
 - add several Unicode whitespace defs to $whitespace regex in XML class per

0.58 27 May 2011
 - fix unescaped string in regex in HiLiter

0.57 22 Feb 2011
 - extend bug-fix from 0.56 to prevent false matches on match markers.

0.56 10 Feb 2011
 - fix bug where query terms 'span' or 'style' were breaking hiliting by "double-dipping"

0.55 25 Oct 2010
 - disable one more test for perl >= 5.14 (see 0.54)

0.54 24 Oct 2010
 - fixes for Search::Query 0.18
 - disabled some tests that break under perl >= 5.14. See

0.53 26 June 2010
 - add ->matches_text and ->matches_html methods to Query class

0.52 22 June 2010
 - tweak locale tests because some OSes (linux) use 'UTF8' instead of 'UTF-8' naming.
 - small optimizations to HiLiter

0.51 23 May 2010
 - singularizer in XML->perl_to_xml will now treat common English plurals

0.50 19 May 2010
 - fix default regex for QueryParser->term_re and Tokenizer->re to match default QueryParser->word_characters. The chief difference is that now the hyphen "-" is considered a word character if it appears like a single quote does. So this: don't think twice it's all-right is now 5 tokens instead of 6.

0.49 08 May 2010
 - change from __FUNCTION__ to __func__ in all .c code.

0.48 30 April 2010
 - fix treat_phrases_as_singles bug in Snipper where phrases were never being matched.
 - compromise on proximity query syntax ("foo bar"~10) by always treating as single terms.

0.47 16 April 2010
 - fix regex bug in Transliterate->convert where newlines were being skipped.

0.46 06 April 2010
 - fix croak message for debug-level sanity check on text match in HiLiter.
 - fix bugs with as_sentences for checking end boundaries.

0.45 04 March 2010
 - change QueryParser tests for range to use native dialect, not SWISH.

0.44 24 Feb 2010
 - fix locale test case comparison for UTF-8 (RT#54941 reported by John Napiorkowski)

0.43 06 Feb 2010
 - fix bug with Search::Query::Parser method name (error() not err()).
 - fix doc bug in Snipper.
 - refactor QueryParser internals to work with latest Search::Query 0.07.

0.42 03 Feb 2010
 - fix bug in XML->tag_safe that disallowed XML namespaces.
 - add XML->tidy method.

0.41 01 Feb 2010
 - move SWISH::Prog::Utils perl_to_xml() feature to Search::Tools::XML.

0.40 31 Jan 2010
 - added ignore_length() feature to Snipper.
 - added treat_phrases_as_singles() feature to Snipper.

0.39 23 Jan 2010
 - switch from Search::QueryParser to Search::Query::Parser. This change means that some methods in Search::Tools::Query and Search::Tools::QueryParser were added, removed or modified. Please check the documentation.

0.38 22 Jan 2010
 - add support for wildcard at start of term in addition to end of term.
 - added Windows-1252 (cp1252) encoding helpers.
 - added Encoding::FixLatin as a dependency.
 - fix off-by-one errors in find_bad_*_report and find_bad_* UTF8 functions.
 - add debug_bytes() to UTF8 class.

0.37 06 Dec 2009
 - fix blead perl REGEXP change for Perls >= 5.11. [r2330]

0.36 3 Dec 2009
 - add __FUNCTION__ definition for those Perls (<5.8.8) that lack it.

0.35 30 Nov 2009
 - add UTF::byte_length() function just like bytes::length()
 - some attempts to compile under Win32 (programming a bit blind with nothing to test on...)

0.34 22 Nov 2009
 - make the bigfile test optional and make it use the 'offset' snipper to reduce mem use by 60%.

0.33 19 Nov 2009
 - switch default Snipper type to 'offset' to optimize for large target texts.
 - add Tokenizer->get_offsets() method in C/XS.
 - fix Snipper->show feature to work as the author expected it to. Do not return anything if no match.
 - refactor is_ascii C code and is_sentence_start() to return false if match on UPPER as opposed to Upper.

0.32 31 Oct 2009
 - fix mem leaks
 - optimize normalize_whitespace regex

0.31 14 Oct 2009
 - add missing dTHX; macro to st_malloc per RT #50509

0.30 13 Oct 2009
 - do not prefix ellipse to snippets in Snipper when as_sentences is true.
 - add attribute support to XML->start_tag().
 - bump Rose::ObjectX::CAF req version to catch bad param names and fix a couple.
 - fix as_sentences feature in HeatMap where $end offset was overrunning the tokens array length.

0.29 11 Oct 2009
 - tweak snippet sorting to value higher unique term frequency.
 - add XML->strip_markup as alias for no_html()
 - added as_sentences experimental feature to Snipper and supporting classes.

0.28 29 Sept 2009
 - add missing dTHX macro for 5.10 build.

0.27 29 Sept 2009
 - optimize XML->escape() and remove %XML::Ents as public variable. escape() is now in C/XS, borrowed from mod_perl.
 - add query_class() to QueryParser to allow subclassing Query.

0.25 19 Sept 2009
 - add missing $VERSION back to Keywords.pm to satisfy CPAN.

0.23 17 July 2009
 - change utf8_safe() XML method to change low non-whitespace ascii chars to single space. This makes them XML-spec compliant.

0.22 22 Jan 2009
 - continue fixing Transliterate bug exposed in version 0.20

0.21 22 Jan 2009
 - fix bug in init of Transliterate map that was triggered when multiple instances are created in a single app

0.20 16 Dec 2008
 - refactor Transliterate->convert(). now 244% faster.

0.19 16 Dec 2008
 - more tests
 - clarify use of ebit in Transliterate docs

0.18 02 Dec 2008
 - add more debugging to to_utf8() function.
 - make Text::Aspell optional, since it has non-CPAN dependency

0.17 22 May 2008
 - fix typos in S::T::SpellCheck
 - refactor some remaining classes to use Search::Tools::Object class

0.16 22 Nov 2007
 - refactor common object stuff into new Search::Tools::Object class
 - change behaviour of XML escape()/unescape() to return filtered values instead of in-place

0.15
 - fix t/09locale.t to skip if UTF-8 charset not available via setlocale()

0.14
 - fixed <version> in Makefile.PL

0.13
 - added File::Slurp to requirements, since tests use it.
 - changed 'use <version>' syntax to be portable.

0.12
 - change tests to force locale for spelling dictionaries, or skip if not found

0.11
 - fix bug in UTF8.pm where latin1 was flagged internally as UTF-8 and so fooled the native Perl checks.
 - rewrite is_latin1() and find_bad_latin1() as XS.
 - refactored is_valid_utf8() to use internal Perl is_utf8_string() plus is_latin() and is_ascii() checks to help reduce ambiguity.
 - hardcode locale into some tests so that latin1 is not magically upgraded to utf8 by perl.

0.10
 - fix bug in Tools.xs where NULL was being returned as SV* instead of &PL_sv_undef

0.09
 - separated the UTF8 checking into Search::Tools::UTF8 and use XS to check valid utf8. Among other things, fixes the string length bug on is_valid_utf8() that previously segfaulted if the string was longer than 24K.

0.08
 - fixed bug with S::T::XML utf8_escape() with escaping a literal 0
 - changed required minimum perl to 5.8.3 for correct UTF-8 support.
 - kris@koehntopp.de suggested changes to the default character map in S::T::T to better support multiple transliteration options. This resulted in per-instance character map and no more package %Map. See doc in S::T::T for map() method.

0.07
 - added more utf8 methods to S::T::Transliterate.
 - added $sane threshold to prevent segfaults when checking for valid_utf8 in long strings (like file slurps)
 - changed example/swish-e.pl to use SWISH::API::Object
 - fixed subtle regex bug with constructing word boundaries wrt ignore_*_chars

0.06
 - Kezmega@sbcglobal.net found a bug when running under -T taint mode. fixed in S::T::Keywords.

0.05
 - added spellcheck() convenience method to S::T
 - added t/11synopsis.t test
 - changed POD to reflect new methods
 - added query() accessor to S::T::SpellCheck
 - thanks to Kezmega@sbcglobal.net for the above suggestions
 - fixed POD example in S::T::HiLiter

0.04
 - added S::T::SpellCheck
 - fixed (finally I hope) charset/locale/lang issue by making it global accessor and checking for C and POSIX
 - reorged default settings in S::T::Keywords to set in new() rather than each time in extract()

0.03
 - fixed charset/locale issue in S::T::Keywords reported by Debbie Jones

0.02
 - added example/ scripts
 - fixed S::T::K SYNOPSIS to reflect reality
 - POD fixes
 - added is_valid_utf8() method to S::T::Transliterate along with valid utf8 check in convert()
 - rewrote S::T::Keywords logic to:
   * correctly parse stopwords (all are compared with lc())
   * return phrases as phrases
   * additional UTF-8 checks
   * parse according to RegExp character definitions
 - changed default UTF8Char regexp in S::T::RegExp
 - changed default WordChar regexp in S::T::RegExp
 - begin_characters and end_characters are no longer supported since they were logically just the inverse of ignore_*_char plus word_characters. The entire regexp construction was refactored with that in mind.
 - @Search::Tools::Accessors now provides (saner) way for subclasses to inherit attributes like word_characters, stemmer, stopwords, etc.
 - S::T::RegExp kw_opts is no longer supported
 - stopwords are intentionally left in phrases, as are special boolean words
 - added ->phrase accessor to S::T::R::Keyword
 - S::T::HiLiter now highlights all phrases before singles so that any overlap privileges the phrase match. Example would be 'foo and "foo bar"' where the phrase "foo bar" should receive precedence over single word 'foo'.

0.01 2006-06-22T08:06:59Z
 - original version
\y -> \x -> (x + y) (Function)

This is a function that returns a function. If you pass '4' to this function, you get back another function that adds '4' to its input. If you passed 'z' to this function, you get a new function that adds 'z' to its input:
(\y -> \x -> (x + y)) 4 => \x -> (x + 4)
(\y -> \x -> (x + y)) z => \x -> (x + z)

I.e., one has essentially bound the 'y' to '4' or 'z'. Of course, excepting the possible advantages regarding LazyEvaluation and Futures/Promises, there isn't much reason to keep 'y' around... at least not in a "pure" functional language where one can always pass-by-value. However, in an "impure" functional language, the binding structure matters a great deal more. Consider a 'setf' function that takes a variable and a value and mutates the variable to have the value:
\y -> \x -> (setf y (x + y)) (Function with Side-Effects)
(\y -> \x -> (setf y (x + y))) z => \x -> (setf z (x + z))

Here, the binding must really refer to the variable 'z', not to anything else. This would allow one to, for example, map this impure function across a list of numbers and have, as a consequence, the resulting sum stored in 'z'.

In traditional functional programming languages, there is no means to decompose functions in order to access their innards and modify them (or create new functions based upon them). This can be considered a significant weakness in some cases; it can make many forms of aspect-oriented programming difficult to impossible, and it can make optimizations based on context of usage a great deal more difficult. E.g. suppose you had a function to apply to a collection that will return all tuples starting with ("Bob",x,y). If function decomposition was possible, the collection to which you pass this function could peek inside and say: "Oh! I have everything indexed by that first parameter in each tuple! Lemme go grab all the 'Bob's." then do so without iterating over the collection... as might have originally been specified by the function.

Fortunately, the lack of decomposition is not intrinsic to functional. There is no fundamental reason that functions cannot be decomposed as well as composed. In practice, the cost would be memory and database/filesystem space... which will be significantly greater if functions are stored twice (once in compiled form and once in decomposable form).

Dynamic strings are a bit different in both usage and properties. One can construct dynamic strings in some incredibly arbitrary manners... e.g. "y)" and "(x +" - neither of which are syntactically acceptable by themselves - can together form the string "(x + y)" - which might then be evaluated. Indeed, one can perform arbitrary transformations over strings (both composition AND decomposition) prior to evaluating or executing them.
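Returning to the 'setf' example: in a language with closures, an impure function really does capture the variable rather than its value. A small Python sketch of the "map this impure function and the sum lands in z" idea (illustrative only; the names are mine):

```python
def make_summer():
    z = 0  # plays the role of 'z' above

    def add(x):
        nonlocal z      # the closure binds the *variable* z, not a copy
        z = x + z       # the side effect of (setf z (x + z))
        return z

    def value():
        return z

    return add, value

add, value = make_summer()
for n in [1, 2, 3, 4]:
    add(n)              # "mapping" the impure function over a list
print(value())          # 10 -- the side effects accumulated in z
```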
It should be noted that dynamic strings accept inputs via the naming of free variables; thus dynamic strings are, essentially, dynamically scoped. They may also describe complete anonymous functions (if the language supports those, too), so not all variables need to be free. It should be noted that if there are never any free variables, there is no reason to use dynamic strings over the various alternatives.
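This dynamic resolution of free variables can be imitated in Python with eval over explicit namespace dicts. A hedged sketch (Python's eval will not consult the caller's locals automatically, so the "scope" must be passed by hand; compare myFunc in the hybrid-language example):

```python
genv = {"x": 1, "y": 2, "z": 3}   # stand-in for the global scope

def my_func(code):
    # Free variables in `code` resolve against a merged scope in which
    # the callee's locals shadow the globals -- simulated dynamic scoping.
    local_env = {"x": 4, "y": 5}
    return eval(code, {}, {**genv, **local_env})

print(my_func("x + y"))       # 9 -- the string sees the callee's x and y
genv["z"] = genv["x"] + genv["z"]
print(my_func("x + y + z"))   # 4 + 5 + 4 = 13
```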
global x := 1
global y := 2
global z := 3

function myFunc(string)
   let x = 4
   let y = 5
   return evaluate(string)

procedure myOp(string)
   static local x := 3
   execute(string)

assert(9 = myFunc("(x + y)"))
myOp("x := 8")
assert(1 = x)
myOp("z := x + z")
assert(11 = z)
myOp("z := z - myFunc(""x"")")
assert(7 = z)
assert(16 = myFunc("(x + y + z)"))
assert(10 = (x + y + z))

procedure test(string)
   local z := 10
   assert(19 = myFunc("(x + y + z)"))
   assert(13 = (x + y + z))
   assert(49 = (evaluate("\x -> (x + z)") 42))

(string evals to function, hybrid language)

The cost of DynamicScope is the same here as it is in any other language that supports it: variable names must be kept around, runtime lookups will occur where the variable choice can't be determined statically, there are potential type-issues if you don't know every place a procedure with free variables might be applied, and it makes variable-usage non-obvious such that massive confusion can occur when a programmer decides to refactor or modify a program and eliminates, introduces, and renames various variables. These aren't issues unique to dynamic strings, though it should be said that "functions whose scope can inherit caller's scope" have more in common with dynamic strings than they do with prototypical "functional". One might reasonably argue that half the DynamicStringsVsFunctional debate should be moved over to StaticVsLexicalScoping.

One can conceivably have 'higher order dynamic strings': dynamic strings that, when evaluated, return dynamic strings. Unlike impure functional, it is incredibly... I will call it "inelegant"... to bind particular variables into a dynamic string. E.g. instead of "(setf z (x + z))" for a particular 'z', you'll need something more akin to "(setf var@0x5551212 (x + var@0x5551212))". It can be done; you just need some canonical means of referencing variables in the system.
Of course, those of you who are security-conscious or who concern yourselves with garbage-collection are probably cringing just looking at that. I am, too... though security is a problem for dynamic strings for various other reasons.

This flexibility of Dynamic Strings can be powerful, and (like many powerful things) it can be abused. Unfortunately, the power of Dynamic Strings is also difficult to control. To start, their flexibility of creation and dynamic scoping diminish the effectiveness of various forms of static code analysis, including type checking - something programmers often utilize to help control and verify correctness of their code.

Dynamic scoping also introduces its own set of gotcha's: unless the set of legal "free variables" is very well documented and enforced, a vector exists to organically increase coupling with neither oversight nor locality of reference (e.g. where one piece of 'dynamic string' is relying upon variable names supposedly "hidden" deep in the implementation code), which breaks principles of encapsulation and makes refactoring more difficult (both encapsulation and refactoring are mechanisms utilized by programmers to help control code complexity).

Finally, while the ability to construct and transform strings is quite powerful, it also creates a gaping security hole for various forms of code-injection attacks... which can be wholly eliminated only by magnificent efforts at code-sanitization or by abstaining from external sources of dynamic strings.

Unfortunately, those few places where dynamic strings display an objective feature-advantage over a combination of functional & macros are also those places where they become most difficult to control: access to arbitrary free variables and rather arbitrary construction utilizing strings from external sources.
For example, if you DO choose to enforce discipline regarding the "free variables" accessible with dynamic scoping, you lose the feature-advantage over using one function and passing to it every legal variable (or an object containing them). Similarly, if you DO choose to eliminate external sources of code, you lose the feature advantage of wholly arbitrary construction of expressions - anything you might choose to do would be doable with function composition. And, while you might progress with code-sanitization, it will be a non-trivial effort to simultaneously make it happen and maintain any real feature-advantage over functional. The above discussed mostly feature differences. If one chooses to focus, instead, upon efficiency and computation issues, functional comes out clearly ahead of dynamic strings. Dynamic strings make difficult the sort of static code-analysis necessary for many compile-time optimizations, the dynamic scoping requires extra code-space and execution costs to track and lookup variable-names, there is parsing overhead, and there may be significant construction overhead (especially if one decides to leverage 'higher-order dynamic strings' or starts trying to do variable binding and injecting stuff like "var@0x5551212" into strings). Comparatively, functional offers far greater opportunities for optimization. Note, also, that macros were listed in combination with functional. Anyone can look up and see this topic isn't named: "DynamicStringsVsFunctionalWithMacros". However, it was mentioned below that dynamic strings offer some opportunity to introduce new control-structures. E.g. one can create a function called "until", offer strings to it for a body and condition, and implement it using evaluate, execute, and a while-loop. Given a little precision regarding control of the evaluation context (e.g. 'uplevel'), one can even go about introducing some control variables that don't interfere with the 'free variable' set utilized by 'evaluate'. 
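A hedged Python sketch of that string-based "until" (Python has no 'uplevel', so an explicit environment dict stands in for the caller's scope; the names are mine):

```python
def until(condition, body, env):
    """Run `body` until `condition` becomes true; both are strings
    evaluated against the caller-supplied environment `env`."""
    while True:
        exec(body, env)            # execute the loop body in the caller's 'scope'
        if eval(condition, env):   # then test the condition in that same scope
            break

env = {"seen": []}
env["x"] = 1
until("x > 10", "x = x + 1\nseen.append(x)", env)
print(env["x"], len(env["seen"]))  # 11 10
```

Because the body runs before the condition is tested, the body executes at least once, matching the usual until-loop semantics.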
That sort of feature can also be implemented in many functional programming languages, but relies upon either laziness or syntax extension. Macros are the most common example of syntax extension. Depending on how well the language performs partial-evaluation and optimization of static strings (as are available in the "until" example), macros might be of greater or equal efficiency than use of dynamic strings to obtain the same results. My own tendency would be to reject support for dynamic strings in a future programming language in favor of support for functional and syntax extension. These have much better properties for optimizations and static code analysis, has better security properties for distributed code involving participants with fractured trust, and still possesses the vast majority of the usable power of dynamic strings. However, I would add one more component: make part of a standard library a 'language' component that can parse strings into a common construct and evaluate them (within a limited and specified context) into usable values. Also make every first-class object in the language 'showable' as a cross-platform compatible unicode string - even functions and procedures. This isn't quite true to 'dynamic strings', but it does suffice for the purpose of making it easy to save and restore even complex functions, and is of value as a default encodec/decodec for code transmission in a distributed system. This sounds like the typical and expected response of a strong-typing/compiler proponent, of which I am not. I am a type-free/scripty/dynamic fan (it may also depend on the environment/domain of the software). However, some kind of limited expression evaluation system may be sufficient for most uses, such as in the PayrollExample (where formulas are stored in the DB, something HOF's have a hard time with). But having a language parser for a different language in a language kind of seems redundant and feature-happy, sort of an AbstractionInversion. 
If security is really the problem, there are potential ways to allow "Eval" to manage/control them without adding a language to a language. -- top

I like provable correctness, high-scalability languages (HW drivers to Web services and AI scripting), distributed systems, distributed programming, and security. The desire for correct and secure code that can be safely distributed causes me to especially shy away from 'scripty/dynamic' stuff. High scalability also demands compilation capability, but I'd favor a language that is also readily utilized in a read-eval-print (REPL) loop. It should be noted that 'limited expression evaluation' is something for which I have stated my support. A language that comes with excellent support for parsing strings into expression-structures and functionally evaluating these structures in a programmer-defined environment would receive a thumbs-up from me. One might even call the evaluator "eval". Ultimately, however, such an "eval" is just a PlainOldFunction if it lacks access to a dynamic environment. Having built-in parsers for different languages within a language standard library would be a rather poor abstraction. Better would be to add language support that allows you to "build-a-parser with-this-syntax" to produce a full parser for any language for which you can define the syntax - i.e., Regular Expressions on Steroids. I like syntax extension, and I like more than just macros and operator overloading. It probably wouldn't be possible to define just one static parser for MyFavoriteLanguage, and it really would be redundant to define a bunch of different parsers for the same language.
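To make the 'limited expression evaluation' idea concrete, here is a small sketch in Python (chosen only for illustration; the name limited_eval and the whitelist are invented for this sketch). It parses a string into an expression tree and evaluates it against a caller-supplied environment, rejecting anything outside a small set of allowed constructs:

```python
import ast
import operator

# Operators the evaluator is willing to apply; everything else is rejected.
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Gt: operator.gt, ast.Lt: operator.lt,
}

def limited_eval(text, env):
    """Parse `text` into an expression tree, then evaluate it using only
    the names in `env` and the operators in _OPS."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.Name):
            return env[node.id]          # only names the caller provided
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Compare) and len(node.ops) == 1 \
                and type(node.ops[0]) in _OPS:
            return _OPS[type(node.ops[0])](walk(node.left),
                                           walk(node.comparators[0]))
        raise ValueError("disallowed construct: %r" % node)
    return walk(ast.parse(text, mode="eval"))

print(limited_eval("rate * hours + bonus",
                   {"rate": 20, "hours": 3, "bonus": 5}))   # 65
```

This is roughly the PayrollExample scenario: formulas stored as strings in a database can be evaluated in a specified, limited context, without handing the string the full power of the host language.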
  x = 1;
  until {x > 10} {
    x = x + 1;
    foo(x);
  }

  // define Until loop
  function until(condition, loopBody) {
    var reloop = true;
    while {reloop} {
      execute(1, loopBody);
      if {eval(1, condition)} { reloop = false; }
    }
  }

Here, curly braces more or less quote strings that are passed to functions (a TCL convention). Control structures are simply functions which process strings as blocks. The "1" parameter in Eval and Execute tells it to execute in the variable scope one level "up" the scope (call) stack. One can see that the Until loop in the first part has the following structure:
  until {condition} {loopBody}

This is just a function call that matches the function definition signature. (Some languages don't need to make a distinction between Eval and Execute.) -- top

Performance: If eval has to fully parse and interpret loopBody and condition each iteration, then this will likely be unsuitable for any non-trivial looping.

Flexibility: The hypothetical "1" argument to eval is very inflexible. Consider a case where the loopBody and condition are specified as arguments to an intermediate routine. Now the number should be "2", not "1". There is no binding between the context that creates the strings and the strings themselves, so this sort of problem is endemic and difficult to solve in a general way.
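The level-counting fragility can be demonstrated in Python (used here only for illustration; eval_up and the run_* helpers are invented names). The same level number that works for a direct call silently points at the wrong frame once an intermediate helper is introduced, while a closure needs no level bookkeeping at all:

```python
import inspect

def eval_up(expr, levels):
    """Evaluate `expr` in the frame `levels` calls up from here,
    emulating a Tcl-style numeric uplevel argument."""
    frame = inspect.currentframe().f_back   # the immediate caller
    for _ in range(levels - 1):
        frame = frame.f_back
    return eval(expr, frame.f_globals, frame.f_locals)

def run_direct(cond):
    x = 42
    return eval_up(cond, 1)      # "1" is right: x lives in this frame

def run_indirect(cond):
    x = 42
    def helper(c):
        return eval_up(c, 1)     # same "1" now names helper's frame -- no x!
    return helper(cond)          # raises NameError

def run_closure():
    x = 42
    cond = lambda: x > 10        # the closure carries its own environment
    def helper(c):
        return c()               # no level counting needed
    return helper(cond)
```

Calling run_direct("x > 10") works, run_indirect("x > 10") fails with a NameError because the hard-coded level is now wrong, and run_closure() works no matter how many intermediaries are involved.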
  (defun until-1 (condition body)
    (funcall body)
    (while (not (funcall condition))
      (funcall body)))

  (setq x 1)
  (until-1 (lambda () (> x 10))
           (lambda () (incf x) (print x)))

I've replaced "foo" with a statement to print x. This prints out 2 through 11, as your example would with the same modification. Note that the code fragments passed to until-1 have to be functions. The lambda expressions create unnamed functions. They're defined in the same context as x, so there's no concern about "levels". This is the magic of closures. Although the lambdas add a lot of "noise", they do serve the useful purpose of immediately and obviously marking the contained code as something executable. Using strings, one would have to examine the code more carefully to distinguish between mere data and data-to-be-evaluated. A more attractive, better-performing, and easier-to-use solution uses a Lisp macro:
  (defmacro until-2 (condition &rest body)
    `(progn
       ,@body
       (while (not ,condition)
         ,@body)))

  (setq x 1)
  (until-2 (> x 10) (incf x) (print x))

This solution is obviously easier to use. In defmacro, the first argument is bound to "condition". The keyword &rest tells Lisp to collect the remaining arguments into a list, and bind that list to the parameter "body". Because until-2 is a macro, Lisp doesn't evaluate the arguments before calling until-2; however, the Lisp reader has converted the source code text to sexprs that represent it. These sexprs refer to x in the same context where the expressions were read - again, no concerns about scope here. until-2 creates and returns new code, but doesn't execute that code (this is expected, because it's a macro). Now, assume we typed the call to until-2 in the REPL (the read-eval-print loop which you typically work in when developing or debugging Lisp). The Lisp evaluator will immediately execute the code returned by until-2. Had we been compiling rather than evaluating, then the code returned by until-2 would be compiled - the binary need not contain any trace of the original call, just the resulting macro-generated code. The progn expression is just a way of packaging several expressions together into a single expression - it evaluates each in turn, and returns the value of the last expression. In other words, it makes a block, like curly braces do in many other languages. (This could be written without the progn, by using a Lisp looping construct other than "while", or even Lisp's equivalent of gotos, but I figured that a while-loop is nice and intuitive.) The back-quote, comma, and comma-at symbols in until-2 are syntactic conveniences for common things that people do in macros. They make your example and mine look quite similar. Basically, the back-quote says "the next expression is a template".
The comma says "replace the next expression in the template with its value" - the value of condition is the (> x 10) in our example. The comma-at is similar, except that it assumes the value will be a list, and splices it in place. (Note that "body" will always be bound to a list, even if you just passed one expression for the body, because the &rest keyword always collects the trailing arguments into a list. This works even if there is only one trailing argument, or none.) For our purposes, they sort of obscure the fact that the macro is actually operating on sexprs, so let's see what until-2 would look like without them:
  (defmacro until-3 (condition &rest body)
    (append (list* 'progn body)
            (list (list* 'while (list 'not condition) body))))

"list" forms a list of all its arguments, list* prepends elements onto the last argument (which is usually a list), and "append" concatenates its arguments (which are usually all lists). The single-quote prevents the following expression from being evaluated. The language keywords that we want to embed in the result - progn, while, not - are all quoted. They're just symbols; we don't want them to be evaluated. until-3 looks quite unwieldy to define, but it shows how a macro can manipulate lists to build code. The back-quote technique is much preferred for simple cases like this example, but some macros do very complicated things with code - looping through expressions, picking out symbols and examining their characteristics, rearranging the order of things, replacing symbols with bits of code, etc. All this can be done without having to parse any strings. The Lisp reader has already broken everything down into (mostly) symbols and lists for you. It's as if you had direct access to the output of the lexer in many other languages. Say, for instance, in your original example, that we wanted to time each statement in the body of your loop. This would be fairly easy in Lisp. There's a macro named "time" which evaluates its argument, prints the elapsed time out, and returns whatever the evaluation returned. So we want to wrap each expression of "body" in a call to "time". Easy:
  (defmacro timed-until (condition &rest body)
    (let ((timed-body (mapcar (lambda (expr) (list 'time expr)) body)))
      `(progn
         ,@timed-body
         (while (not ,condition)
           ,@timed-body))))

"let" introduces new local variables; in this case, timed-body. "mapcar" applies a function to each element of a list, and returns all of the results as a list. So the line starting with "(let ..." defines and initializes "timed-body" to the results of calling mapcar. In this case, the list that mapcar operates on is the value of "body", containing all the expressions that go in the body of our "until" loop. The function is a lambda form that takes one argument, and wraps it in a list with time at the head - that's the representation of a function call to time. The rest of the "let" is just the back-quoted "progn" form of until-2, except that we use "timed-body" instead of "body". To do this using strings, you'd have to pick apart the body string. Search for statement separators? Sure, that might work most of the time - except when the statements have embedded strings in them that happen to contain statement separators. To avoid that sort of problem, you'd actually have to do a full lexical analysis of the string! That's exactly why working with the code as structured data is advantageous - the system has already done that work for you. (Since I'm very much a part-time Lisper, corrections to any of the above are welcome. I tested the code using AllegroCL.) -- DanMuller

As a person with a reasonable amount of Tcl experience, and only a small amount with Lisp, I figured it might be useful for me to jump in with some Tcl code. The following implements an until control structure. It runs the body it's passed in the context of its caller until the condition (also evaluated in the context of its caller) becomes true.
  proc until {condition body} {
    uplevel 1 $body
    while {![uplevel 1 [list expr $condition]]} {
      uplevel 1 $body
    }
  }

I made the above code simple, so it's easy to read. In the real code, there would probably be only one uplevel (since it would be faster). Using the above, the following code would print out 0 through 9 (assuming x starts at 0):
  until { $x >= 10 } {
    puts $x
    incr x
  }

As a minor note, neither the body nor the condition of the until call needs to be reparsed each iteration of the loop. They are parsed once when it starts to run and converted to bytecode. The bytecode is then used for each iteration. -- RHS

Thanks for the contribution! Can you comment on some of the other issues brought up on this page, e.g. the possible fragility of this Tcl "until" function, or the feasibility of writing "timed-until" in Tcl (or other languages that rely on evaluation of strings for language extensions and closure-like features)? -- DanMuller
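As a rough sketch of what "timed-until" looks like when body steps are callables rather than strings, here is a hedged Python analogue (illustrative only; the function names are invented). Each step is wrapped in a timing closure once, up front, with no string splitting or re-parsing:

```python
import time

def timed_until(condition, *body):
    """Run each body step (a callable), reporting its elapsed time,
    until condition() becomes true. Body runs at least once."""
    def timed(step):
        def run():
            start = time.perf_counter()
            step()
            print("step took %.6fs" % (time.perf_counter() - start))
        return run
    steps = [timed(s) for s in body]   # wrap each step exactly once
    while True:
        for s in steps:
            s()
        if condition():
            break

# The condition and body close over shared state; no level numbers needed.
state = {"x": 1}
timed_until(lambda: state["x"] > 3,
            lambda: state.update(x=state["x"] + 1))
print(state["x"])   # 4
```

Because the steps arrive as separate callables, the "find the statement separators" problem from the string version simply never arises.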
  some code...
  logging(on=true)
  some complex deeply nested code...
  logging(on=false)
  some more code...

Or even:
  some code...
  logging(on=true)
  some complex deeply nested code...
  if some-condition
    logging(on=false)
  end-if
  some more code...

-- Top

Yes, this is pretty standard stuff for people learning macros in Lisp, and there are numerous implementations of exactly these sorts of things. This particular example you'd actually do by wrapping the function in another function; you can't actually modify the internals of an existing function, because the standard doesn't require the function to be stored or implemented as a list; in fact, the standard doesn't require a Lisp implementation to have an interpreter. Some implementations don't have one, and always compile functions to byte code or machine code. (CormanLisp? is an example of the latter, on Windows.) This doesn't detract from the power of macros, however, which execute between the reader and the compiler-or-interpreter. It also doesn't mean that you lose access to EVAL; it will just compile-then-run on such implementations. The TRACE function happens to be standard in Lisp, but you could write one easily as follows. (Well, pretty easily - I spent some time looking things up in the standard at my current level of Lisp fluency.)
  (defun my-trace (fun)
    (let ((original-function (symbol-function fun)))
      (setf (get fun 'traced-function) original-function)
      (setf (symbol-function fun)
            (lambda (&rest args)
              (format *trace-output* "Entering function ~A~%" fun)
              (apply original-function args)
              (format *trace-output* "Finished function ~A~%" fun))))
    (values))

  (defun my-untrace (fun)
    (setf (symbol-function fun) (get fun 'traced-function)))

It took me about ten minutes to get this right. Someone who knows Lisp well could've done it without waking up. You pass a symbol to MY-TRACE, which gets the original function and stores it on the property list of the symbol. (Every symbol has a property list; they're not used much, but can be handy for this sort of thing. You could also store it away somewhere else, for instance in a global hash.) It then redefines the function associated with the symbol, very simply. The new function is a closure; notice that it uses two variables in lexical scope, FUN and ORIGINAL-FUNCTION. No bookkeeping is required to reference these; Lisp knows that the function object we create with the LAMBDA form needs them, and it does the bookkeeping. I used the VALUES form to make MY-TRACE return nothing, since we run it for side-effect; I wasn't consistent about this in MY-UNTRACE. MY-UNTRACE undoes the damage. Example run, using AllegroCL, with the above functions defined:
  CL-USER> (defun foo (a) (format t "Yop! => ~A~%" a))
  FOO
  CL-USER> (foo "Haha")
  Yop! => Haha
  NIL
  CL-USER> (my-trace 'foo)
  ; No value
  CL-USER> (foo "Haha")
  Entering function FOO
  Yop! => Haha
  Finished function FOO
  NIL
  CL-USER> (my-untrace 'foo)
  #<Interpreted Function FOO>
  CL-USER> (foo "Haha")
  Yop! => Haha
  NIL

-- DanMuller

Let me see if I got this. You are doing this by having a function that wraps the target function with a closure that provides start and end behavior, and redefines the name. Then removing this wrapper for the end-trace operation by repointing the function name to the original, which was stored in a map associated with the function (symbol) upon trace-on? What I really had in mind is having it automatically perform a standard "sandwich" for every operation, not just those explicitly designated. If a language provides hooks (events) for the start and end of such, it should be fairly straightforward. But I was wondering if there was a way to do it without relying on built-in language hooks. -- Top

Yes, you got it. Wrapping each expression can't be done "after the fact", i.e. after a function has already been defined, at least not in standard CommonLisp, because the standard doesn't guarantee that a function which is already defined is represented in any way that you can pick apart. In other words, the thing returned by (symbol-function 'foo) is opaque. This flexibility allows implementations to generate efficient code. If by "automatic" you mean for every statement in every already-existing function, then I can't imagine how that could ever be done without some sort of built-in language hook that ran before or around each statement. And that would place limitations on the form that such code could be stored and executed in, since optimized code doesn't preserve the original statement boundaries. You might be able to wrap individual statements of many functions by reloading or recompiling them in the right environment, though.
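The my-trace/my-untrace pattern translates readily to any language with closures. Here is a hedged Python sketch (illustrative; my_trace and the holder object are invented for this example) that rebinds a name to a wrapper closure and stashes the original for later restoration:

```python
import types

def my_trace(holder, name):
    """Replace holder.name with a closure that logs entry and exit.
    The original function is kept on the wrapper for my_untrace."""
    original = getattr(holder, name)
    def traced(*args, **kwargs):
        print("Entering function", name)
        result = original(*args, **kwargs)   # closure over `original`
        print("Finished function", name)
        return result
    traced.__wrapped__ = original
    setattr(holder, name, traced)

def my_untrace(holder, name):
    """Undo the damage: restore the stashed original."""
    setattr(holder, name, getattr(holder, name).__wrapped__)

# Demo against a stand-in namespace (a module object would work the same way).
m = types.SimpleNamespace(foo=lambda a: a * 2)
my_trace(m, "foo")
print(m.foo(21))    # trace lines around the result, 42
my_untrace(m, "foo")
```

As in the Lisp version, no bookkeeping is needed to keep `original` alive; the wrapper closure references it, so it persists for as long as the wrapper does.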
Reloading or recompiling a function or file is usually pretty trivial, so that's a minor inconvenience. The usual mechanism for defining named functions is the standard macro DEFUN, and the standard doesn't allow conforming programs to redefine it (for obvious reasons). However, it does allow most standard symbols to be defined temporarily as symbol macros, which might allow you to hook into the definitions of functions and manipulate them this way before they're defined. (I'm not familiar with the use of symbol macros yet.) Also, I suspect many implementations would let you get away with creating a shadowing definition of DEFUN. And finally, you could write a MY-DEFUN to use instead of DEFUN, and have it respond to a global setting to either define a function normally, or first process it however you see fit. So, yes, there are ways to do it. However you end up hooking into the function definition process, the basic code you'd write to do the wrapping would look much like the earlier example using TIME. If you wanted to reach more deeply into nested code for wrapping, though, then the code would get a little bit more complex. Such systems are called "code walkers" in the Lisp world. I haven't looked at any in detail, but I think if the wrapping needs are simple, then the code walkers can be, too. They would have to expand any macros that they encounter (the system provides functions to do this), and recognize special operators (like IF) to know when and when not to descend into the code tree. Code walkers are not much different in form from Lisp evaluators written in Lisp, examples of which you can find in numerous tutorials on Lisp. (The topic is related to that homoiconic thing that DougMerrit? was going on about.) -- DanMuller

I would suggest AdvantagesOfExposingRunTimeEngine for the times when you need to get at the guts of something that the language builders had not anticipated.
Further, if the run-time engine had the usual RDBMS features, then one could set a trigger on any variable to monitor its reference and changes. In short, we don't need to complicate the language for something like the tracing request; simply make a back-door available for the few times when we need strong meta features. Of course some propose they are more than "occasional use", but I have yet to see sufficient UseCase occurrences to justify a high frequency claim. -- top
  proc splitScript {script} {
    # Protect escaped newlines
    set script [string map [list \\\n { }] $script]
    # Break script into commands
    split $script "\n;"
  }

  proc time-do {args} {
    if {[llength $args] > 0} {
      set time [time {uplevel 1 $args}]
      puts "$args took $time"
    }
  }

  proc timed-until {condition body} {
    set timedbody ""
    foreach cmd [splitScript $body] {
      append timedbody "time-do $cmd\n"
    }
    uplevel 1 $timedbody
    while {![uplevel 1 [list expr $condition]]} {
      uplevel 1 $timedbody
    }
  }

And the test:

  set a 12
  timed-until {$a < 0} {
    puts "In loop"
    incr a -1
    puts "and again?..."
  }

Not exactly rocket science. The Lisp version is easier and cleaner, but that's the design decision they went for. -- Neil

So the above challenge was to automatically wrap each and every command in the until-block with a timer function? I did not catch the automaticness when I read it. I wonder something: if one cannot tell whether a list is a command or data except by how it is used in Lisp, then is data timed also? -- top

Cool! I'm impressed, up to a point. I poked around for Tcl documentation, but I couldn't find anything that had an explanation of variable scopes that made sense to me in the small amount of time I spent on it. The addition of an uplevel command at each level of function call seems like a neat solution, although I still wonder how you would deal with it if timed-until needed another local variable, as described earlier (but not shown in any of the examples so far). (Maybe that's what upvar could be used for, as opposed to uplevel? If so, the comment about "simply use [uplevel 1]" is deceptive.) I also wonder how robust the splitScript function is in the general case - he does seem to imply that in the general case, you'd have to parse the strings. (I wonder what he had in mind with the Script and Command data types?) I might feel compelled to install a Tcl interpreter to try this. Oh wait, I already have tclsh on my Linux system! (Some time later...) This example works, too:
  set a 3; timed-until {$a < 0} {puts "In loop: $a"; incr a -1; puts "and again?"}

I'm not sure how splitScript breaks up the string, but it did even when I put it all on one line. Interesting. Even this example, with extra braces in the literal string, seems to work:
  set a 3; timed-until {$a < 0} {puts "In loop: {$a}"; incr a -1; puts "and again?"}

To answer your question regarding code and data: Well, no, because you don't expect data in this context. The call to mapcar iterates through the topmost level of the nested list that represents the statements (forms or expressions in Lisp jargon) to be executed in the body. If one of these forms is data, that would be an error on the part of the caller. mapcar won't descend inside these forms, so any data (or nested calls to other functions) embedded there is not wrapped. timed-until is an exceedingly simple example of a class of functions called "code walkers" in the Lisp world. I'm impressed with what can be done with Tcl's string-based approach. I still find the Lisp macro approach conceptually simpler for code-walking applications, but the difference is smaller than I expected. I'm not sure if the Tcl example is more like the dynamic (lambda/closure) method or the macro method. Perhaps the distinction doesn't apply to Tcl's execution model. In Lisp, the easy creation of closures makes things a bit simpler than the uplevel stuff, I think - but the proof would be in more complex examples involving closures over variables in different lexical scopes. Also, closures can capture variables in scopes that can then go away, but the variables nonetheless remain valid when the closure is used later. (I hope I have the terminology right here. I can explain that in more detail if you're interested.) I would be quite surprised (again!) if Tcl can do something like that. Hmmm... I just looked up TIP 187. This is apparently not a description of a technique, but a draft of a proposed change to Tcl to support lambdas (anonymous functions). No mention is made of closures. TIP 194 describes how to do something similar without extending the language - but mentions a patch to make it work efficiently. Interesting.
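The point about closures capturing variables in scopes that have gone away can be shown in a few lines of Python (illustrative only; the names are invented). The local `count` survives the call that created it, and each closure gets its own independent binding:

```python
def make_counter():
    # `count` lives in a scope that ends when make_counter returns,
    # yet the closure keeps the binding alive -- and mutates it.
    count = 0
    def step():
        nonlocal count
        count += 1
        return count
    return step

c1 = make_counter()
c2 = make_counter()
print(c1(), c1(), c2())   # 1 2 1 -- separate bindings per closure
```

This is exactly the behavior a string-plus-uplevel scheme struggles with: by the time the string is evaluated, the frame holding the variable may no longer exist.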
I don't understand the part about namespaces in TIP 194, but I guess I'll have to read more about Tcl scoping. Thanks! Overall, I still think Lisp's approach is easier and more comprehensive, but I can sort of intuit how a string-based approach might provide similar functionality, with the addition of some bells and whistles to make the creation and application of closures easier. And you've gotten me curious about Tcl now. Always fun to be introduced to a new language! -- DanMuller

As the "Neil" who wrote the above (on the comp.lang.tcl newsgroup, as it happens), I'd like to add a couple of points. Firstly, it may not be clear from what I wrote, but I do consider Lisp macros to be a more elegant technique. Tcl owes a lot of its heritage to Lisp. For instance, the reason the splitScript procedure is so short (and yet should work for any script) is because Tcl is very regular. A script is a list of commands separated by (unescaped) newlines or semicolons. Tcl could still learn some more tricks from Lisp (and Scheme). Lambdas will help with that. Tcl currently can't do some of the fancy closures stuff that Lisp and others can do (at least, not without considerable effort). I would urge looking at Tcl, if only because of its fantastic implementation (things like cross-platform networking, etc.). -- Neil Madden

Re: Tcl currently can't do some of the fancy closures stuff that Lisp and others can do (at least, not without considerable effort)

What is an example, and how practical is it? Some of us (an AnonymousChoir) suspect that many closure/HOF tricks are mostly MentalMasturbation, or just minor improvements of a few percent code reduction. -- AnonymousDonor?

Lexical closures are used ubiquitously and effortlessly in typical CommonLisp code - nothing contrived about it at all. Every time you see a LAMBDA form, a lexical closure is being created if there are any references from inside that form to variables in the enclosing lexical scope.
A trivial toy example off the top of my head: Sum the remainders of dividing each of a list of numbers by a given divisor:
  (defun sum-remainders (divisor numbers)
    (reduce (lambda (x y) (+ x (rem y divisor)))
            numbers
            :initial-value 0))

Contrast this with explicit looping and accumulation:
  (defun sum-remainders (divisor numbers)
    (let ((y 0))
      (dolist (x numbers y)
        (incf y (rem x divisor)))))

Not terribly different in effort to write, but these are simple examples. The first is much quicker to come up with, after a little practice. The first example has a LAMBDA form that creates an anonymous function to pass to the built-in higher-order function REDUCE. The lexical closure is important because this anonymous function references divisor directly. You don't even really have to think about this; it just happens. LAMBDA forms are used very often as arguments to higher-order functions, often to avoid writing tediously trivial (and error-prone) looping code over and over again. If you consider such things "tricks" and MentalMasturbation, I'm sorry for you, because you'll be missing out on fun and easy ways to write programs.

I guess so. I don't see the practicality in it. Sorry, I just don't. Looping code is usually not the bottleneck of complexity I encounter day to day. Maybe when somebody figures out how to simplify challenge #6 using FP, my interest may be triggered again.

Some interesting side notes on the example: I believe that in Graham's mysterious Arc language (envisioned as a successor to Lisp), the LAMBDA form would be less verbose and look something like this:
  [+ _ (rem _ divisor)]

As an interesting note to the interesting side note, someone has already written code to add similar syntax to Common Lisp, using the built-in standard capabilities of the language! -- DanMuller
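For readers who don't follow Lisp, the same sum-remainders pair can be sketched in Python (illustrative only; reduce comes from functools). The lambda closes over `divisor` exactly as the Lisp LAMBDA does, with no explicit bookkeeping:

```python
from functools import reduce

def sum_remainders(divisor, numbers):
    # Higher-order version: the lambda closes over `divisor`.
    return reduce(lambda acc, n: acc + n % divisor, numbers, 0)

def sum_remainders_loop(divisor, numbers):
    # Explicit looping and accumulation, for contrast.
    total = 0
    for n in numbers:
        total += n % divisor
    return total

print(sum_remainders(3, [4, 5, 7]))   # 4  (1 + 2 + 1)
```

Both versions compute the same thing; the point of the first is that the closure lets the reduction step see `divisor` without threading it through by hand.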
  <html>
  <body>
  <h1>Interesting Results</h1>
  <p>
  We created a time capsule by building a ship bigger on the inside
  than on the outside. Here are the results.
  </p>
  <h2>Data</h2>
  <table>
  [
  foreach result [getResults] {
      append htmlData <tr>$result</tr> \n
  }
  set htmlData
  ]
  </table>
  </body>
  </html>

Parsing this is trivial in Tcl. Assuming the data is in the htmlData variable, here we display it on stdout:
  puts [subst $htmlData]

OK, all of this might be just as easy in Lisp, I really do not know, but perhaps it gives you some ideas of the benefits of scripts-as-strings. Another is that you can do replace operations on template code for meta-programming purposes, with special parts marked with your own reserved tags. Just use the normal string handling functions you would use for any other data, followed by a suitable call to eval. -- KristofferLawson

Look up the term "closure" and study some examples in a language that supports them in order to understand the difference.
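Returning to the templating example for a moment: a rough Python analogue of the [subst] idea (illustrative only; the variable names are invented) keeps the template as inert string data until substitution is explicitly requested:

```python
from string import Template

# Build the row data with ordinary string handling, then substitute it
# into the template -- the template is plain data until this point.
rows = "".join("<tr><td>%s</td></tr>\n" % r for r in ["alpha", "beta"])
page = Template("<table>\n$rows</table>").substitute(rows=rows)
print(page)
```

This captures the "scripts/templates as strings" convenience being described, though, unlike Tcl's [subst], it substitutes only named placeholders rather than evaluating embedded commands.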
  set x 5
  set script [list doSomething $x]
  # ... some time later
  eval $script

This doesn't offer everything that closures do, but it is a step in the right direction. And what of the benefits of having a string model that were mentioned above? -- KristofferLawson

Note that a closure doesn't just get a snapshot of the state of variables in its environment; it captures references to bindings of variables (the association of a value with a variable) in its lexical environment. (And note also that it's about the lexical environment, not the stack.) In a language that is not purely functional, like Lisp, the values associated with the bindings can change on each execution of the closure, and can be modified by the closure. That's more complex and powerful than simply getting a snapshot of the environment state. (With respect to purely functional languages, I'm not sure if this distinction holds.) Closures aren't always needed - but the problem is that when you do want them, they're pretty difficult and ugly to simulate in most languages. Tcl's technique gives you some of the benefits of a closure, but the lexical environment of a string has to still be active when the string is evaluated; I don't know enough about Tcl to know if there's any way of working around that, but I would expect it to be very complex. I haven't seen any benefits of a string-based model described on this page that I agree with. The representation used for function objects is actually immaterial to the issues raised at the top of the page. The question of whether strings are good or bad methods of representation has more to do with other issues, e.g. the ease or difficulty of writing macros, storing and loading code as data, and execution performance. Perhaps a system that provides enough tools for working with its preferred representation of source code can be relatively convenient to use regardless of what that representation is. -- DanMuller

We need better UseCases to really compare.
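The snapshot-versus-binding distinction can be illustrated in Python (names invented for the sketch). The string freezes the value of x at construction time, in the style of the Tcl [list doSomething $x] above, while the closure keeps a live reference to the binding:

```python
x = 5
snapshot = "doSomething(%d)" % x   # value baked into the string now
thunk = lambda: x * 2              # closure: captures the binding, not the value

x = 7                              # rebind x afterwards
print(snapshot)   # doSomething(5) -- still the old value
print(thunk())    # 14             -- sees the current value of x
```

The string behaves like a snapshot; the closure tracks (and could even modify) the binding, which is the extra power being described.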
  Class StateHolder -parameter myState

  # Strictly, this isn't necessary, as myState is defined as a parameter:
  # an automatic accessor method is provided.
  StateHolder instproc changeState {newState} {
    [self] myState $newState
  }

  proc setupStateMap {} {
    set st1 [StateHolder new]
    set st2 [StateHolder new]
    bindEventHandler event-A [list $st1 changeState newState-A]
    bindEventHandler event-B [list $st1 changeState newState-B]
    bindEventHandler event-C [list $st2 changeState newState-C]
  }

Then, sometime later, the event occurs and things happen as you might expect. -- KristofferLawson
  ;; define the manager class
  (defclass state-holder ()
    ((my-state :accessor my-state)))

  ;; define a method for changing the manager's state
  (defmethod change-state ((holder state-holder) new-state)
    (setf (my-state holder) new-state))

Now you have many state-holders, where each will be changed differently according to some event:
  (defun setup-state-map ()
    ;; create some state-holders
    (let ((st1 (make-instance 'state-holder))
          (st2 (make-instance 'state-holder)))
      ;; register events for invocation (*ev-map* is global)
      (bind-event-handler *ev-map* 'event-A (lambda () (change-state st1 "new-state-A")))
      (bind-event-handler *ev-map* 'event-B (lambda () (change-state st1 "new-state-B")))
      (bind-event-handler *ev-map* 'event-C (lambda () (change-state st2 "new-state-C")))))

I did this because I don't want clients to have a direct reference to the state-holders (it may be a security issue or something). But I allow them to invoke the event by
  (notify-event *ev-map* 'event-A)

The code flow goes like this:
  (defvar *ev-map* nil)

  (defun main ()
    (setup-state-map)
    (infinity-loop (handle-client *ev-map*)))

  ;; somewhere, some thread in handle-client will call notify-event

By the time I'm in the stack call of a client calling NOTIFY-EVENT, the stack frame of SETUP-STATE-MAP is already gone. Using lambda, this is possible because each lambda captures its environment (the variables ST1 and ST2). How would you use uplevel or eval for this?

It might be helpful to the readers to state the requirements in English instead of in code. My understanding is that you want to execute arbitrary event strings without risking messing up the global scope? I don't know about TCL in particular, but evaluation does not necessarily have to inherit any scope. In other words, ideally the scope in which evaluation happens should be fully controllable, such that one can pick a level in the stack (TCL can), or select no scope. (I don't think anyone claimed that TCL was the pinnacle of dynamic code strings. Maybe it is; I am a newbie in it.) As a backup plan, one could always simply not use sensitive global variables. Wrap them in functions. Or is hiding function scope also part of the requirements? If so, how can it use library functions then? The ideal executor operation may look something like:
  result = executeString(codeString, variableScope=foo, functionScope=bar)

AKA GreenSpunning

Are you suggesting that LISP allows easy and infinitely flexible management of scopes? I don't think so. The ultimate abstraction would be AdvantagesOfExposingRunTimeEngine, because one could make operations that access/limit the scope in any damned way they please. For example, one might want to add/use a column on functions to indicate which ones to allow event snippet code access to and which to not. Sure, such is probably doable in code, but code is ugly to work with (in my opinion; CodeAvoidance). Seems we're back to our classic GreencoddsTenthRuleOfProgramming versus GreenSpunning HolyWar.

["If so, how can it use library functions then?" Arrrrgh. Top, I keep telling you, go read up on it. Wading through something like Guy Steele's "Common Lisp - The Language" (the precursor to the standards document) would teach you so much about language design that you can't afford not to learn if you're really interested in this topic. It's not light reading, but it's eye-opening. Just learning the terminology used to talk about variable scopes, binding, and environments is an enlightenment in terms of learning and refining distinctions that are important to how a programming language works. No amount of speculative example code and metaphors can replace actually studying these concepts. You may or may not like Lisp when you're done, but at least you might have some better reasons for your preferences. -- DanM]

BookStop. I have no desire to further pursue Lisp. There are higher-priority things to explore. A language that hard to explain is most likely F'd anyhow, in my experience. Necessary verbosity is a yellow alert.

[Again, and always, you miss the point. You'd learn about language design, not just Lisp in particular. But it's already more than evident that you'd rather wing it than base your investigations on available knowledge.
Good luck to you while reinventing all those wobbly wheels. -- DanM, out for good on Topist topics] You won't be missed. I prefer participants who are articulate enough to not need BookStops. Oh please, Top, you prefer ignorance, and everyone who reads all these pages who actually took the time to study and understand these topics, knows it. Dan is trying to help you, because you express interest in language design. Guess what, you have to study existing languages to grasp these concepts, which go far beyond syntax, something you can't seem to get past. It's amazing really, watching you completely not grasp feature after feature, one example after the next, no matter who tries to show you. You will never be any good until you grasp and understand the basic power of features like closures, lambda expressions and higher order functions, to continue and argue against them is akin to jumping up and down and loudly screaming your ignorance, and to think evaluating strings are somehow on par with closures, is simply laughable. For someone who calls himself topmind, you have surprisingly little of it. Delete this after you read it, I won't be getting into a conversation about this, I know better, but sometimes people need to be told when they're a BlubProgrammer? and don't realize it, and you are the very definition of a BlubProgrammer?. A year ago I might have considered your blub insult worth looking into. But after you FP fanatics failed to back your claim of "significant code reduction", your credibility went into the flusher. There you had an opportunity to provide clear evidence (less code) of FP claims made by you or your peers. You couldn't use the "use don't understand it" Weasel Escape because the claim was code-size, not vaguer metric-elusive stuff like "better organization". Why should I believe you after that major coupe? I caught you guys red handed. FP stands for "Full of Puffery" and "Failed Proof". 
-- top Then may be TOP stands for "Tell Other to Proof" (cause top never proof his TOP benefit, nevertheless, shift the burden of proof to other to require proof for him to believe), or "Tons of Poof" :). I don't claim that my pet techniques are objectively better, only that they are not objectively worse (overall). You should know this after all these years. However, you guys are claiming that FP is superior to string-oriented techniques, so the burden of evidence (and clarity) is on you. You dug the hole, not me, so don't put this all on my back. With all regards, I admit that some of Top's point is correct (code can be reduce if using even nimble database). But I found it hard to believe anything further for someone who doesn't even get the benefit of HOF over Eval. Does Top even understand what closure is? He even claim that using HOF SORT function is no use because SQL has SORT BY keyword, where as IMHO SORT BY is like inferior version of HOF SORT. Then show it with semi-realistic code instead of ArgumentFromAuthority and ArgumentByIntimidation?. Note that I never said SORT BY was always better, just that it greatly reduced the need for the sorting shortcuts some of you talked about. Thus, it wouldn't add much to your total simplification score. Remember, you claimed "significant reduction in code size", not clipping nasal hairs here and there. Top, every time someone shows you some code, you don't understand it, then you object that it's better, then you post some inferior crap that uses a table and suddenly you think you've proven them wrong. It is your fault. Primary requirements should be in English, not in code. Not much of a programmer if you don't see code as a concrete way to state requirements. Most wiki readers are not going to know every language known to man. It is rude to me and them to use code instead of English to state requirements. (I have been guilty of it myself in the past, I admit.) 
It's a pointless exercise to show you anything, because you don't seem to have the faculties to even grasp the implications of the differences in your approach and the given approach. You try and turn everything into a turing battle rather than simply trying to grasp the expressive implications of the functional approach. Until you can grasp and understand what a closure is, and why it's important, and why it's better than evaling a string, you simply aren't capable of having this discussion. You don't have the base knowledge required to talk intelligently about the issue. And you don't understand objectivity and requirements documentation. Show me closures simplifying REAL and COMMON stuff, not vague talk about their linguistical esthetics. I want a dishwasher, not a painting. -- top Sure, as soon as you demonstrate that you actually understand what a closure is, because all evidence available in your writings says you don't, and you can't demonstrate something to someone who doesn't even know what it is you are talking about. Evaling a string is not equivalent to executing a closure, until you grok that, you aren't worth talking too. I never said it was equivalent, only that it solves similar kinds of problems. Those are not the same thing.
(defun main () (setup-state-map-then (infinity-loop (handle-client map)))) #some where,some thread in handle-client will call notify-eventwhich is coded so that the scoped of st1, st2 variable is still in stack when client call invoke-event. But I am not required to do that. The Eval approach, however, REQUIRES the variable to be in the stack so that you can use uplevel to get the binding of st1 and st2. Actually, generally with event coding in Tcl, the value of the variable at that point is what is interesting, not the variable itself. There is not as much difference as you seem to be claiming in passing the values of variables to the event handling code (essentially this is a snapshot) and getting a complete snapshot of the environment to use. I get the feeling we are arguing about ever-smaller areas. It is clear both approaches offer a lot of flexibility which many other languages do not enjoy. -- KristofferLawson This is one good response, thank you some reasonable guy here. If you can see in my lisp example. In the event handling code, what I did was CHANGING the value of the variable. You can not get the snapshot of the variable's BINDING to use in Eval. That's what closure does when it was created. Well, as you might see from the example I wrote above in Tcl, there's not that much difference, mostly as a result of objects being represented as commands (indeed, you could call Tcl a CommandOrientedLanguage?). True, not quite the same as closures but you can do surprising things with it. -- KristofferLawson Before we get any further, the requirements for an event code handling system should be made explicit (hopefully using English, not Lisp this time). Candidate requirements include:
//pseudo-code def foo x = 10 rec_eval(random(100), "x = 20") end def rec_eval(num, expr) if (x = 0) eval expr, ?what's value of uplevel here? else rec_eval(num-1, expr) end endOr am I required to keep track of stack level just to use this so convenient eval? If I am not mistaken (I'm a Tcl newbie), it is based on the stack level, not the function itself. One can look at any level on the call stack. (Special symbolic identifiers select the very top of the stack if raw integers are not convenient.) In this way, TCL is a step closer to AdvantagesOfExposingRunTimeEngine. -- top And the question is how do you know how deep are you from the stack call of "foo". I think one recursive call count as one Stack deeper also, right? And I am not saying that at what level of stack that "foo" get called also, it could get call in 100 stack deep from top stack. So what's the benefit? I am confused here. Specifically what do you want to know? Are you looking for some kind of "callLevels()" function/method that tells how deep we are at a given point? In short, Yes. If you don't know that, how would you get the variable you want to access? At the recursive eval above. the stack level of foo and stack level of rec_eval where expr is evaluated could be many stack call away. How do you determined that number to use in uplevel? You can just uplevel the var in the recursive function. That brings that var to that level, then when it is called recursively again the variable that was 'brought in' a level up is once again brought a level up. It shouldn't be a problem. -- AnonymousDonor Isn't that like going through the hoop just to do that? Where is "Conceptually simpler" and "Syntactically simpler" pros left? And your function have to be aware that's it's written in recursive style. And I suppose that if it is mutually recursive or involve callback-to-callback-to-callback-to-eval then each function involve must know that it's in the process of recursive function call doesn't it? 
So now what's the benefit of Eval left? (I know Eval has its use, everyone that try to debates with top knows benefit of eval. What i'm trying to say is that HOF is safer and simpler to use in most cases. Only in some rare case I would prefer Eval over HOF). As far as "conceptually simpler", what practical objective are you trying to achieve? I don't necessarily dispute that such techniques make emulating closers difficult (although don't know enough Tcl to say for sure), but that is only an issue in practice if it cannot be used to solve real problems. In other words, focus on how it solves real problems instead of how it emulates your favorite internal language features. -- top Having to make sure to get binding of variable correctly even when using just a simple recursion should not be consider non-practical? While focusing on the problem, having to not to forget to always uplevel recursive variable is a new problem, isn't it? Using eval this way can be use to solve real problem but it's also at the same time a problem in itself. I thought the argument here was about what should be use, so why should I forgive Eval approach if it require work around to solve problem? You have not shown the practical context for such recursion. I don't know what UseCase it is solving. Maybe recursion itself is the wrong or long approach. I can't tell without seeing it in a UseCase. -- top May be SAX parser, Lexer, you never do depth first search? how do you do it? No recursion? What would you do when you recurse into a specific node? Not evaluating an expression passed in by user? Ideally a depth-first search should be built into a CollectionOrientedProgramming set of operations so that one does not have to keep using recursion to traverse trees. But I still have not seen a need to look "up" the stack. Besides, I don't know many people who write formal parsers for a living. It is a rather small niche. Besides, such a recursive loop may be in only one or few places in a program. 
Thus, if an eval solution takes a bit more code, the total difference is minor. There is no large-scale economic savings being shown here, but rather what seem to be PerfectStorm-like examples. By the way, are you saying whatever that you don't do is not the practical solution? Why must Eval limit me to code in the limited way. All you say till now if to ask me not to do this, not to do that, that's not practical, I would not want to do that. All of that just to forgive the limit and lacks of power of Eval. I thought the discussion here is about "what is more useful", not "Why I should just use Eval and forgive all its disability". I understand why you don't see different between HOF and Eval. When all one writes is "Hello world" there is no different between C or Lisp. By the way, I will never mind if Top say "That's not a useful use case, I need more specific use case". Cause reading in many pages. Every time someone shows him a practical and useful application of anything Non-Top, Top always say "We need more specific use cases to discuss" or "that's not my domain". Why don't you just be more aggressive and raise a case where Eval is more appropriate, instead of defense for Eval. What is the connection of you with Eval anyway, Cousin? In practice, I don't need to use Eval that often either. It is mostly toy examples where I find a use for it because the real world is not regular enough for it to be of much use. In other words, toy examples exaggerate the regularity of certain patterns. As far as "not my domain", what can I say? I want a solution for my domain, not for systems programmers. If FP is great for systems programming, that is wonderful for them, but does not necessarily make its benefits universally applicable. The fact that FP examples keep being drawn back toward systems programming like a baby bird finding its mother suggests something.
FunctionalProgramming excels in addressing algorithmically complex domains.Got it? This is not the same as systems programming; as a matter of fact, the common wisdom is that FP might not to be fit for systems programming at all (until somebody shows otherwise) for a couple of reasons that are not worth going into details here. I don't have any practical experience writing formal parsers. They had a course in it when I was in college, but it was not required by my particular minor (graphics/CAD). If you have questions about how Tcl handles issues in writing parsers, then take it to a Tcl group. I was hoping an FP fan could find a biz example of their braggings, but asking for such seems to make them bleed from the ears for a very unknown reason.
Systems Software <--> Applications Software <--> End User(I suppose end users directly use the OS, but actually they see things such as file browsers and EXE launchers, not really the OS.) Then is DB Software == Application Software in your view? (you seemed to imply so) ''Is WindowsMediaPlayer Application Software? How is the media decoder/encoder done? Would it be easier to code encoder/decoder with HOF? How should layout of GUI be code? would it be easier if GUI layout is close to declarative form, which is possible because the use of HOF? By the way, If HOF makes programming more concise and easier, I don't see the reason why they should only be used in System Software. HOF used in Application Software will also improve the program's quality. As for it may not improve more than 10-20% of code, well YMMV. Productivity depends on people, give monkey Assembly, Java and Lisp, it will makes no more significant improved code. HOF is not a living cell in itself, it still need the developer's brain to makes uses of it. The point is there to enable the brain to be used, not limit the use of our ability. And it's not about only the number of line of code that is reduced, while only 10% of code lines may be reduced, the comprehension of code improve. IMHO, even 10% improvement of code comprehension is a significant improvement. The following code:''
for(int i = 0; i < employees.length; i++){ if (employees[i].is_senior()){ count++; } }and
(count #'is-senior employees)Lisp version may reduce only one or two lines of code, but I would say it significantly improve the comprehension of codes. Don't try to measure the Quantity of code, measure its Quality, instead.
select count() from employees where is_seniorAs far as code-size, CollectionOrientedProgramming languages are hard to beat for these kinds of problems. See DotProductInManyProgrammingLanguages. Unfortunately, it is tough to objectively measure and define "quality" and "comprehension". If you accuse somebody's code of not being "comprehensible", they could claim that you are just not smart/experienced enough to comprehend it fast. Again, MostHolyWarsTiedToPsychology.
select count() from employee where is_seniorand you don't think you are exposing the benefit of Higher order function here? You can do this because SQL have that syntax. Why don't you show me that so terse and compact code in C++/java/Pascal? I can show you HOF function that can do declarative GUI and then you will show me another Declarative GUI framework; but that framework is surely not going to be in SQL, or integrate naturally with it. I can show you another Unification library that mainly use HOF in it's code and interface and then you will show me another Unification Expert system; but that system would, in no way, work in the same syntax as previous two libraries. Where is one single language that do all that? I don't mean in terms of something inherently in the language specs, just the language that enable those all language to mix together naturally. HOF enable you that. Don't you see that when I show the example of COUNT function that's so resemble to SQL, that you can't do that in C++/java/Pascal? What if the items I want to count can't be stored in DB - I must suffer with writing that for loop? So you have inconsistency of doing two ways of counting depending on whether you can put the data in DB or not? Imagine if I have A gui event system that will stream me series of event and I want to filter out some event; using HOF, I could do. (do-event-from #'next-stream
:filter #'is-interest :handler #'handle-event)Event is real time, how do you use your SQL to help here? By the way, since the topic of this page is DynamicStringsVsFunctional, you can use eval in this case, since eval is interpreting it can do anything. You can even make a language that use syntax "f$%)($^$@@" to result in report creation. But then you are on your own to writing your own parser for such syntax. You are right that the SQL issue is perhaps off topic, but it does in practice reduce the need to write the kind of loops you show. But getting back to eval, I don't know why custom parsing is allegedly needed. Some ExBase examples of passing filter criteria and a function name to be eval'd were shown in ArrayDeletionExample. As far as streams, I don't use them that often, practicing SeparateIoFromCalculation most of the time, so I'll have to think about that some more. Rough pseudocode for the prior example:
foo = newArray[....]; .... print(count(foo, "struct.is_senior")); //---------- function count(myArray, myFilter) { cnt = 0; for(int i = 0; i < myArray.length; i++){ struct = array[i]; if (eval(myFilter)) { cnt++; } } }Or if we want the reference name to also be dynamic:
foo = newArray[....]; .... print(count(foo, "struct", "struct.is_senior")); //---------- function count(myArray, refName, myFilter) { cnt = 0; for(int i = 0; i < myArray.length; i++){ &refName = array[i]; if (&myFilter ) { cnt++; } } }In the second one, "&" means literal string substitution, which is borrowed from ExBase syntax. Note how the "refName" variable kind of resembles a closure argument. [Marker 00001] My point still remain about what if you don't have SQL though. Ok so let's make summary of my understanding about eval and HOF okay? So that we can discuss more. It will hard to talk if you don't know how I view your word. Here's my list:
// Example "Martha" function event_button7_click(info_array) { if (info_array['mouse_coord_x] > 38) { message = 'You clicked the button too far to the edge'; setGuiAttrib('formX.myTextBox.value', message); refresh_all(); } }Or
// Example "Martha 2" function event_button7_click(info) { if (info.mouse_coord_x > 38) { message = 'You clicked the button too far to the edge'; formX.myTextBox.value = message; refresh_all(); } }The first example can use a more language-neutral API, while the second is a bit more compact. Since I don't like "tree paths" in GUI's, the middle line can perhaps be:
setGuiAttrib('formX','myTextBox','value', message);Or in Tcl:
setGuiAttrib formX myTextBox value $messageBut I suppose I am wondering a bit off topic. I am just hoping to create some material to use as examples. Which you are required to declare formX as global variable. Here is one way I would code my GUI
def setUpGUI() formX = Form.new button7 = Button.new //add some component in to form bindEvent(button7, :click) do |info| if (info.mouse_coord_x > 38) message = 'You clicked the button too far to the edge' formX.myTextBox.value = message refresh_all() end end end | http://c2.com/cgi-bin/wiki?DynamicStringsVsFunctional | CC-MAIN-2015-27 | refinedweb | 11,012 | 61.06 |
Is problem in function or main?You right!. There could be anything in memory right after that char variable.
Dang! I never thought ...
Is problem in function or main?I see. Thank you.
So than this modification:
[code]
char c = 'h';
char* b = &c;
[/code]
would make y...
can anyone code this...........Prime number is a number that can be divided evenly only by 1 or itself.
2 is a prime number.
3 is a...
Is problem in function or main?Wouldn't one also use strncmp()?
Does == work in C++ for comparing strings or letters?
Getting into "Object Oriented" spirit.Yep. That's what I got.
stack.h
[code]
#ifndef STACK_H_
#define STACK_H_
#include <stdio.h>
#inclu...
This user does not accept Private Messages | http://www.cplusplus.com/user/Stremik/ | CC-MAIN-2015-48 | refinedweb | 125 | 80.99 |
Outdated INTEL update tool
Hi guys,
i'm new in the UP community and on the first day i got an annoying Issue:
I wanted to setup Ububtu on my UP-Squared following these instructions .
The first steps were quite simple but then i tried to update the grafics driver... The Update tool is outdated and in the "manual" way i don't know which grafic driver to choose.
Is there anybody who can tell me, what i need to install so the grafics is running with the latest driver?
A second question: Is there a way to check the functionality of the Intel grafic chipset? I performed some little KERAS scripts with TENSORFLOW as backend and the speed is, hm like on an really old Laptop without GPU...
NOTE: My board is
Thanks in advance for a reply!
Hi @TimoK93 ,
Maybe you could try with the following instructions:
You can install Intel graphics firmware from:
wget
Change directory to “file path”
$ cd /
install gdebi package manager:
$ sudo apt-get install gdebi
Install the package with the following command: for Ubuntu* 16.04(64bits)
$ sudo gdebi intel-graphics-update-tool_2.0.2_amd64.deb
Add PPA repository for stable mesa-utils
$ sudo add-apt-repository ppa:paulo-miguel-dias/pkppa && sudo apt-get update
Once installed, you can find the Intel® Graphics Update Tool for Linux* OS in your application dashboard. Just look for the Intel® logo, or begin typing ‘Intel’.
Or, if you are a power user, you can open a terminal and execute:
$ intel-graphics-update-tool
Follow the instructions to install the driver and wait for the installation. After that, you must to reboot the board.
@TimoK93 ,
To show you if you are using CPU or GPU, you could add this line in your script:
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
Note, despite you are using KERAS with TENSORFLOW backend, maybe you don't have Tensorflow lib installed .
Check it or install:
pip (or pip3) install tensorflow.
@ccalde Thanks for your reply!
Your explanation fixed my issue.
A Question to tensorflow:
"To show you if you are using CPU or GPU" -> Is up-board technology a GPU? Till now i thought it isn't. Then I need to install tensorflow-gpu (for python 3.x). I tried that but therefore i need to install cuda. Does that makes sense?
Greetings
Timo
Hi @TimoK93 ,
Yes, to install tensorflow-gpu seems it needs Cuda libs.
If you install it, tensowflow will work without problem, but cuda lib doesn't work with Intel-gpu, because Cuda is property of Nvidia.
Cheers!
Hmmm,
but why should one install tensorflow-gpu on up-board?
I red, that up-board is supporting Tensorflow, but it seems that it doesnt... It's generating problems if i try to use versions above 1.5.
Do you have experience about tensorflow on up-board squared?
Greetings,
Timo
HI @TimoK93 ,
I have worked with Tensorflow 1.5 version, only.
I haven't tried with versions above 1.5, sorry.
Did you try already with Tensorflow 1.5??
Cheers!
Yes! I got problems using this net:
I reported a new issue and try build from source.
I believe UP is supporting TensorFlow in regards to their AI Core product line:
These mPCIe add-on devices have the Movidius Myriad 2 VPU that allows you to use Intel's OpenVINO Toolkit SDK for TensorFlow (not Nvidia's CUDA libs).
They plug right into the UP^2 mPCIe port as well as the new UP Core Plus carrier boards and us a max of 2 Watts.
I am ordering one of those vision kits soon myself so I can experiment with TensorFlow as well for some facial recognition.
Also note that they are coming out with M.2 versions of AI Core X soon:
Though, the large M.2 2280 one won't fit exactly on the UP^2 - it will however on most newer desktop motherboards.
Though this is not confirmed (and I will be one to try it asap!), you may be able to combine both the AI Core mPCIe AND the AI Core X M.2 2230 on a single UP Squared device. I don't know if the OpenVINO SDK supports multiple devices. We'll see!
EDIT: The UP^2 can only fit a M.2 2230 device, not 2242. Sorry.
Eric Duncan - UP Evangelist - My thoughts are of my own free will
Answered? Please remember to mark the posted answered to highlight it for future visitors!
UP Squared and AI Core support TensorFlow and Caffe via Intel OpenVINO toolkit.
You can follow the tutorial here. if you already have AI Vision Dev. Kit.
We are working on making the image available for UP Community download. There are still some license agreement to be discussed with Intel...
Thanks for the update @Aling !
@Aling
Thanks for your comment!
I have a single UP-Squared, not the Dev. Kit. I cannot find a tutorial on your link... Am I able to install OpenVINO with my single UP-Squared?
Greetings
Timo
Hello Timo!
Yes, you can install the OpenVINO Toolkit on most modern Intel CPUs with Linux or Windows 10. Normally you would only be able to use CPU functionality unless you have a supported GPU or VPU.
^- see the left side for setting up the dev environment
In addition, the UP Squared you have also has an iGPU, the Intel HD Graphics 505, which is listed on the OpenVINO website as a support GPU as well! Meaning, you can run GPU hardware acceleration with this board.
The catch is they only list the HD 505 GPU supported under Linux, not Windows.
Now rather that is enabled for Tensorflow GPU, I cannot verify. My UP Squared only has Windows at the moment; but, i do plan on testing Tensorflow GPU out under Linux when I get the AI Vision Kit.
Eric Duncan - UP Evangelist - My thoughts are of my own free will
Answered? Please remember to mark the posted answered to highlight it for future visitors!
@eduncan911
Thanks very nice!
I'm installing OpenVINO right now. There is a OpenVINO Version supporting Intel FPGAs:
Do you know if i need to install the FPGA version or the standard?
From UP2 Datasheet i mentioned "CPLD/FPGA: Altera Max 10"
First i try will the standard version, but i would be very happy about a hint
Greetings
Timo
The FPGA in the UP Squared is not accessible. It is used to connect the 40-pin gpio to the Intel chipset.
So just install the standard OpenVINO version - which should allow you to install the GPU parts as well.
Let us know how it works out!
Eric Duncan - UP Evangelist - My thoughts are of my own free will
Answered? Please remember to mark the posted answered to highlight it for future visitors!
Okay i installed OpenVINO.
One Problem: Installing the Tensorflow prerequisites, because tf above 1.5 is not working except you are installing it from source.
We're discussing that issue in
Hi. I have just installed OpenVINO and found that tensorflow does work. Since it's quite long since the last post of this thread, do we have any new from installing tensorflow with UP ? Do I still need to install it from source ? We can not install it using pip ?
Thank you in advance, | https://forum.up-community.org/discussion/comment/9021/ | CC-MAIN-2020-16 | refinedweb | 1,227 | 74.9 |
Question 1 :
Which of the following statements should be used to obtain a remainder after dividing 3.14 by 2.1 ?
fmod(x,y) - Calculates x modulo y, the remainder of x/y.
This function is the same as the modulus operator. But fmod() performs floating point divisions.
Example:
#include <stdio.h> #include <math.h> int main (){ printf ("fmod of 3.14/2.1 is %lf\n", fmod (3.14,2.1) ); return 0; }
Output:
fmod of 3.14/2.1 is 1.040000
Question 2 :
What are the types of linkages?
External Linkage-> means global, non-static variables and functions.
Internal Linkage-> means static variables and functions with file scope.
None Linkage-> means Local variables.
Question 3 :
Which of the following special symbol allowed in a variable name?
Variable names in C are made up of letters (upper and lower case) and digits. The underscore character ("_") is also permitted. Names must not begin with a digit.
Examples of valid (but not very descriptive) C variable names:
=> foo
=> Bar
=> BAZ
=> foo_bar
=> _foo42
=> _
=> QuUx
Question 4 :
Is there any difference between following declarations?
1 : extern int fun();
2 : int fun();
extern int fun(); declaration in C is to indicate the existence of a global function and it is defined externally to the current module or in another file.
int fun(); declaration in C is to indicate the existence of a function inside the current module or in the same file.
Question 5 :
How would you round off a value from 1.66 to 2.0?
/* Example for ceil() and floor() functions: */ #include<stdio.h> #include<math.h> int main(){ printf("\n Result : %f" , ceil(1.44) ); printf("\n Result : %f" , ceil(1.66) ); printf("\n Result : %f" , floor(1.44) ); printf("\n Result : %f" , floor(1.66) ); return 0; } // Output: // Result : 2.000000 // Result : 2.000000 // Result : 1.000000 // Result : 1.000000 | http://www.indiaparinam.com/c-programming-language-question-answer-declarations-and-initializations | CC-MAIN-2019-18 | refinedweb | 313 | 71.61 |
Drop 'public' not 'var'!
Bikeshedding time!
A PHP RFC vote has started to deprecate the var keyword in PHP 7.1 and remove it in PHP 8. At the time of writing, there are 23 who say it should be removed, and 18 who say it should not.
I suspect that most people in the “no” camp feel that way about var because:
- There’s not a big maintenance burden or overhead to keep var.
- They feel that it will break backwards compatibility (BC), with no strong benefit.
The “yes” camp mainly feels that it makes the language a bit cleaner.
I’d like to offer a different opinion: I think people should be using var instead of public. I realize that this is as controversial as tabs vs. spaces (as in: it doesn’t really matter but conjures up heated discussions), but hear me out!
Over the last year I’ve been going through all my open source projects, and I’ve been removing all the public declarations where possible. This means that if my class first looked like this:
<?php

abstract class ImConformist {

  static public $prop1;
  public $prop2;

  public function __construct() { }

  public function getFoo() { }
  public function setFoo(Foo $foo) { }

  abstract public function implementMe();
  final public function dontTouchMe() { }

  static public function iShouldntBeHere() { }

}
Now, I write it like this:
<?php

abstract class HateGenerator {

  static $prop1;
  var $prop2;

  function __construct() { }

  function getFoo() { }
  function setFoo(Foo $foo) { }

  abstract function implementMe();
  final function dontTouchMe() { }

  static function iShouldntBeHere() { }

}
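It's easy to check that the two styles really are equivalent. Here's a quick sketch using PHP's Reflection API (the class and member names below are made up purely for illustration):

```php
<?php

// One method and one property declared with an explicit 'public',
// and one of each without it.
class VisibilityDemo {
    public $explicit;
    var $implicit; // 'var' also declares a public property

    public function withKeyword() { }
    function withoutKeyword() { }
}

$class = new ReflectionClass(VisibilityDemo::class);

// Both methods report identical visibility:
var_dump($class->getMethod('withKeyword')->isPublic());    // bool(true)
var_dump($class->getMethod('withoutKeyword')->isPublic()); // bool(true)

// And so do both properties:
var_dump($class->getProperty('explicit')->isPublic());     // bool(true)
var_dump($class->getProperty('implicit')->isPublic());     // bool(true)
```

Every one of those dumps prints bool(true): as far as the engine is concerned, there is no difference between the declarations.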
This change has generated interesting responses. A lot of people think it’s ugly. Some think I should just follow PSR-2. I also heard from a whole bunch who weren’t aware that omitting public actually has no effect on your code!
Yes, public is completely optional for methods. public is the default, so adding it does nothing to your code. (Properties are the one place PHP still demands some keyword — var, static, or an explicit visibility — which is exactly where var comes in.)
Actually, it does do something. It makes your lines longer! Typically when someone has to read your source and figure out what a class is doing, it will take them slightly longer to read the method signatures.
So there’s a small mental tax for a human to read and interpret the entire line. It’s generally accepted that shorter lines are better, and when ‘scanning’ a file from top to bottom, it’s better that the important keywords (the name of the function) are closer to the left edge of the screen.
That’s my main argument for dropping the
public keyword where possible. I
believe that the people who are against it, generally fall in three camps:
#1: Everyone doing the same thing is good
This is the conformist argument, and the best one. There is definitely a case to be made that if most PHP code-based look similar, it is easier for new maintainers to get a jump start in the project.
I suspect that these people are also the most likely to use PSR-2 when they can.
I think it’s good to be similar, but then, I think it’s also good to sometimes break the mold and do something new. If we always keep doing things the same way, we never progress. Being consistent is good, but it’s also good to be open to change, especially if that change has real benefits.
#2: It’s ugly!
The aesthetics argument! It’s not an invalid one. I care quite a bit about how my source ‘looks’. I’m an “API over Implemenation” guy, and I think the aesthetics of your source influence the ease of how someone can start using it.
The case I want to present to you today though is that you think it’s ugly, because it’s foreign to you. I felt the same way at first, and found out pretty quickly that I got used to the new status quo and as soon as I lost my earlier biases, the new way actually feels a lot more natural now.
Whenever I look at sources that use the
public keyword, the feelings it’s
giving me is “dense”, “corporate” and “java-esk”.
#3: The
public keyword is useful to convey intent
On the surface this argument sounds reasonable, but ultimately I think people who hold this opinion are suffering a form of denial. A new idea was presented to them, they don’t like it, so their brain works overtime to find a possible counter argument that just coincidentally matches their existing biases and world-view.
I believe that the people who feel this way are actually just in camp #2, but lack self awareness.
The truth is that many languages popular scripting languages don’t need the keyword (ruby, javascript), or lack the functionality altogether (python).
Furthermore, did you ever wish you could
public in front of your
class,
interface or
namespace, or your regular non-class functions? What about
public const? Be honest! Have you really ever missed
public there? Really?
Hey I do think it would be great to have private classes and private namespace
functions, but I definitely haven’t “missed” being able to tack on the
public symbol to a constant.
public is already implied.
So given those arguments, why does everyone add
public everywhere?
This is a theory: Back in the day when we all made the transition from PHP 4 to 5, everybody was just really really excited.
PHP 5 introduced a new object model that was just so much better than the PHP 4 one. The visibility keywords are just a small aspect of that. The game changer was in how references are dealt with!
The object model was the number 1 selling point for PHP 5. There was some criticism that PHP became too Java-like, but the vast majority of the community was thrilled.
However, it took a bit before people could drop PHP 4 support. Only around PHP 5.2, the 5.x series became really good and people started migrating en-masse.
Using
public,
protected and
private was, aside from the feature, almost
worn like a badge of honor. This is PHP 5 code and we love PHP 5. It says loud
and clear that PHP 4 is unsupported and behind us.
But since then, the PHP 5 object model became the new normal, and we’ve never
gone back and challenge the use of
public.
But then, what about
var?
The
public keyword can be removed from every function and
every property declaration, with one exception: A non-static, non-final public
class property:
<?php class Foo { public $ew; // These are all fine: final $yay; static $whoo; }
It’s the last place you’ll find
public in my source. So I’ve been toying
with the idea of ditching
public there too and just start using
var:
<?php class Foo { var $ninetiesThrowback; }
The arguments for this are a bit weaker, but I think they’re still valid:
- If public is not anywhere else, it would be nice if you also don’t need it for properties. If ditching
publicbecomes the status-quo, then needing
publicfor properties might actually become confusing.
- I think
varis more useful to convey intent than
public. It’s a “variable on a class” not “a public” in the same way that the
functionkeyword is useful. I think this is better for newcomers to the language.
However, I haven’t made this change from
public to
var yet, given the
uncertain nature of it’s future. So my plead to the PHP project is: “Keep
var and promise to maintain it. There’s no overhead and it’s a nice little
keyword. And to everyone else, consider ditching
public too!
I’ve written a little fixer for php-cs-fixer that automatically removes it for you. If you are able to recognize your biases and join me, I guarantee that it won’t take you long to be happy you made this change! | https://evertpot.com/drop-public-not-var/ | CC-MAIN-2016-30 | refinedweb | 1,313 | 72.05 |
I recently used Locust, a load testing tool that lets you write intuitive-looking Python code to load test your web applications. I did not follow Locust's install guide and instead just tried a 'pip install locustio'. I ended up running into some issues that were not easy to Google. So I thought I would document the problems I faced, along with their solutions, over here.
Getting set up with Locust on Windows
If you have not already tried installing Locust, follow this short and handy guide. It will help you avoid the problems I faced.
1. Use Python 2.7.x where x >=4. I upgraded my Python to 2.7.11.
2. pip install pyzmq
3. pip install locustio
4. Test your installation by opening up a command prompt and typing locust --help. You should see no errors or warnings – only the usage and help should be printed out on your console.
Locust install issues and solutions
When I installed Locust for the first time, I missed steps 1 and 2 in the section above. So I ran into a couple of errors.
1. ImportError: DLL load failed
from gevent.hub import get_hub, iwait, wait, PYPY
File “c:\python27\lib\site-packages\gevent\hub.py”, line 11, in
from greenlet import greenlet, getcurrent, GreenletExit
ImportError: DLL load failed: The specified procedure could not be found.
I got this error because I had Python 2.7.2 (python --version) and Locust needs at least Python 2.7.4. To solve this issue, upgrade your Python version. I ended up installing Python 2.7.11.
2. UserWarning: WARNING: Using pure Python socket RPC implementation instead of zmq
c:\python27\lib\site-packages\locust\rpc\__init__.py:6: UserWarning: WARNING: Using pure Python socket RPC implementation instead of zmq.
I got this warning when running the command 'locust --help' to test my setup. The warning comes with a helpful recommendation to install pyzmq. I installed pyzmq (pip install pyzmq) and the warning went away.
3. pip install locustio gives error: Microsoft Visual C++ 9.0 is required (Unable to find vcvarsall.bat).
I got the below error when installing locust using pip install locustio:
building ‘gevent.corecext’ extension
error: Microsoft Visual C++ 9.0 is required (Unable to find vcvarsall.bat).
I tried installing "gevent" alone using pip install gevent, but got the same error.
After a bit of searching, I installed "gevent" from the unofficial Windows Binaries for Python Extension Packages.
Download the whl file as per your OS and Python combination. I downloaded the gevent-1.1.1-cp27-cp27m-win32.whl file.
Open a command prompt in the directory where you downloaded the whl file and run pip install gevent-1.1.1-cp27-cp27m-win32.whl.
After that I was able to install locust successfully.
My thoughts on Locust as of May, 2016
A random collection of my thoughts having explored Locust for a little bit.
1. This tool is worth exploring deeper. This is the first load testing tool that I found intuitive.
2. Locust lets me reuse the API checks that I write anyway as part of testing a web application
3. I liked Locust’s documentation. I found the documentation very, ummm, Pythonic! Clearly they have put in effort into making the documentation useful.
4. I got a useful, working prototype for a real world problem in less than a weekend
5. I don’t know how powerful their http client (the one you use to make requests) is … so we may trip up at login for some clients.
6. I hated the way they do not have an automatic ‘end’ to any test – but that is a minor complaint
7. With respect to the resources (memory, CPU) on the client machine, locust swarms scale so much better than Qxf2’s map-reduce solution (think 25:1)
8. There is a limit of 1024 locusts per swarm that maps to the maximum number of files that can be open on Windows. But their documentation warns you about this beforehand. You can increase this number, if needed, on your OS.
9. Their reporting is not persistent or stored
Qxf2 will be exploring this tool over the coming months. I want to try more complex login scenarios, nested tasks and distributed swarms.
27 thoughts on “Setup Locust (Python/load testing) on Windows”
I just confirmed my system type is x64. I believe that everything I installed was downloaded as x64. I can't understand why I'm receiving this error.
C:\Users\jovan\Downloads>python -m pip install gevent-1.5a2-cp38-cp38-win_amd64.whl
ERROR: gevent-1.5a2-cp38-cp38-win_amd64.whl is not a supported wheel on this platform.
I already downloaded Visual Studio and the Visual C++ build tools with Visual Studio, but I'm still receiving this error:
Microsoft Visual C++ 14.0 is required. Get it with “Microsoft Visual C++ Build Tools”:
Jovan,
Can you try installing the previous version of gevent (gevent-1.4.0-cp37-cp37m-win_amd64.whl) and see if you are able to install? Or you can also try the below steps:
1. Create a new python 3 environment and activate it
2. Download the latest binary for gevent from the unofficial windows Binaries.
3. python -m pip install gevent-1.5a2-cp38-cp38-win_amd64.whl
Hope this helps!
But doing this, what Python version will I be working with?
Right now, I'm able to work with Python 2.7 and locust 0.14.5, but I can feel locust is not working fine for me.
I think that locust executes an auto-update to this latest version when installed…
I don't think that installing Python 3 will work better than what's working for me now.
Hi, can you try using a Virtual Environment and specifically try with Python 3 (it's been tested and works with that)? Also, you can try pip install locust==0.14.5 to install a specific version.
>>>>> On Mon, 6 Nov 2006, Giorgos Keramidas wrote: >On 2006-11-03 22:28, Chong Yidong <address@hidden> wrote: >>G.) > Sorry for taking so long to follow up about this. I tried looking > at the sparc v9 specification for hints about why this change is > needed on 64-bit SPARC platforms, but I couldn't come up with > anything in a reasonable amount of time. I am not that proficient > with SPARC64 assembly, but I will ask our FreeBSD/sparc64 people for > details. > I'll follow up again when I get a reply from the people who know more > about sparc64 assembly :) Following up to this message from two years ago. ;-) Gentoo also maintains the above as local patch, both for FreeBSD and GNU/Linux. Our Sparc team says that the patch is sane, and without it Emacs cannot be built on Sparc/FreeBSD. See Gentoo Bug 159584 for further details: <> Below is a slightly updated patch that will apply to the currect CVS trunk. It would be nice if it could make it into Emacs 23. Ulrich --- emacs-orig/src/alloc.c +++ emacs/src/alloc.c @@ -4573,7 +4573,11 @@ needed on ia64 too. See mach_dep.c, where it also says inline assembler doesn't work with relevant proprietary compilers. */ #ifdef __sparc__ +#ifdef __sparc64__ + asm ("flushw"); +#else asm ("ta 3"); +#endif #endif /* Save registers that we need to see on the stack. We need to see | http://lists.gnu.org/archive/html/emacs-devel/2008-12/msg00707.html | CC-MAIN-2015-32 | refinedweb | 237 | 71.55 |
Alright, so I decided to start with Project Euler problem #1: find the sum of all the multiples of 3 or 5 below 1000 (for the numbers below 10, those multiples are 3, 5, 6 and 9, and their sum is 23). Right? The first problem couldn't possibly be hard.
So now we have to find all the numbers under 1000 that satisfy the same requirement. And then sum those numbers.
At first thought, I suppose it would be easy enough to create a loop that goes through each possibility and adds the number to a list if it passes the divisibility test.
Something like this:
List<int> multiples = new List<int>();
for (int i = 1; i < 1000; i++)
{
if ((i % 3 == 0) || (i % 5 == 0))
{
multiples.Add(i);
}
}
return multiples.Sum();
Simple enough, right? Well yes, and for this simple type of problem, it won't really have any effect on performance. However, I wanted as few lines as possible. I had recently discovered the wonderful world of lambda expressions, and I feel that I should use them as often as I can (for better or for worse).
So here is my solution:
return Enumerable.Range(1, 999).Where(i => (i % 3).Equals(0) || (i % 5).Equals(0)).Sum();

Here, Range generates the sequence of int values from 1 to 999, Where filters it down to the multiples of 3 or 5, and Sum adds up whatever is left.
Or if we want to get really fancy, we could create a lambda function delegate to make things a bit more readable:
Func<int, int, bool> IsMultiple = (int i, int j) => (i % j).Equals(0);
return Enumerable.Range(1, 999).Where(i => IsMultiple(i, 3) || IsMultiple(i, 5)).Sum();
However, the previous solution will do in this case.
Simple and concise. KISS principle in action.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
def Euler(n):
    return n * (n+1) / 2

def Summation(n):
    return 3 * Euler(n/3) + 5 * Euler(n/5) - 15 * Euler(n/15)
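The comment above gives the closed-form inclusion-exclusion answer. A quick Python sketch (using integer division, so it runs on Python 3) confirms that the brute-force loop and the formula agree:

```python
# Brute force: sum every number below 1000 that is a multiple of 3 or 5.
brute = sum(i for i in range(1, 1000) if i % 3 == 0 or i % 5 == 0)

# Closed form: euler(n) is the triangular number 1 + 2 + ... + n.
# Multiples of 3 below 1000 sum to 3 * euler(333), multiples of 5 to
# 5 * euler(199), and the multiples of 15 (counted twice) are removed once.
def euler(n):
    return n * (n + 1) // 2

def summation(n):
    return 3 * euler(n // 3) + 5 * euler(n // 5) - 15 * euler(n // 15)

print(brute, summation(999))  # 233168 233168
```

The same inclusion-exclusion trick works for any pair of divisors, and it runs in constant time instead of looping over the whole range.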
I have installed both php5.6 and php7.0 from PPA on Ubuntu according to this manual
But I didn’t get how to install extensions using
pecl for php5.6 or php7.0.
For example, I have already installed versions of libevent and amqp for php5.6.
Now when I type pecl install libevent while my active php version is php7.0 (set using update-alternatives --set php /usr/bin/php7.0), pecl returns a message that libevent is already installed.
But it was installed only for php5.6 (when this version was active) and now I want to do it for php7.0.
Which commands could help me?
UPD
I have found these commands for switching pecl to php7.0 and put them into executable bash scripts:
#!/bin/bash
sudo update-alternatives --set php /usr/bin/php7.0
sudo pecl config-set php_ini /etc/php/7.0/cli/php.ini
sudo pecl config-set ext_dir /usr/lib/php/20151012/
sudo pecl config-set bin_dir /usr/bin/
sudo pecl config-set php_bin /usr/bin/php7.0
sudo pecl config-set php_suffix 7.0
and for php5.6
#!/bin/bash
sudo update-alternatives --set php /usr/bin/php5.6
sudo pecl config-set php_ini /etc/php/5.6/cli/php.ini
sudo pecl config-set ext_dir /usr/lib/php/20131226/
sudo pecl config-set bin_dir /usr/bin/
sudo pecl config-set php_bin /usr/bin/php5.6
sudo pecl config-set php_suffix 5.6
But they don't help: pecl still gives me the list of extensions installed for php5.6, even after I switch to php7.
$ pecl list
Installed packages, channel pecl.php.net:
=========================================
Package  Version State
amqp     1.7.1   stable
libevent 0.1.0   beta
stats    1.0.3   stable
It should be empty for php7.0! How can I solve the problem?
UPD
For amqp, I simply installed the php-amqp package without using pecl:
apt-get install php-amqp
And libevent still doesn't exist for php7. I haven't found a way to switch the pecl installation between the 5.6 and 7 versions, so the question is still open.
Here’s what worked best for me when trying to script this (in case anyone else comes across this like I did):
$ pecl -d php_suffix=5.6 install <package>
$ pecl uninstall -r <package>
$ pecl -d php_suffix=7.0 install <package>
$ pecl uninstall -r <package>
$ pecl -d php_suffix=7.1 install <package>
$ pecl uninstall -r <package>
The -d php_suffix=<version> piece allows you to set config values at run time vs pre-setting them with pecl config-set. The uninstall -r bit does not actually uninstall it (from the docs):
$ pecl help uninstall
pecl uninstall [options] [channel/]<package> ...
Uninstalls one or more PEAR packages.  More than one package
may be specified at once.  Prefix with channel name to
uninstall from a channel not in your default channel
(pecl.php.net)

Options:
  ...
  -r, --register-only
        do not remove files, only register the packages as not installed
  ...
The uninstall line is necessary; otherwise, installing it will remove any previously installed version, even if it was for a different PHP version (for example, installing an extension for PHP 7.0 would remove the 5.6 version if the package was still registered as installed).
Answer:
When pecl throws the error "... is already installed and is the same as the released version", switch to the required php, php-config and phpize versions before installing from pecl, or just run the installation with the force flag:
sudo pecl install -f <package-name>
Answer:
I ran into this same issue while updating my Vagrant box with XHGui, as XHGui requires mongodb. I wanted to be able to support profiling on both PHP 5.6 and 7.0.
I dug into the pecl source code, and found that there's a metadata_dir config option. That is a path to a directory where the current state of installed packages is stored. Unfortunately, that isn't namespaced per PHP version. If you try to set it with pecl config-set, you get an opaque 'failed' error. It turns out that setting isn't whitelisted as being configurable in the \PEAR_Config class:
/**
 * Configuration values that can be set for a channel
 *
 * All other configuration values can only have a global value
 * @var array
 * @access private
 */
var $_channelConfigInfo = array(
    'php_dir', 'ext_dir', 'doc_dir', 'bin_dir', 'data_dir',
    'cfg_dir', 'test_dir', 'www_dir', 'php_bin', 'php_prefix',
    'php_suffix', 'username', 'password', 'verbose',
    'preferred_state', 'umask', 'preferred_mirror', 'php_ini'
);
In PECL’s world, ‘global’ means it can only be set at install time, and not after.
There’s an issue in the PPA tracker over at github:
The final suggestion there is to build the extension manually for alternate PHP versions. I ended up using pecl for PHP 7 extensions, and manual builds for 5.6. Make sure you run update-alternatives for php-config and phpize, and not just php, before building:
update-alternatives --set php /usr/bin/php5.6
update-alternatives --set php-config /usr/bin/php-config5.6
update-alternatives --set phpize /usr/bin/phpize5.6
Then, extract the extension and build it. These steps from the above issue worked for me with the mongodb extension:
phpize5.6 && ./configure --with-php-config=php-config5.6 && make && sudo make install | https://exceptionshub.com/ubuntu-how-to-install-php-extension-using-pecl-for-specific-php-version-when-several-php-versions-installed-in-system.html | CC-MAIN-2021-39 | refinedweb | 860 | 59.5 |
Beginning C#: Making Decisions
Last month, we looked at variables in more detail. This month, we're going to learn how to use the data you've got stored in the variables to allow your programs to make decisions.
Decisions in C# are quite simple to do and involve the well-known "if", "then", "else" structure that most languages have.
An if statement is known as a "truthy" statement. What this means is that it doesn't actually pass or fail based on the condition of its operators, but rather it passes or fails depending on the operation on its operands producing a true or a false result.
I realise that the previous paragraph may not make much sense (it didn't when I first learned to program), but bear with me. I'll explain what it means.
Computers are not inherently able to know how similar something is to something else, and so cannot answer something like:
"Dark Brown is similar to Light Brown."
What they can do, and do very fast, is to say that something is not equal. So, taking the previous statement, a computer can say that:
"Dark Brown is absolutely not equal to Light Brown."
Showing this as a "truthy" statement means we can definitely say:
"It's TRUE that Dark Brown is absolutely not equal to Light Brown."
Or
"Its FALSE that Dark Brown is absolutely equal to Light Brown."
Note the "TRUE" and "FALSE" here. It's not the response to the comparison we're interested in; it's what we can determine about the comparison that's the interesting part.
Taking a more classic comparison:
"is Name equal to 'Peter'"
If the variable called Name contains 'Peter', then from a truthy point of view
"it's TRUE that the variable Name is equal to 'Peter'"
Again, it's not the comparison that matters; it's the fact that the operation of comparing the contents of the variable called Name to the string 'Peter' yields us a TRUE result.
Practice it a few times, and it'll start to make sense.
If we start to put this into an actual if/then statement, you'll also start to understand it better.
Start a console mode program, and add the following code to your main program.
using System;

namespace NutsAndBolts
{
    class Program
    {
        static void Main()
        {
            string Name = "Peter";

            if(Name == "Peter")
            {
                Console.WriteLine("Your name is peter");
            }
        }
    }
}
The if statement, as you can see, is using '==' as its operation. '==' means "Is Equal to" and is used to perform an equality operation between the two operands, in this case the contents of the variable 'Name' and the string "Peter".
If you observe, the 'if' statement operation is enclosed in standard brackets, and this strengthens the fact that the result of anything inside those brackets should give either TRUE or FALSE as its result.
If you run the program, you should see the following:
Figure 1: Output from our first if statement
You also can check if something is the inverse by using the "Is NOT Equal to" operation, which looks like this:
using System;

namespace NutsAndBolts
{
    class Program
    {
        static void Main()
        {
            string Name = "Peter";

            if(Name != "Peter")
            {
                Console.WriteLine("Your name is not peter");
            }
        }
    }
}
The exclamation mark negates the operation to be the opposite of what it would normally be.
If you're comparing numbers, you can make other decisions too, such as "Is greater than" which is represented by '>' and "is less than", represented by '<'. These are used as follows:
using System;

namespace NutsAndBolts
{
    class Program
    {
        static void Main()
        {
            int Number = 30;

            if(Number > 20)
            {
                Console.WriteLine("Number is larger than 20");
            }

            if(Number < 50)
            {
                Console.WriteLine("Number is smaller than 50");
            }
        }
    }
}
Understanding the "Truthy" nature of an if statement is very important once you start dealing with equations. Each side of the operation doesn't have to be a single variable or a static value; you can include complex math equations in there if you wish, too; for example:
using System;

namespace NutsAndBolts
{
    class Program
    {
        static void Main()
        {
            int Number = 30;

            if(((Number * 10) / 15) > ((Number * 15) / 20))
            {
                Console.WriteLine("I'm not even going to try and work that one out :-)");
            }
        }
    }
}
Is perfectly valid, and, as you can see, very self-contained on each side of the operation.
There's also a second part to the if statement, and that's the "else" part. The "else" part contains the code to execute if the original decision doesn't resolve to a "TRUE" condition. For example:
using System;

namespace NutsAndBolts
{
    class Program
    {
        static void Main()
        {
            string Name = "Peter";

            if(Name == "Peter")
            {
                Console.WriteLine("Your name is peter");
            }
            else
            {
                Console.WriteLine("Your name is not peter");
            }
        }
    }
}
If you compile and run this version, and change the 'string Name = ' line, you should be able to get the decision logic to switch between the two cases.
To finish off this month's post, I encourage you to go back and recap the previous posts, and put together a small program that uses 'ReadLine' to read in a name, and then print different messages depending on different tests. Next month, we look at 'if' in more detail, where I'll explain how to use different operators to combine several tests into one decision, using Boolean logic.
Best way to implement recurring task
Hi there,
I'm currently working on a project that should call an API about every minute (the exact timing is not that important). In order to do that, I have a while loop which is paused after each iteration for 60 seconds.
def loop():
    while True:
        update()
        time.sleep(60)

loop()
Is there any better way to do this? It seems that while the script is paused you can't stop it by tapping on the [x]. It appears to wait for the sleep to end and only terminates after that.
Right now, sleep can't be interrupted. If you want to sleep for such a long time, I would recommend calling it multiple times instead:

for i in xrange(60):
    time.sleep(1)
If this is from a ui, you could use ui.delay at the end of a function which calls itself. Cancelling would be handled by ui.cancel_delays(), either from a button, or from will_close, or by checking on_screen, etc.
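A more general pattern for an interruptible periodic task is to sleep on a threading.Event instead of calling time.sleep: Event.wait returns early the moment the event is set, so the loop can be stopped without waiting out the full interval. This is a generic Python sketch, not Pythonista-specific; the update callback and the short interval below are purely illustrative.

```python
import threading
import time

stop = threading.Event()

def loop(update, interval=60.0):
    # Call update() roughly every `interval` seconds until stop is set.
    while not stop.is_set():
        update()
        # Unlike time.sleep(), this wait is interruptible: it returns
        # as soon as stop.set() is called from another thread.
        stop.wait(interval)

# Usage sketch: run the loop in a worker thread and stop it on demand.
ticks = []
worker = threading.Thread(target=loop, args=(lambda: ticks.append(1), 0.05))
worker.start()
time.sleep(0.2)
stop.set()          # interrupts the current wait immediately
worker.join(timeout=1.0)
```

With a 60-second interval, pressing a stop button (or a signal handler) only has to call stop.set() and the loop ends at once instead of up to a minute later.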
QDialog compilation error
- matthewkerr
Hi. I'm sure this is a really simple question.
I've created a Qt Widget Application.
I then created a QDialog which is to be an About dialog. I used the Qt Designer Form Class and went with a Dialog with Buttons. I only wanted the OK button, and couldn't work out how to remove the Cancel button, so I deleted the buttons and added an OK button.
So far so good.
I then made the following changes to my QMainWindow header file:
Forward declared my Ui::About class.
Added a Ui::About member variable.
I then made the following changes to my QMainWindow CPP file:
#include "about.h"
Added to the constructor body:
about = new Ui::About(this);
When I compile I get the error:
'Ui::About' : class has no constructors
It's as if I'm using the wrong #include in the QMainWindow CPP file.
If I also add #include "ui_about.h" I get a different error:
'Ui::About::About(const Ui::About &)' : cannot convert argument 1 from 'TargetController *const ' to 'const Ui::About &'
Reason: cannot convert from 'TargetController *const ' to 'const Ui::About'
No constructor could take the source type, or constructor overload resolution was ambiguous
I think I'm misunderstanding something simple.
Matt.
- mrjj Qt Champions 2017
Hi
Normally you would just include the AboutDialog.h and

AboutDialog dia;
dia.exec();

to show it.
The Ui::About is used internally in the Dialog in the constructor.
So you just construct an instance of your dialog class and it should just work.
- kshegunov Qt Champions 2017
ui = new Ui::Notepad;
ui->setupUi(this);
Here there's a full example: | https://forum.qt.io/topic/84371/qdialog-compilation-error | CC-MAIN-2018-43 | refinedweb | 279 | 66.44 |
Rusty said:
> Well, introduce an EXPORT_SYMBOL_INTERNAL(). It's a lot less code. But you'd
> still need to show that people are having trouble knowing what APIs to use.

Might the recent discussion on the exporting of sys_open() and
sys_read() be an example here? There would appear to be a consensus
that people should not have used those functions, but they are now
proving difficult to unexport.

Perhaps the best use of the namespace stuff might be for *future*
exports which are needed to help make the mainline kernel fully modular,
but which are not meant to be part of a more widely-used API?

jon
XML Best Practices for Microsoft SQL Server 2005
Contents
Introduction
Data Modeling
Data Modeling Using XML Data Type
Usage
Data Binding in Queries
Catalog Views for Native XML Support
Introduction:
- Data modeling:
You may find one storage option more suitable than others.
Hybrid Model
Untyped, Typed, and Constrained XML Data Type
Internal Storage of XML
For ease of exposition, the queries in this section are described against XML data instances stored in an XML column (xCol) of a table (docs).

The stored size in bytes of the XML instances in the XML column can be found using the DATALENGTH() function.
In-Row and Out-of-Row Storage
Secondary XML Indexes
Full-Text Index on XML Column
- Filter the XML values of interest using SQL full-text search.
- Query those XML instances, which uses the XML index on the XML column.
Once the full-text index has been created on the XML column, a query can check whether an XML instance contains the word "Secure" in the title of a book.
Support for Different Languages in Full-Text Index on XML Column.
Property Promotion
Computed Column Based on XML Data Type
Property Table
Suppose you want to promote the first names of authors. Books have one or more authors, so first name is a multivalued property. Each first name is stored in a separate row of a property table. The primary key of the base table is duplicated in the property table for the back join.
Example: Create the Property Table
Example: Create Triggers to Populate Property Table
Insert trigger—Inserts rows into the property table.
Delete trigger—Deletes rows from the property table based on the primary key value of the deleted rows.
XML Schema Collections
Multityped Column
Lax Validation Disallowed in Wildcard Sections
Using xs:datetime, xs:date, and xs:time
Values of type xs:datetime, xs:date, and xs:time must be specified in ISO 8601 format and include a time zone. Otherwise, the data validation for these values fails. Thus, 2005-05-27T14:11:00.943Z is valid as a value of type xs:datetime, but the following are not: 2005-05-27 14:11:00.943Z (missing date and time separator "T"), 2005-05-27T14:11:00.943 (missing time zone) and 2005-05-27 14:11:00.943 (missing time separator and time zone). Similarly, 2005-05-27Z is a valid xs:date value but 2005-05-27 is not since no time zone is specified.
Untyped XML data may contain date, time, and datetime values that an application may wish to convert to the SQL types dateTime or smallDateTime. These date, time, and datetime values may not conform to ISO 8601 format or contain a time zone. Similarly, typed XML may contain such values as types other than xs:date, xs:time, and xs:dateTime.
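The lexical rules above are easy to check outside the server as well. The sketch below is a simplification of the full XML Schema grammar (it covers only the xs:dateTime shape discussed here, and the regular expression is illustrative rather than a complete validator), but it reproduces the valid and invalid examples from the text:

```python
import re

# Rough xs:dateTime lexical check: date and time joined by 'T', optional
# fractional seconds, and a mandatory time zone ('Z' or a +hh:mm/-hh:mm
# offset).  A simplified sketch, not the full XML Schema grammar.
XS_DATETIME = re.compile(
    r"^\d{4}-\d{2}-\d{2}"          # date part
    r"T"                           # required date/time separator
    r"\d{2}:\d{2}:\d{2}(\.\d+)?"   # time part with optional fraction
    r"(Z|[+-]\d{2}:\d{2})$"        # required time zone
)

def is_valid_xs_datetime(value):
    return XS_DATETIME.match(value) is not None

print(is_valid_xs_datetime("2005-05-27T14:11:00.943Z"))  # True
print(is_valid_xs_datetime("2005-05-27 14:11:00.943Z"))  # False: no 'T'
print(is_valid_xs_datetime("2005-05-27T14:11:00.943"))   # False: no zone
```

Applying a check like this before bulk-loading untyped data can catch values that would later fail validation against a typed XML column.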
Bulk-Loading XML Data
Non-Binary Collations and Type Inference
Error Model
Singleton Checks
In SQL Server 2005, the expression /age/text() returns a static error for any simple-typed <age> element. On the other hand, fn:data(/age) returns the integer 12, while fn:string(/age) yields the string "12".
value(), nodes(), and OpenXML()
In this example, nodes('//author') yields a rowset of references to <author> elements for each XML instance. The first and last names of the authors are obtained by evaluating value() methods relative to those references.
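The shredding idea behind nodes() and value() — turn each selected node into a row, then extract scalar values relative to that node — can be illustrated outside SQL Server with Python's standard-library ElementTree. The sample document and element names below are made up for the illustration:

```python
import xml.etree.ElementTree as ET

# A made-up document with two <author> elements.
doc = """
<book>
  <author><first-name>David</first-name><last-name>Smith</last-name></author>
  <author><first-name>Ann</first-name><last-name>Jones</last-name></author>
</book>
"""

root = ET.fromstring(doc)

# Analogue of nodes('//author'): one "row" per author node, with the
# value() calls replaced by findtext() evaluated relative to each node.
rows = [(a.findtext("first-name"), a.findtext("last-name"))
        for a in root.iter("author")]

print(rows)  # [('David', 'Smith'), ('Ann', 'Jones')]
```

The key point in both cases is that the per-node extraction is evaluated relative to each node reference, which is what makes multivalued properties come out as one row per occurrence.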
The view V contains a single row with a single column xmlVal of XML type. It can be queried like a regular XML data type instance; for example, a query can return the author whose first name is "David".
Example: Applying XSL Transformation
Consider a CLR function TransformXml() that accepts an XML data type instance and an XSL transformation stored in a file, applies the transformation to the XML data, and returns the transformed XML in the result. A skeleton of such a function can be written in a .NET language and registered with the server. More information can be found in SQL Server 2005 and Visual Studio "Whidbey" Books Online.
Data Binding in Queries
The sql:column() and sql:variable() functions let you bind values from a relational column or a Transact-SQL variable in your XQuery or XML DML expression.
Catalog Views for Native XML Support
Catalog views exist to provide meta-data information regarding XML usage. A few of these are discussed below.
XML Indexes
XML index entries appear in the catalog view sys.
Space usage of XML indexes can be found in the table-valued function sys.dm_db_index_physical_stats() . It yields the number of disk pages occupied by the XML index idx_xCol_Path in table docs across all partitions. Without the sum() function, the result would return the disk page usage per partition.
Retrieving XML Schema Collections(schemaName, XmlSchemacollectionName, namespace-uri) yields an XML data type instance containing XML schema fragments for schemas contained in an XML schema collection, except. | https://msdn.microsoft.com/en-us/library/ms345115.aspx | CC-MAIN-2015-35 | refinedweb | 765 | 55.54 |
1 2 3 delta
1 2 3 apricot
1 2 3 charlie
1 2 3 bravo
1 2 3 echo
1 2 3 fox
[download]
var leg = {
aa: "a label",
ab: "another label",
....
};
var map = {
aa : ["R1"],
ab : ["AA1"],
ac : ["AA1", "AA2"],
....
};
var monorows = [
["R6", "c", "1", "CB16", "CB15" ... codes that are listed in map],
...
];
[download]
use JSON;
use Data::Dump "dd";
my $leg_str =<<'END';
{
"aa": "label1",
"ab": "label2",
...
"er": "last label"
}
END
my $leg = decode_json($leg_str);
[download]
use WWW::Mechanize::PhantomJS;
my $js = WWW::Mechanize::PhantomJS->new();
$js->get_local("my_js_file.js");
#print $js->content; I get the content ok here.
my ($val, $type) = $js->eval_in_page('function(){JSON.stringify(leg);}
+');
[download]
Thanks
frazap
There are a number of examples of creating multipart form data for sending to a HTTP server.
I found very few, server side parsing examples. This ranged from getting a message that CGI.pm is delisted in the standard perl suite, to finding an HTTP::MultiPartParser library function, etc.
However when using the latter, HTTP::MultiPartParser functions, a 'boundary' pattern was required to be given when creating the object.
So, upon receiving content, how does one find the 'boundary' to then use HTTP::MultiPartParser to then parse the file?
Could someone point me to an example where perl is used to receive the file selected via the multipart form, and then parsed out to filename, and file content segments.
gene complement(<5504..>6553)
/locus_tag="OT_ostta02g00030"
/old_locus_tag="Ot02g00030"
/db_xref="GeneID:9832574"
[download]
$infile = "GCF.gbff";
$outfile = "a1.txt";
open(IN, $infile) or die "Failed to open $infile\n";
open (OUT, ">$outfile") or die "Failed to open $outfile\n";
$n=0;
while ($line = <IN>)
{
chomp ($line);
$line =~ s/\r$//;
if ( $line =~ /<\d+..>\d+/ )
{ ++$n; my ($start, $stop) = ($1, $2);
print OUT "$start, $stop, $header";
}
}
print STDERR"$n";
sub header
{
@geneID = split /\s+/, $start, $stop;
$header = "$fastaprompt$geneID[1]";
}
close (IN);
close (OUT);
[download]
use strict;
use Text::Template;
my $user_info = 'u1';
my $site = 'a';
my $string = './{$user_info}.{$site}.txt';
my $template = Text::Template->new(TYPE =>'STRING',SOURCE=>$string);
$string = $template->fill_in({site=>$site,user_info=>$user_info});
print "$string\n";
[download]
Hi,
Something similar has been asked here before, but It wasn't solved
Of late, the XCode command line tools on Mojave MacOs do not remember the default framework path, which i can however pass to cc by adding -F /Library/Frameworks.
I can of course edit the Makefile.PL, but I would rather prefer to pass it from the command line, because it scales better
Any Ideas?
greetings, el
Unfortunately i ended up getting into the same situation.
I am new to perl and python and through a cgi calling a perl script and python code is embedded in Perl script.
Tried the same code but still no luck.
Below is the code :
#----------------------------
# Defining The constants
#---------------------------
*SUBMIT = \'submit';
*TRUE = \1;
*FALSE = \0;
*DISABLED = \'DISABLED';
sub store_policy() {
print "Started\n";
use Inline::Python qw(py_new_object py_call_method);
use Inline Python => 'DATA',
DIRECTORY => '/usr/local/webmin/gehc_password_management/_Inli
+ne/';
Inline->init();
print "\nGoing to create Object\n";
my $obj = py_new_object("gehc_password_management","gehc_password_mana
+gement","passwordManagement");
#print "hello".py_call_method($obj,"testPrint")."\n";
print("\nCreated the Object");
#my $flag = $obj->getPasswordConfigFilePath();
#print($flag);
print "\nClosed";
}
1; # Return true, required for libraries
__DATA__
__Python__
from passwordChangeNew import PasswordManagement as passwordManagement
[download]
Started HTTP/1.0 500 Perl execution failed Server: MiniServ/1.840 Date
+: Tue, 11 Jun 2019 22:58:49 GMT Content-type: text/html; Charset=iso-
+8859-1 Connection: close
Error - Perl execution failed
Marker '__Python__ from passwordChangeNew import PasswordManagement as
+ passwordManagement ' does not match Inline 'Python' section.at ./pol
+icy-lib.pl line 36.
[download]
Need it as asap
use strict;
use warnings;
use Data::Dump 'dd';
my $s = "a aa aaa aaaa";
$s =~ /(?<a>a+) (?<a>a+) (?:(?<a>a+)bbb)?/;
dd \@{^CAPTURE}; # all captured groups
dd $+{a}; # leftmost defined "a"
dd ${^CAPTURE}{a}; # ditto
dd $-{a}; # all defined "a"'s groups
dd ${^CAPTURE_ALL}{a}; # ditto
__END__
["a", "aa"] # correct
"a" # correct
undef
["a", "aa", undef]
"a"
[download]
Documentation is not very verbose about new %{^CAPTURE}, %{^CAPTURE_ALL} variables, they are listed as if they are English synonyms to old %+ and %-, but they are obviously not, they look plain wrong to me.
The "aaa" was deleted from @{^CAPTURE} array (or was not even added to begin with), when rightmost cluster failed to match, and deleted as array element from @{$-{a}}, but the $#{$-{a}} was not changed from wrong 2 to expected 1, hence unexpected undefined element in @{$-{a}}.
Update: Actually, w/r/t %-, re-reading the docs, there's no phrase "all defined "a"'s groups" as I stated above.
To each capture group name found in the regular expression, it associates a reference to an array containing the list of values captured by all buffers with that name (should there be several of them), in the order where they appear.
Yes, they say "all buffers with that name", but can undef be said to be "captured"? Can failed sub-expression "capture"? It's ambiguous.
Update 2: Mixing these "CAPTURE" things is broken:
#dd \@{^CAPTURE};
dd \%{^CAPTURE};
[download]
is OK, but un-commenting 1st line results in empty unblessed hash in the 2nd.
I want to copy content from one excel file to another retaining the format too. Here by format I mean fill color, border, bold, italic, etc. I have written a code for this where I extract the value and format number from one excel file and simply write in the other excel sheet. I have referred the following link for that.
The problem is that it is not retaining the format. I think the problem is that the format number is not universal which means that a format number means two different things in two different excel files. When I run this code I get the error
Use of uninitialized value in hash element at /pkg/qct/software/perl/5.18.0_test/lib/site_perl/5.18.0/Spreadsheet/ParseExcel.pm line 2383.
According to me, it means that the extracted format number doesn't mean anything to other file. Please let me know solution to this problem
Basically what my problem is that I have modified two xls file using Spreadsheet::ParseExcel::SaveParser and I want to merge those two xls files using Perl. Please suggest a way of merging two xls files using Perl in any other way such that formatting is retained. Please suggest non-Perl way too using some other coding language.
This question has been cross-posted on stackoverflow :
#!/usr/bin/perl
use strict;
use warnings;
use Spreadsheet::ParseExcel;
use Spreadsheet::ParseExcel::SaveParser;
# Open an existing file with SaveParser
my $parser = Spreadsheet::ParseExcel::SaveParser->new();
my $template = $parser->Parse('template.xls');
my $parser1 = Spreadsheet::ParseExcel::SaveParser->new();
my $template1 = $parser1->Parse('test_perl.xls');
my $worksheet11 = $template->worksheet(0);
$template1->AddWorksheet('New Data');
my $worksheet22 = $template1->worksheet(0);
my $cellz; my $valua;my $format_number;
for (my $i = 0; $i < 400; $i++) {
for (my $j = 0; $j < 20; $j++) {
$cellz = $worksheet11->get_cell( $i, $j );
if($cellz){
$valua = $cellz->unformatted(); $format_number = $cell
+z->{FormatNo};
$worksheet22->AddCell($i, $j, $valua,$format_number);
}
}
}
my $workbook;
$workbook = $template1->SaveAs('newfile1.xls');
[download]
my ($DICT, $first, $last);
open $DICT, "<", "/usr/share/dict/words";
while (<$DICT>) {
#FIXME
$first = $1 if ?(^neuro.*)?; # <- syntax error here
$last = $1 if /(^neuro.*)/;
}
close $DICT;
print "first : $first, last : $last \n";
print "-" x 10, "\n";
#
[download]
Hi All,
I am trying to use LWP ( i dont mind using something else ) to do some posts and get from a website API
But i am failing at the first step. According to their docs you need to login with your broswer, get the cookie AUTH_TOKEN and get the and then use that output for the curl command authorization. Is there way to do this in perl ?
The curl would look like
curl -X GET "<URL> -H "accept: application/json" -H "authorization:<to
+ken>"
[download]
my $cookies = HTTP::Cookies->new();
my $ua = LWP::UserAgent->new( cookie_jar => {} );
$req -> authorization_basic ( 'cloud-bespoke' );
my $rc = REST::Client->new( { useragent => $ua } );
my $headers = {Content-type => 'application/json'};
my $client = REST::Client->new( { useragent => $ua });
my $res = $client->POST('URL',
'{"username": "username, "password":"password"}', {"Content-type" => '
+application/json'});
chkerr($client->responseCode());
print $client->responseContent();
print "\n" . $cookies->as_string;
[download]
Given string with text, I need to create n-grams of predefined lengths. I came up with the following. Any suggestions on how to improve it (being speed an important factor in my process?). The sentence, i.e. the array will contain typically 5-15 elements.
use strict;
use warnings;
my $sentence = "this is the text to play with";
my @string = split / /, $sentence;
my $ngramWindow_MIN = 2;
my $ngramWindow_MAX = 3;
for ($ngramWindow_MIN .. $ngramWindow_MAX){
my $ngramWindow=$_;
my $sizeString = (@string) - $ngramWindow;
foreach (0 .. $sizeString){
print "START INDEX: $_ :";
print "@string[$_..($_+$ngramWindow-1)]\n";
}
}
[download]
In a recent post, I wanted to reference the perl documentation for the open function. The following code links to the right page, but does not go the section on 'open'.
open
Am I overlooking some detail in the FAQ. (I am not even going to try to create a link to this today)
I believe that this example is typical of several links that I have posted. The same solution will probably apply to all of them, but lets concentrate on this one for now.
Yes
No
Results (76 votes). Check out past polls. | https://www.perlmonks.org/index.pl/?node=the%20Monastery%20Gates | CC-MAIN-2019-26 | refinedweb | 1,559 | 61.36 |
Important: Please read the Qt Code of Conduct -
[SOLVED]QML+QtMultimediaKit to play audio on n900
I have been trying to get a project going to play audio files through my n900 based on qml. It took me a good long time to get to where I am at, and currently I am at my wits ends. Here is a rundown of what I have done and what I am using. I am using the Qt SDK 1.1 Technical Preview. It took me a while but I learned that you have to enable the Mobility in the .pro file by doing this.
CONFIG += mobility
MOBILITY += multimedia
After that I learned that the libs on my phone are obviously out of date. So I did an apt-get install libqtm-11-* so that I could get the 1.1 QtMobility libs on my n900 per this post - As stated there I went in and put
CONFIG += mobility11
MOBILITY += multimedia
As long as I don't have the "11" I can compile in the Simulator. It does give me the known error about connecting to .com.nokia and I can see the rectangle and the text. I build and install on the n900 and I get a blank screen. I have read pages and pages of docs on the qml audio tags and such and I have done what I think is correct. Please help me if you can. Here is my qml code as well.
@
import Qt 4.7
import QtMultimediaKit 1.1
Item {
id: whole
Rectangle {
width: 360
height: 360
Text {
text: "Click Me...And Listen?"
anchors.centerIn: parent
}
Audio {
id: playMusic
source: "example.mp3"
}
MouseArea {
id: playArea
anchors.fill: parent
onPressed: { playMusic.play() }
}
}
}
@
Am I missing some #includes that I need in the .cpp file? I really don't know what to do. Days of this have yielded no results. I thank you all in advance for reading this and especially to anyone who answers. Have a nice day.
Anyone, I'm also interested to know how to play audio with QML?
Since someone actually replied, I will detail what I had to do to get this working.
- You will need root access to your n900. rootsh is the program for that. In terminal now do the following
sudo gainroot
apt-get install libqtm-11
That's it for the libraries. There are posts out there that say to move stuff. Those are outdated. The newer versions install in the correct directories.
2.I had to remove the Item "parent" in my code. The n900 doesn't like an undefined width and height parent. Now since you should have
@
viewer.showExpanded();
@
in your .cpp what you actually define it won't matter because this will set it to the size of the phone.
- To make sure you get to the libraries you need to add this to your .cpp file
@
#include <QtDeclarative>
@
and inside your viewer loader add
@
viewer.engine()->addImportPath(QString("/opt/qtm11/imports"));
@
- Now on to your .pro file. One quick change here lets you make the code for Symbian(as I have read, because I don't have a Symbian phone to test my app on I can only relay what I have read, man I want one though to see if it works) you need to add this.
@
maemo5 {
CONFIG += mobility11
} else {
CONFIG += mobility
}
MOBILITY += multimedia
@
- Now at the start of your .qml file just put
@
import Qt 4.7
import QtMultimediaKit 1.1
@
and you are dialed. Have fun! I know I am now that I sorted a million posts out and got this working. Hopefully I can save some other starting folks some time.
Hey, thanks!!
I'll try that out, and report what's happening with N8 also :)
What type is that viewer? I can't do like that with
@
QDeclarativeView view;
view.engine()->addImportPath(QString("/opt/qtm11/imports"));@
it says error: invalid use of incomplete type 'struct QDeclarativeEngine'
Well that was to point to the libqtm-11 on n900. If you are getting that error on the N8 it might be possible to do the same maemo5{ view.engine()->addImportPath(QString("/opt/qtm11/imports")); }
I don't know if that will work or not.
I made that installation to my n900, but that's build error, not device error?
and QDeclarative view doesn't have showExpanded ?
Just checking to make sure you have the #include <QtDeclarative> at the top of the .cpp file. If not only have one more idea.
Yeah, i missed some include, but now it's saying .qml:3:1: module "QtMultimediaKit" is not installed.. so the apt-get install didn't install all the needed files then :/
It is saying that on the n900 or is it saying that when you build in QtCreator?
It's saying that in the QtCreator application output window, when it is trying to launch the program in the N900.
Hi,
I am facing similar issue “module “QtMultimediaKit” is not installed “ on Centos Linux. I have my libs ,imports and plugins in /usr/local/lib on my embedded target
I have the following env variables set
export QT_PLUGIN_PATH=”/usr/local/lib/plugins”
export QML_IMPORT_PATH=”/usr/local/lib/imports”
In Imports Folder, i have the following
ls /usr/local/lib/imports
QtMultimediaKit
ls /usr/local/lib/imports/QtMultimediaKit/
libdeclarative_multimedia.so qmldir
ls /usr/local/lib/plugins/
accessible imageformats playlistformats webkit
bearer kbddrivers qmltooling
gfxdrivers mediaservice script
iconengines mousedrivers sqldrivers
I tried with the following qml
import Qt 4.7 import QtMultimediaKit 1.1 Rectangle { id: page width: 300; height:200 color: “lightgray” Text { id: helloText text: “Hello world!” y: 30 anchors.horizontalCenter: page.horizontalCenter font.pointSize: 10; font.bold: true } Audio { id: playMusic source: “audio.wav” } MouseArea { id: playArea anchors.fill: parent onPressed: { playMusic.play() } } }
In the PRO file , I have the following
QT += declarative multimedia
+CONFIG += mobility
+MOBILITY = multimedia
Inspite all these changes I am seeing the issue “module “QtMultimediaKit” is not installed”.
QT – v4.7.4
Mobility – v1.2
Can Anyone help me what has gone wrong ?
If you have Mobility 1.2 installed then it is safe to say that you are declaring the wrong version of QtMultidemiaKit. Try
@import QtMultimediaKit 1.2@
Sometimes it is just the version. Hope that helps. | https://forum.qt.io/topic/3548/solved-qml-qtmultimediakit-to-play-audio-on-n900 | CC-MAIN-2022-05 | refinedweb | 1,043 | 68.06 |
Registry in C# article was slightly changed, general descriptions where improved and a correction about the Flush() method has been added.
Note that there might be a change in Flush() method from .NET FW version 1, and SP2, seems like SP2 deals better with the MSDN saying that Flush is rarely neede...but that is just a general thinking
The?
The article 'Properting and Indexing' has been succesfully retrieved from backup, and been placed at the articles section (its now at the links on the right)
There are cases when we want our application to be activated only once, if a second instance is opened we want to terminate it, but before that we might want to show the first instance.
A great example is hown here at codeproject, it also good to note that this example is very good at showing how to easily interface with Win32 api's.
Site has been slightly updated, to provide extra space for text.
The Articles section now includes all articles that have been posted for the last two months, for extra space and greater archiving.
The Weblog itself will start from now to include only updated on articles added, and/or short C# explanations...meaning that all LOOOONG code examples, or explanations will auto-become articles.
i'd love getting feedback.
You...
An article about unsafe coding in C# has been added to the links section (right).
[...] unsafe code in C# means usage of pointers, what the CLR does behind the scenes when we use a managed code is part of it, and by stating that we have an unsafe code, we declare a function/line to be 'able' to reach memory directly[...]
An easy and fast way to create unique ID's for your application is GUIDs,
A GUID is a 128-bit integer (16 bytes) that can be used across all computers and networks wherever a unique identifier is required. Such an identifier has a very low probability of being duplicated.
in C# we create a new guid like so:
System.Guid guid=System.Guid.NewGuid();
this will generate a guid out of your local MAC (network adapter physical address which itself is unique) your local time, and more...if you run this more then once: guid=System.Guid.NewGuid(); you are guaranteed to NEVER get the same guid again.
String interning...what does it mean? catch a new article (right) that explains that exact issue.
[...]Defenition: The CLR maintains a table (the "intern pool"), which holds one instance of each unique string declared in a program, as well as any unique instance of string you programmatically added[...]
This you will all love...i hope.
I have wrote an application which make use of the easy class in C# - WebClient.
the application logs into a playboy pics site, and downloads ALL pictures from the site to your computer...now notice that its about 600 pics!!!
its on the articles section to the right.
A very nice,easy and smart way to run a console application as a second process from your console application, and read its output into string buffer of your application.
Articles links to the right.
Abstract abstract abstract, what is it?
well read it all on the new article about C# Abstract Classes.
In my last Hashtable & serialization example, you my have noticed a Hashtable addition that has a different approach then the retrival.
for addition we used: myCollection.add(mn,ad);
for retrival we used: ad = myCollection[mn];
where myCollection is the hashtable, mn and ad are structs.
i got an email saying that an addition could be performed as such: myCollection[mn]=ad; and that is correct and excellent, but lets point the differences between property addition and add method:
While using a property addition (eg myCollection[mn]=ad;) we can add new items, However, if the specified key already exists in the Hashtable, setting the Item property overwrites the old value. In contrast, the Add method does not modify existing elements, and if the item exists, an exception is thrown.
hope that clears things up.
Just added a new **article** to the articles section (left).
this article shows a full usage of the hashtable data structure in C# and a way to serialize it into a binary file.
the full application is a phone-like book, you can freely change it and use it as you wish.
any questions regarding that article will be hapily answered
This has been a question that i saw couple of times at the Microsoft news groups...and it goes something like that: "Is there a way to search a given string for a specific sub-string and replace that with another, but the string to be searched will be case insensitive?"
And i decided to sit down and hit the keyboard until a nice function will come up, and it works,,,check the articles link (right) for the 'Find & replace Insensitive' article...
Check out my first article, on the 'articles' link to the left (below the 'home' link).
the article shows how to transit to C# form C++ and not be afraid from the strings...and chars....and ....well read the damn article.
Q: How can i convert an integer to HEX?
A: One way would be: string str = System.Convert.ToString(integer, 16);
Basically, i feel that System.Convert namespace holds everything i need for any conversion needed..
There is something that we can all agree to hate, the incessant buzzing of a mosquito flying around our heads. it can really drive us nuts.
An amazing new use for the pocket pc has come to the rescue..:the Mosquito killer app, i dont know ho good it is, or maybe bears come instead of mosquito's, but its a damn good idea.
XBox Goes Live... a dream comes true for many XBox players...ohhh ya.
A?
This is a very good article, with a depth explanations on how to build (write) a C# chat application.
what is unique, is that this application is non blocking, and teaches the secrets of C#-Sockets.
source [CSharpFriends]
Lets implement a string reverse in C#
in most sites you will see an implementation using a recursion:
public static string Reverse(string str)
{
/*termination condition*/
if(1==str.Length)
{
return str;
}
else
return Reverse( str.Substring(1) ) + str.Substring(0,1);
}
but in C# Crawler, i will show you a nicer version (i think) that uses
the old trick - triple Xor.
private string Rev(string x){;}
private
{;
}
I have been asked about access levels avilable in C#, there are 5 of them, and could be remembered as PPIPP!!!
the top level is Public, which is no restricion.
the second one is protected, which allows access for the containing class, or lass that inherit it.
the third one is internal, which allows access only for the corrent project.
the forth one is protected internal (the only combination available), which allows access for the corrent project, or derived types derived from this class.
the last one is private, and its the most restricted one, allows access only for the containing type.
clear?
...from CSharpHelp.com
Often when surfing web site, we get frustrated with all the small pop-up windows and advertisements that appear on the desktop. Well I got so frustrated with those, that I thought of writing a small application that can automatically close such windows [...] by Shripad Kulkarni
Someone asked me, are there any things that might be confusing in c# if i was a C++ programmer?
I say yes, but i like to call it simplicity.
Simplicity in C# comes to break apart all the annoying things in C/C++. for example, have you ever wrote an if statement in C/C++ that looks like this:
if (x = 0) { . . . }
and wondered for days what went wrong? (if you have'nt noticed, this simple if statement actually assign x with 0, and not comparing it with 0).
In C# you cant do that, C# compiler will complain that this (x=0) statement is not a bool one, and cannot be converted to bool, which means that you must explicitly state every if/while/... as bool, that cannot be converted to other types.
Another thing is Switch statements...a very common place for fall downs;
In C/C++ something like this is ok:
switch(x) {
case 1:
case 2:
[...something...]
break;
case 3:
case 4:
What it means is that case 1 and 2 are "fall through". when x is 1, the switch statement will fall through 2 and up until it hit the break.
in C# every case has a break, this line on code (6 lines if you will) will cause the compiler of c# to generate a break after EVERY case. and if we wanted fall through? well consider this:
goto case 2;
goto case 4;
to gain the same effect.
(c)sagiv
I will be happy to get comments this line of articles.
Have you asked yourself that?
Well i have, and i have the answer for you: Ref and Out are two seperate animals, they both might sound the same, but they eat difrerantly :)
When a function is set to have one or more of its variables as ref, it means that the caller MUST assign a value to that ref variable, and the callee will change it (does not have to). making a long sentance short - ref means that the caller must assign a value (or the compiler will for you) before calling the function.
On the other hand, an Out parameter means that the caller is not required to assign a value to that out variable, but the callee must ,and will, assign a value before returning the variable to the caller.
Very Interesting....must read!
How Al-Qaida Site Was Hijacked. Using tracking tools, a public Web service and impeccable timing, an American hacker explains how he managed to take over al-Qaida's dot-com domain. By Patrick Di Justo. [Wired News]
Check this out: im not really sure, but it seems that the C# compiler (considering pure managed code) is far more fast than the VC++ comppiler..or is it?
please email me if you think i missunderstood it.
Well, after waiting for your responses...here is the answer for the riddle:
the riddle was: [....
given this story..how many coins do you need in order to set the 3 lables to thier correct places?
the answer is 1...yes 1.
if you write down all posibilities for each machine you will come to this:
apple machine (label is incorrect) - could be oranges or mix.
oranges machine (label is incorrect) - could be apples or mix.
mix machine (label is incorrect) - could be apples or oranges...
now, we have one coin, so we put it in the mix machine, if we get apples, then its an apples machine, if we get an orange then its an oranges machine.
after that, we look at the 2 machines there were left, we can eliminate the machine that we found from one of them, and its left to be a mix...and the third one is... what's left.
cool?
far more simplified version, now using the .ToCharArray() method. thanks to Joe Labrock.
using
namespace
{
{
a^=b;
b^=a;
}
{
go(list,0,x);
}
{
Console.Write (list);
Console.WriteLine (" ");
}
{
swap (
go (list, k+1, m);
}
}
}
{
{
Permute p =
p.setper(c2);
}
}
I have managed to write an application that writes down the permutations of a string.
if somebody has a better way to 'swap' two chars in a string... let me know.
pos1^=pos2;
pos2^=pos1;
temp = str.Substring(0,pos1);
temp += str.Substring ((pos2),1);
temp += str.Substring (pos1,1);
temp += str.Substring ((pos2+1));
str =temp;
Console.Write (list [i]);
Console.WriteLine (" ");
swap (
go (list, k+1, m);
Permute p =
p.go(c, 0, 3);
yes, another great link, with an exception!
this C# Link has some great lessons, and great tutorials on c#...another worth a peek
Do you know C#?
i have a question...C# of course...
what if i told you i want a simple C# function that will get a string as an input, and print out (console if you will) all the permutations of that string...can you do that?
give me an email...ring ring
CSharp Friends (of mine)
Just added a cool new link (to the right)...CSharp Friends is worth a look, a very solid place for the beginners and the experts...
Riddle
i have been asked a cute riddle.. lets see who email me the answer..
Got_Dot_Net
C#_Help
C#_Organization
DEVX
I_CSHARP_Code
Google
Microsoft
CSharp_Friends
C_Sharp_Center | http://radio.weblogs.com/0111551/ | crawl-001 | refinedweb | 2,110 | 73.27 |
SYNOPSIS#include <sys/types.h>
#include "lfc_api.h"
int lfc_registerfiles (int nbfiles, struct lfc_filereg *files, int *nbstatuses, int **statuses)
DESCRIPTIONlfc_registerfiles registers a list of files with their corresponding replica entry. If the lfn is already registered, the guid is optional and only the replica is added (after checking that filesize and possibly checksum match). If the lfn is not registered yet, the guid is mandatory for the LFC. The lfn and the replica get registered.
- nbfiles
- specifies the number of files in the array files.
- files
- is a pointer to an array of lfc_filereg structures provided by the application.
struct lfc_filereg { char *lfn; char *guid; mode_t mode; u_signed64 size; char *csumtype; char *csumvalue; char *server; char *sfn; };
-
- A component of lfn prefix does not exist or lfn is a null pathname.
- E2BIG
- Request too large (max 1 MB).
- ENOMEM
- Memory could not be allocated for marshalling the request or unmarshalling the reply.
- EACCES
- Search permission is denied on a component of the lfn prefix or the file does not exist and write permission on the parent directory is denied or write permission on the file itself is denied.
- EFAULT
- files, nbstatuses or statuses is a NULL pointer.
- EEXIST
- The sfn exists already.
- ENOTDIR
- A component of lfn prefix is not a directory.
- EISDIR
- The lfn exists already and is not a regular file.
- EINVAL
- nbfiles is not strictly positive, the length of one of the guids exceeds CA_MAXGUIDLEN or the length of server exceeds CA_MAXHOSTNAMELEN or lfn and guid are both given and they point at a different file.
- ENOSPC
- The name server database is full.
- ENAMETOOLONG
- The length of lfn exceeds CA_MAXPATHLEN or the length of an lfn component exceeds CA_MAXNAMELEN or the length of sfn exceeds CA_MAXSFNLEN.
- SENOSSERV
- Service unknown.
- SEINTERNAL
- Database error.
- SECOMERR
- Communication error.
- ENSNACT
- Name server is not running or is being shutdown.
AUTHORLCG Grid Deployment Team | http://manpages.org/lfc_registerfiles/3 | CC-MAIN-2020-45 | refinedweb | 312 | 57.87 |
Version 0.9.4 released
21 April 2014 Dominik Picheta
The Nimrod development community is proud to announce the release of version 0.9.4 of the Nimrod compiler and tools. Note: This release has to be considered beta quality! Lots of new features have been implemented but unfortunately some do not fulfill our quality standards yet.
Prebuilt binaries and instructions for building from source are available on the download page.
This release includes about 1400 changes in total, including various bug fixes, new language features and standard library additions and improvements. This release brings with it support for user-defined type classes, a brand new VM for executing Nimrod code at compile-time and new symbol binding rules for clean templates.
It also introduces support for the brand new Babel package manager which has itself seen its first release recently. Many of the wrappers that were present in the standard library have been moved to separate repositories and should now be installed using Babel.
Apart from that, a new experimental Asynchronous IO API has been added via
the
asyncdispatch and
asyncnet modules. The
net and
rawsockets
modules have also been added and they will likely replace the sockets
module in the next release. The Asynchronous IO API has been designed to
take advantage of Linux’s epoll and Windows’ IOCP APIs, support for BSD’s
kqueue has not been implemented yet but will be in the future.
The Asynchronous IO API provides both
a callback interface and an interface which allows you to write code as you
would if you were writing synchronous code. The latter is done through
the use of an
await macro which behaves similar to C#’s await. The
following is a very simple chat server demonstrating Nimrod’s new async
capabilities.
```nim
import asyncnet, asyncdispatch

var clients: seq[PAsyncSocket] = @[]

proc processClient(client: PAsyncSocket) {.async.} =
  while true:
    let line = await client.recvLine()
    for c in clients:
      await c.send(line & "\c\L")

proc serve() {.async.} =
  var server = newAsyncSocket()
  server.bindAddr(TPort(12345))
  server.listen()

  while true:
    let client = await server.accept()
    clients.add client

    processClient(client)

serve()
runForever()
```
Note that this feature has been implemented with Nimrod's macro system, so `await` and `async` are not keywords.
Syntactic sugar for anonymous procedures has also been introduced. It too has been implemented as a macro. The following shows some simple usage of the new syntax:
```nim
import future

var s = @[1, 2, 3, 4, 5]
echo(s.map((x: int) => x * 5))
```
A list of changes follows; for a comprehensive list of changes, take a look here.
Library Additions
- Added `macros.genSym` builtin for AST generation.
- Added `macros.newLit` procs for easier AST generation.
- Added module `logging`.
- Added module `asyncdispatch`.
- Added module `asyncnet`.
- Added module `net`.
- Added module `rawsockets`.
- Added module `selectors`.
- Added module `asynchttpserver`.
- Added support for the new asynchronous IO in the `httpclient` module.
- Added a Python-inspired `future` module that features upcoming additions to the `system` module.
Changes affecting backwards compatibility
- The scoping rules for the `if` statement changed for better interaction with the new syntactic construct `(;)`.
- The `OSError` family of procedures has been deprecated. Procedures with the same name but which take different parameters have been introduced. These procs now require an error code to be passed to them. This error code can be retrieved using the new `OSLastError` proc.
- `os.parentDir` now returns "" if there is no parent dir.
- In CGI scripts stacktraces are shown to the user only if `cgi.setStackTraceStdout` is used.
- The symbol binding rules for clean templates changed: `bind` for any symbol that's not a parameter is now the default. `mixin` can be used to require instantiation scope for a symbol.
- `quoteIfContainsWhite` now escapes the argument in such a way that it can be safely passed to a shell, instead of just adding double quotes.
- `macros.dumpTree` and `macros.dumpLisp` have been made `immediate`; `dumpTreeImm` and `dumpLispImm` are now deprecated.
- The `nil` statement has been deprecated; use an empty `discard` instead.
- `sockets.select` now prunes sockets that are not ready from the list of sockets given to it.
- The `noStackFrame` pragma has been renamed to `asmNoStackFrame` to ensure you only use it when you know what you're doing.
- Many of the wrappers that were present in the standard library have been moved to separate repositories and should now be installed using Babel.
Compiler Additions
- The compiler can now warn about "uninitialized" variables. (There are no real uninitialized variables in Nimrod as they are initialized to binary zero.) Activate via `{.warning[Uninit]:on.}`.
- The compiler now enforces the `not nil` constraint.
- The compiler now supports a `codegenDecl` pragma for even more control over the generated code.
- The compiler now supports a `computedGoto` pragma to support very fast dispatching for interpreters and the like.
- The old evaluation engine has been replaced by a proper register based virtual machine. This fixes numerous bugs for `nimrod i` and for macro evaluation.
- `--gc:none` produces warnings when code uses the GC.
- A `union` pragma for better C interoperability is now supported.
- A `packed` pragma to control the memory packing/alignment of fields in an object.
- Arrays can be annotated to be `unchecked` for easier low level manipulations of memory.
- Support for the new Babel package manager.
Language Additions
- Arrays can now be declared with a single integer literal `N` instead of a range; the range is then `0..N-1`.
- Added `requiresInit` pragma to enforce explicit initialization.
- Exported templates are allowed to access hidden fields.
- The `using statement` enables you to more easily author domain-specific languages and libraries providing OOP-like syntactic sugar.
- Added the possibility to override various dot operators in order to handle calls to missing procs and reads from undeclared fields at compile-time.
- The overload resolution now supports `static[T]` params that must be evaluable at compile-time.
- Support for user-defined type classes has been added.
- The command syntax is supported in a lot more contexts.
- Anonymous iterators are now supported and iterators can capture variables of an outer proc.
- The experimental `strongSpaces` parsing mode has been implemented.
- You can annotate pointer types with regions for increased type safety.
- Added support for the builtin `spawn` for easy thread pool usage.
Tools improvements
- c2nim can deal with a subset of C++. Use the `--cpp` command line option to activate.
Routify 2 - Pet Peeve Edition
Written by jakobrosenberg 08/20/2020
- Why Routify 2
- Getting Started
- What’s new
- Migration Guide
Why Routify 2?
In short, our new hash based routing support required a breaking change - and when you have one breaking change, you might as well break everything. So we did. Gone are all the pet peeves.
Path resolution of `$url` was never quite right and, as much as I hate to admit it, rixo was right from day one. From now on `$url` paths are resolved as traditional file paths and `./` refers to the folder and not the file.
`basepath` was another pet peeve. It should have been an `urlTransform` function, but by the time I realized, `basepath` had already been added.
Almost lastly, `_layout.svelte` makes very little semantic sense. It handles guards, scopes, redirects, folder node etc. In short, the file represents the folder and folder-scope. Therefore a `_folder.svelte` alias has been added.
Lastly, `@sveltech` was a pain to type, so we moved everything to `@roxi`, which just happens to be the home for our upcoming framework, Roxi 🤫
Getting Started
To try the new Routify 2 beta, open a terminal and type
npx @roxi/routify init --branch 2.x
To migrate an existing project, refer to the migration guide below
What’s new
New package scope
Routify has moved from `@sveltech/routify` to `@roxi/routify`. This is both easier to type and aligns with our upcoming Routify-powered framework, Roxi.
basepath example
```js
config.urlTransform = {
  apply: url => `/my-base${url}`,
  remove: url => url.replace('/my-base', ''),
}
```
_folder.svelte
An alias for `_layout.svelte`.
Migration Guide
- Search and replace `@sveltech/routify` with `@roxi/routify`.
- `$url()` no longer treats non-layout files as folders. If you're using relative paths, replace the first `../` in `$url()` with `./`.
- `$basepath` was deprecated in favor of `urlTransform`. Please refer to `urlTransform`'s basepath example.
- The Routify rollup plugin now has to be imported from `@roxi/routify/plugins/rollup`.
- `routifyDir` has been defaulted to `.routify`. Update to `import { routes } from '../.routify/routes'` in `App.svelte`, or set `routifyDir` to `node_modules/@roxi/routify/tmp/routes`. (Thanks @flayks)
Migrating from a Node App to Serverless
For a while now I've been thinking about how I would go about migrating a "traditional" Node application to a serverless one. All I've needed is a good example - and last week I found one. While going through the apps I had set up on Bluemix, I remembered that I had a Node server running to power my Twitter bot, @randomcomicbook.
I blogged about this project over a year ago (Building a Twitter bot to display random comic book covers) and while looking at the code again, I realized it would be a perfect candidate for rewriting using a serverless framework. Let’s begin by reviewing the old application.
Version One - Traditional Node App
I’ve already linked to the blog entry where I went into detail about the application, so I’ll just cover the high points here. Let me start off by saying that this isn’t necessarily the best Node app out there. Ok, honestly, it’s probably pretty crappy. But it works - and I’m still learning - so I pretty much expect any code I look at that is a year old is going to have a few issues. You can find the entire code base on the Github repo, but let me share the main application file.
```js
/*eslint-env node*/
var request = require('request');
var express = require('express');
var credentials = require('./credentials.json');

var Twitter = require('twitter');
var client = new Twitter(credentials.twitter);

var marvel = require('./marvel');
marvel.setCredentials(credentials.marvel.private_key, credentials.marvel.api_key);

// cfenv provides access to your Cloud Foundry environment
// for more info, see:
var cfenv = require('cfenv');
var app = express();

app.use(express.static(__dirname + '/public'));

// start the server on the port and host from the Cloud Foundry environment
// (this block was garbled in this copy; this is the standard cfenv pattern)
var appEnv = cfenv.getAppEnv();
app.listen(appEnv.port, '0.0.0.0', function() {
  console.log("server starting on " + appEnv.url);
});

var MONTHS = ['January', 'February', 'March', 'April', 'May', 'June', 'July', 'August', 'September', 'October', 'November', 'December'];

function tweetRandomCover() {
  console.log('First, we get a random cover.');

  marvel.getCover(function(res) {
    console.log('back from mavel');
    console.dir(res);

    var tweet = res.title + ' published ' + (MONTHS[res.date.getMonth()]) + ' ' + res.date.getFullYear() + '\n' + res.link;

    console.log('Now going to fetch the image link.');

    request.get({url: res.url, encoding: null}, function(err, response, body) {
      if (!err && response.statusCode === 200) {
        console.log('Image copied to RAM');

        client.post('media/upload', {media: body}, function(error, media, response) {
          if (error) {
            console.error('Error from media/upload: ' + error);
            return;
          }

          // If successful, a media object will be returned.
          console.log('Image uploaded to Twitter');

          var status = {
            status: tweet,
            media_ids: media.media_id_string
          }

          client.post('statuses/update', status, function(error, tweet, response) {
            if (!error) {
              console.log('Tweeted ok');
            }
          });
        });
      }
    });
  });
}

app.get('/forceTweet', function(req, res) {
  tweetRandomCover();
  res.end('Done (not really)');
});

var cron = require('cron');
var cronJob = cron.job('0 6,12,18 * * *', function() {
  console.log('do the cover');
  tweetRandomCover();
  console.log('cron job complete');
});
cronJob.start();
```
There’s a few things to note here.
- First off, I still struggle with "how much code goes in my main app file versus includes", and you can see I've got a mishmash of stuff here. I put the Marvel API logic in a module, but the Twitter stuff is not. Since this isn't a traditional web app and I don't have a lot of routes (more on that in a second), I'm kinda ok with it, but this could definitely be organized a bit nicer.
- I didn’t even notice it till this week - but I’m using Express. I love Express. But the app has a grand total of one public route, and it’s not even meant to be used - it’s just a way for me to test. So I loaded an entire framework for no good reason. Hell I even set up a static directory that I never ended up using.
- And then the biggest thing to note here is - my code tweeted 4 times a day, but ran 24 hours a day. Cost wise that could have been a huge waste of money. (It really wasn’t, but you get the idea.)
Version Two - Serverless Version
In designing my new version, I split up the job into the following actions.
- The first action handles selecting a date.
- The second action handles searching Marvel.
- The third action simply selects the random comic.
- The fourth action "prepares" the tweet.
- The fifth and last action fires off the Tweet.
Let’s look at these components. I began with the date selection code.
```js
function getRandomInt(min, max) {
  return Math.floor(Math.random() * (max - min + 1)) + min;
}

function main(params) {
  //get random values
  let year = getRandomInt(1960, new Date().getFullYear() - 1);
  let month = getRandomInt(1, 12);
  let monthStr = month < 10 ? "0" + month : month;

  let daysInMonth = new Date(year, month, 0).getDate();

  let beginDateStr = year + "-" + monthStr + "-01";
  let endDateStr = year + "-" + monthStr + "-" + daysInMonth;
  let dateString = beginDateStr + ',' + endDateStr;
  console.log('dateString is ' + dateString);

  return {
    limit: 100,
    format: "comic",
    formatType: "comic",
    dateRange: dateString
  }
}
```
Nothing too interesting here, but note the kinda cool logic to get the end of the month. If you use day 0 for a month, it really means day minus one. I found this trick on StackOverflow of course. The rest of the code is basically setting up parameters to use with the Marvel API. Speaking of - here is the action.
```js
const request = require('request-promise');
const crypto = require('crypto');

const API = '?'; // the Marvel comics endpoint URL was lost in this copy

function main(args) {
  let url = API + `&apikey=${args.api_key}`;

  //Add optional filters
  if (args.limit) url += `&limit=${args.limit}`;
  if (args.format) url += `&format=${encodeURIComponent(args.format)}`;
  if (args.formatType) url += `&formatType=${encodeURIComponent(args.formatType)}`;
  if (args.dateRange) url += `&dateRange=${args.dateRange}`;
  //lots more go here

  let ts = new Date().getTime();
  let hash = crypto.createHash('md5').update(ts + args.private_key + args.api_key).digest('hex');
  url += `&ts=${ts}&hash=${hash}`;

  return new Promise((resolve, reject) => {
    let options = {
      url: url,
      json: true
    };

    request(options).then((result) => {
      resolve({result: result});
    })
    .catch((err) => {
      reject({error: err});
    });
  });
}
```
This is a new package I created specifically for the Marvel API. If you’ve read my blog for a while now you know I like to play around with comics, so I created a new package just for Marvel. Their API supports a lot of different end points and this just covers one, and I barely touched upon the supported arguments. But what’s cool here is that I can now use this action in other applications in the future. You can too - I forgot to actually share the package, but just ask and I’ll do so. (Yeah, that’s a bit weird, but I’d like to know if anyone actually wants to use it before I make it public - and of course the code is up on Github.)
As a package I plan on making public, I created a bound copy of it with my Marvel credentials. This lets me use the action with no authentication required.
The next action handles selecting a random comic book. (I named the file, "selectCover", but technically it is selecting a comic. This bugs me, but not enough to rename the file.)
```js
const IMAGE_NOT_AVAIL = ""; // Marvel's "image not available" path (value lost in this copy)

function getRandomInt(min, max) {
  return Math.floor(Math.random() * (max - min + 1)) + min;
}

function main(args) {
  let comics = args.result.data.results;
  console.log('before filter - ' + comics.length + ' comics');

  /* first, filter the array by comics that have a cover */
  comics = comics.filter((comic) => {
    return (comic.thumbnail && comic.thumbnail.path != IMAGE_NOT_AVAIL);
  });
  console.log('after filter - ' + comics.length + ' comics');

  let selectedComic = {};
  if (comics.length) {
    selectedComic = comics[getRandomInt(0, comics.length - 1)];

    /* remove a crap ton of stuff as we don't need everything */
    delete selectedComic.characters;
    delete selectedComic.collectedIssues;
    delete selectedComic.collections;
    delete selectedComic.creators;
    delete selectedComic.description;
    delete selectedComic.diamondCode;
    delete selectedComic.digitalId;
    delete selectedComic.ean;
    delete selectedComic.events;
    delete selectedComic.format;
    delete selectedComic.id;
    delete selectedComic.images;
    delete selectedComic.isbn;
    delete selectedComic.issn;
    delete selectedComic.modified;
    delete selectedComic.pageCount;
    delete selectedComic.prices;
    delete selectedComic.series;
    delete selectedComic.stories;
    delete selectedComic.textObjects;
    delete selectedComic.upc;
    delete selectedComic.variantDescription;
    delete selectedComic.variants;
  }

  return {
    comic: selectedComic
  }
}
```
I begin by filtering out comics without a thumbnail (or the default “no picture available”) and then just pick one by random. I also decided to remove a lot of extra data. I wrote this code last night, and looking at it now, that feels wrong to me. Yes, this action is specifically for this new application and yes, I know I don’t need all that data, but I think I should have left the data as is. How about we pretend I didn’t do that?
The next action then prepares information for the Tweet. Basically this is where I craft the text I want to use on each one. Here is an example of how a tweet looks:
"Avengers West Coast (1985) #56" published March 1990 pic.twitter.com/mU6y726Ep2— Random Comic Book (@randomcomicbook) August 14, 2017
My god - the neck on her is insane. Anyway, here is the code:
```js
const MONTHS = ['January', 'February', 'March', 'April', 'May', 'June', 'July', 'August', 'September', 'October', 'November', 'December'];

function main(args) {
  //initialize to now just in case...
  let saleDate = new Date();

  //get the onsale date
  args.comic.dates.forEach((dateRec) => {
    if (dateRec.type === 'onsaleDate') saleDate = new Date(dateRec.date);
  });

  //get the right link
  let link = '';
  args.comic.urls.forEach((urlRec) => {
    if (urlRec.type === 'detail') link = urlRec.url;
  });

  //get the cover
  let cover = args.comic.thumbnail.path + '.' + args.comic.thumbnail.extension;

  console.log(args.comic);

  // Create the text based on the comic data
  let tweet = '"' + args.comic.title + '" published ' + (MONTHS[saleDate.getMonth()]) + ' ' + saleDate.getFullYear() + '\n' + link;

  return {
    status: tweet,
    image: cover
  }
}
```
For the most part, I’m just digging into the comic data and finding the right values. Nothing special.
Alright, so for the final part - I just need to send a Tweet. I built, and released, a Twitter package for OpenWhisk earlier this year: A Twitter Package for OpenWhisk. But at the time, I didn’t support sending tweets. I added support for that later on, but it didn’t support uploading media. The last time I wrote code for sending tweets that including media, I noticed that the Twitter API requires two calls. First, you upload the media, then you make your tweet and attach the media. I thought - why not just make that simpler. Check it out below:
```js
const Twitter = require('twitter');
const request = require('request');

/*
I send a tweet. i need:

args.status (the text)
args.image (url of an image)
*/
function main(args) {
  return new Promise((resolve, reject) => {
    const client = new Twitter({
      // credential options (the block was elided in this copy; these are
      // the standard keys for the twitter npm package, names assumed)
      consumer_key: args.consumer_key,
      consumer_secret: args.consumer_secret,
      access_token_key: args.access_token_key,
      access_token_secret: args.access_token_secret
    });

    /*
    Special branching for images. Since images require a two step process,
    we split up the code into two paths.
    */
    if (!args.image) {
      client.post('statuses/update', {status: args.status}, function(err, tweet, response) {
        if (err) reject(err);
        resolve({result: tweet});
      });
    } else {
      request.get({url: args.image, encoding: null}, function(err, response, body) {
        if (!err && response.statusCode === 200) {
          client.post('media/upload', {media: body}, function(error, media, response) {
            if (error) {
              reject({error: error});
            }

            var status = {
              status: args.status,
              media_ids: media.media_id_string
            }

            client.post('statuses/update', status, function(error, tweet, response) {
              if (!error) {
                resolve({result: tweet});
              }
            });
          });
        }
      });
    }
  });
}

exports.main = main;
```
Basically - if I detect you are including an image with a Tweet (just a URL for now), I handle that logic for you. All you need to do is send me your Tweet and the action handles it. Cool.
And that was basically it. But how did I run it? First I made a new trigger with a CRON setting:
wsk trigger create randomcomicbook_trigger --feed /whisk.system/alarms/alarm --param cron "0 */3 * * *"
And then I simply made a rule that associated my trigger with a sequence that tied the 5 things above together. The OpenWhisk management system on Bluemix actually does a great job of visualizing all of this. I had to take three screen shots though, so hopefully this looks ok.
If we consider the Marvel and Twitter packages as separate concerns (one was mostly done, so that seems fair), then really the code was pretty simple. Basically setting up params, selecting and then transforming data.
You can find all the code for this on my main Serverless Github repository:
Wrap Up
So, what was the net result? First - I was able to kill a server running 24 hours a day. Did this save me a lot of money? Nope. Bluemix has a free tier for Node apps using this little memory. You can see prices yourself on the calculator.
OpenWhisk also has a pricing calculator. I set up my task to run 8 times a day so I’ll say 250 times a month. It takes about 10 seconds to run, but I’ll bump that to 15. I’m using less than 256 megs of RAM. At that level, I’m also on the free tier.
But to me, the biggest benefit is the code. I’m using minimal “custom” code and I’m no longer worrying about an active server running. To be fair, I didn’t worry too much about it on Bluemix, but it was unnecessary.
If you have any questions, just let me know in the comments below! | https://www.raymondcamden.com/2017/08/14/migrating-from-a-node-app-to-serverless | CC-MAIN-2018-34 | refinedweb | 2,150 | 60.82 |
In this tutorial we're going to learn how to create, extract and view zip files in Python using the built-in module named zipfile.
What is a zip file?
It is a file that contains several compressed files and folders. Zip uses several compression algorithms to compress files. The most common one is DEFLATE.
A zip file has these benefits:
- Reduced file size.
- Saves storage space.
- Allows you to encrypt data.
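As a quick, self-contained illustration of the size benefit, the sketch below compresses some repetitive bytes with DEFLATE. The archive lives in an in-memory `BytesIO` buffer, so no files are created; all names in it are made up.

```python
import io
import zipfile

# 10,000 bytes of repetitive data - the best case for compression.
data = b"hello zip " * 1000

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", compression=zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("data.txt", data)

# The whole archive is much smaller than the raw input.
print(len(data), "bytes ->", buf.getbuffer().nbytes, "bytes")
```

Repetitive data compresses extremely well; real-world savings depend entirely on the input.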
zipfile module
zipfile is a built-in module in Python to work with zip files. It allows you to create, extract and modify files.
Limitations
This module comes with everything you need to work with zip files, but it has the following limitations.
- It cannot handle ZIP files larger than 4 GiB unless the ZIP64 extensions are enabled (they are on by default via the `allowZip64` parameter).
- Cannot create an encrypted zip file.
- Decryption is extremely slow, as it is implemented in native Python rather than C.
Exceptions
An exception is an event that affects the normal flow of execution of a program. These are the exceptions that are defined in the zipfile module:
- `zipfile.BadZipFile` - The error raised for bad ZIP files. It was introduced in version 3.2. Its alias is `zipfile.BadZipfile`, for compatibility with older Python versions.
- `zipfile.LargeZipFile` - Raised when a ZIP file would require ZIP64 functionality but that has not been enabled.
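A minimal sketch of the first exception in action - feeding `ZipFile` something that is not a ZIP archive (an in-memory buffer here, so nothing touches the disk):

```python
import io
import zipfile

# Anything without the ZIP magic number raises BadZipFile on open.
bogus = io.BytesIO(b"this is not a zip archive")
try:
    zipfile.ZipFile(bogus)
except zipfile.BadZipFile as err:
    print("Refused:", err)
```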
Classes and methods
Here's a list of the classes and methods in the `zipfile` module.

- `zipfile.ZipFile` - The class for reading and writing ZIP files.
- `ZipFile.close()` - Close the archive file. You must call `close()` before exiting your program or essential records will not be written.
- `ZipFile.open(name, mode='r', pwd=None, *, force_zip64=False)` - Access a member of the archive as a binary file-like object.
- `ZipFile.extract(member, path=None, pwd=None)` - Extract a member from the archive to the current working directory; member must be its full name or a `ZipInfo` object.
- `ZipFile.extractall(path=None, members=None, pwd=None)` - Extract all members from the archive to the current working directory. path specifies a different directory to extract to.
- `zipfile.ZipInfo(filename='NoName', date_time=(1980, 1, 1, 0, 0, 0))` - Class used to represent information about a member of an archive.
- `zipfile.is_zipfile(filename)` - Returns `True` if filename is a valid ZIP file based on its magic number, otherwise returns `False`.
Extract a zip file
In this example, we are using the extractAll() method to extract the contents of our zip file.
```python
from zipfile import ZipFile

zip_file = 'myzipfile.zip'

zip = ZipFile(zip_file, 'r')
zip.extractall()
zip.close()
print("Files Extracted")
```
This will extract the content of our Zip file to the current directory. To extract the content to a different directory, specify the path in the `extractall()` method.

```python
zip.extractall("D:\Extracted")
```
You can also use this method to open and work with zip files.
```python
from zipfile import ZipFile

zip_file = 'myzipfile.zip'

with ZipFile(zip_file, 'r') as zip:
    zip.extractall("D:\Extracted")
    print("Files Extracted")
```
In Python, the `with` keyword is used when working with unmanaged resources (like file streams). It ensures that a resource is cleaned up when the code that uses it finishes running, even if exceptions are thrown.
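For illustration, here is roughly what the `with` block does for you, spelled out with `try`/`finally` (the archive is built in memory, and the names are made up):

```python
import io
import zipfile

buf = io.BytesIO()
zf = zipfile.ZipFile(buf, "w")
try:
    zf.writestr("a.txt", "data")
finally:
    zf.close()   # guaranteed to run, even if writestr() had raised

# The archive was finalized by close(), so it can be read back.
with zipfile.ZipFile(buf) as zf:
    print(zf.namelist())   # ['a.txt']
```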
List contents of a zip file
To list the contents of a Zip file, we can use the `printdir()` method.

```python
from zipfile import ZipFile

zip_file = 'myzip.zip'

with ZipFile(zip_file, 'r') as zip:
    zip.printdir()
```
And here is the result.
```
File Name                                   Modified             Size
myzip/                               2019-11-11 23:51:32            0
myzip/assets/                        2019-11-11 23:44:46            0
myzip/assets/bootstrap/              2019-11-11 23:44:46            0
myzip/assets/bootstrap/css/          2019-11-11 23:51:32            0
myzip/assets/img/                    2019-11-11 23:44:46            0
myzip/assets/img/1.jpg               2019-11-11 23:51:32        23358
myzip/assets/img/2.jpg               2019-11-11 23:51:32        19308
myzip/index.html                     2019-11-11 23:51:32         8161
myzip/post.html                      2019-11-11 23:51:32         9936
```
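`printdir()` only prints. If you need the same details programmatically, `infolist()` returns them as `ZipInfo` objects. The sketch below builds a tiny archive in memory with made-up names and reads the metadata back:

```python
import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("index.html", "<html></html>")
    zf.writestr("assets/img/1.jpg", b"\xff\xd8 fake jpeg bytes")

with zipfile.ZipFile(buf) as zf:
    # Each entry carries its name, timestamp and uncompressed size.
    for info in zf.infolist():
        print(info.filename, info.date_time, info.file_size)
```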
Create zip files
Using the `ZipFile.write()` method, we can add files to a zip. In this example, we first create a list named `files` and store the files to be zipped in it. Then we write the files into a zip archive using the `write()` method.

```python
from zipfile import ZipFile

# Files to compress
files = [r'E:\file1.txt', r'E:\file2.txt']

# Path to save the zip file
save_to = r"D:\compressed\myzipfile.zip"

with ZipFile(save_to, 'w') as zip:
    for file in files:
        zip.write(file)
```
We can change the compression algorithm by modifying the code as follows:

```python
import zipfile
from zipfile import ZipFile

# Files to compress
files = [r'E:\file1.txt', r'E:\file2.txt']

with ZipFile('myzipfile.zip', 'w', compression=zipfile.ZIP_LZMA) as zip:
    for file in files:
        zip.write(file)
```
To zip all files and sub-folders of a directory, use this method.

```python
import os
import zipfile
from zipfile import ZipFile

# Collect all files under the directory
files = []
for _root, _directories, _files in os.walk(r'D:\filestozip'):
    for filename in _files:
        # join the two strings in order to form the full filepath
        filepath = os.path.join(_root, filename)
        files.append(filepath)
print(files)

# Path to save the zip file
save_to = r"D:\myzipfile.zip"

with ZipFile(save_to, 'w', compression=zipfile.ZIP_LZMA) as zip:
    for file in files:
        zip.write(file)
```
Wrapping it up
In this post, we learned to create, extract and view contents of a zip file using Python's built in ZipFile module. | https://www.geekinsta.com/working-with-zip-files-in-python/ | CC-MAIN-2021-31 | refinedweb | 902 | 69.07 |
Design Guidelines, Managed code and the .NET Framework
In response to a recent thread The Good and the Bad: Obsoletion in the Framework and LOTs of internal discussion, I have written this short whitepaper that describes the intent of using obsolete in the .NET Framework and how it affects your development projects. I believe that that paper balances some of our customer’s requirements that we continue to improve and evolve the .NET Framework with our commitment to provide backwards compatibility in order to preserve customer investments in the .NET Framework today for years to come.
I want to stress that this is just a DRAFT paper… It is not Microsoft’s official position yet. I am still in the process of getting feedback internally and, with this blog entry, externally as well. Please do give me your feedback.
Thanks
Summary:
When migrating source code from V1.0 or V1.1 of the Framework to V2.0 you may see some new compiler warnings indicating that you are using types or members that are obsolete. Generally speaking, this warning does NOT indicate that your code will not work well on version 2.0 of the .NET Framework, but it does provide some suggestions for updates to your code that Microsoft recommends you make. Microsoft makes every effort to ensure that your V1.0 and V1.1 based applications run as expected on new versions of the .NET Framework. The obsolete warnings highlight where there is some new functionality in the platform that your program could benefit from using. This brief paper describes some strategies for dealing with these warnings in the least disruptive way.
Description:
Microsoft marks types and members in the .NET Framework as being obsolete (by applying the ObsoleteAttribute) to indicate that there is some new functionality in the Framework that Microsoft recommends you migrate to. When used in the .NET Framework, these obsolete warnings do NOT indicate that your source code will not function properly or that Microsoft will definitely be removing these members in the near future. Microsoft remains committed to backwards compatibility of the .NET Framework in order to ensure that investments you make on the framework today continue to pay off for many years in the future.
What does Obsolete mean in the Framework?
When used in the .NET Framework, the obsolete warnings indicate that there is some new feature of the framework which you should consider migrating to. In many cases by simply changing the functions you call in the framework to the suggested alternative you can make your application more secure, performant or reliable. However in some cases it may not be worth the development costs to move to the new functionality. You can make that determination by understanding the impact of the change on your source code and the benefits as described in the warning text and the linked help page.
In some extreme cases Microsoft will be forced to actually remove an obsolete member in a future version of the .NET Framework. This is likely to be a very rare case. When this is the intent the member will be marked with Error=True in the ObsoleteAttribute indicating that a compiler error (rather than warning) should be generated. You are strongly recommended to migrate your projects off these members to ensure they work seamlessly on future versions of the .NET Framework.
The code below shows an example of using an obsolete member. This code compiles without warnings under V1.1, but when migrated to V2.0 it generates a warning as shown.
[C#]

```csharp
using System;
using System.Xml;
using System.Xml.Schema;

public class SampleOld
{
    public static void Main1()
    {
        XmlSchemaCollection xsc = new XmlSchemaCollection();
        xsc.Add(null, "schema1.xsd");   //Adds schema & compiles
        xsc.Add("ns-a", "schema2.xsd"); //adds schema & compiles all schemas
        foreach (XmlSchema schema in xsc)
        {
            Console.WriteLine("Schema in Coll: " + schema.TargetNamespace);
        }
    }
} // End class
```
[VB]

```vb
Imports System.Xml
Imports System.Xml.Schema

Public Module SampleOld
    Public Sub Main1()
        Dim xsc As New XmlSchemaCollection()
        xsc.Add(Nothing, "schema1.xsd")
        xsc.Add("ns-a", "schema2.xsd")
        For Each schema As XmlSchema In xsc
            Console.WriteLine("Schema in Coll: " & schema.TargetNamespace)
        Next
    End Sub
End Module
```
It generates these warnings:
Warning 1 'System.Xml.Schema.XmlSchemaCollection' is obsolete: 'Consider using System.Xml.Schema.XmlSchemaSet for schema compilation and validation.' ObsoleteExample.cs 9 8
Warning 2 'System.Xml.Schema.XmlSchemaCollection' is obsolete: 'Use System.Xml.Schema.XmlSchemaSet for schema compilation and validation.' ObsoleteExample.cs 9 38
Reading the information on the link you will discover that XmlSchemaCollection was obsoleted for a number of reasons.
As you see from the warning text and information on the website indicated, the fix is trivial:
[C#]

```csharp
using System;
using System.Xml.Schema;

public class SampleNew
{
    public static void Main()
    {
        XmlSchemaSet set = new XmlSchemaSet();
        set.Add(null, "schema1.xsd");
        set.Add("ns-a", "schema2.xsd");
        set.Compile(); //Add multiple schemas and explicitly call compile
        foreach (XmlSchema schema in set.Schemas())
        {
            Console.WriteLine("Schema in set: " + schema.TargetNamespace);
        }
    }
}
```
[VB]

```vb
Imports System.Xml
Imports System.Xml.Schema

Public Module SampleNew
    Public Sub Main()
        Dim xset As New XmlSchemaSet()
        xset.Add(Nothing, "schema1.xsd")
        xset.Add("ns-a", "schema2.xsd")
        xset.Compile() 'Add multiple schemas and explicitly call compile
        For Each schema As XmlSchema In xset.Schemas()
            Console.WriteLine("Schema in Coll: " & schema.TargetNamespace)
        Next
    End Sub
End Module
```
Obsolete and /warnaserror+
Many development teams consider it a best practice to compile with warnings as errors. This practice ensures that all warnings are resolved and any potential issues are addressed. The VB and C# compilers generate a warning for usage of obsolete members of the framework, with text that describes the new functionality you should consider using instead. As we describe above, it may not always make sense to fix these warnings immediately. For teams that compile with warnings as errors (/warnaserror+), this poses a problem because the obsolete warnings are turned into errors that must be addressed. Microsoft therefore recommends that teams using /warnaserror+ also use /nowarn:0618 (for C#) or /nowarn:40000 (for VB) to suppress the obsolete messages. Although explicitly turning off all warnings of a certain class is not generally considered good development practice, in this case it is the preferred way to handle the situation.
You will, of course, want to review your usage of obsolete members and decide which ones to fix. Under this plan the preferred way to do that is to use FxCop, which contains a rule called "ConsiderNotUsingObsoleteFunctionality". By running FxCop over your project with this rule enabled, you can triage the issues out of band of the mainline development process.
The code sample above shows how this process works. It compiles without warnings under V1.1 and continues to compile cleanly under V2.0 when compiled as shown below.
Compile line:
C:\ObsoleteExample>csc /warnaserror+ /nowarn:0618 old.cs
C:\ObsoleteExample>vbc /warnaserror+ /nowarn:40000 class1.vb
Alternatively in C#, you can use a #pragma directive around the offending lines to keep the compiler from issuing a warning or error, as shown here (wrapping the XmlSchemaCollection line from the first sample):

public static void Main()
{
#pragma warning disable 0618
    XmlSchemaCollection xsc = new XmlSchemaCollection();
#pragma warning restore 0618
    // ...
}
As you see, we got no warnings or errors in either case. Out of band of the official build you may want to run FxCop to flag potential problems and fix the appropriate ones.
C:\ObsoleteExample>fxcopcmd /rid:Usage#ConsiderNotUsingObsoleteFunctionality /f:old.exe /console
Warning : ConsiderNotUsingObsoleteFunctionality: SampleOld old.cs(9,8): This member is Obsolete. "Consider using System.Xml.Schema.XmlSchemaSet for schema compilation and validation."
This mechanism allows you to migrate to new versions of the .NET Framework easily while also staying aware of new features of the .NET Framework that may be worth considering.
simple file checking
Two questions about doing things simply in MicroPython, the way they are simple in normal Python:
- How to check if file exists?
I do not see a possibility to check it simply, like os.path.isfile.
Maybe I am doing something wrong, but I tried "all" the simple possibilities and ended up with this crap of code:
len([item for item in os.listdir() if fname==item])>0
- how to check file size?
I tried this:
os.stat(fname).st_size
but it does not understand st_size,
so now I write the exact index, like:
os.stat(fname)[6]
Hi @robert-hh :)
I forgot about this very old question and I solved it myself long ago.
But this can be helpful for the community :)
@livius Hi livius,
since the path methods are not implemented in MicroPython, you can try to open the file for reading and check whether you run into an exception or not:
try:
    f = open(fname, "r")
    exists = True
    f.close()
except FileNotFoundError:
    exists = False
And also, since the symbolic element names for os.stat are not implemented, os.stat(fname)[6] is the right value, giving the file size in bytes. Also, a check like:

os.stat(fname)[0] & 0o170000 == 0o040000

will tell you that the entry is a directory.
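Putting these two hints together, here is a small sketch of helpers that avoid os.path entirely and should work on both MicroPython and CPython (the function names are my own; note that MicroPython raises OSError, of which CPython's FileNotFoundError is a subclass, for a missing entry):

```python
import os

def file_exists(path):
    # MicroPython may lack os.path; os.stat raises OSError for a
    # missing entry, so probe inside try/except instead.
    try:
        mode = os.stat(path)[0]
    except OSError:
        return False
    # Keep only regular files (S_IFREG = 0o100000);
    # directories would show 0o040000 here.
    return mode & 0o170000 == 0o100000

def file_size(path):
    # Index 6 of the os.stat tuple is st_size (size in bytes).
    return os.stat(path)[6]
```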
grovalmitch replied:
Checking whether a file or directory exists using (full) Python:

import os

dirname = "temp"
filename = "my_file"

# check directory exists
if os.path.exists(dirname):
    print("Directory exists")
else:
    print("Directory does not exist")

# check file exists
if os.path.exists(filename):
    print("File exists")
else:
    print("File does not exist")
Thanks to weaverryan for the amazing Symfony4 series on KnpUniversity. Clever, fun, engaging, and the clearest Symfony4 info I've seen yet. Bravo!
Thank you for this series of tutorials. This has done wonders for my understanding of OO PHP. Truly helpful!
Brilliant. I just wasted my time reading two different tutorials on namespaces and still didn't understand it. Watched your video and got it immediately.
Journey to the Center of Symfony: it's a must have. Great job KnpUniversity
Awesome video! You totally convinced me.
Oh my god, I've just discovered Behat via KnpUniversity tutorials. This is love at first sight! :)
Great stuff. Clearly you guys spent a ton of time producing something this polished. Thank you!
Thanks! :) Your Symfony courses are top notch and I’m spreading the word.
Hey guys, your screencasts are simply awesome! I hope you come out with the next ones soon!!! Thank You!
I have been HUNTING for this exact information. Thank you so much.
Fantastic. A well thought-out screencast. A decent level of detail, without been overwhelming.
Thanks and I really appreciate what you guys are doing for the Symfony2 community
As always, Knp creates useful courses and everything is clearly explained.
I'm a beginner in symfony. and so far, You make the best tutorials! like really keep up the work :D ( plus it's fun )
Love the videos guys, really speeding ahead with Symfony2. What an awesome framework, I love it!
These videos are extremely helpful and well worth every cent.
Nice work guys, this is great!
I'm used to Laravel, but I think I just fell in love with Symfony <3
I'm using the framework since nearly 2 years now and i can't stress enough how this course made clear some parts of code i was using without fully understanding the 'how' that was behind. It really opened new horizons to me.
Learning Symfony with your tutorials at hand was so much fun, at the same time it already helped me a lot.
Nice tutorial! UserBundle looks easier to use.
Thanks for this great screencast! A very good refresh of all the basics spiced with the revealment of some Symfony2 'secrets'. Can't wait to watch the sequels with more advanced topics!
Awesome series that tells you the Symfony 3.3 story in very detailed terms. Great job @weaverryan!
the best video tutorials I've ever seen are at @KnpUniversity from @weaverryan it could help about methodology
my advice to anybody reading this: Embrace the peppy and learn Symfony without even breaking a sweat!
Excellent tutorials!! They're helping me A LOT with Symfony for my own project (a courier delivery tracking app). I've been using PHP for years (mostly backend programming) but haven't ever had the time to really dig into Symfony. Your tutorials (and Symfony of course) are going to let me kick this project out in record time.
I just wanted to say thank you for the awesome tutorials! It's been really helpful so far to learn the jargons of Symfony 3. Not only that, but it has also helped me with general programming concepts. Of course, it doesn't end here. There is always the step where I have to apply what I have learnt and learn more stuff. These tutorials have been a really great start not only for me but for everyone willing to wet their feet with Symfony 3. | https://symfonycasts.com/testimonials | CC-MAIN-2022-33 | refinedweb | 576 | 76.11 |
Introduction
In this article, I am going to look beyond just VS.NET, at the options we have for configuring a Web Service using the Web Service class and directives that are available with ASP.NET.
WebService Class
A Web Service class may inherit from a class in the System.Web.Services namespace, the WebService class. The code emitted by Visual Studio .NET inherits from this class.
Creating a Simple Web Service:
Use the following steps to create a web service.
We will see the following code.
public class Service1 : System.Web.Services.WebService
{
    [WebMethod]
    public string HelloWorld()
    {
        return "Hello World";
    }
}

Inheriting from the WebService class is mainly a convenience.
The WebService class has the following properties:
WebService Directive

When we create an XML Web service in ASP.NET, we place the required @ WebService directive at the top of a text file with an .asmx file name extension. The presence of the .asmx file and the @ WebService directive correlates the URL address of the XML Web service with its implementation. The @ WebService directive tells the ASP.NET runtime that the .asmx file contains an XML Web service and provides information about the implementation of the Web service. Here is an example of an @ WebService directive:
<%@ WebService Language="C#" CodeBehind="Akshay.asmx.cs" Class="Akshay" %>
The directive is required for ASP.NET Web Services. If the class for the Web Service is in a code-behind file, this line will be the only line in the .asmx file.
The directive may use the following attributes.
Language
The Language attribute is optional. It specifies the language used to compile the Web Service. Any .NET compiler installed on the system may be specified. By default the installed compilers are C#, VB.NET, and JScript.NET. These are specified using the values VB, C#, or JS.
<%@ WebService Language="C#" %>

The default language is VB.NET, as specified in the machine.config file, unless it is overridden in the web.config file.
Codebehind
The Codebehind attribute is optional. It is used by Visual Studio .NET to find the code-behind file for the .asmx file so that when you click on the .asmx file it can open the code-behind file. The attribute is only used in Visual Studio .NET; it has no effect when the Web Service is executing.
Class
The Class attribute specifies the class to expose as a Web Service. If the class is within a namespace, the namespace is specified, as in the following example. In this example, the namespace is MyVSDemo and the class is Akshay.
<%@ WebService class="MyVSDemo.Akshay"%>
If the class is not within a namespace, we specify the class directly:
<%@ WebService class="Akshay"%>
The class attribute may optionally also specify the assembly where the class exists:
<%@ WebService class="MyVSDemo.Akshay, MyVSDemo"%>
If the assembly is not specified, ASP.NET searches all assemblies in the bin directory for the Akshay class.
Debug
Indicates whether the XML Web service should be compiled with debug symbols; true if the XML Web service should be compiled with debug symbols; otherwise, false.
<%@ WebService Language="C#" CodeBehind="Akshay.asmx.cs" Class="Akshay" debug="true"%>
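Putting these attributes together, a directive for a code-behind service in a namespace might look like the following (the file, namespace, and class names are illustrative, carried over from the examples above):

```
<%@ WebService Language="C#" CodeBehind="Akshay.asmx.cs" Class="MyVSDemo.Akshay, MyVSDemo" debug="true" %>
```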
Resources
Web Service Class and Directive in ASP.NET
This page covers Tutorial v2. Elm 0.18.
Views
After receiving the response back we store it in players. We now want to display the list of players.

The players attribute was a list of players before (List Player); now it has the type WebData (List Player). Let's change src/Players/List.elm to handle the new type of players.

Add the RemoteData import:

import RemoteData exposing (WebData)

Change view to:

view : WebData (List Player) -> Html Msg
view response =
    div []
        [ nav
        , maybeList response
        ]

Here we changed the signature and call a new function, maybeList. Add maybeList:

maybeList : WebData (List Player) -> Html Msg
maybeList response =
    case response of
        RemoteData.NotAsked ->
            text ""

        RemoteData.Loading ->
            text "Loading..."

        RemoteData.Success players ->
            list players

        RemoteData.Failure error ->
            text (toString error)

This function uses a case expression to pattern match on the type of response. These constructors are provided by the RemoteData package. If response is of type Success, we display the list of players by calling the list function we already had.
Auto Claims Form Sample
The Auto Claims sample addresses a hypothetical scenario for an insurance assessor. The assessor's work requires him or her to visit with clients at their home or business and to enter their claim information into a form. To increase the assessor's productivity, his IT department develops a tablet application that enables him or her to quickly and accurately enter claim information through two ink controls: InkEdit Control and InkPicture Control.
In this sample, an InkEdit control is used for each text input field. A user enters the relevant information about an insurance policy and vehicle into these fields with a pen. The InkPicture control is used to add ink over an automobile image to highlight damaged areas of the automobile. The Auto Claims sample is available for C# and Microsoft® Visual Basic® .NET. This topic describes the Visual Basic .NET version.
The AutoClaims class is defined as a subclass of System.Windows.Forms.Form, and a nested class is defined for creating and managing layers of ink for different types of damage. Four event handlers are defined to perform the following tasks:
- Initializing the form and ink layers.
- Redrawing the InkPicture control.
- Selecting an ink layer through the list box.
- Changing the visibility of an ink layer.
Defining the Form and Ink Layers
Before the AutoClaims class is defined, the Microsoft.Ink namespace is imported.
Next, in the AutoClaims class, a nested InkLayer class is defined and an array of four InkLayer objects is declared. (InkLayer contains a Microsoft.Ink.Ink object for storing ink, and System.Drawing.Color and Boolean values for storing the color and hidden state of the layer.) A fifth Ink object is declared to handle ink for the InkPicture when all of the ink layers are hidden.
Each layer has its own Ink object. Because there are four discrete areas of interest in the claim form (body, windows, tires, and headlights), four InkLayer objects are used. In this manner, a user can view any combination of layers at once.
Initializing the Form and Ink Layers
In the AutoClaims form's Load event handler, the Ink object and the four InkLayer objects are initialized.
emptyInk = New Ink()

' Initialize the four different layers of ink on the vehicle diagram:
' vehicle body, windows, tires, and headlights.
inkLayers(0) = New InkLayer(New Ink(), Color.Red, False)
inkLayers(1) = New InkLayer(New Ink(), Color.Violet, False)
inkLayers(2) = New InkLayer(New Ink(), Color.LightGreen, False)
inkLayers(3) = New InkLayer(New Ink(), Color.Aqua, False)
Then, the first entry (Body) in the list box is selected by default.
Finally, the ink color for the InkPicture control is set to the currently selected list box entry.
Redrawing the InkPicture Control
Every time the InkPicture control's Paint event occurs, the ink layers are checked to determine which of them are hidden. If a layer is not hidden, it is displayed by using the Renderer property's Draw method (notice in the Object Browser that the InkPicture.Renderer property is defined as a Microsoft.Ink.Renderer object):
Selecting an Ink Layer through the List Box
The ListBox control's SelectedIndexChanged event handler first checks that the selection has changed and that the InkPicture control is not currently collecting ink, and then the ink color of the InkPicture control is set to the appropriate color for the selected ink layer. Also, the Hide Layer check box is updated to reflect the selected ink layer's hidden status.
' Provided that the new selected index value is different than
' the previous value...
If (Not (lstAnnotationLayer.SelectedIndex = selectedIndex)) Then
    If (Not inkPictVehicle.CollectingInk) Then
        ' Set the ink and visibility of the current ink layer
        inkPictVehicle.DefaultDrawingAttributes.Color = inkLayers(lstAnnotationLayer.SelectedIndex).ActiveColor
        chHideLayer.Checked = inkLayers(lstAnnotationLayer.SelectedIndex).Hidden
Next, the InkEnabled property is set to False before loading an Ink object, and then reset to True after the object is loaded.
Finally, the InkPicture control's Refresh method is used to display only the desired layers within the control.
Changing the Visibility of an Ink Layer
The Hide Layer check box's CheckedChanged event handler first checks that the selection has changed and that the InkPicture control is not currently collecting ink, and then updates the selected ink layer's hidden status.
Next, the InkPicture control's InkEnabled property is set to False before updating its Ink property.
Finally, the InkPicture control is either enabled or disabled for the particular vehicle part based on whether the Hide Layer check box is selected, and the InkPicture control's Refresh method is used to display only the desired layers within the control.
Closing the Form
In the Windows Form Designer generated code, the InkEdit and InkPicture controls are added to the form's component list when the form is initialized. When the form closes, the InkEdit and InkPicture controls are disposed, as well as the other components of the form, by the form's Dispose method. The form's Dispose method also disposes the Ink objects that are created for the form.
statement. First all of define class "Continue"
Writing a loop statement using Netbean
Writing a loop statement using Netbean Write a loop program using NetBeans.
Java Loops
class Loops{
public static void main(String[] args){
int sum=0;
for(int i=1;i<=10;i
java
of all your for statement is not correct.
Do correction: for(double index=0.1;index!=1.0;index+=0.1)
Then do modification in switch statement:
switch(x)
{
case...)
System.out.println("index="+index);
ii)Switch(x)
{
case 1:
System.out.println
JDBC Prepared Statement Insert
JDBC Prepared Statement Insert
...
Statement Insert. In this Tutorial the code describe the include a class... that enables you to communicate between front end application in java
and database
UNICODE or SQL statement issue - JDBC
UNICODE or SQL statement issue Hi again............
I have got something new that...........
i was using MS Access as the database with my JAVA Japplet....
In my applet i used JTextArea to display the output
Java Break
Java Break
Many programming languages like c, c++ uses the "break"
statement. Java also..., do - while loop, for loop
and in switch statement.
Read more at:
http
The try-with-resource Statement
The try-with-resource Statement
In this section, you will learn about newly added try-with-resource statement in
Java SE 7.
The try-with-resource statement contains declaration of one or more
resources. As you know, prior
Java error unreachable statement
Java error unreachable statement
... a code that help you in
understanding a unreachable statement. For this we... statement in the code
prevent from the normal execution of the code. The only
Advertisements
If you enjoyed this post then why not add us on Google+? Add us to your Circles | http://www.roseindia.net/tutorialhelp/comment/46383 | CC-MAIN-2015-18 | refinedweb | 3,116 | 61.77 |
mold - Man Page
a modern linker
Synopsis
Description
mold is a faster drop-in replacement for the default GNU ld(1).
How to use mold
See.
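As a sketch (assuming a recent Clang or GCC that supports the -fuse-ld flag, and hypothetical object file names), the two common ways to invoke mold are:

```shell
# Option 1: tell the compiler driver to use mold for this link
clang -fuse-ld=mold -o myapp main.o util.o

# Option 2: run an entire build with mold substituted for /usr/bin/ld
# (see the --run option below)
mold -run make -j8
```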
Compatibility
mold is designed to be a drop-in replacement for the GNU linkers for linking user-land programs. If your user-land program cannot be built due to missing command-line options, please file a bug at.
mold supports a very limited set of linker script features, which is just sufficient to read
/usr/lib/x86_64-linux-gnu/libc.so on Linux systems (on Linux, that file is, despite its name, not a shared library but an ASCII linker script that loads the real
libc.so file). Beyond that, we have no plan to support any other linker script features. The linker script is an ad-hoc, over-designed, complex language which we believe needs to be disrupted by a simpler mechanism. We plan to add a replacement for the linker script to mold instead.
Archive symbol resolution
Traditionally, Unix linkers are sensitive to the order in which input files appear on the command line. They process input files one by one, from the first (left-most) file to the last (right-most). While reading input files, they maintain sets of defined and undefined symbols. When visiting an archive file (
.a files), they pull out object files to resolve as many undefined symbols as possible and go on to the next input file. Object files that weren't pulled out never get a second look.
Because of these semantics, you usually have to add archive files at the end of a command line, so that when the linker reaches them, it knows which symbols remain undefined. If you put archive files at the beginning of a command line, the linker has not yet seen any undefined symbols, and thus no object files will be pulled out of the archives.
You can change the processing order with the
--start-group and
--end-group options, though they make the linker slower.
mold, as well as the LLVM lld(1) linker, takes a different approach: it remembers which symbols can be resolved from archive files instead of forgetting them after processing each archive. Therefore, mold and lld(1) can "go back" in a command line to pull out object files from archives if they are needed to resolve remaining undefined symbols. They are not sensitive to input file order.
--start-group and
--end-group are still accepted by mold and lld(1) for compatibility with traditional linkers, but they are silently ignored.
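As an illustration (file names hypothetical), with a traditional linker the archive must come after the objects that use it, whereas mold accepts either order:

```shell
# Traditional GNU ld: libfoo.a must come after main.o, otherwise no
# members are pulled out and the link fails with undefined symbols
ld -o prog main.o libfoo.a          # works
ld -o prog libfoo.a main.o          # may fail: undefined symbols

# GNU ld workaround for mutually-dependent archives
ld -o prog main.o --start-group liba.a libb.a --end-group

# mold resolves archive symbols regardless of order;
# the group options are accepted but silently ignored
mold -o prog libfoo.a main.o
```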
Dynamic symbol resolution
Some Unix linker features cannot be understood without understanding the semantics of dynamic symbol resolution. Therefore, even though it is not specific to mold, we explain it here.
We use "ELF module", or just "module", as a collective term for an executable or a shared library file in the ELF format.
An ELF module may have lists of imported symbols and exported symbols, as well as a list of shared library names from which imported symbols should be imported. The point is that imported symbols are not bound to any specific shared library until runtime.
Here is how the Unix dynamic linker resolves dynamic symbols. Upon the start of an ELF program, the dynamic linker constructs a list of ELF modules which, as a whole, constitute the complete program. The executable file is always at the beginning of the list, followed by the shared libraries it depends on. An imported symbol is searched for from the beginning of the list to the end. If two or more modules define the same symbol, the one that appears first in the list takes precedence over the others.
These Unix semantics are contrary to those of systems such as Windows, which have a two-level namespace for dynamic symbols. On Windows, for example, a dynamic symbol is represented as a tuple of (symbol-name, shared-library-name), so that each dynamic symbol is guaranteed to be resolved from some specific library.
Typically, an ELF module that exports a symbol also imports the same symbol. Such a symbol is usually resolved to itself, but that's not the case if a module that appears earlier in the symbol search list provides another definition of the same symbol.
Let me take malloc(3) as an example. Assume that you define your own version of malloc(3) in your main executable file. Then, all malloc calls from any module are resolved to your function instead of the one in libc, because the executable is always at the beginning of the dynamic symbol search list. Note that even malloc(3) calls within libc are resolved to your definition, since libc both exports and imports malloc. Therefore, by defining malloc yourself, you can override a library function, and the malloc(3) in libc becomes dead code.
These Unix semantics are tricky and sometimes considered harmful. For example, assume that you accidentally define atoi(3) as a global function in your executable that behaves completely differently from the one in the C standard. Then, all atoi function calls from any module (even function calls within libc) are redirected to your function instead of the one in libc, which obviously causes a problem. That is a somewhat surprising consequence of an accidental name conflict. On the other hand, these semantics are sometimes considered useful because they allow users to override library functions without recompiling the modules containing them. Whether good or bad, you should keep these semantics in mind to understand the behavior of Unix linkers.
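The malloc interposition described above can be sketched as follows (a minimal single-file demo with hypothetical file names; the override delegates to calloc to avoid recursing into itself):

```shell
# main.c defines its own malloc; every module, including libc itself,
# resolves malloc to this definition at runtime
cat > main.c <<'EOF'
#include <stdio.h>
#include <stdlib.h>

void *malloc(size_t size) {
    fputs("my malloc called\n", stderr);
    return calloc(1, size);     /* delegate to another allocator */
}

int main(void) {
    char *p = malloc(16);       /* resolved to the definition above */
    free(p);
    return 0;
}
EOF
cc -fuse-ld=mold -o demo main.c
./demo    # "my malloc called" appears for our call and any libc-internal ones
```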
Build reproducibility
mold's output is deterministic. That is, if you pass the same object files and the same command-line options to the same version of mold, it is guaranteed to always produce the same output. The linker's internal randomness, such as the timing of thread scheduling or iteration orders of hash tables, doesn't affect the output.
mold does not have any host-specific default settings. This is contrary to the GNU linkers, in which some configurable values, such as system-dependent library search paths, are hard-coded. mold depends only on its command-line arguments.
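Determinism can be checked directly (hypothetical input files): two links with identical inputs and flags must produce bit-identical outputs.

```shell
# Link the same inputs twice with the same options
mold -o prog1 main.o util.o -lc
mold -o prog2 main.o util.o -lc

# cmp exits 0 only if the files are byte-for-byte identical
cmp prog1 prog2 && echo "outputs are identical"
```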
Options
- --help
Report usage information to stdout and exit.
- -v, --version
Report version information to stdout.
- -V
Report version and target information to stdout.
- -C dir, --directory dir
Change to dir before doing anything.
- -E, --export-dynamic
- --no-export-dynamic
When creating an executable, using the -E option causes all global symbols to be put into the dynamic symbol table, so that the symbols are visible from other ELF modules at runtime.
By default, or if --no-export-dynamic is given, only symbols that are referenced by DSOs at link-time are exported from an executable.
- -F libname, --filter=libname
Set the
DT_FILTERdynamic section field to libname.
- -Ifile, --dynamic-linker=file
- --no-dynamic-linker
Set the dynamic linker path to file. If no -I option is given, or if --no-dynamic-linker is given, no dynamic linker path is set in the output file. This is contrary to the GNU linkers, which set a default dynamic linker path in that case. However, this difference usually doesn't matter, because the compiler driver always passes -I to the linker.
- -Ldir, --library-path=dir
Add dir to the list of library search paths from which mold searches libraries for the -l option.
Unlike the GNU linkers, mold does not have default search paths. This difference usually doesn't matter, because the compiler driver always passes all necessary search paths to the linker.
- -M, --print-map
Write a map file to stdout.
- -N, --omagic
- --no-omagic
Force mold to emit an output file with an old-fashioned memory layout. First, the first data segment is not aligned to a page boundary. Second, text segments are marked as writable.
- -S, --strip-debug
Omit
.debug_*sections from the output file.
- -T file, --script=file
Read linker script from file.
- -X, --discard-locals
Discard temporary local symbols to reduce the sizes of the symbol table and the string table. Temporary local symbols are local symbols starting with
.L. Compilers usually generate such symbols for unnamed program elements such as string literals or floating-point literals.
- -e symbol, --entry=symbol
Use symbol as the entry point symbol instead of the default entry point symbol _start.
- -f shlib, --auxiliary=shlib
Set the
DT_AUXILIARYdynamic section field to shlib.
- -h libname, --soname=libname
Set the
DT_SONAMEdynamic section field to libname. This option is used when creating a shared object file. Typically, when you create libfoo.so, you want to pass --soname=foo to the linker.
- -llibname
Search for lib<libname>.so or lib<libname>.a in the library search paths.
- -m [target]
Choose a target.
- -o file, --output=file
Use file as the output file name instead of the default name a.out.
- -r, --relocatable
Instead of generating an executable or a shared object file, combine input object files to generate another object file that can be used as an input to a linker.
- -s, --strip-all
Omit
.symtabsection from the output file.
- -u symbol, --undefined=symbol
If symbol remains undefined after reading all object files, and if there is a static archive that contains an object file defining symbol, pull out that object file and link it so that the output file contains a definition of symbol.
- --Bdynamic
Link against shared libraries.
- --Bstatic
Do not link against shared libraries.
- --Bsymbolic
When creating a shared library, make global symbols export-only (i.e. do not import the same symbol). As a result, references within the shared library are always resolved locally, negating symbol override at runtime. See "Dynamic symbol resolution" above for more information about symbol imports and exports.
- --Bsymbolic-functions
Has the same effect as --Bsymbolic but works only for function symbols. Data symbols remain both imported and exported.
- --Bno-symbolic
Cancel --Bsymbolic and --Bsymbolic-functions.
- --Map=file
Write map file to file.
- --Tbss=address
Alias for --section-start=.bss=address.
- --Tdata=address
Alias for --section-start=.data=address.
- --Ttext=address
Alias for --section-start=.text=address.
- --allow-multiple-definition
Normally, the linker reports an error if there is more than one definition of a symbol. This option changes the default behavior so that duplicate definitions are not reported as errors; the first definition is used instead.
- --as-needed
- --no-as-needed
By default, shared libraries given to a linker are unconditionally added to the list of required libraries in an output file. However, shared libraries after --as-needed are added to the list only when at least one symbol is actually used by an object file. In other words, shared libraries after --as-needed are not added to the list of needed libraries if they are not needed by a program.
The --no-as-needed option restores the default behavior for subsequent files.
- --build-id
- --build-id=[none | md5 | sha1 | sha256 | uuid | 0xhexstring]
- --no-build-id
Create a
.note.gnu.build-idsection containing a byte string to uniquely identify an output file. --build-id and --build-id=sha256 compute a 256-bit cryptographic hash of an output file and set it to build-id. md5 and sha1 compute the same hash but truncate it to 128 and 160 bits, respectively, before setting it to build-id. uuid sets a random 128-bit UUID. 0xhexstring sets hexstring.
- --chroot=dir
Set dir to root directory.
- --color-diagnostics=[auto | always | never]
- --color-diagnostics
- --no-color-diagnostics
Show diagnostic messages in color using ANSI escape sequences. auto means that mold prints messages in color only if the standard output is connected to a TTY. The default is auto.
- --compress-debug-sections=[none | zlib | zlib-gabi | zlib-gnu]
Compress DWARF debug info (.debug_* sections) using the zlib compression algorithm.
- --defsym=symbol=value
Define symbol as an alias for value.
value is either an integer (in decimal or hexadecimal with ‘0x’ prefix) or a symbol name. If an integer is given as a value, symbol is defined as an absolute symbol with the given value.
- --default-symver
Use soname as a symbol version and append that version to all symbols.
- --demangle
- --no-demangle
Demangle C++ symbols in log messages.
- --dependency-file=file
Write a dependency file to file. The written file is readable by
make; it defines a single rule with the linker's output file as the target and all input files as its prerequisites. Users are expected to include the generated dependency file in a Makefile to automate dependency management. This option is analogous to the compiler's
-MM and
-MF options.
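A sketch of the generated file's shape (file names hypothetical):

```shell
# Link, writing the dependency file as a side effect
mold -o prog main.o libutil.a --dependency-file=prog.d

# prog.d contains a single make rule listing every input, roughly:
#   prog: main.o libutil.a
cat prog.d
```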
- --dynamic-list=file
Read a list of dynamic symbols from file. Same as --export-dynamic-symbol-list, except that it implies --Bsymbolic.
- --eh-frame-hdr
- --no-eh-frame-hdr
Create
.eh_frame_hdrsection.
- --emit-relocs
A linker usually "consumes" relocation sections. That is, a linker applies relocations to other sections, and relocation sections themselves are discarded.
The --emit-relocs option instructs the linker to leave relocation sections in the output file. Some post-link binary analysis and optimization tools, such as LLVM Bolt, need them.
mold always creates RELA-type relocation sections even if the native ELF format is REL-type, so that it is easy to read addends.
- --enable-new-dtags
- --disable-new-dtags
By default, mold emits DT_RUNPATH for --rpath. If you pass --disable-new-dtags, mold emits DT_RPATH for --rpath instead.
- --exclude-libs=libraries...
Mark all symbols in the given libraries hidden.
- --export-dynamic-symbol=sym
Put symbols matching sym in the dynamic symbol table. sym may be a glob, with the same syntax as the globs used in --export-dynamic-symbol-list or --version-script.
- --export-dynamic-symbol-list=file
Read a list of dynamic symbols from file.
- --fatal-warnings
- --no-fatal-warnings
Treat warnings as errors.
- --fini=symbol
Call symbol at unload-time.
- --fork
- --no-fork
Spawn a child process and let it do the actual linking. When linking a large program, the OS kernel can take a few hundred milliseconds to terminate a mold process. --fork hides that latency.
- --gc-sections
- --no-gc-sections
Remove unreferenced sections.
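Section garbage collection is most effective when each function and data item gets its own section at compile time, a common pairing (file names hypothetical; actual savings depend on the program):

```shell
# Compile with one section per function / data item...
cc -c -ffunction-sections -fdata-sections main.c util.c

# ...then let the linker drop the unreferenced ones,
# printing what it removed
cc -fuse-ld=mold -o prog main.o util.o \
   -Wl,--gc-sections -Wl,--print-gc-sections
```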
- --gdb-index
Create a
.gdb_indexsection to speed up the GNU debugger. To use this, you need to compile source files with the
-ggnu-pubnamescompiler flag.
- --hash-style=[sysv | gnu | both]
Set hash style.
- --icf=[none | safe | all]
- --no-icf
It is not uncommon for a program to contain many identical functions that differ only in name. For example, a C++ template std::vector is very likely to be instantiated to the identical code for std::vector<int> and std::vector<unsigned> because the container cares only about the size of the parameter type. Identical Code Folding (ICF) is a size optimization to identify and merge such identical functions.
If --icf=all is given, mold tries to merge all identical functions. This reduces the size of the output the most, but it is not a "safe" optimization. C and C++ guarantee that two pointers pointing to two different functions never compare equal, but --icf=all breaks that assumption, as two functions have the same address after merging. So, when you use this flag, care must be taken that your program does not depend on function pointer uniqueness.
--icf=safe merges functions only when it is safe to do so. That is, if a program does not take the address of a function, it is safe to merge that function with another, as you cannot compare a function pointer with anything else without taking the address of a function. --icf=safe needs to be used with a compiler that supports the .llvm_addrsig section, which records which symbols are address-taken. LLVM/Clang supports that section by default. Since GCC does not support it yet, you cannot use --icf=safe with GCC (it does no harm, but nothing will be optimized).
--icf=none and --no-icf disable ICF.
- --ignore-data-address-equality
Make ICF merge not only functions but also data. This option should be used in combination with --icf=all.
- --image-base=addr
Set the base address to addr.
- --init=symbol
Call symbol at load-time.
- --no-undefined
Report undefined symbols (even with --shared).
- --noinhibit-exec
Create an output file even if errors occur.
- --pack-dyn-relocs=[none | relr]
If relr is specified, all
R_*_RELATIVErelocations are put into
.relr.dynsection instead of
.rel.dynor
.rela.dynsection. Since
.relr.dynsection uses a space-efficient encoding scheme, specifying this flag can reduce the size of the output. This is typically most effective for position-independent executables.
Note that a runtime loader has to support
.relr.dynto run executables or shared libraries linked with --pack-dyn-relocs=relr; as of 2022, only ChromeOS, Android, and Fuchsia support it.
- --package-metadata=string
Embed string in a .note.package section. This option is intended to be used by a package management command such as
rpmto embed metadata about a package in each executable file.
- --perf
Print performance statistics.
- --pie, --pic-executable
- --no-pie, --no-pic-executable
Create a position-independent executable.
- --preload
Preload object files.
- --print-gc-sections
- --no-print-gc-sections
Print removed unreferenced sections.
- --print-icf-sections
- --no-print-icf-sections
Print folded identical sections.
- --push-state
- --pop-state
--push-state saves the current values of --as-needed, --whole-archive, --static, and --start-lib. The saved values can be restored by --pop-state.
--push-state and --pop-state pairs can nest.
These options are useful when you want to construct linker command-line options programmatically. For example, if you want to link libfoo.so on an as-needed basis but don't want to change the global state of --as-needed, you can append "--push-state --as-needed -lfoo --pop-state" to the linker command line.
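The libfoo.so scenario above, spelled out as a full command line (file and library names hypothetical):

```shell
mold -o prog main.o \
  --push-state --as-needed -lfoo --pop-state \
  -lbar
# -lfoo is recorded as a needed library only if it is actually used;
# -lbar is linked under the original state, because --pop-state
# restored the settings saved by --push-state
```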
- --quick-exit
- --no-quick-exit
Use
quick_exitto exit.
- --relax
- --no-relax
Rewrite machine instructions with more efficient ones for some relocations. The feature is enabled by default.
- --require-defined=symbol
Like --undefined, except the new symbol must be defined by the end of the link.
- --repro
Embed input files into
.reprosection.
- --retain-symbols-file=file
Keep only symbols listed in file.
file is a text file containing one symbol name per line. mold discards all local symbols as well as global symbols that are not in file. Note that this option removes symbols only from the
.symtabsection and does not affect the
.dynsymsection, which is used for dynamic linking.
- --reverse-sections
Reverse the order of input sections before assigning them offsets in the output file.
- --rpath=dir
Add dir to runtime search path.
- --run command arg file ...
Run
commandwith mold as
/usr/bin/ld.
- --section-start=section=address
Set the address of section to address. address is a hexadecimal number that may start with an optional ‘0x’.
- --shared, --Bshareable
Create a shared library.
- --shuffle-sections
- --shuffle-sections=number
Randomize the output by shuffling the order of input sections before assigning them offsets in the output file. If number is given, it is used as a seed for the random number generator, so that the linker produces the same output for the same seed. If no seed is given, a random number is used as the seed.
- --spare-dynamic-tags=number
Reserve given number of tags in
.dynamicsection.
- --start-lib
- --end-lib
Handle object files between --start-lib and --end-lib as if they were in an archive file. That means object files between them are linked only when they are needed to resolve undefined symbols. The options are useful if you want to link object files only when they are needed but want to avoid the overhead of running ar(1).
- --static
Do not link against shared libraries.
- --stats
Print input statistics.
- --sysroot=dir
Set target system root directory to dir.
- --thread-count=count
Use count number of threads.
- --threads
- --no-threads
Use multiple threads. By default, mold uses as many threads as the number of cores or 32, whichever is smaller. The reason it is capped at 32 is that mold doesn't scale well beyond that point. To use only one thread, pass --no-threads or --thread-count=1.
- --trace
Print name of each input file.
- --unique=pattern
Don't merge input sections that match pattern.
- --unresolved-symbols=[report-all | ignore-all | ignore-in-object-files | ignore-in-shared-libs]
How to handle undefined symbols.
- --version-script=file
Read version script from file.
- --warn-common
- --no-warn-common
Warn about common symbols.
- --warn-once
Warn only once for each undefined symbol, instead of warning for every relocation that refers to an undefined symbol.
- --warn-unresolved-symbols
- --error-unresolved-symbols
Normally, the linker reports an error for unresolved symbols. The --warn-unresolved-symbols option turns those errors into warnings. The --error-unresolved-symbols option restores the default behavior.
- --whole-archive
- --no-whole-archive
When archive files (.a files) are given to the linker, only object files that are needed to resolve undefined symbols are extracted from them and linked into the output file. --whole-archive changes that behavior for subsequent archives so that the linker extracts all object files and links them into the output. For example, if you are creating a shared object file and want to include all archive members in the output, you should pass --whole-archive. --no-whole-archive restores the default behavior for subsequent archives.
- --wrap=symbol
Make symbol be resolved to __wrap_symbol. The original symbol can be reached as __real_symbol. This option is typically used for wrapping an existing function.
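A sketch of function wrapping (file names hypothetical): with --wrap=malloc, calls to malloc in the linked objects land in __wrap_malloc, which can forward to the real allocator via __real_malloc.

```shell
cat > wrap.c <<'EOF'
#include <stddef.h>
#include <stdio.h>

void *__real_malloc(size_t size);   /* bound by the linker */

/* All calls to malloc in wrapped objects are redirected here */
void *__wrap_malloc(size_t size) {
    fprintf(stderr, "allocating %zu bytes\n", size);
    return __real_malloc(size);
}
EOF
cc -c wrap.c main.c
cc -fuse-ld=mold -o prog main.o wrap.o -Wl,--wrap=malloc
```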
- -z cet-report=[none | warning | error]
Intel Control-flow Enforcement Technology (CET) is an x86 feature available since Tiger Lake, which was released in 2020. It defines new instructions to harden security and protect programs from control-flow hijacking attacks. You can tell the compiler to use the feature by specifying the
-fcf-protectionflag.
-z
cet-reportflag is used to make sure that all object files were compiled with the correct
-fcf-protectionflag. If warning or error is given, mold prints a warning or an error message for any object file that was not compiled with that flag.
mold looks for
GNU_PROPERTY_X86_FEATURE_1_IBTbit and
GNU_PROPERTY_X86_FEATURE_1_SHSTKbit in
.note.gnu.propertysection to determine whether or not an object file was compiled with
-fcf-protection.
- -z now
- -z lazy
By default, functions referring to other ELF modules are resolved by the dynamic linker when they are called for the first time. -z
nowmarks an executable or a shared library file so that all dynamic symbols are resolved when the file is loaded into memory. -z
lazyrestores the default behavior.
- -z origin
Mark object requiring immediate
$ORIGINprocessing at runtime.
- -z ibt
Turn on
GNU_PROPERTY_X86_FEATURE_1_IBTbit in
.note.gnu.propertysection to indicate that the output uses IBT-enabled PLT. This option implies -z
ibtplt.
- -z ibtplt
Generate Intel Branch Tracking (IBT)-enabled PLT which is the default on x86-64.
- -z execstack
- -z noexecstack
By default, the pages for the stack area (i.e. the pages where local variables reside) are not executable for security reasons. -z
execstackmakes it executable. -z
noexecstackrestores the default behavior.
- -z keep-text-section-prefix
- -z nokeep-text-section-prefix
Keep
.text.hot,
.text.unknown,
.text.unlikely,
.text.startupand
.text.exitas separate sections in the final binary.
- -z relro
- -z norelro
Some sections such as
.dynamichave to be writable only while an executable or a shared library file is being loaded into memory. Once the dynamic linker finishes its job, such sections won't be mutated by anyone. As a security mitigation, it is preferred to make such segments read-only during program execution.
-z
relroputs such sections into a special segment called
relro. The dynamic linker makes the relro segment read-only after it finishes its job.
By default, mold generates a relro segment. -z
norelrodisables the feature.
- -z separate-loadable-segments
- -z separate-code
- -z noseparate-code
If one memory page contains multiple segments, the page protection bits are set in such a way that the needed attributes (writable or executable) are satisfied for all segments. This usually happens at the boundary of two segments with different attributes.
separate-loadable-segmentsadds paddings between segments with different attributes so that they do not share the same page. This is the default.
separate-codeadds paddings only between executable and non-executable segments.
noseparate-codedoes not add any paddings between segments.
- -z defs
- -z nodefs
Report undefined symbols (even with --shared).
- -z shstk
Enforce shadow stack by turning on the GNU_PROPERTY_X86_FEATURE_1_SHSTK bit in the
.note.gnu.propertyoutput section. Shadow stack is part of Intel Control-flow Enforcement Technology (CET), which has been available since Tiger Lake (2020).
- -z text
- -z notext, -z textoff
mold by default reports an error if dynamic relocations are created in read-only sections. If -z
notextor -z
textoffare given, mold creates such dynamic relocations without reporting an error. -z
textrestores the default behavior.
- -z max-page-size=value
Some CPU ISAs support multiple different memory page sizes. This option specifies the maximum page size that an output binary can run on. If you specify a large value, the output can run on both large and small page systems, but it wastes a bit of memory at page boundaries on systems with small pages.
The default value is 4 KiB for i386, x86-64 and RISC-V, and 64 KiB for ARM64.
- -z nodefaultlib
Make the dynamic loader ignore default search paths.
- -z nodelete
Mark DSO non-deletable at runtime.
- -z nodlopen
Mark DSO not available to dlopen(3).
- -z nodump
Mark DSO not available to dldump(3).
- -z nocopyreloc
Do not create copy relocations.
- -z initfirst
Mark DSO to be initialized first at runtime.
- -z interpose
Mark object to interpose all DSOs but executable.
See Also
gold(1), ld(1), elf(5), ld.so(8)
Authors
Rui Ueyama <ruiu@cs.stanford.edu>
Bugs
Report bugs to.
Referenced By
The man page ld.mold(1) is an alias of mold(1). | https://www.mankier.com/1/mold | CC-MAIN-2022-40 | refinedweb | 4,315 | 58.38 |