The default scripting language for Elasticsearch is now Painless. Painless is a custom-built language with a syntax similar to Groovy, designed to be fast as well as secure. Many Groovy scripts will be identical to Painless scripts, to help make the transition between languages as simple as possible. Documentation for Painless can be found at Painless Scripting Language.

One common difference to note between Groovy and Painless is the use of parameters: all parameters in Painless must now be prefixed with params. The following example shows the difference.

Groovy:

{
  "script_score": {
    "script": {
      "lang": "groovy",
      "inline": "Math.log(_score * 2) + my_modifier",
      "params": {
        "my_modifier": 8
      }
    }
  }
}

Painless (my_modifier is prefixed with params):

{
  "script_score": {
    "script": {
      "lang": "painless",
      "inline": "Math.log(_score * 2) + params.my_modifier",
      "params": {
        "my_modifier": 8
      }
    }
  }
}

The script.default_lang setting has been removed. It is no longer possible to set the default scripting language. If a language other than painless is used, it should be specified explicitly on the script itself. For scripts with no explicit language defined that are part of already stored percolator queries, the default language can be controlled with the script.legacy.default_lang setting.

The deprecated 1.x syntax of defining inline scripts / templates and referring to file- or index-based scripts / templates has been removed. The script and params string parameters can no longer be used; instead the script object syntax must be used. This applies to the update api, script sort, script_score function, script query, scripted_metric aggregation and script_heuristic aggregation.
So this usage of inline scripts is no longer allowed:

{
  "script_score": {
    "lang": "groovy",
    "script": "Math.log(_score * 2) + my_modifier",
    "params": {
      "my_modifier": 8
    }
  }
}

and instead this syntax must be used:

{
  "script_score": {
    "script": {
      "lang": "groovy",
      "inline": "Math.log(_score * 2) + my_modifier",
      "params": {
        "my_modifier": 8
      }
    }
  }
}

The script or script_file parameter can no longer be used to refer to file-based scripts and templates; instead file must be used. This usage of referring to file-based scripts is no longer valid:

{
  "script_score": {
    "script": "calculate-score",
    "params": {
      "my_modifier": 8
    }
  }
}

This usage is valid:

{
  "script_score": {
    "script": {
      "lang": "groovy",
      "file": "calculate-score",
      "params": {
        "my_modifier": 8
      }
    }
  }
}

The script_id parameter can no longer be used to refer to indexed scripts and templates; instead id must be used. This usage of referring to indexed scripts is no longer valid:

{
  "script_score": {
    "script_id": "indexedCalculateScore",
    "params": {
      "my_modifier": 8
    }
  }
}

This usage is valid:

{
  "script_score": {
    "script": {
      "id": "indexedCalculateScore",
      "lang": "groovy",
      "params": {
        "my_modifier": 8
      }
    }
  }
}

The query field in the template query can no longer be used. This 1.x syntax can no longer be used:

{
  "query": {
    "template": {
      "query": {"match_{{template}}": {}},
      "params": {
        "template": "all"
      }
    }
  }
}

and instead the following syntax should be used:

{
  "query": {
    "template": {
      "inline": {"match_{{template}}": {}},
      "params": {
        "template": "all"
      }
    }
  }
}

The top-level template field in the search template api has been replaced with the consistent template / script object syntax.
This 1.x syntax can no longer be used:

{
  "template": {
    "query": {
      "match": {
        "{{my_field}}": "{{my_value}}"
      }
    },
    "size": "{{my_size}}"
  },
  "params": {
    "my_field": "foo",
    "my_value": "bar",
    "my_size": 5
  }
}

and instead the following syntax should be used:

{
  "inline": {
    "query": {
      "match": {
        "{{my_field}}": "{{my_value}}"
      }
    },
    "size": "{{my_size}}"
  },
  "params": {
    "my_field": "foo",
    "my_value": "bar",
    "my_size": 5
  }
}

Indexed scripts and templates have been replaced by stored scripts, which keeps the scripts and templates in the cluster state instead of a dedicated .scripts index. There is a soft limit of 65535 bytes on the size of stored scripts. If scripts exceed that size, the script.max_size_in_bytes setting can be added to elasticsearch.yml to raise the soft limit. If scripts are really large, other options like native scripts should be considered.

Previously indexed scripts in the .scripts index will no longer be used, as Elasticsearch will now try to fetch the scripts from the cluster state. Upon upgrading to 5.x the .scripts index will remain, so it can be used by a script to migrate the stored scripts from the .scripts index into the cluster state. The current format of the scripts and templates hasn't been changed; only the 1.x format has been removed.

The following Python script can be used to import your indexed scripts into the cluster state as stored scripts:

from elasticsearch import Elasticsearch, helpers

es = Elasticsearch([{'host': 'localhost'}])

for doc in helpers.scan(es, index=".scripts", preserve_order=True):
    es.put_script(lang=doc['_type'], id=doc['_id'], body=doc['_source'])

This script makes use of the official Elasticsearch Python client, so you need to make sure that you have installed the client in your environment. For more information please see elasticsearch-py.
The following Perl script can be used to import your indexed scripts into the cluster state as stored scripts:

use Search::Elasticsearch;

my $es = Search::Elasticsearch->new(nodes => 'localhost:9200');
my $scroll = $es->scroll_helper(index => '.scripts', sort => '_doc');

while (my $doc = $scroll->next) {
    $es->put_script(
        lang => $doc->{_type},
        id   => $doc->{_id},
        body => $doc->{_source}
    );
}

This script makes use of the official Elasticsearch Perl client, so you need to make sure that you have installed the client in your environment. For more information please see Search::Elasticsearch.

After you have moved the scripts via the provided script or otherwise, you can verify that the migration succeeded with the following request:

GET _cluster/state?filter_path=metadata.stored_scripts

The response should include all your scripts from the .scripts index. After you have verified that all your scripts have been moved, optionally as a last step, you can delete the .scripts index as Elasticsearch no longer uses it.

All the methods related to interacting with indexed scripts have been removed. The Java API methods for interacting with stored scripts have been added under the ClusterAdminClient class. The sugar methods that used to exist on the indexed scripts API methods don't exist on the methods for stored scripts. The only way to provide scripts is by using a BytesReference implementation; if a string needs to be provided, the BytesArray class should be used.

Prior to 5.0.0, script engines could register multiple languages. The Javascript script engine in particular registered both "lang": "js" and "lang": "javascript". Script engines can now only register a single language. All references to "lang": "js" should be changed to "lang": "javascript" for existing users of the lang-javascript plugin.

Prior to 5.0.0, scripting engines could register multiple extensions.
The only engine doing this was the Javascript engine, which registered "js" and "javascript". It now only registers the "js" file extension for on-disk scripts.

The script, script_id and scripting_upsert query string parameters have been removed from the update api.

The TemplateQueryBuilder has been moved to the lang-mustache module. Therefore, when using the TemplateQueryBuilder from the Java native client, the lang-mustache module should be on the classpath. Also, the transport client should load the lang-mustache module as a plugin:

TransportClient transportClient = TransportClient.builder()
    .settings(Settings.builder().put("node.name", "node"))
    .addPlugin(MustachePlugin.class)
    .build();
transportClient.addTransportAddress(
    new InetSocketTransportAddress(new InetSocketAddress(InetAddresses.forString("127.0.0.1"), 9300))
);

Also, the helper methods in the QueryBuilders class that create a TemplateQueryBuilder instance have been removed; instead the constructors on TemplateQueryBuilder should be used. The template query has been deprecated in favour of the search template api. The template query is scheduled to be removed in the next major version.

The following helper methods have been removed from GeoPoint scripting:

factorDistance
factorDistanceWithDefault
factorDistance02
factorDistance13
arcDistanceInKm
arcDistanceInKmWithDefault
arcDistanceInMiles
arcDistanceInMilesWithDefault
distanceWithDefault
distanceInKm
distanceInKmWithDefault
distanceInMiles
distanceInMilesWithDefault
geohashDistanceInKm
geohashDistanceInMiles

Instead use arcDistance, arcDistanceWithDefault, planeDistance, planeDistanceWithDefault, geohashDistance, geohashDistanceWithDefault and convert from the default units (meters) to the desired units using the appropriate constant (e.g., multiply by 0.001 to convert to km).
You should watch out for this if you are hard-coding values into your scripts. Elasticsearch recommends using parameters for efficient script handling. See details here.
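To make the params. prefix change described above concrete, here is a small helper — a sketch not found in the Elasticsearch documentation, with an illustrative function name — that rewrites a Groovy-style script string into Painless form, assuming parameter names appear as bare identifiers in the script:

```python
import re

def to_painless(source, param_names):
    """Prefix each bare reference to a script parameter with 'params.'.

    Naive sketch: assumes parameter names appear as whole identifiers
    and do not collide with field or variable names in the script.
    """
    for name in param_names:
        # Negative lookbehind skips references that are already prefixed.
        pattern = r'(?<!params\.)\b%s\b' % re.escape(name)
        source = re.sub(pattern, 'params.' + name, source)
    return source

groovy = "Math.log(_score * 2) + my_modifier"
print(to_painless(groovy, ["my_modifier"]))
# prints: Math.log(_score * 2) + params.my_modifier
```

A helper like this only covers simple scripts; anything beyond bare identifier references should be migrated by hand.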
https://www.elastic.co/guide/en/elasticsearch/reference/5.4/breaking_50_scripting.html
CC-MAIN-2019-39
refinedweb
1,304
54.93
User talk:Dmb From Uncyclopedia, the content-free encyclopedia

Welcome!

Hello, Dmb. I have placed a copy of the deleted article here. I can't speak for another Admin, but I will say that it may be due to the fact that in the two or three months since the article was started it hasn't expanded into anything longer than a stub. Normally these would be deleted as a matter of course. Also, we've just had a Forest Fire Week, which means that anything that doesn't look like a proper article gets torched. If you want to carry on editing the above article in the safety of your own namespace area, feel free. Things rarely ever get deleted from user namespaces. Once the article gets to look like something impressive you can ask one of the Admins to move it to the mainspace for you. Good luck. -- Sir Mhaille (talk to me)

A tip

If you don't want your articles to get deleted, try not sucking. -- » Sir Savethemooses Grand Commanding Officer ... holla atcha boy» 18:41, 16 July 2006 (UTC)

- There's a definite learning curve to uncyclopedia. Perhaps Uncyclopedia:How To Get Started Editing might be useful? At the very least, it might help with your userpage. --Hrodulf 16:34, 19 July 2006 (UTC)
http://uncyclopedia.wikia.com/wiki/User_talk:Dmb
CC-MAIN-2016-07
refinedweb
222
73.07
As the name suggests, a copy constructor is typically used to create a copy of an object. There are many intricate details about how it operates and how it is used. It also enables passing and returning user-defined types by value during function calls. This article gets into some of the conceptual details about the copy constructor, with appropriate examples in C++.

The Constructor

According to Object Oriented Programming principles, when an object of a class is instantiated, its data members should be initialized properly during creation. This is important because using an object with uninitialized data members makes these members prone to unintended problems. Initialization is done with the help of a constructor, a special member function with the same name as the class. This naming scheme makes it distinct from the other member functions of the class. Another distinctive feature is that a constructor never returns a value, not even void. And constructors usually are declared as public.

C++ relies on the constructor call to create an object. This ensures that objects are created and data members are initialized appropriately before being used. The constructor call is implicit and, if we do not explicitly create a constructor, the C++ compiler provides one for us, called the default constructor. Default constructors are no-argument constructors; therefore, if we want to pass any arguments to the constructor, we must define one explicitly. But note that, as soon as we define an explicit constructor in a class, the C++ compiler stops providing a default constructor. Here is an example of its use:

class IntArray {
private:
    int* p2arr;
    int max;
    int count;
public:
    IntArray(int n = 256);
    IntArray(int n, int val);
    // ...
};

// Constructors are invoked when we create objects
IntArray a1;                     // Invokes no-arg constructor
IntArray a2(5,7);                // Invokes constructor with 2 arguments
IntArray *cp = new IntArray();   // Dynamic object creation

Copy Constructor

A copy constructor takes a reference to an object of its own class and creates a copy. Its signature has the form object(const object&), read as "object of object reference". It is also used when passing and returning user-defined types by value during a function call. According to Lippman, "A constructor is a copy constructor if its first parameter is a reference to the class type and any additional parameters have default values". The simplest reason for its existence is that we want to create a copy of the object itself. There are a couple of things to note while creating a copy constructor.

- The reference argument of the copy constructor ideally will be designated as const to allow a const object to be copied.
- The argument of the copy constructor must be passed by reference and not by value. An argument passed by value is nothing but a copy of the object itself; as a result, the copy constructor would invoke itself recursively, without end. An argument passed by reference, on the other hand, is not a copy, so there is no question of invoking the copy constructor. This is why the argument of the copy constructor is passed by reference and not by value.

class IntArray {
private:
    // ...
public:
    // ...
    IntArray(const IntArray&);
    // ...
};

Note that, if we do not provide an explicit copy constructor, the C++ compiler provides a default one. But we must be very careful in relying on the default copy constructor. Sometimes it does exactly what we do not want, especially with dynamic initialization.

Default Copy Constructor

Suppose a class does not have an explicit copy constructor.
Then what the compiler does is create a minimal version of the same, called a default or standard copy constructor. It simply copies the data members of the object passed to it into the corresponding data members of the new object. A standard copy constructor is often sufficient, but in some cases a simple copying of values is not exactly what we want, especially when the object contains dynamic members (pointers). It would copy the pointer itself, which is the address of a location, and not the values contained at that address. As a result, the pointer members of two distinct objects point to the same memory location when the standard copy constructor is invoked. This is definitely not what we want and invites trouble. What we want is two identical but distinct objects, but what we get is a dynamic member (pointer) in two separate objects pointing to the same memory location. Now imagine what happens if we release the memory of one object. The data member (pointer) of the second object would then point to a memory location that has already been released along with the first object. This is where we must intervene and override the default copy constructor.
A Quick Example

#ifndef INTARRAY_H
#define INTARRAY_H

class IntArray {
private:
    int* p2arr;
    int max;
    int count;
public:
    IntArray(int n = 256);
    IntArray(int n, int val);
    IntArray(const IntArray&);
    ~IntArray();
    inline int length() const { return count; }
    int& operator[](int i);
    int operator[](int i) const;
    bool add(int val);
    bool remove(int pos);
};

#endif // INTARRAY_H

Listing 1: IntArray class declaration

#include "intarray.h"
#include <iostream>
#include <cstdlib>   // for exit()
using namespace std;

IntArray::IntArray(int n) {
    max = n;
    count = 0;
    p2arr = new int[max];
}

IntArray::IntArray(int n, int val) {
    max = count = n;
    p2arr = new int[max];
    for (int i = 0; i < max; i++)
        p2arr[i] = val;
}

IntArray::IntArray(const IntArray& ar) {
    max = ar.max;
    count = ar.count;
    p2arr = new int[max];
    for (int i = 0; i < max; i++)
        p2arr[i] = ar.p2arr[i];
}

IntArray::~IntArray() {
    delete[] p2arr;
}

int& IntArray::operator[](int i) {
    if (i < 0 || i >= count) {
        cerr << "\n array out of range!\n";
        exit(1);
    }
    return p2arr[i];
}

int IntArray::operator[](int i) const {
    if (i < 0 || i >= count) {
        cerr << "\n array out of range!\n";
        exit(1);
    }
    return p2arr[i];
}

bool IntArray::add(int val) {
    bool flag = false;
    if (count < max) {
        p2arr[count++] = val;
        flag = true;
    }
    return flag;
}

bool IntArray::remove(int pos) {
    bool flag = false;
    if (pos >= 0 && pos < count) {
        for (int i = pos; i < count - 1; i++)
            p2arr[i] = p2arr[i + 1];
        --count;
        flag = true;
    }
    return flag;
}

Listing 2: IntArray class member function definitions

#include <iostream>
#include "intarray.h"
using namespace std;

int main() {
    IntArray a(5, 10);
    a[2] = 20;
    IntArray ca(a);
    for (int i = 0; i < ca.length(); i++)
        cout << ca[i] << endl;
    return 0;
}

Listing 3: Testing the IntArray object

Therefore, it is understood that we must write our own copy constructor, especially in classes with dynamic data members, to ensure that when the copy constructor is invoked, it copies the content of the locations the dynamic members point to, rather than just the addresses.
Private Copy Construction

Relying on default copy construction is fine in most cases. If the situation discussed above never occurs, we do not need an explicit copy constructor. But how do we ensure that an object will never be passed by value? The technique is to declare a private copy constructor. We then do not even need to provide a definition, unless we want some member function or friend function to be able to pass by value. This technique prevents the user from passing or returning an object by value; the compiler flags an error if we try to do so.

#ifndef MYCLASS_H
#define MYCLASS_H

class MyClass {
private:
    int info;
    MyClass(const MyClass&);
public:
    MyClass(int i);
};

#endif // MYCLASS_H

#include "myclass.h"

MyClass::MyClass(int i) {
    info = i;
}

#include "myclass.h"
using namespace std;

void func(MyClass c);

int main() {
    MyClass m(10);
    /*
     * None of the following is allowed.
     * func(m);
     * MyClass m1 = m;
     * MyClass(m);
     */
    return 0;
}

Conclusion

Note that these are not comprehensive, but they are some of the intricate concepts associated with the copy constructor in C++. To sum up,

- A copy constructor takes a reference to an existing object of its type to create a new one.
- A copy constructor is automatically invoked by the compiler whenever we pass or return an object by value.
- We can prevent an object from being passed or returned by value by declaring a private copy constructor.
- The default copy constructor is fine, but be cautious if there is a dynamic data member.

Happy learning :)
https://mobile.codeguru.com/cpp/cpp/algorithms/demystifying-copy-constructors-in-c.html
CC-MAIN-2020-05
refinedweb
1,393
52.19
- Write a C program to print all factors of a number.

Required Knowledge
- C printf and scanf functions
- For loop in C

A number N is a factor of number M if and only if N divides M completely, leaving no remainder (M % N = 0). For example, 4 is a factor of 40 because 4 divides 40 without leaving any remainder: 40 / 4 = 10 and 40 % 4 = 0. Here is the list of all factors of 40: 1 2 4 5 8 10 20 40

Algorithm to find all factors of a number N

Check every number from 1 to N for whether it divides N completely. Let i be any number between 1 and N:
- If (N % i == 0), then i is a factor of N
- If (N % i != 0), then i is not a factor of N

C program to find all factors of a number using a for loop

#include <stdio.h>

int main() {
    int counter, N;

    /* Take a number as input from user */
    printf("Enter a Number\n");
    scanf("%d", &N);

    printf("Factors of %d\n", N);
    /* Check for every number between 1 to N, whether it divides N */
    for (counter = 1; counter <= N; counter++) {
        if (N % counter == 0) {
            printf("%d ", counter);
        }
    }
    return 0;
}

Output

Enter a Number
40
Factors of 40
1 2 4 5 8 10 20 40

Enter a Number
37
Factors of 37
1 37
https://btechgeeks.com/c-program-to-find-all-factors-of-a-number-using-for-loop/
CC-MAIN-2021-43
refinedweb
234
68.84
Mission11_Reproduce_Mission10

This mission is similar to Mission10. You will display the temperature on the LCD, but you will use external libraries to do it.

// Import the LCD1602 and SHT3x drivers from MadDrivers, an online git repo.
import LCD1602
import SHT3x

Code analysis

In the previous mission, the code included two files to configure the LCD and the humiture sensor. However, there is a more convenient way: using libraries, so you don't need to add hardware drivers to your projects. Simply put, a library contains a block of code for specific functionality, which you can then use in any of your projects. In the previous code, the two files were included in the project, so you could use them directly. Now you will use the online libraries LCD1602 and SHT3x in your code. They live in MadDrivers, which contains all the related hardware libraries, and its location has been indicated in the project. Thus you don't need to add these files to your project; just import them in your code. The IDE will download them automatically when building your project. The rest of the code is the same as the previous one.

Reference

I2C - use the I2C protocol to communicate with other devices.
LCD1602 - display characters on a 16x2 LCD.
SHT3x - measure temperature and humidity and communicate using the I2C protocol.
SwiftIOBoard - find the corresponding pin id of the SwiftIO board.
https://docs.madmachine.io/tutorials/swiftio-maker-kit/mission11
CC-MAIN-2022-21
refinedweb
237
66.64
Introduction: In this article we will learn how to create web pages for mobile phones.

Background: Developing mobile web pages is not the same as developing desktop web pages; we cannot use our toolbox controls, so we must write the control code in source view.

Step 5: Now move to source code view and create the controls as below. Here we will create Label, TextBox and Command (Button) controls. For mobile web controls we must use a mobile form, so replace the desktop form and create the controls for our mobile web page.

<html xmlns="http://www.w3.org/1999/xhtml">
<body>
    <mobile:Form id="Form1" runat="server">
        <mobile:Label id="Label1" runat="server">Label1</mobile:Label>
        <mobile:TextBox id="TextBox1" runat="server"></mobile:TextBox>
        <mobile:Command id="Command1" runat="server">Command1</mobile:Command>
    </mobile:Form>
</body>
</html>

Step 6: Now go to the code view in the aspx.cs file, import the namespaces for mobile pages and controls, and derive the Default page from MobilePage.

using System.Web.Mobile;
using System.Web.UI.MobileControls;

public partial class Default : System.Web.UI.MobileControls.MobilePage
{
    protected void Page_Load(object sender, EventArgs e)
    {
    }

    protected void Command1_Click(object sender, EventArgs e)
    {
    }
}

Step 7: Now run the application; the browser page will look something like the following screen. To test these mobile web pages properly we would use mobile emulators, but here we will view our pages in a desktop browser.

Adding a Mobile Web Page Template to Visual Studio 2008: Here we will see how to add a Default template to Visual Studio 2008. This template, which I got from a website, is a special mobile page template that adds a mobile page to our project, making it easy to add mobile web pages. It adds the web.config, Default.aspx and Default.aspx.cs files. Now delete these files and choose Add New Item, which displays the Visual Studio templates along with our mobile web page templates.

Conclusion: Here we have seen how to develop mobile web pages.
http://www.c-sharpcorner.com/uploadfile/krishnasarala/developing-mobile-web-pages-in-Asp-Net/
CC-MAIN-2016-22
refinedweb
334
58.18
>How arbitrary is this path? Must it be within the DocumentRoot?

This one I can answer, I believe. I put a path starting from C:\, and it
worked fine, with the limitation that you can't have spaces in the path, and
you can't use double quotes to get around that. So I had to figure out how to
write things DOS style, like MyDocu~1, and that worked.

I came across a bit of trouble with the importer now: it can't find a module
that exists.

  File "C:\Docume~1\Dan\MyDocu~1\PYROOT\pyserver\web.py", line 348, in import_module
    module = apache.import_module(module, 1, 1, path)
  File "C:\Program Files\Python\lib\site-packages\mod_python\importer.py", line 236, in import_module
    return __import__(module_name, {}, {}, ['*'])
ImportError: No module named _config

Now I've confirmed there is a module named _config.py in the path specified,
and I can find it if I add path to sys.path. There is an import at the top of
_config.py that should fail and raise an exception, but that shouldn't be
related? Any ideas why this is happening?

Thanks,
-Dan

On 4/21/06, Jorey Bump <list at joreybump.com> wrote:
> Graham Dumpleton wrote:
> > Graham Dumpleton wrote ..
> >> The new module importer completely ignores packages as it is practically
> >> impossible to get any form of automatic module reloading to work
> >> correctly with them when they are more than trivial. As such, packages
> >> are handed off to standard Python __import__ to deal with. That it even
> >> finds the package means that you have it installed in sys.path. Even if
> >> it was a file based module, because it is on sys.path and thus likely to
> >> be installed in a standard location, the new module importer would again
> >> ignore it as it leaves all sys.path modules up to Python __import__
> >> as too dangerous to be mixing importing schemes.
> >>
> >> Anyway, that all only applies if you were expecting PyServer.pyserver to
> >> automatically reload upon changes.
>
> Graham, can you enumerate the different ways packages are handled, or is
> it enough to say that packages are never reloaded? In this thread, you
> explain that when a package is imported via PythonHandler, mod_python
> uses the conventional Python __import__, requiring an apache restart to
> reliably reload the package, as in the past.
>
> This also implies that if a published module imports a package, and the
> published module is touched or modified, then the module will be
> reloaded, but not the package. Is this correct?
>
> > BTW, that something outside of the document tree, possibly in sys.path,
> > is dealt with by Python __import__ doesn't mean you can't have module
> > reloading on stuff outside of the document tree. The idea is that if it is
> > part of the web application and needs to be reloadable, that it doesn't
> > really belong in standard Python directories anyway. People only install
> > it there at present because it is convenient.
>
> There are security benefits to not putting your code in the
> DocumentRoot. It's also useful to develop generic utilities that are
> used in multiple apps (not just mod_python), but that you don't want
> available globally on the system. I prefer extremely minimal frontends
> in the DocumentRoot, with most of my code stored elsewhere. Will the new
> importer support reloading modules outside of the DocumentRoot without
> putting them in sys.path?
>
> > The better way of dealing with this with the new module importer is to
> > put your web application modules elsewhere, ie., not on sys.path. You then
> > specify an absolute path to the actual .py file in the handler directive.
> >
> > <Directory />
> > SetHandler mod_python
> > PythonHandler /path/to/web/application/PyServer/pserver.py
> > ...
>
> How arbitrary is this path? Must it be within the DocumentRoot?
>
> > Most cases I have seen is that people use packages purely to create a
> > namespace to group the modules.
> With the new module importer that
> doesn't really need to be done anymore. That is because you can
> directly reference an arbitrary module by its path. When you use the
> "import" statement in files in that directory, one of the places it will
> automatically look, without that directory needing to be in sys.path,
> is the same directory the file is in. This achieves the same result as
> what people are using packages for now but you can still have module
> reloading work.
>
> Does it (the initial loading, not the reloading) also apply to packages
> in that directory? Or will it only work with standalone single file
> modules in the root of that directory?
>
> This is all very nifty, because it implies that a mod_python application
> can now be easily distributed by inflating a tarball and specifying the
> PythonHandler accordingly.
>
> If the new importer works outside of the DocumentRoot, and Location is
> used instead of Directory, no files need to be created in the
> DocumentRoot at all. Or is this currently impossible, in regards to
> automatic module reloading? I already do this for some handlers I've
> written, and really like the flexibility provided by the virtualization.
>
> _______________________________________________
> Mod_python mailing list
> Mod_python at modpython.org
http://modpython.org/pipermail/mod_python/2006-April/020943.html
CC-MAIN-2018-39
refinedweb
852
65.22
Ok, I have been trying to work this thing for a week. It is a program that is supposed to read from 2 files (.dat) and write to a third file (.dat), then update one of the first files. i.e.

read oldmast.dat
read trans.dat
compare data
write data and changes to newmast.dat
update oldmast.dat to match info in newmast.dat

I know how to make it read the entire files, but I don't know how to make it pull only what I need to compare. I'm not looking for handouts, just a little help (don't want confusion on this).

#include <iostream>
#include <iomanip>
#include <string>
#include <fstream>
#include <cstdlib>
using namespace std;

int main()
{
    string line;
    int accountNumber;
    string firstName;
    string lastName;
    double currentBalance;
    double dollarAmount;

    ifstream inOldMaster("oldmast.dat", ios::in);
    ifstream inTransaction("trans.dat", ios::in);
    ofstream outNewMaster("newmast.dat", ios::out);

    // oldmast.dat
    if (!inOldMaster)
    {
        cerr << "File could not be opened." << endl;
        exit(1);
    }

    cout << "Contents of oldmast.dat \n\n";
    cout << left << setw(10) << "Account" << setw(13) << "First Name"
         << setw(13) << "Last Name" << setw(13) << "Balance" << endl
         << fixed << showpoint;

    while (inOldMaster >> accountNumber >> firstName >> lastName >> currentBalance)
    {
        cout << left << setw(10) << accountNumber << setw(13) << firstName
             << setw(7) << lastName << setw(13) << setprecision(2)
             << right << currentBalance << endl;
    }
    inOldMaster.close();
    cout << "\n\n\n\n";

    // trans.dat
    if (!inTransaction)
    {
        cerr << "File could not be opened." << endl;
        exit(1);
    }

    cout << "Contents of trans.dat \n\n";
    cout << "Account" << setw(15) << right << "Dollar Amount" << endl;

    while (inTransaction >> accountNumber >> dollarAmount)
    {
        cout << accountNumber << setw(13) << setprecision(2)
             << right << dollarAmount << endl;
    }
    cout << "\n\n\n";

    return 0;
}

contents of oldmast.dat

accountNumber firstName lastName currentBalance
100 john doe 234.43
200 sally sue 124.52
......
contents of trans.dat

accountNumber dollarAmount
100 100.00
200 -25.99
......

It is supposed to match the account numbers and add the dollar amount to the current balance to create a new current balance. It is also supposed to print out a statement for account numbers that do not have a match, BUT I will try to get that myself first. I'm just trying to get it to do the other part of matching and adding. Thanks in advance for the help.
https://www.daniweb.com/programming/software-development/threads/386238/comparing-two-files
The values you choose for your model's hyperparameters can make all the difference. If you're only trying to tune a handful of hyperparameters, you may be able to run experiments manually. However, with deep learning models, where you're often juggling hyperparameters for the architecture and the optimizer, and hunting for the best batch size and learning rate, automating these experiments at scale quickly becomes a necessity.

In this article, we'll walk through an example of how to run a hyperparameter tuning job with Vertex Training to discover optimal hyperparameter values for an ML model. To speed up the training process, we'll also leverage the tf.distribute Python module to distribute code across multiple GPUs on a single machine. The complete code for this tutorial can be found in this notebook.

To use the hyperparameter tuning service, you'll need to define the hyperparameters you want to tune in your training application code as well as in your custom training job request. In your training application code, you'll define a command-line argument for each hyperparameter, and use the value passed in those arguments to set the corresponding hyperparameter in your code. You'll also need to report the metric you want to optimize to Vertex AI using the cloudml-hypertune Python package. The example provided uses TensorFlow, but you can use Vertex Training with a model written in PyTorch, XGBoost, or any other framework of your choice.

Using the tf.distribute.Strategy API

If you have a single GPU, TensorFlow will use this accelerator to speed up model training with no additional work on your part.
However, if you want an additional boost from using multiple GPUs on a single machine, or multiple machines (each with potentially multiple GPUs), then you'll need to use tf.distribute, which is TensorFlow's module for running a computation across multiple devices.

The easiest way to get started with distributed training is a single machine with multiple GPU devices. A TensorFlow distribution strategy from the tf.distribute.Strategy API will manage the coordination of data distribution and gradient updates across all GPUs. tf.distribute.MirroredStrategy is a synchronous data parallelism strategy that you can use with only a few code changes. This strategy creates a copy of the model on each GPU on your machine. The subsequent gradient updates happen synchronously: each worker device computes the forward and backward passes through the model on a different slice of the input data, the computed gradients from each of these slices are then aggregated across all of the GPUs and reduced in a process known as all-reduce, and the optimizer then performs the parameter updates with these reduced gradients, thereby keeping the devices in sync.

The first step in using the tf.distribute.Strategy API is to create the strategy object:

strategy = tf.distribute.MirroredStrategy()

Next, you need to wrap the creation of your model variables within the scope of the strategy. This step is crucial because it tells MirroredStrategy which variables to mirror across your GPU devices. Finally, you'll scale your batch size by the number of GPUs. When you do distributed training with the tf.distribute.Strategy API and tf.data, the batch size now refers to the global batch size.
In other words, if you pass a batch size of 16 and you have two GPUs, then each device will process eight examples per step. In this case, 16 is known as the global batch size, and eight as the per-replica batch size. To make the most of your GPUs, you'll want to scale the batch size by the number of replicas:

GLOBAL_BATCH_SIZE = PER_REPLICA_BATCH_SIZE * strategy.num_replicas_in_sync

Note that distributing the code is optional. You can still use the hyperparameter tuning service by following the steps in the next section even if you don't want to use multiple GPUs.

Update training code for hyperparameter tuning

To use hyperparameter tuning with Vertex Training, there are two changes you'll need to make to your training code. First, you'll need to define a command-line argument in your main training module for each hyperparameter you want to tune. You'll then use the value passed in those arguments to set the corresponding hyperparameter in your application's code.

Let's say we wanted to tune the learning rate, the optimizer momentum value, and the number of neurons in the model's final hidden layer. You can use argparse to parse the command-line arguments as shown in the function below. You can pick whatever names you like for these arguments, but you need to use the value passed in them to set the corresponding hyperparameter in your application's code. For example, your optimizer might look like:

Now that we know what hyperparameters we want to tune, we need to determine the metric to optimize. After the hyperparameter tuning service runs multiple trials, the hyperparameter values we pick for our model will be the combination that maximizes (or minimizes) the chosen metric.
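The command-line argument handling described above can be sketched with plain argparse. The argument names and defaults here are illustrative, not taken from the article's notebook:

```python
import argparse

def get_args():
    """Parse the hyperparameters the tuning service passes on the command line."""
    parser = argparse.ArgumentParser()
    parser.add_argument('--learning_rate', type=float, default=0.01,
                        help='learning rate for the optimizer')
    parser.add_argument('--momentum', type=float, default=0.9,
                        help='optimizer momentum value')
    parser.add_argument('--num_units', type=int, default=64,
                        help='neurons in the final hidden layer')
    return parser.parse_args()

# The parsed values then configure the model, e.g. in TensorFlow:
#   args = get_args()
#   optimizer = tf.keras.optimizers.SGD(learning_rate=args.learning_rate,
#                                       momentum=args.momentum)
```

Each trial the service launches invokes your container with different values for these flags, so everything the tuner varies must flow through them.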
We can report this metric with the help of the cloudml-hypertune library, which you can use with any framework:

import hypertune

In TensorFlow, the Keras model.fit method returns a History object. The History.history attribute is a record of training loss values and metrics values at successive epochs. If you pass validation data to model.fit, the History.history attribute will include validation loss and metrics values as well. For example, if you trained a model for three epochs with validation data and provided accuracy as a metric, the History.history attribute would look similar to the following dictionary.

To select the values for learning rate, momentum, and number of units that maximize the validation accuracy, we'll define our metric as the last entry (or NUM_EPOCHS - 1) of the 'val_accuracy' list. Then, we pass this metric to an instance of HyperTune, which will report the value to Vertex AI at the end of each training run. And that's it! With these two easy steps, your training application is ready.

Launch the hyperparameter tuning job

Once you've modified your training application code, you can launch the hyperparameter tuning job. This example demonstrates how to launch the job with the Python SDK, but you can also use the Cloud console UI. You'll need to make sure that your training application code is packaged up as a custom container. If you're unfamiliar with how to do that, refer to this tutorial for detailed instructions.

In a notebook, create a new Python 3 notebook from the launcher. In your notebook, run the following in a cell to install the Vertex AI SDK. Once the cell finishes, restart the kernel.
!pip3 install google-cloud-aiplatform --upgrade --user

After restarting the kernel, import the SDK. To launch the hyperparameter tuning job, you first need to define the worker_pool_specs, which specifies the machine type and Docker image. The following spec defines one machine with two NVIDIA T4 Tensor Core GPUs.

Next, define the parameter_spec, which is a dictionary specifying the parameters you want to optimize. The dictionary key is the string you assigned to the command-line argument for each hyperparameter, and the dictionary value is the parameter specification. For each hyperparameter, you need to define the Type as well as the bounds for the values that the tuning service will try. If you select the type Double or Integer, you'll need to provide a minimum and maximum value. And if you select Categorical or Discrete, you'll need to provide the values. For the Double and Integer types, you'll also need to provide the Scaling value. You can learn more about how to pick the best scale in this video.

The final spec to define is metric_spec, which is a dictionary representing the metric to optimize. The dictionary key is the hyperparameter_metric_tag that you set in your training application code, and the value is the optimization goal:

metric_spec = {'accuracy': 'maximize'}

Once the specs are defined, you'll create a CustomJob, which is the common spec that will be used to run your job on each of the hyperparameter tuning trials. You'll need to replace the staging bucket placeholder with a bucket in your project. Finally, create and run the HyperparameterTuningJob. There are a few arguments to note:

max_trial_count: You'll need to put an upper bound on the number of trials the service will run.
More trials generally leads to better results, but there will be a point of diminishing returns, after which additional trials have little or no effect on the metric you're trying to optimize. It's a best practice to start with a smaller number of trials and get a sense of how impactful your chosen hyperparameters are before scaling up.

parallel_trial_count: If you use parallel trials, the service provisions multiple training processing clusters. The worker pool spec that you specify when creating the job is used for each individual training cluster. Increasing the number of parallel trials reduces the amount of time the hyperparameter tuning job takes to run; however, it can reduce the effectiveness of the job overall. That's because the default tuning strategy uses the results of previous trials to inform the assignment of values in subsequent trials.

search_algorithm: You can set the search algorithm to grid, random, or default (None). Grid search will exhaustively search through the hyperparameters, but is not feasible in high-dimensional space. Random search samples the search space randomly; its downside is that it doesn't use information from prior experiments to select the next setting. The default option applies Bayesian optimization to search the space of possible hyperparameter values and is the recommended algorithm. If you want to learn more about the details of how this Bayesian optimization works, check out this blog.

Once the job kicks off, you'll be able to track the status in the UI under the HYPERPARAMETER TUNING JOBS tab. When it's finished, you can click on the job name and see the results of the tuning trials.

What's next

You now know the basics of how to use hyperparameter tuning with Vertex Training.
If you want to try a working example from start to finish, check out this tutorial. Or if you'd like to learn about multi-worker training on Vertex, see this tutorial. It's time to run some experiments of your own!
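To recap the two training-code changes, here is a self-contained sketch. The history values are made up for illustration, and the hypertune call is guarded so the snippet runs even without the cloudml-hypertune package installed:

```python
# 1. After model.fit(...), History.history looks something like this
#    for a 3-epoch run with validation data (values illustrative):
history = {
    'loss': [0.52, 0.31, 0.24],
    'accuracy': [0.81, 0.90, 0.93],
    'val_loss': [0.48, 0.35, 0.30],
    'val_accuracy': [0.80, 0.88, 0.91],
}

# 2. The metric to optimize is the final validation accuracy:
hp_metric = history['val_accuracy'][-1]

try:
    import hypertune  # from the cloudml-hypertune package
    hpt = hypertune.HyperTune()
    hpt.report_hyperparameter_tuning_metric(
        hyperparameter_metric_tag='accuracy',   # must match metric_spec
        metric_value=hp_metric,
        global_step=len(history['val_accuracy']))
except ImportError:
    pass  # not running under the tuning service

# The job request then optimizes that same tag:
metric_spec = {'accuracy': 'maximize'}
```

The key linkage is the tag string: the `hyperparameter_metric_tag` reported from the training code and the key of `metric_spec` in the job request must be identical, or the service has no metric to optimize.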
https://cloudsviewer.com/2021/08/04/distributed-training-and-hyperparameter-tuning-with-tensorflow-on-vertex-ai/
You’re building a web application, and you need some realistic information to shove into it. You need to check that your validation functions work perfectly, and see that your product actually works. The only problem is, you can’t really use real-world data. There are just far too many legal and ethical considerations you need to make. Indeed, in some jurisdictions, there are specific legal obstacles to using real-world data in development environments.

Take, for instance, the UK. Here, there’s something called the Data Protection Act, 1998. It’s quite unambiguous about how companies are allowed to handle the data they retain: "Personal data shall be obtained only for one or more specified and lawful purposes, and shall not be further processed in any manner incompatible with that purpose or those purposes." Or, in other words, data can only be used within a context agreed with the person who has provided their data, albeit with a handful of exceptions. As a result, it’s often not possible to use personal data in a testing or development environment.

So, how do we get around this? Easy. We generate fake data. But what if you need to generate huge amounts of realistic data? Thankfully, there are a number of libraries called Faker which programmatically create dummy personal information, including names, email addresses and phone numbers. In this article, I’m going to show you how to use these libraries within a number of popular languages, including Ruby, Perl, Python and JavaScript.

Ruby

I’m a big Ruby fan. There’s a lot to love with this language, including one of the best package managers out there, a friendly and welcoming developer community and a healthy ecosystem of third-party libraries. It’s also ludicrously easy to learn. To get your hands on the Faker library for Ruby, you will first need to make sure you have RubyGems installed. You can grab a binary for your development platform of choice on the official RubyGems website.
Then, install Faker from the command line:

gem install faker

You may need to install it as root. If so, run:

sudo gem install faker

And then fire up your favorite text editor. We’re now going to create some fake names!

require 'faker'
puts Faker::Name.name

So, we import the faker module, and then print out some names. When you run this, you should see something like this. Okay, let’s add some other stuff. We’re going to generate some (algorithmically valid) credit card numbers, an email address and a street address. Add the following lines:

puts Faker::Address.street_address
puts Faker::Business.credit_card_number
puts Faker::Internet.email

Run that again. You’ll see something like this.

Perl

Perl ain’t dead. No, sir-e. Whilst it’s hardly the hippest, trendiest language on the block right now, it still has its fans. Unsurprisingly, there’s a port of Faker for Perl. But how do you use it? Well, first you need to install it. I’m assuming you have Perl and CPAN installed. If not, install them. If you are using Windows, may I recommend you install Strawberry Perl, which is a mature, community supported implementation of Perl for Windows XP to 8.1. In a command prompt, run:

cpan Data::Faker

You may be prompted for your root password, so don’t walk away. Then, open up your favorite text editor and create a file called ‘data.pl’. Inside, add the following lines:

use Data::Faker;

my $faker = Data::Faker->new();
print $faker->name."\n";
print $faker->street_address."\n";
print $faker->email."\n";

This should make a fair bit of sense. We import the Data::Faker libraries, instantiate the Faker object and then print out a name, street address and email. You might notice we’re not creating credit card numbers here, however. That’s because the Perl port is significantly more limited than the Ruby port. When you run it, you should see something like this.

Python

Let’s move on to Python. I write about Python a lot, and it’s without a doubt my favorite language to code in. If you’re tempted to give it a try, check out this article from my colleague Joel Lee about sites where you can learn to program in Python. It also turns out that Faker has been ported to this awesome language. The Python port of Faker is unique with respect to how it allows you to create fake information specific to a locale. Here’s how you can use it.

Firstly, install Faker. On Python, it goes by the name of ‘fake-factory’. I’m assuming that you have a current install of pip and Python. If not, install them.

pip install fake-factory

And then open up a text editor and add the following lines:

from faker import Factory

fake = Factory.create()
print(fake.name())
print(fake.street_address())

Run it, and you’ll see this. Okay, but what about those other locales we discussed? Suppose we want to generate fake information that is specific to France? That’s easy. We just pass Factory.create() a corresponding ISO language code string. So, for French, we write:

fake = Factory.create('fr_FR')

Which (when executed) produces this: Cool, right?

Conclusion

Faker is a powerful tool for those building tools where they need access to realistic information, without breaking any data protection rules. Whilst support isn’t consistent (or complete) across all languages, it remains a pretty useful tool.
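If you are curious what these libraries do under the hood, the core idea is small enough to sketch with nothing but the standard library. This is not the Faker library itself, just a toy illustration; the name lists and the `example.com` domain are placeholders. Seeding the generator is the trick that makes dummy data reproducible across test runs:

```python
import random

# Minimal stdlib sketch of a faker-style generator (illustrative only).
FIRST = ['alice', 'bob', 'carol', 'dave']
LAST = ['smith', 'jones', 'nguyen', 'garcia']

def fake_person(rng):
    """Produce one dummy person record from a seeded random.Random."""
    first, last = rng.choice(FIRST), rng.choice(LAST)
    return {
        'name': f'{first} {last}',
        'email': f'{first}.{last}@example.com',
        'phone': ''.join(rng.choice('0123456789') for _ in range(10)),
    }

rng = random.Random(42)  # fixed seed => identical data every run
people = [fake_person(rng) for _ in range(3)]
```

The real Faker libraries layer locale-aware word lists and format rules (valid credit card checksums, locale-specific addresses) on top of exactly this kind of seeded generation.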
Whilst we discussed Faker within the context of Perl, Python and Ruby, it is also available for PHP and JavaScript, although it’s worth noting that the JavaScript port isn’t actually all that usable. The code for this article is available on my Github profile. As always, let me know your thoughts on this post and drop me a comment.

Did you figure out how to remove phone extensions from phone_number()?

After removing via 'pip uninstall fake-factory', I did a clean reinstall:

monte@machin-shin:~$ sudo pip install fake-factory
Downloading/unpacking fake-factory
Downloading fake-factory-0.4.0.tar.gz (244kB): 244kB downloaded
Running setup.py egg_info for package fake-factory
Installing collected packages: fake-factory
Running setup.py install for fake-factory
changing mode of build/scripts-3.3/faker from 644 to 755
changing mode of /usr/local/bin/faker to 755
Successfully installed fake-factory
Cleaning up...
monte@machin-shin:~$ python data.py
Traceback (most recent call last):
File "data.py", line 1, in
from faker import Factory
ImportError: No module named faker
monte@machin-shin:~$

...which is weird because it *was* installed and pip didn't throw any errors.

monte@machin-shin:~$ pip show fake-factory
---
Name: fake-factory
Version: 0.4.0
Location: /usr/local/lib/python3.3/dist-packages
Requires:
monte@machin-shin:~$

...okay, again, weird because I installed it using the 'system' python, which is 2.7.5 on Ubuntu 13.10, last I checked:

monte@machin-shin:~$ python
Python 2.7.5+ (default, Feb 27 2014, 19:37:08)
[GCC 4.8.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>>

So...
tried running the program using python3 as the interpreter instead of 2.x:

monte@machin-shin:~$ python3 data.py
Virginie de la Moreau
7, boulevard Claude Baron
monte@machin-shin:~$

I think the reason it didn't work at *all* yesterday was that when I wrote my 'test' script using the template you provided in the article, I included '#!/usr/bin/env python' at the beginning of the file without really thinking much about it, and was running it as an executable. I am kind of confused as to why it installed under python 3.3 when I ran pip from python 2.7?

That's really, really, really weird. I have no idea what brought that on. Thanks for your thorough and detailed comment though. Our readers might well benefit from it. Cheers!

Weird. Did Pip throw any errors? Do me a favor. Run the example from my Github and tell me what you see.

Did the 'pip install fake-factory' bit... but can't import faker:

monte@machin-shin:~$ ./fake-data.py
Traceback (most recent call last):
File "./fake-data.py", line 3, in
from faker import Factory
ImportError: No module named faker
monte@machin-shin:~$

generatedata is a useful resource, and it's available to install on your own. Those are good things. A library written in your primary development language, directly accessible and configurable, is preferable for somebody like me. I would expect it to also have a lower overhead than making a few thousand HTTP requests - whether local or remote - but I could be wrong about that.

Exactly. Lower overhead, more scalable and generally just better. Thanks for the comment man!

Or you could simply use You can. But damn, that's one long-winded process. Also, sometimes you need to generate dummy data as part of a fixtures script. It's just way easier to use a faker library.
https://www.makeuseof.com/tag/generate-dummy-data-python-ruby-perl/
Five-minute quickstart

In this quickstart, you will:

- Add a simple workflow to the central database via the command line
- Run that workflow
- Monitor your job status with the FireWorks database
- Get a flavor of the Python API

This tutorial will emphasize “hands-on” usage of FireWorks via the command line and not explain things in detail.

Start FireWorks

Reset/Initialize the FireWorks database (the LaunchPad):

lpad reset

Note: All FireWorks commands come with built-in help. For example, type lpad -h or lpad reset -h. There often exist many different options for each command.

Add a Workflow

There are many ways to add Workflows to the database. You can do it directly from the command line as:

lpad add_scripts 'echo "hello"' 'echo "goodbye"' -n hello goodbye -w test_workflow

Output:

2013-10-03 13:51:19,991 INFO Added a workflow. id_map: {0: 1, 1: 2}

This added a two-job linear workflow. The first job prints hello to the command line, and the second job prints goodbye. We gave names (optional) to each step as “hello” and “goodbye”. We named the workflow overall (optional) as “test_workflow”. Let’s look at our test workflow:

lpad get_wflows -n test_workflow -d more

Output:

{
    "name": "test_workflow",
    "state": "READY",
    "states": {
        "hello--1": "READY",
        "goodbye--2": "WAITING"
    },
    "created_on": "2014-02-10T22:10:27.024000",
    "launch_dirs": {
        "hello--1": [],
        "goodbye--2": []
    },
    "updated_on": "2014-02-10T22:10:27.029000"
}

We get back basic information on our workflows. The second step “goodbye” is waiting for the first one to complete; it is not ready to run because it depends on the first job.

Run all Workflows

You can run jobs one at a time (“singleshot”) or all at once (“rapidfire”). Let’s run all jobs:

rlaunch --silencer rapidfire

Output:

hello
goodbye

Clearly, both steps of our workflow ran in the correct order.
Let’s again look at our workflows:

lpad get_wflows -n test_workflow -d more

Output:

{
    "name": "test_workflow",
    "state": "COMPLETED",
    "states": {
        "hello--1": "COMPLETED",
        "goodbye--2": "COMPLETED"
    },
    "created_on": "2014-02-10T22:18:50.923000",
    "launch_dirs": {
        "hello--1": [
            "/Users/ajain/Documents/code_matgen/fireworks/launcher_2014-02-10-22-18-50-679233"
        ],
        "goodbye--2": [
            "/Users/ajain/Documents/code_matgen/fireworks/launcher_2014-02-10-22-18-50-868852"
        ]
    },
    "updated_on": "2014-02-10T22:18:50.923000"
}

FireWorks automatically created launcher_* directories for each step in the Workflow and ran them. We see that both steps are complete. Note that there exist options to choose where to run jobs, as well as to tear down empty directories after running jobs.

Launch the web GUI

If you have a web browser, you can launch the web GUI to see your results:

lpad webgui

Note that there are options to run the web site in a server mode.

Python code

The following Python code achieves the same behavior:

from fireworks import Firework, Workflow, LaunchPad, ScriptTask
from fireworks.core.rocket_launcher import rapidfire

# set up the LaunchPad and reset it
launchpad = LaunchPad()
launchpad.reset('', require_password=False)

# create the individual FireWorks and Workflow
fw1 = Firework(ScriptTask.from_str('echo "hello"'), name="hello")
fw2 = Firework(ScriptTask.from_str('echo "goodbye"'), name="goodbye")
wf = Workflow([fw1, fw2], {fw1: fw2}, name="test workflow")

# store workflow and launch it locally
launchpad.add_wf(wf)
rapidfire(launchpad)

In the code above, the {fw1: fw2} argument to Workflow is adding a dependency of fw2 to fw1.
You could instead define this dependency when defining your FireWorks:

fw1 = Firework(ScriptTask.from_str('echo "hello"'), name="hello")
fw2 = Firework(ScriptTask.from_str('echo "goodbye"'), name="goodbye", parents=[fw1])
wf = Workflow([fw1, fw2], name="test workflow")

Next steps

Now that you’ve successfully gotten things running, we encourage you to learn about all the different options FireWorks provides for designing, managing, running, and monitoring workflows. A good next step is the Introductory tutorial, which takes things more slowly than this quickstart.
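The parent-to-child links dict (like the {fw1: fw2} argument above) is what fixes the run order: a step becomes READY only once all of its parents have COMPLETED. This is not FireWorks itself, just a stdlib sketch of how such a mapping determines a valid execution order:

```python
from collections import deque

# Toy topological ordering over a parent -> child(ren) links dict,
# mirroring how a workflow engine decides which steps are runnable.
def run_order(links, all_nodes):
    """Return an order where every node appears after all its parents."""
    children = {n: [] for n in all_nodes}
    pending_parents = {n: 0 for n in all_nodes}
    for parent, kids in links.items():
        kids = kids if isinstance(kids, list) else [kids]
        for kid in kids:
            children[parent].append(kid)
            pending_parents[kid] += 1

    ready = deque(n for n in all_nodes if pending_parents[n] == 0)
    order = []
    while ready:
        node = ready.popleft()          # "launch" a READY step
        order.append(node)
        for kid in children[node]:      # completing it may free children
            pending_parents[kid] -= 1
            if pending_parents[kid] == 0:
                ready.append(kid)
    return order

print(run_order({'hello': 'goodbye'}, ['hello', 'goodbye']))
# -> ['hello', 'goodbye']: "goodbye" stays WAITING until "hello" completes
```

The same dict shape also expresses branching, e.g. {'a': ['b', 'c'], 'b': 'd', 'c': 'd'} for a diamond where d waits on both b and c.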
https://pythonhosted.org/FireWorks/quickstart.html
The DOMNode interface is the primary datatype for the entire Document Object Model. It represents a single node in the document tree. While all objects implementing the DOMNode interface expose methods for dealing with children, not all objects implementing the DOMNode interface may have children. For example, DOMText nodes may not have children. In cases where there is no obvious mapping of the generic attributes for a specific node type (e.g., nodeValue for a DOMElement or attributes for a DOMComment), this returns null. Note that the specialized interfaces may contain additional and more convenient mechanisms to get and set the relevant information. The values of nodeName, nodeValue, and attributes vary according to the node type as follows: see the table above. See also the Document Object Model (DOM) Level 2 Core Specification.

NodeType: an enumeration of the possible node types.

DocumentPosition: a bitmask describing the position of a node relative to a reference node:

- DOCUMENT_POSITION_CONTAINED_BY: The node is contained by the reference node. A node which is contained is always following, too.
- DOCUMENT_POSITION_CONTAINS: The node contains the reference node. A node which contains is always preceding, too.
- DOCUMENT_POSITION_DISCONNECTED: The two nodes are disconnected. Order between disconnected nodes is always implementation-specific.
- DOCUMENT_POSITION_FOLLOWING: The node follows the reference node.
- DOCUMENT_POSITION_IMPLEMENTATION_SPECIFIC: The determination of preceding versus following is implementation-specific.
- DOCUMENT_POSITION_PRECEDING: The second node precedes the reference node.

Destructor.

The name of this node, depending on its type; see the table above.

Gets the value of this node, depending on its type.

An enum value representing the type of the underlying object.

Gets the parent of this node. All nodes, except DOMDocument, DOMDocumentFragment, and DOMAttr may have a parent. However, if a node has just been created and not yet added to the tree, or if it has been removed from the tree, a null DOMNode is returned.

Gets a DOMNodeList that contains all children of this node. If there are no children, this is a DOMNodeList containing no nodes.
The content of the returned DOMNodeList is "live" in the sense that, for instance, changes to the children of the node object that it was created from are immediately reflected in the nodes returned by the DOMNodeList accessors; it is not a static snapshot of the content of the node. This is true for every DOMNodeList, including the ones returned by the getElementsByTagName method.

Gets the first child of this node. If there is no such node, this returns null.

Gets the last child of this node. If there is no such node, this returns null.

Gets the node immediately preceding this node. If there is no such node, this returns null.

Gets the node immediately following this node. If there is no such node, this returns null.

Gets a DOMNamedNodeMap containing the attributes of this node (if it is a DOMElement) or null otherwise.

Gets the DOMDocument object associated with this node. This is also the DOMDocument object used to create new nodes. When this node is a DOMDocument or a DOMDocumentType which is not used with any DOMDocument yet, this is null.

Returns a duplicate of this node. This function serves as a generic copy constructor for nodes. The duplicate node has no parent (parentNode returns null). Cloning a DOMElement copies all attributes and their values, including those generated by the XML processor to represent defaulted attributes, but this method does not copy any text it contains unless it is a deep clone, since the text is contained in a child DOMText node. Cloning any other type of node simply returns a copy of this node.

Inserts the node newChild before the existing child node refChild. If refChild is null, inserts newChild at the end of the list of children. If newChild is a DOMDocumentFragment object, all of its children are inserted, in the same order, before refChild. If the newChild is already in the tree, it is first removed. Note that a DOMNode that has never been assigned to refer to an actual node is == null.
Replaces the child node oldChild with newChild in the list of children, and returns the oldChild node. If newChild is a DOMDocumentFragment object, oldChild is replaced by all of the DOMDocumentFragment children, which are inserted in the same order. If the newChild is already in the tree, it is first removed.

Removes the child node indicated by oldChild from the list of children, and returns it.

Adds the node newChild to the end of the list of children of this node. If the newChild is already in the tree, it is first removed.

This is a convenience method to allow easy determination of whether a node has any children. Returns true if the node has any children, false if the node has no children.

Sets the value of the node. Any node which can have a nodeValue will also accept requests to set it to a string. The exact response to this varies from node to node -- Attribute, for example, stores its values in its children and has to replace them with a new Text holding the replacement value. For most types of node, value is null and attempting to set it will throw DOMException(NO_MODIFICATION_ALLOWED_ERR). This will also be thrown if the node is read-only.

Puts all DOMText nodes in the full depth of the sub-tree underneath this DOMNode, including attribute nodes, into a "normal" form where only markup (e.g., tags, comments, processing instructions, CDATA sections, and entity references) separates DOMText nodes, i.e., there are neither adjacent DOMText nodes nor empty DOMText nodes. In cases where the document contains CDATASections, the normalize operation alone may not be sufficient, since XPointers do not differentiate between DOMText nodes and DOMCDATASection nodes.

Tests whether the DOM implementation implements a specific feature and that feature is supported by this node. Returns true if the specified feature is supported on this node, false otherwise.

Gets the namespace URI of this node, or null if it is unspecified. For nodes created with a DOM Level 1 method, such as createElement from the DOMDocument interface, this is always null.

Gets the namespace prefix of this node, or null if it is unspecified.

Returns the local part of the qualified name of this node.
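The child-manipulation methods above follow the W3C DOM, so any conforming implementation behaves the same way. As a quick illustration using Python's stdlib xml.dom.minidom rather than Xerces-C++ (the element names here are arbitrary):

```python
from xml.dom.minidom import parseString

doc = parseString('<root><a/><b/></root>')
root = doc.documentElement

# appendChild adds at the end of the child list: a, b, c
c = doc.createElement('c')
root.appendChild(c)

# insertBefore places the new node before the reference node: x, a, b, c
root.insertBefore(doc.createElement('x'), root.firstChild)

# removeChild detaches and returns the child: removes 'a' -> x, b, c
root.removeChild(root.childNodes[1])

names = [n.tagName for n in root.childNodes]
print(names)  # -> ['x', 'b', 'c']

# cloneNode produces a duplicate with no parent, as the text describes
clone = root.cloneNode(True)
print(clone.parentNode)  # -> None
```

The same call sequence against a Xerces-C++ DOMNode tree yields the same child list, since both implement the interface contract documented here.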
For nodes created with a DOM Level 1 method, such as createElement from the DOMDocument interface, it is null.

Sets the namespace prefix of this node. Note that setting this attribute, when permitted, changes the nodeName attribute, which holds the qualified name, as well as the tagName and name attributes of the DOMElement and DOMAttr interfaces, when applicable. Note also that changing the prefix of an attribute that is known to have a default value does not make a new attribute with the default value and the original prefix appear, since the namespaceURI and localName do not change.

Returns whether this node (if it is an element) has any attributes. Returns true if this node has any attributes, false otherwise.

Returns whether this node is the same node as the given one. This method provides a way to determine whether two DOMNode references returned by the implementation reference the same object. When two DOMNode references are references to the same object, even if through a proxy, the references may be used completely interchangeably, such that all attributes have the same values and calling the same DOM method on either reference always has exactly the same effect. Returns true if the nodes are the same, false otherwise.

Tests whether two nodes are equal. This method tests for equality of nodes, not sameness (i.e., whether the two nodes are pointers to the same object), which can be tested with DOMNode::isSameNode. All nodes that are the same will also be equal, though the reverse may not be true. Two nodes are equal if and only if the following conditions are satisfied:

- The two nodes are of the same type.
- The following string attributes are equal: nodeName, localName, namespaceURI, prefix, nodeValue, baseURI. That is: they are both null, or they have the same length and are character for character identical.
- The attributes DOMNamedNodeMaps are equal.
That is: they are both null, or they have the same length and for each node that exists in one map there is a node that exists in the other map and is equal, although not necessarily at the same index. The childNodes DOMNodeLists are equal. That is: they are both null, or they have the same length and contain equal nodes at the same index. This is true for DOMAttr nodes as for any other type of node. Note that normalization can affect equality; to avoid this, nodes should be normalized before being compared. For two DOMDocumentType nodes to be equal, the following conditions must also be satisfied: The following string attributes are equal: publicId, systemId, internalSubset. The entities DOMNamedNodeMaps are equal. The notations DOMNamedNodeMaps are equal. On the other hand, the following do not affect equality: the ownerDocument attribute, the specified attribute for DOMAttr nodes, the isWhitespaceInElementContent attribute for DOMText nodes, as well as any user data or event listeners registered on the nodes. Returns true if the nodes are equal, false otherwise. Associate an object to a key on this node. The object can later be retrieved from this node by calling getUserData with the same key. Deletion of the user data remains the responsibility of the application program; it will not be automatically deleted when the nodes themselves are reclaimed. Both the parameter data and the returned object are void pointers; it is the application's responsibility to keep track of their original type. Casting them to the wrong type may result in unexpected behavior. Returns the object previously associated to the given key on this node, or null if there was none. Retrieves the object associated to a key on this node. The object must first have been set on this node by calling setUserData with the same key. Returns the void* associated to the given key on this node, or null if there was none. The absolute base URI of this node, or null if undefined. This value is computed according to the XML Base recommendation.
However, when the DOMDocument supports the feature "HTML", the base URI is computed using first the value of the href attribute of the HTML BASE element if any, and the value of the documentURI attribute from the DOMDocument interface otherwise. When the node is a DOMElement, a DOMDocument or a DOMProcessingInstruction, this attribute represents the [base URI] property. When the node is a DOMNotation, a DOMEntity, or a DOMEntityReference, this attribute represents the [declaration base URI] property. Compares the reference node, i.e. the node on which this method is being called, with a node, i.e. the one passed as a parameter, with regard to their position in the document and according to the document order. This attribute returns the text content of this node and its descendants. No serialization is performed; the returned string does not contain any markup. No whitespace normalization is performed and the returned string does not contain the white space in element content. The string returned is made of the text content of this node depending on its type, as defined below: On setting, this attribute removes any possible children this node may have and, if the new string is not empty or null, replaces them with a single DOMText node containing the string this attribute is set to. No parsing is performed; the input string is taken as pure textual content. Look up the prefix associated to the given namespace URI, starting from this node. The default namespace declarations are ignored by this method. Returns an associated namespace prefix, or null if none is found. If more than one prefix is associated to the namespace URI, the returned namespace prefix is implementation dependent. This method checks if the specified namespaceURI is the default namespace or not. Returns true if the specified namespaceURI is the default namespace, false otherwise. Look up the namespace URI associated to the given prefix, starting from this node. Returns the associated namespace URI, or null if none is found. This method makes available a DOMNode's specialized interface.
Returns a DOMNode which implements the specialized APIs of the specified feature, if any, or null if there is no alternate DOMNode which implements interfaces associated with that feature. Any alternate DOMNode returned by this method must delegate to the primary core DOMNode and not return results inconsistent with the primary core DOMNode, such as key, attributes, childNodes, etc. Called to indicate that this Node (and its associated children) is no longer in use and that the implementation may relinquish any resources associated with it and its associated children. If this is a document, any nodes it owns (created by DOMDocument::createXXXX()) are also released. Access to a released object will lead to unexpected results.
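The node-manipulation contracts described above (replaceChild returning the old node, appendChild, normalize merging adjacent text nodes) are the same in any DOM Level 3 implementation, so a compact, runnable illustration can be written against Java's built-in org.w3c.dom package rather than Xerces-C itself; the class name DomNodeDemo and the element names below are ours, not from the Xerces documentation.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;

public class DomNodeDemo {
    public static String demo() {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().newDocument();
            Element root = doc.createElement("root");
            doc.appendChild(root);

            // replaceChild swaps in newChild and returns the node it replaced
            Element oldChild = doc.createElement("old");
            root.appendChild(oldChild);
            Element newChild = doc.createElement("new");
            Node returned = root.replaceChild(newChild, oldChild);
            boolean sameAsOld = returned.isSameNode(oldChild);

            // normalize() merges the two adjacent text nodes into one
            root.appendChild(doc.createTextNode("a"));
            root.appendChild(doc.createTextNode("b"));
            root.normalize();

            return sameAsOld + ":" + root.getChildNodes().getLength()
                    + ":" + root.getTextContent();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(demo()); // true:2:ab
    }
}
```

After normalize() the root has two children (the "new" element and one merged text node "ab"), which is exactly the "normal form" the documentation describes.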
http://xerces.apache.org/xerces-c/apiDocs-3/classDOMNode.html
CC-MAIN-2013-48
refinedweb
2,004
64.71
PluginMessage and pycobject [SOLVED] On 13/10/2014 at 07:17, xxxxxxxx wrote: I am using c4d.GePluginMessage() to send a message to another plugin's PluginMessage(). It is working for the id. I do receive the correct id, but getting the data is more difficult. Data in PluginMessage() is a PyCObject and I do not know how to set and get parameters from a PyCObject. In Plugin nr 2 (sending):

c4d.GePluginMessage(PLUGIN_ID1, None)

In plugin nr 1 (receiving):

def PluginMessage(id, data):
    print "Received in NR1 - id, data: ", id, data
    return True

On 14/10/2014 at 07:53, xxxxxxxx wrote: Or is there another way to communicate and pass data between plugins (not using shared memory)? On 14/10/2014 at 08:49, xxxxxxxx wrote: Hello, Another way may be to use SpecialEventAdd. Both ways use raw memory in the form of a PyCObject. You can find some examples on that in these threads: best wishes, Sebastian On 14/10/2014 at 10:08, xxxxxxxx wrote: Can someone please make a working example of this rather than posting bits and pieces? Everything works as expected in C++ for me. I can send and receive casted object data using a custom message. Or using the GePluginMessage() function. But I cannot get any of these things to work in Python. All I need to see is a very simple GeDialog plugin with one button in it that sends a message with a value in it. Any value is fine. But I'd prefer the value be based on the selected object. And a very simple tag plugin that just receives the message and gets the data. All these little bits and pieces spread over the forums are too confusing. And missing important pieces. Thanks, -ScottA On 14/10/2014 at 12:23, xxxxxxxx wrote: Here is a working example, thanks to Sebastian and the other posters. One small issue: when the input value (the value to send, taken from the input slider) that is sent as p2 is zero (p2=0), I get the error: TypeError: PyCObject_AsVoidPtr with non-Cobject. The code below is for two command plugins nr1 and nr2.
Plugin nr2 sends a SpecialEvent to nr1, where it is received using CoreMessage and output into the dialog. I made it in R15. If you want the sources, send me a pm. -Pim

Plugin nr2:

import c4d
from c4d import gui, plugins, utils, bitmaps
import os
import sys

class MyDialog(gui.GeDialog):
    def CreateLayout(self):
        self.AddStaticText(id=1024, flags=c4d.BFH_LEFT, name="Value to send", borderstyle=c4d.BORDER_NONE)
        self.AddEditSlider(id=1025, flags=c4d.BFH_SCALEFIT, initw=70)
        self.AddButton(1026, c4d.BFV_MASK, initw=100, name="Send to nr1")
        return True

    def Command(self, id, msg):
        if id == 1026:
            print "self.GetInt32(1025): ", self.GetInt32(1025)
            c4d.SpecialEventAdd(PLUGIN_ID1, p1=PLUGIN_ID1, p2=self.GetInt32(1025))
            return True
        return True

class PluginNr2(plugins.CommandData):
    dialog = None

    def Execute(self, doc):
        # create the dialog
        if self.dialog is None:
            self.dialog = MyDialog()
        return self.dialog.Open(dlgtype=c4d.DLG_TYPE_ASYNC, pluginid=PLUGIN_ID2, defaultw=200, defaulth=150, xpos=-1, ypos=-1)

    def RestoreLayout(self, sec_ref):
        # manage the dialog
        if self.dialog is None:
            self.dialog = MyDialog()
        return self.dialog.Restore(pluginid=PLUGIN_ID2, secret=sec_ref)

bmp = bitmaps.BaseBitmap()
plugins.RegisterCommandPlugin(PLUGIN_ID2, "Plugin r15 nr2", 0, bmp, "Plugin r15 nr2", PluginNr2())

Plugin nr1:

import c4d
from c4d import gui, plugins, utils, bitmaps
import os
from ctypes import *

class MyDialog(gui.GeDialog):
    def CreateLayout(self):
        self.AddStaticText(id=1024, flags=c4d.BFH_LEFT, name="Value received", borderstyle=c4d.BORDER_NONE)
        self.AddEditNumber(id=1025, flags=c4d.BFH_SCALEFIT, initw=70)
        return True

    def Command(self, id, msg):
        return True

    def CoreMessage(self, id, msg):
        if id == PLUGIN_ID1:
            P1MSG_UN = msg.GetVoid(c4d.BFM_CORE_PAR1)
            P2MSG_UN = msg.GetVoid(c4d.BFM_CORE_PAR2)
            pythonapi.PyCObject_AsVoidPtr.restype = c_void_p
            pythonapi.PyCObject_AsVoidPtr.argtypes = [py_object]
            P1MSG_EN = pythonapi.PyCObject_AsVoidPtr(P1MSG_UN)
            P2MSG_EN = pythonapi.PyCObject_AsVoidPtr(P2MSG_UN)
            print "Data received CoreMessage in MyDialog: ", P1MSG_EN, P2MSG_EN
            self.SetInt32(1025, P2MSG_EN)
        return True

class PluginNr1(plugins.CommandData):
    dialog = None

    def Execute(self, doc):
        # create the dialog
        if self.dialog is None:
            self.dialog = MyDialog()
        return self.dialog.Open(dlgtype=c4d.DLG_TYPE_ASYNC, pluginid=PLUGIN_ID1, defaultw=200, defaulth=150, xpos=-1, ypos=-1)

    def RestoreLayout(self, sec_ref):
        # manage the dialog
        if self.dialog is None:
            self.dialog = MyDialog()
        return self.dialog.Restore(pluginid=PLUGIN_ID1, secret=sec_ref)

bmp = bitmaps.BaseBitmap()
plugins.RegisterCommandPlugin(PLUGIN_ID1, "Plugin r15 nr1", 0, bmp, "Plugin r15 nr1", PluginNr1())

On 14/10/2014 at 14:18, xxxxxxxx wrote: Thank you Pim. That helps me out a lot. -ScottA On 15/10/2014 at 00:59, xxxxxxxx wrote: Ok, I'm glad to do something in return. On 15/10/2014 at 07:52, xxxxxxxx wrote: The code only handles integers > 0. When the value received was 0, it gave an error (perhaps because 0 = null pointer?). Here is some code to handle positive and negative integers and 0 itself. I still do not know how to send strings, because SpecialEventAdd requires two integers, P1 and P2.

# c_int: handle integer values, negative and positive
P2MSG_UN = msg.GetVoid(c4d.BFM_CORE_PAR2)
pythonapi.PyCObject_AsVoidPtr.restype = c_int  # was c_void_p
pythonapi.PyCObject_AsVoidPtr.argtypes = [py_object]
try:  # handle value of 0
    P2MSG_EN = pythonapi.PyCObject_AsVoidPtr(P2MSG_UN)
except TypeError:
    P2MSG_EN = 0

Note: Who put [solved] in the post header? On 15/10/2014 at 08:02, xxxxxxxx wrote: Sorry, I thought this was solved. I will remove the "Solved". Best wishes, Sebastian On 16/10/2014 at 07:31, xxxxxxxx wrote: Hello, there may be a way to avoid any handling of a "PyCObject" object. You can set a value in Cinema's WorldContainer and then send your message. When your target objects catch that message they find the proper values again in the WorldContainer:

Sending the message:

worldContainer = c4d.GetWorldContainerInstance()
worldContainer.SetString(1000100, "This is just some string!")
c4d.GePluginMessage(1000101, 0)

Receiving the message:

def PluginMessage(id, data):
    if id == 1000101:
        worldContainer = c4d.GetWorldContainerInstance()
        print("message received: " + worldContainer.GetString(1000100))
        return True
    return False

best wishes, Sebastian On 16/10/2014 at 08:08, xxxxxxxx wrote: Great, I'll give it a test. I also see that in R16 you now have the Cast function. Perhaps we can use it to cast a PyCObject to a ctypes type.
On 16/10/2014 at 09:18, xxxxxxxx wrote: *Please don't tell people to put things in the World Container!!!!!!!! That will store data in there permanently!! I do not want plugin makers adding things to my WC without my permission!! This is a bad, bad, bad practice. The WC should only be used to add preference options that will get deleted when the plugin is removed. NEVER STORE DATA IN THE WORLD CONTAINER!!!...EVER!! Is that forceful enough? Should I use bigger letters? Please guys. Don't do it...just don't. Think about what would happen if everyone dumps stuff into your WC. Your WC will become so full of crap that you don't even know what is in there. Does anyone know if there's a way I can lock my WC, to prevent people from adding stuff to it that I don't want? -ScottA On 16/10/2014 at 12:10, xxxxxxxx wrote: It's just preferences that are stored in the WC. You do want preferences of your installed plugins to be saved, don't you? It's true that the stored data won't be deleted if the plugin is removed, and there could be a better system, but it's not like you store a tremendous amount of data in it. On 16/10/2014 at 13:49, xxxxxxxx wrote: It's fine to store preferences data in there. As long as it shows up in the preferences palette. That way we can at least visually see that someone added something to it. But it should stop there and go no further. Anyone that puts preference data in there MUST have a remove mechanism in place to remove that data when the plugin is removed. And you just know that people will either forget this...or do it wrong. And the data will sit there forever. If people start using this as a quick and dirty way to store data by creating custom containers with data in there, then this quickly turns into a garbage dump on my computer. Eventually people will use the same container IDs as another person. And then there will be bugs and crashing. And people won't know why. This is just a really, really bad idea all around. And it should not be used or promoted.
Dumping hidden permanent data on someone's computer is never ever acceptable! EVER!!! -ScottA On 17/10/2014 at 07:12, xxxxxxxx wrote: Hello, Indeed, the above idea has potential risks and should only be applied when absolutely necessary. A unique ID must be used to avoid a collision. The suggested solution is a workaround because we haven't thought of anything better. While it is a workaround, it is a solution for a feature that is currently missing. There is no such rule that you are not allowed to store into the world container. Anybody with a viable reason may do so, as long as a valid registered ID is used to store the data. To make sure that the WorldContainer is "clean" after your operation you may use RemoveData(). best wishes, Sebastian On 17/10/2014 at 07:58, xxxxxxxx wrote: Thank you for backing me up on this Sebastian. -ScottA
https://plugincafe.maxon.net/topic/8219/10712_pluginmessage-and-pycobject-solved
CC-MAIN-2020-40
Creating a GUI Program in Java You will use the NetBeans GUI program creator to build a simple GUI program for your birthday converter. Your GUI program will have three input fields, an output field, and a Show button and an Exit button. 1. Create a new Java project (start NetBeans, click File, choose New Project, select Java, select Java Application, and click Next). 2. Name this project JavaGUIDate. You will next create a simple GUI class using the NetBeans GUI generator: 3. Right-click your package name (JavaGUIDate) in the Source Packages folder (in the Projects panel), choose New, and then choose JPanel Form: 4. You see a New JPanel Form dialog. Name this panel class JPanelDate: 5. Click Finish 6. You see a blank window with a set of tools under the Palette: 7. Scroll down until you see the Label control under the Palette area. 8. Drag the Label control onto the empty box and position it where you want the title. Under the Properties box (below the Palette area) is a list of properties for the selected control. These include the label font and the label text. Clicking the small button next to the property allows you to set the property. Those properties you can set are in bold. 9. Make sure the label is selected. Locate the Font property in the Properties window and click the small button to the right of the Font property. 10. Choose a new font and font size. 11. Locate the Text property and click the button. 12. Type the new text line "Show Birthday Day" and click OK. You may have to widen the label to see all the text. 13. Locate the Text Field under the Palette area and drag a text field into the blank window. Onscreen guides will help you position the component. 14. Repeat this step and drag two more text fields into the window. 15. Drag three small labels until they are next to the text fields. The screen should look something like: You will next change the text for the labels. 16.
Click each label, locate the Text property, and change the label text to Year, Month, Day. 17. Drag a label under the three text fields and size it so it extends across the text area. Change the text in this label to "Day of Week". Set the horizontal alignment of this label to Center. 18. Drag two buttons from the Palette window onto the form and place them below the Day of Week label. 19. Select each text field and click the Text property. Delete the text inside the text field. The field will compress. You will have to expand the text field by dragging it wider. 20. Set the text of one button to Show and the other to Exit. The finished form should look like: The controls that you will modify in your program must be given a name. You will next set the control code names so you can access the controls when you run the program. 21. For each of the text fields, click the field and then click the Code button under the Properties window. Locate the variable name and set the name to what the field will contain (jYear, jMonth, jDay). 22. Click the large label with the text Day of Week and click the Code button. You will have to give this field a name as well. Name this field jDayOutput. 23. Name the two buttons jShowButton and jExitButton. You will next set the code for this program. 1. Click the Source tab in the window. This displays the source code being created for your form. 2. Scroll to the top of the source code and, just below the package name javaguidate, enter: import java.util.*; 3. Switch to the Design view (the second tab button at the top of the window) and double-click the Show button. This opens the code window and places the cursor at the location where code can be entered. The method should be named jShowButtonActionPerformed(…). 4.
Type the following lines of code:

// TODO add your handling code here:
String strdays[] = {"Sunday","Monday","Tuesday",
    "Wednesday","Thursday","Friday","Saturday"};
int yearval = Integer.parseInt(jYear.getText());
int monthval = Integer.parseInt(jMonth.getText()) - 1;
int dayval = Integer.parseInt(jDay.getText());
GregorianCalendar cal = new GregorianCalendar(yearval, monthval, dayval);
int dayofweek = cal.get(GregorianCalendar.DAY_OF_WEEK);
dayofweek--;
jDayOutput.setText(strdays[dayofweek]);

5. Save the JPanel. 6. Switch back to the Main program. 7. At the top of the Main class, just below the package statement, include the following:

import javax.swing.*;

8. In the main() method type the following:

// TODO code application logic here
JFrame f1 = new JFrame("Date");
f1.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
JPanelDate p1 = new JPanelDate();
f1.getContentPane().add(p1);
f1.setSize(500,300);
f1.setVisible(true);

9. Run this program. You should see a GUI
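The day-of-week logic in the Show button handler can be exercised outside the GUI. This sketch (the class name DayOfWeekDemo is ours) isolates the same two off-by-one adjustments -- GregorianCalendar months are 0-based, and DAY_OF_WEEK runs from 1 (Sunday) to 7 (Saturday):

```java
import java.util.GregorianCalendar;

public class DayOfWeekDemo {
    static final String[] STRDAYS = {"Sunday", "Monday", "Tuesday",
            "Wednesday", "Thursday", "Friday", "Saturday"};

    // Same calculation as jShowButtonActionPerformed, minus the text fields.
    public static String dayName(int year, int month, int day) {
        // month - 1: GregorianCalendar counts months from 0 (January)
        GregorianCalendar cal = new GregorianCalendar(year, month - 1, day);
        // DAY_OF_WEEK - 1: convert the 1..7 constant to a 0..6 array index
        return STRDAYS[cal.get(GregorianCalendar.DAY_OF_WEEK) - 1];
    }

    public static void main(String[] args) {
        System.out.println(dayName(2000, 1, 1)); // 1 January 2000 was a Saturday
    }
}
```

Forgetting either of the two subtractions is the most common bug in this exercise; testing the logic in isolation like this catches it before the GUI is wired up.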
https://www.techylib.com/el/view/bravesnails/creating_a_gui_program_in_java
CC-MAIN-2017-51
<< Back to Course Outline Contents Introduction We've covered a lot of the mechanics of how to get things into your applet, and how to interact with it. Next week, we'll be taking a look at how you might approach a program of your own - the example given is a 'snake' style game. There are a few last things you need to know about - the icing on the cake, really - in order to produce well-presented applets. These can be broadly categorized as multimedia features - namely: sound, pictures, and animation. You may be surprised at how easily some of these facilities can be incorporated into your applet. Before we start, though, let's quickly summarize some of what we learned in the last session. Session 6 Summary In Session 6, we covered: The 2D API - What it is - drawing on the screen or printer - and what it means Graphics2D Class - How this works, and how to use it with polymorphism Co-Ordinates - The co-ordinate system that Java's graphics use Drawing Shapes - A look at how to draw shapes such as lines, rectangles, ellipses, and text Transformations - A look at translations, rotations, and shearing of shapes. Line and Fill Attributes - How to alter the look and colour of lines and shapes Colours - How to use colours from a Java Applet. The Color class. Session 7 Overview In session 7, we will be looking at:- Introduction Summary from Session 6 Session 7 overview Using pictures Playing sound and music Concurrency and Timers Animation techniques Scrolling banner example Summary and Further Sources Using Pictures There are a number of different ways you can work with images in Java. At the most sophisticated level, you can load images, transform them using predefined (or your own) filters, and write them to the screen. Alternatively, you can easily make use of icons within your components. However, we will be looking at the simplest way of using images, which is to load them directly into our applet, and paint them to the screen.
Furthermore, we will make use of the way of working we employed in the previous session, when drawing shapes to the screen - by utilising the paint() method of the JApplet. Images can be held in a class called Image. So to define an object where an image is to be held, we would type something like:- Image greenBall; ... assuming we were going to create a new image, and call it greenBall - you can guess what the image is going to be! In order to be able to access this image throughout our class, we would define the attribute before defining the init() method, but after we have declared the class. E.g:- To load the image from your web site, you need to place the image into the same directory as the Java source code (ImageEg1.java file in this case) and use the following command:- greenBall = getImage(getCodeBase(),"green_ball.gif"); It is important that you place this in the init() method, as this ensures the Applet has been properly loaded before the image is retrieved. The getImage method is part of the JApplet class. It takes two parameters - the first is the directory in which the image can be found - getCodeBase() will return the directory in which the Java source file is held. The second parameter gives the name of the image in this directory. A new image object is created, and this is returned from the getImage method and placed in the greenBall object. Note that if you wanted to place the image in the directory where the calling HTML document resides, you would use the method getDocumentBase() instead of getCodeBase(). That was simple. However, we need to paint the image (as many times as we like) on the screen. The object was created outside of any methods, so it will be available to all methods, for the life of the Applet. This means, we can use it whenever we like in the Applet. 
So, in order to paint the image on the screen, we will place the command to do this within the paint() method of the applet, as we did in the previous session to draw shapes on the screen:- So what are we doing? At the beginning, we are casting the Applet's Canvas from a Graphics class to a Graphics2D class, and assigning it to the g2 attribute. Then, as before, a border is drawn around the applet. Note that this time, we've assigned the applet's width and height to an attribute (variable) called appletWidth and appletHeight. This is to save having to repeatedly type getSize.Width() and getSize.Height() and it also runs a little faster if the value is having to be accessed many times. Similarly, halfWidth and halfHeight have been set to the width/height of the applet minus the width/height of the image, divided by two. This gives the position where the green ball would sit if it were to be placed exactly in the middle of the applet. Note that to find the width of the ball, we use the following method:- greenBall.getWidth(this) Thus, we are passing the current JApplet object to the Image's getWidth method. The reasons for this are beyond the scope of this course. Briefly (if you want to know), the JApplet implements an the ImageObserver interface which means that whenever any changes occur to an image, the repaint() method is called. In this case, this would mean that if the image had not yet been loaded fully before the getWidth() method had been called, then when it has been fully loaded, the repaint() method will be called. Otherwise, a value of -1 is returned. So, to draw the image directly in the middle of the screen, we use the following command:- g2.drawImage(greenBall,halfWidth,halfHeight,this); This draws an image on the g2 canvas - i.e. on the Applet. The image that gets drawn is the one loaded into the greenBall object. The x position (i.e. horizontal position) is given by halfWidth and the y position (i.e. vertical position) is given by halfHeight. 
Finally, this refers to an object whose class implements the ImageObserver interface. In other words, ignore this, and just put this in! The lines following this place the image to the left, above, to the right, and below the image in the middle, by moving the x,y position by the width of the image. Easy peasy! Exercise Load the QuickCup project that contains this information (img_eg1.qjp), compile and run it (using AppletViewer) to see the results. Using Windows Explorer, take a look at the directory (week07 directory) in which the project is kept. Notice that there are two image files there - green_ball.gif (which we use in the program) and flash_star.gif. Declare a new Image object called flashStar just next to where the greenBall is declared. Insert a blank line under the line where the green_ball.gif file is loaded into the greenBall object (using the getImage method). Insert a command that does the same, except loads the flash_star.gif image into the flashStar object. In the paint() method, put in the following commands:- Save the project, compile it, and if you get no errors, run it with the Applet Viewer. See if you can work out how this does what it does. If you do not get time to complete this exercise, the results can be seen in project img_eg2.qjp. Playing Sound & Music Since v1.2, Java has had a very simple mechanism for playing sound and music. Let's take a look at our Pairs game again, and spice it up a bit with some dramatic music in the background, and a sound effect when a pair is matched. Open the pairs.qjp project from week 7's directory. The first thing to note is that if you wish to make use of the AudioClip interface, which is implemented by the applet, you need to import it as follows (see line 34 of the Pairs.java file):- import java.applet.* We also need to declare a new variable / attribute (line 55) to play the bleep noise for when a pair is matched.
We do not need to declare a variable for the music - we will see why later:- AudioClip keyBleep; A file is loaded into the AudioClip object as follows (at the end of the init() method, when the applet is first displayed - line 115):- keyBleep = getAudioClip(getCodeBase(),"blip.wav"); Note that we use the getAudioClip method of the JApplet, state where the sound is held (same directory as the java class file), and finally give the name of the sound file. This is then placed in the object named keyBleep. Note that on the following line, we do a similar thing, except we are getting a MIDI music file. You may think that as this needs to be played continuously and in a loop, there's no need to store it in a variable, as it won't be referenced again. Thus, the line would read:- getAudioClip(getCodeBase(),"vivaldi.mid").loop(); However, as the object gets created within the init() method, it falls out of scope when the code gets to the end of the init() method, and may be disposed of by Java's garbage collector at any time after that. Thus, you may find that the tune stops playing at some random time once you start playing the game. To avoid this, we need to set up a variable at the scope of the class, so that the variable is available for the life of the applet. Therefore, line 55 now reads:- AudioClip keyBleep,vivaldi; to set up an object called vivaldi to hold the tune (if you excuse the pun), and line 116 now reads:- vivaldi=getAudioClip(getCodeBase(),"vivaldi.mid"); vivaldi.loop(); Thus the vivaldi object is created inside the init() method, but remains for the life of the applet as it is declared outside of the init() method. Once created, a method called loop() is used (part of the AudioClip class), which simply plays the music over and over again, until the stop() method is called. Since we are happy to play the music until the Applet is closed, we haven't used this method anywhere!
If you look on line 193, you can see that the keyBleep sound effect gets played after the two buttons get set to "-MATCH-" - i.e. when two buttons have been matched:- keyBleep.play(); So the play method of the AudioClip class will play the sound loaded into the object once only. Exercise There is another sound effect called tada.wav in the same directory as the other sounds. Add a new AudioClip variable / attribute called tada, and load tada.wav into the object at the end of the init() method. When the game is completed, and just before the dialog box is displayed (i.e. after the line starting if (pairsRemaining==0)), insert two instructions to stop the vivaldi music and then play the tada AudioClip sound effect. If the user chooses to play another game, start the music once again - put the instruction to do this just before the call to the randomOrdering() method. Try compiling, and if there are no errors, run the program with the AppletViewer program to test it works OK. See the results in pairs_tada.qjp Concurrency and Timers Java is what is known as a multi-threaded programming language. This means that you can run more than one task at the same time from the same program, and synchronize the order and priority in which these tasks (or processes, or threads) are executed. Writing threaded applications takes quite a lot of practice, and for this reason will not be covered in this course. However, there is an easier-to-use framework that you can use if you wish to run a task at regular intervals. The class we use to perform this trick is called the Timer. It allows you to run a particular method (actionPerformed) once every so many milliseconds. The interval between calls is what's known as a tick - a bit like the regular tick you get if you listen to a clock, where a tick lasts 1000ms or 1 second. So with each tick an action is performed. This can be used if you wish to animate an object on the screen.
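This tick mechanism can be tried on its own, away from any applet. The sketch below (the class and method names are ours) creates a javax.swing.Timer, counts its actionPerformed calls, and stops the timer after a few ticks; in the course's applet the listener would instead move a shape and call repaint():

```java
import javax.swing.Timer;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class TimerTickDemo {
    // Run a Timer with the given delay until 'ticks' actionPerformed
    // calls have happened, then stop it and report the tick count.
    public static int runTicks(int delayMs, int ticks) throws InterruptedException {
        AtomicInteger count = new AtomicInteger();
        CountDownLatch done = new CountDownLatch(ticks);
        // The lambda is the ActionListener -- the "this" in the applet version.
        Timer timer = new Timer(delayMs, e -> {
            count.incrementAndGet();
            done.countDown();
        });
        timer.start();
        done.await();   // block until the requested number of ticks has fired
        timer.stop();
        return count.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runTicks(10, 5) >= 5); // true
    }
}
```

Note that javax.swing.Timer delivers its events on the event-dispatch thread, which is why the applet version can safely update components from actionPerformed.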
With each tick, you can (for example) move an object slightly in the applet, and repaint it. The general structure that you need to implement in your program to make use of a Timer is as follows:- Briefly, we are creating a new object for the Timer class (the object instance in this example is called timer - note the distinction - the object is in lower case and so is recognized as different to the Timer class). When we create the Timer, we specify the amount of time between each tick - i.e. 50ms (milliseconds) in this case. Note that there are 1000ms in a second, so this gives 20 ticks per second. The this parameter refers to the class that implements the ActionListener interface - i.e. has an actionPerformed method in it. This is the method to be called each time a tick comes around. Thus, by specifying this we are saying that the current class (TimerExample) has a method called actionPerformed that will be called every 50ms. Thus, the code (shown as ...) inside actionPerformed will be executed every 50ms in this scenario. The start() and stop() methods of the applet specify what should happen when focus moves away from and back to the applet after it has been created - e.g. by minimizing the browser window, and restoring it again. Thus, these say that if the timer is not running, it will be restarted, and if it is running, it will be stopped, respectively. Animation Techniques We can make use of this ability to execute a method periodically to perform animation. For example, to give the impression of a shape moving across the screen, we could change the co-ordinate position by a set amount every time the actionPerformed method is called, and then repaint the screen. Here's some sample code to demonstrate this:- Open the anim1.qjp QuickCup project from the week07 directory and take a look at the Animate1.htm tab - you can see that two parameters are passed from the web page to the Java Applet:- These two parameters relate to the animation of the images.
The speed is the time between each update or tick - in this case, the image will be moved and redisplayed every 100ms, or 10 times a second. The step is the number of pixels - or distance horizontally - that the image will be moved with each tick. In this case, it will be moved the smallest possible distance, 1 pixel, with each tick. Try running the applet as it stands. The animation is relatively smooth, but slow. Try running it at 50ms and then 10ms speed. You may notice that the speed at 10ms is not 10 times the speed at 100ms. Why is this? The reason will be that the time it takes to draw the images exceeds 10ms, so it isn't possible to animate at this rate. Another method of making the animation appear to move faster without increasing the actual number of ticks is to move the images a greater distance with each tick. For example, if we set the step parameter to 2, and the speed to 50, then this would be approximately the same in speed as setting step to 1 and speed to 100. Notice that the animation isn't quite as smooth, though. Similarly, if we really did want the appearance of 10ms intervals, we might compromise and set step to 5 and speed to 50ms. The animation isn't as smooth, but the speed definitely looks much faster than setting the speed to 10ms and step to 1. Notice that on line 82 (part of the paint() method), a light-grey rectangle is filled between the borders. This wipes out the previous image. Try taking this line out by placing a comment before it (two forward slashes - //) - notice that the images leave a 'trail', as the previously drawn images remain where they were. This is why the previously drawn images are drawn over by the background prior to re-drawing them at their new location. Note that for greater efficiency, you may choose to fill a rectangle over just the image itself.
Note that to do this, you may need to create variables to remember the size and position of that rectangle between calls to the paint() method, as the value of x changes outside of the method. N.b. see project anim2.qjp for an example of this. On line 99, just before the images are re-drawn, you will find the following line of code:- if(imageRHpos > (appletWidth-6)) xPos=7; where imageRHpos is calculated as being:- int imageRHpos = xPos+sqWidth+2*ballWidth; Thus, imageRHpos is the position of the left-hand side of the image plus the width of the whole image - 2 balls plus one square. This gives the horizontal position of the right-hand side of the image. Thus, if this position is after the width of the applet (minus 6 pixels to take into account the border), then it's just about to go off the end of the applet, so reset the left-hand side of the image to 7, which is just to the right of the border. The only other point of note is the actionPerformed method, which gets called whenever the timer ticks:- All that happens here is that the xPos variable is incremented (added to) by the contents of the step variable, which is the step value taken from the HTML page's parameter. For example, if xPos started at 6, and the step value was 5 pixels, then the result would be 6+5=11 - xPos would then contain the value 11. Then, the repaint() method is called, which sends a signal to Java to call the paint() method when it is next convenient to do so. Avoid calling the paint() method directly, as this can lead to garbled-looking images, as you'd be bypassing Java's synchronization mechanism by doing this. Scrolling Banner Example As an example of combining these techniques with the text-drawing techniques we learned last week, take a look at the banner.qjp project in the week07 directory. This is very much based on the previous applet, which scrolled a graphic from the left hand side to the right-hand side of the screen.
This time, we will create some text, and store its outline as a shape, so that it can be drawn on the screen. In order to give the illusion of movement, the text will be drawn starting at the right hand margin of the applet, and then moved a specific amount to the left with each tick - thus, the horizontal X position is decreasing. If you look on line 81 of the Banner.java tab, you can see that the text graphic gets moved as follows:- xPos-=step; repaint(); This means that the value of step is subtracted from the value of xPos. What happens when xPos reaches 0? If we carry on, it will be starting from a position that is off the left hand side of the applet. This is actually useful, because if we can hide anything that is not visible from the applet (a technique known as clipping), this would give the appearance of the text scrolling across the applet. And when the right hand side of the text reaches the left hand side of the applet - i.e. all of the text is hidden to the left of the clipped area of the applet - the text can be displayed from the right hand side of the applet again. You can think of this as being a bit like having a piece of card with a small window cut into it. You feed in a piece of ticker tape with a message on it from the right hand side so that it moves left, with only a small portion visible in the window. When the ticker tape is pulled to the left so far that it disappears from the window, it can be fed back around from the right again. The following data is read in from the HTML page, so that if these values change on the HTML page, the Java program does not need to be recompiled:- The speed value represents the time between ticks for the Timer object, the step value describes how many pixels the text should move with each tick, the fontSize describes the size (in points) of the font to be scrolled across the page, and the bannerText is the text to be scrolled.
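To make the idea concrete, here is a hedged sketch of parsing those four parameters. In the real applet the strings arrive via getParameter() from the HTML page; this version accepts any String-to-String lookup so it can run outside a browser. The parameter names follow the text, but the class name BannerParams is our own invention:

```java
import java.util.Map;

// Hypothetical sketch: parsing the four HTML parameters described above.
// In the applet these strings come from getParameter(); a Map stands in
// for that lookup here so the logic is testable outside a browser.
public class BannerParams {
    public final int speed;       // ms between ticks
    public final int step;        // pixels moved per tick
    public final int fontSize;    // point size of the scrolled font
    public final String bannerText;

    public BannerParams(Map<String, String> params) {
        speed = Integer.parseInt(params.get("speed"));
        step = Integer.parseInt(params.get("step"));
        fontSize = Integer.parseInt(params.get("fontSize"));
        bannerText = params.get("bannerText");
    }
}
```

Because the values live in the HTML page, changing the animation speed or the banner text never requires recompiling the Java code.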
On lines 40 and 41, you can see that two variables have been created at class-level - i.e. they will exist for as long as the class' object exists:- These are initialised as part of the init() method of the applet, on lines 63-66 as follows:- Therefore, the text is placed into a TextLayout object, and then converted to a shape (which can be written onto a canvas using either the draw() method for the outline or the fill() method for the inside), which can then be reused with each tick. Thus, by placing these instructions here, the text does not have to be recreated with each tick, as it would do if we placed these instructions in the paint() method. Line 68 is as follows:- xPos = getSize().width-6; This describes the starting position for the text, which is the right-hand side of the applet minus six pixels to take into account the border. As already described, the actionPerformed() method, which gets called with each tick of the Timer, simply changes the X (horizontal) position and repaints the canvas as follows:- Next, we move on to the paint() method, which draws the text:- In order to blank any previous text drawn on the screen, line 96 fills a rectangle over the area of the canvas between the borders as follows:- g2.fill(new Rectangle2D.Double(6,6,appletWidth-13,appletHeight-13)); In order to avoid painting over the borders, the area that can be drawn in can be set using the setClip method on the Graphics2D object (g2):- g2.setClip(new Rectangle2D.Double(6,6,appletWidth-13,appletHeight-13)); You can see that this corresponds to the area that was blanked - i.e. the area between the borders. Thus, if we try to write over the border now, it will be left intact. Only the area inside the border can be written to. The next command checks to see if the right-hand side of the text has reached the left-hand side of the border (taking into account the step amount).
If it has, the position of the text is moved to the right of the border to start again:- if ( (xPos+bannerShape.getBounds().width+step) < 6 ) xPos = appletWidth-6; Finally, the text colour is set to red, the starting position of the text is translated (i.e. moved) to the correct place on the screen, and the inside shape of the text (held in bannerShape) is filled onto the applet's canvas g2. Finally, the starting position of any further draws to the canvas is set back to what it was before (i.e. 0,0) for neatness, and the clip area is cleared, so the whole of the canvas can be drawn on again:- A lot of processing power goes into drawing text in this manner. You may notice that there is some flickering, especially at higher speeds. This is due to the time delay between drawing over the previous text with a grey rectangle, and drawing the new text, which is quite labour-intensive for the computer. Buffering Updates One way around this is to buffer the updates. This means that you do your drawing on a different canvas, and only transfer the image to the applet's canvas at the last moment. This effectively means that the blanking of the letters, and the re-drawing of the letters, takes place off-screen, so that you only ever see the changed image, never the blank rectangle. This is good for complex animation, as it means you can make any drawing changes off-screen to make the animation look as smooth and flicker-free as possible. This technique is known as double-buffering, as the image is buffered (held in memory) once for the applet, and once off-screen for any intermediate updates. Take a look at the banner2.qjp QuickCup project in the week07 directory. This works the same as the banner.qjp project, except it utilises a double-buffering technique to help smooth the animation. Look in the Banner2.java tab.
On line 33, you can see that we have imported another library - this is so we can use the image buffering classes:- import java.awt.image.*; On lines 45 and 46, you can see that two objects are created at class-level, so that they can be accessed throughout the life of the object - i.e. between ticks of the Timer:- The BufferedImage object holds the image to work on, and the Graphics2D object makes that image available as a canvas that can be painted on. To create a buffered image that is the size and has the attributes of the applet, a separate method has been created called setUpImage(), which is called from the init() method on line 73. Why have we done this? The canvas to be drawn on could change if the user resizes the window, so we may need to recreate the two objects under these circumstances. Thus, this code will be used in more than one place in the program, so we make it into a method that can be called from different places:- Your applet (inheriting from the JApplet class) has a method called createImage available, which takes a picture of the applet (given a width and height to copy - in this case, we are taking the whole applet), and returns an Image object. This is cast to a BufferedImage and assigned to the bufImage object. We now have a copy of a picture of the applet. However, in order to draw on this buffered image, we need to create a new Graphics2D object by calling the createGraphics() method on the buffered image, and storing the result in bufGraphics. This object can then be used to draw shapes etc. on it. Note that we could have recreated these every time a tick occurs, but this would be time-consuming, as Java would continually need to garbage collect as the objects from older ticks are discarded. This causes a characteristic jerkiness as Java garbage collects. Look at line 100, the first line in the paint() method.
Instead of creating the g2 attribute/variable as a casting of the Graphics g parameter, we have assigned the bufGraphics object to it, so we can paint to the buffered image:- Graphics2D g2=bufGraphics; All painting is then done to the buffered image, in exactly the same way as it was previously done to the applet. Thus, the painting is occurring off-screen. Finally, on line 123, the buffered image is drawn onto the canvas, in the same way as any other image would be - in this case, it is placed at position 0,0 - i.e. at the top-left hand corner of the Applet's canvas, as it is the same size as the Applet's canvas:- g.drawImage(bufImage,0,0,null); So, the update of the screen only takes place on this line - it is updated ONCE per tick, rather than the several times it was updated earlier when the border was redrawn, the text blanked with a grey rectangle, and the text redrawn into a clipped area. The result looks much smoother - less flicker, and less jerkiness. Summary and Further Sources
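To summarise the double-buffering walkthrough, here is a compact, self-contained sketch of the idea. The walkthrough obtains its buffer from the applet's createImage(); here we construct a BufferedImage directly so the sketch compiles without an applet, and the class and method names are ours, chosen to mirror the walkthrough:

```java
import java.awt.Color;
import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

// Sketch of the double-buffering idea described above: all drawing goes
// to an off-screen BufferedImage, which is copied to the real canvas in
// a single drawImage() call per tick. (The walkthrough uses the applet's
// createImage(); we build a BufferedImage directly so this stands alone.)
public class DoubleBufferSketch {
    private BufferedImage bufImage;
    private Graphics2D bufGraphics;

    public void setUpImage(int width, int height) {
        bufImage = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
        bufGraphics = bufImage.createGraphics();
    }

    public void paint(Graphics g) {
        Graphics2D g2 = bufGraphics;   // draw off-screen, not to g
        g2.setColor(Color.LIGHT_GRAY);
        g2.fillRect(0, 0, bufImage.getWidth(), bufImage.getHeight()); // blank
        // ... draw the banner text here, exactly as before ...
        g.drawImage(bufImage, 0, 0, null); // the single on-screen update
    }
}
```

The on-screen canvas is touched exactly once per tick, which is why the flicker disappears.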
Two Characters Hackerrank There are two characters; call them x and y. We have different strings which can be made from these characters in an alternating way, like xyxyxy or yxyxyx, but not xxyy or xyyx and so on. Now, there is a twist: you can convert some string 's' to string 't' by deleting characters from 's'. When you delete a character from s, you must delete all occurrences of it in s. For example, if s = abaacdabd and you delete the character a, then the string becomes bcdbd. Suppose you are given a string s. Convert it to the longest possible string t using the same alphabet, then print the length of the new string t on a new line; otherwise, if string t is not possible to form, print 0. Problem: The problem is from HackerRank; to see it there, use the below link:

- Input Format The first line contains a single integer denoting the length of s. The second line contains string s.
- Constraints 1<=|s|<=1000 Here, s only contains lowercase English alphabetic letters (i.e., a to z).
- Output Format Print a single integer denoting the maximum length of t for the given s; if it is not possible to form string t, print 0 instead.

Sample Input 10 beabeefeab Sample Output 5

Explanation
- The characters present in s are a, b, e, and f. This means that t must consist of two of those characters.
- If we delete e and f, the resulting string is babab. This is a valid t as there are only two distinct characters (a and b), and they are alternating within the string.
- If we delete a and f, the resulting string is bebeeeb. This is not a valid string t because there are three consecutive e's present.
- If we delete only e, the resulting string is babfab. This is not a valid string t because it contains three distinct characters.

Thus, we print the length of babab, which is 5, as our answer.
Solution Case: s only consists of 1 distinct character. Because string t must contain exactly two distinct characters that alternate throughout the string, we cannot form t in this case, so the answer will be 0. Case: s consists of two or more distinct characters. Choose two characters to be present in string t and delete all other characters from s. Check the resulting string to see if it's a valid t. In other words, determine if the two characters alternate or not. If yes, then the string is valid and you simply need to save its length; otherwise, it's invalid and we consider its length to be 0. Check all possible pairs of characters and then print the maximum of these lengths.

A C++ solution (this version reads the pair of characters to test from input, rather than trying all pairs itself):

#include <bits/stdc++.h>
using namespace std;

// Returns the length of the alternating string formed by keeping only
// characters a and b, or 0 if the result does not alternate.
int make_string(string str, char a, char b) {
    int len = str.size();
    string temp = "";
    char last;
    for (int i = 0; i < len; i++) {
        if (str[i] == a || str[i] == b)
            temp += str[i];
    }
    len = temp.size();
    if (len <= 1)
        return 0;
    last = temp[0];
    for (int i = 1; i < len; i++) {
        if (temp[i] == last)
            return 0;
        last = temp[i];
    }
    return len;
}

int main() {
    string s;
    char a, b;
    cin >> s;
    cin >> a >> b;
    cout << make_string(s, a, b);
    return 0;
}

A Python 3 solution that checks every pair of distinct characters:

#!/bin/python3
from itertools import combinations

def meet_pattern(s):
    return all(s[i-1] != s[i] for i in range(1, len(s)))

s_len = int(input().strip())
s = input().strip()
letters = set(s)
max_len = 0
for pair in combinations(letters, 2):
    substr = "".join(i for i in s if i in pair)
    if meet_pattern(substr):
        max_len = max(max_len, len(substr))
print(max_len)

And an equivalent Python 2 solution that tries all 26x26 ordered pairs of letters:

n = int(raw_input())
s = raw_input()
assert s.isalpha()
ans = 0
for i in range(0, 26):
    for j in range(0, 26):
        if i == j:
            continue
        p1 = i
        p2 = j
        flag = 1
        l = 0
        for c in s:
            if ord(c)-ord('a') != p1 and ord(c)-ord('a') != p2:
                continue
            if ord(c)-ord('a') == p1:
                l = l + 1
                p1, p2 = p2, p1
            else:
                flag = 0
        if flag == 1 and l > 1:
            ans = max(ans, l)
print ans
import "cloud.google.com/go/errorreporting"

Package errorreporting is a Google Stackdriver Error Reporting library. This package is still experimental and subject to change. See for more information.

Code:

package main

import (
    "context"
    "errors"
    "log"

    "cloud.google.com/go/errorreporting"
)

func main() {
    // Create the client.
    ctx := context.Background()
    ec, err := errorreporting.NewClient(ctx, "my-gcp-project", errorreporting.Config{
        ServiceName:    "myservice",
        ServiceVersion: "v1.0",
    })
    if err != nil {
        // TODO: handle error
    }
    defer func() {
        if err := ec.Close(); err != nil {
            log.Printf("failed to report errors to Stackdriver: %v", err)
        }
    }()

    // Report an error.
    err = doSomething()
    if err != nil {
        ec.Report(errorreporting.Entry{
            Error: err,
        })
    }
}

func doSomething() error {
    return errors.New("something went wrong")
}

Client represents a Google Cloud Error Reporting client.

func NewClient(ctx context.Context, projectID string, cfg Config, opts ...option.ClientOption) (*Client, error)

NewClient returns a new error reporting client. Generally you will want to create a client on program initialization and use it through the lifetime of the process.

Close calls Flush, then closes any resources held by the client. Close should be called when the client is no longer needed.

Flush blocks until all currently buffered error reports are sent. If any errors occurred since the last call to Flush, or the creation of the client if this is the first call, then Flush reports the error via the Config.OnError handler.

Report writes an error report. It doesn't block. Errors in writing the error report can be handled via Config.OnError.

ReportSync writes an error report. It blocks until the entry is written.

type Config struct {
    // ServiceName identifies the running program and is included in the error reports.
    // Optional.
    ServiceName string

    // ServiceVersion identifies the version of the running program and is
    // included in the error reports.
    // Optional.
    ServiceVersion string

    // OnError is the function to call if any background
    // tasks errored. By default, errors are logged.
    OnError func(err error)
}

Config is additional configuration for Client.

type Entry struct {
    Error error
    Req   *http.Request // if error is associated with a request.
    User  string        // an identifier for the user affected by the error
    Stack []byte        // if user does not provide a stack trace, runtime.Stack will be called
}

Entry holds information about the reported error.

Package errorreporting imports 14 packages and is imported by 8 packages.
On May 31, 2018 we had a 17 minute outage on our 1.1.1.1 resolver service; this was our doing and not the result of an attack. Cloudflare is protected from attacks by the Gatebot DDoS mitigation pipeline. Gatebot performs hundreds of mitigations a day, shielding our infrastructure and our customers from L3/L4 and L7 attacks. Here is a chart of a count of daily Gatebot actions this year: In the past, we have blogged about our systems: Today, things didn't go as planned.

Gatebot

Cloudflare's network is large, handles many different types of traffic and mitigates different types of known and not-yet-seen attacks. The Gatebot pipeline manages this complexity in three separate stages:

- attack detection - collects live traffic measurements across the globe and detects attacks
- reactive automation - chooses appropriate mitigations
- mitigations - executes mitigation logic on the edge

The benign-sounding "reactive automation" part is actually the most complicated stage in the pipeline. We expected that from the start, which is why we implemented this stage using a custom Functional Reactive Programming (FRP) framework. If you want to know more about it, see the talk and the presentation. Our mitigation logic often combines multiple inputs from different internal systems, to come up with the best, most appropriate mitigation. One of the most important inputs is the metadata about our IP address allocations: we mitigate attacks hitting HTTP and DNS IP ranges differently. Our FRP framework allows us to express this in clear and readable code. For example, this is part of the code responsible for performing DNS attack mitigation:

def action_gk_dns(...):
    [...]
    if port != 53:
        return None
    if whitelisted_ip.get(ip):
        return None
    if ip not in ANYCAST_IPS:
        return None
    [...]

It's the last check in this code that we tried to improve today.
Clearly, the code above is a huge oversimplification of all that goes into attack mitigation, but making an early decision about whether the attacked IP serves DNS traffic or not is important. It's that check that went wrong today. If the IP does serve DNS traffic then attack mitigation is handled differently from IPs that never serve DNS.

Cloudflare is growing, so must Gatebot

Gatebot was created in early 2015. Three years may not sound like much time, but since then we've grown dramatically and added layers of services to our software stack. Many of the internal integration points that we rely on today didn't exist then. One of them is what we call the Provision API. When Gatebot sees an IP address, it needs to be able to figure out whether or not it's one of Cloudflare's addresses. Provision API is a simple RESTful API used to provide this kind of information. This is a relatively new API, and prior to its existence, Gatebot had to figure out which IP addresses were Cloudflare addresses by reading a list of networks from a hard-coded file. In the code snippet above, the ANYCAST_IPS variable is populated using this file.

Things went wrong

Today, in an effort to reclaim some technical debt, we deployed new code that introduced Gatebot to Provision API. What we did not account for, and what Provision API didn't know about, was that 1.1.1.0/24 and 1.0.0.0/24 are special IP ranges. Frankly speaking, almost every IP range is "special" for one reason or another, since our IP configuration is rather complex. But our recursive DNS resolver ranges are even more special: they are relatively new, and we're using them in a very unique way. Our hardcoded list of Cloudflare addresses contained a manual exception specifically for these ranges. As you might be able to guess by now, we didn't implement this manual exception while we were doing the integration work. Remember, the whole idea of the fix was to remove the hardcoded gotchas!
Impact

The effect was that, after pushing the new code release, our systems interpreted the resolver traffic as an attack. The automatic systems deployed DNS mitigations for our DNS resolver IP ranges for 17 minutes, between 17:58 and 18:13 May 31st UTC. This caused the 1.1.1.1 DNS resolver to be globally inaccessible.

Lessons Learned

While Gatebot, the DDoS mitigation system, has great power, we failed to test the changes thoroughly. We are using today's incident to improve our internal systems. Our team is incredibly proud of 1.1.1.1 and Gatebot, but today we fell short. We want to apologize to all of our customers. We will use today's incident to improve. The next time we mitigate 1.1.1.1 traffic, we will make sure there is a legitimate attack hitting us.
#include <itkKdTree.h> data structure for storing k-nearest neighbor search result (k number of Neighbors) This class stores the instance identifiers and the distance values of k-nearest neighbors. We can also query the farthest neighbor's distance from the query point using the GetLargestDistance method. Definition at line 581 of file itkKdTree.h. Constructor Definition at line 586 of file itkKdTree.h. Destructor Returns the distance of the farthest neighbor from the query point Definition at line 610 of file itkKdTree.h. Returns the instance identifier of the index-th neighbor among k-neighbors Definition at line 645 of file itkKdTree.h. Returns the vector of k-neighbors' instance identifiers Definition at line 637 of file itkKdTree.h. Replaces the farthest neighbor's instance identifier and distance value with the id and the distance Definition at line 618 of file itkKdTree.h. References itk::NumericTraits< T >::min(). Initialize the internal instance identifier and distance holders with the size, k Definition at line 598 of file itkKdTree.h. External storage for the distance values of k-neighbors from the query point. This is a reference to external vector to avoid unnecessary memory copying. Definition at line 660 of file itkKdTree.h. The index of the farthest neighbor among k-neighbors Definition at line 652 of file itkKdTree.h. Storage for the instance identifiers of k-neighbors Definition at line 655 of file itkKdTree.h.
Live Editing

Using an inline script can be painful as you have to keep clicking buttons and saving. It's more productive to point to a file so it can be updated automatically. If you have a groovy script as opposed to a groovy class, as in the previous examples, the method name should be `run`. When configured like this you can modify your code without even leaving the Resolve Issue dialog, let alone doing a page refresh. The path to the script can be relative to a script root.

Live Editing for IDE Users

If using an IDE you can get code completion by adding the following lines at the beginning of your script:

examples/jira/src/test/resources/com/onresolve/jira/groovy/test/behaviours/scripts/RequireFixVersionIfFixedex2.groovy

import com.onresolve.jira.groovy.user.FieldBehaviours
import groovy.transform.BaseScript

@BaseScript FieldBehaviours fieldBehaviours
1 Advanced Graphics OpenGL Alex Benton, University of Cambridge – Supported in part by Google UK, Ltd

2 Today's technologies Java Common, re-usable language; extremely well-designed Steadily increasing popularity in industry Weak but evolving 3D support C++ Long-established language Long history with OpenGL Technically C has the long history. C++ never really improved it. Long history with DirectX Losing popularity in some fields (finance, web) but still strong in others (games, medical) OpenGL Open source with many implementations Extraordinarily well-designed, old, but still evolving Fairly cross-platform DirectX/Direct3d Less well-designed Microsoft™ only DX 10 requires Vista! But! Dependable updates… Java3D Poor cross-platform support (surprisingly!) Available by GPL; community-developed

3 OpenGL OpenGL is… hardware-independent operating system independent vendor neutral OpenGL is a state-based renderer set up the state, then pass in data: data is modified by existing state very different from the OOP model, where data would carry its own state

4 OpenGL OpenGL is platform-independent, but implementations are platform-specific and often rely on native libraries Great support for Windows, Mac, linux, etc Support for mobile devices with OpenGL-ES Android, iPhone, Symbian OS Accelerates common 3D graphics operations Clipping (for primitives) Hidden-surface removal (Z-buffering) Texturing, alpha blending (transparency) NURBS and other advanced primitives (GLUT)

5 OpenGL in Java: JOGL JOGL is the Java binding for OpenGL. JOGL apps can be deployed as applications or as applets. This means that you can embed 3D in a web page. (If the user has installed the latest Java, of course.) Admittedly, applets are somewhat "1998".
Using JOGL: Wiki: You can download JOGL from and To deploy an embedded applet, you'll use Sun's JNLP wrappers, which provide signed applets wrapped around native JOGL binaries.

6 A quick intro to JOGL: Hello Square

public class HelloSquare {
    public static void main(String[] args) {
        new Thread() {
            public void run() {
                Frame frame = new Frame("Hello Square");
                GLCanvas canvas = new GLCanvas();

                // Setup GL canvas
                frame.add(canvas);
                canvas.addGLEventListener(new Renderer());

                // Setup AWT frame
                frame.setSize(400, 400);
                frame.addWindowListener(new WindowAdapter() {
                    public void windowClosing(WindowEvent e) {
                        System.exit(0);
                    }
                });
                frame.setVisible(true);

                // Render loop
                while (true) {
                    canvas.display();
                }
            }
        }.start();
    }
}

public class Renderer implements GLEventListener {
    public void init(GLAutoDrawable glDrawable) {
        final GL gl = glDrawable.getGL();
        gl.glClearColor(0.2f, 0.4f, 0.6f, 0.0f);
    }

    public void display(GLAutoDrawable glDrawable) {
        final GL gl = glDrawable.getGL();
        gl.glClear(GL.GL_COLOR_BUFFER_BIT);
        gl.glLoadIdentity();
        gl.glTranslatef(0, 0, -5);
        gl.glBegin(GL.GL_QUADS);
        gl.glVertex3f(-1, -1, 0);
        gl.glVertex3f( 1, -1, 0);
        gl.glVertex3f( 1,  1, 0);
        gl.glVertex3f(-1,  1, 0);
        gl.glEnd();
    }

    public void reshape(GLAutoDrawable gLDrawable, int x, int y, int width, int height) {
        final GL gl = gLDrawable.getGL();
        final float h = (float) width / (float) height;
        gl.glMatrixMode(GL.GL_PROJECTION);
        gl.glLoadIdentity();
        (new GLU()).gluPerspective(50, h, 1, 1000);
        gl.glMatrixMode(GL.GL_MODELVIEW);
    }
}

7 1) Shaded square A simple parametric surface in JOGL

public void vertex(GL gl, float x, float y, float z) {
    gl.glColor3f((x+1)/2.0f, (y+1)/2.0f, (z+1)/2.0f);
    gl.glVertex3f(x, y, z);
}

public void sphere(GL gl, double u, double v) {
    vertex(gl, cos(u)*cos(v), sin(u)*cos(v), sin(v));
}

//...
gl.glBegin(GL.GL_QUADS);
for (double u = 0; u <= 2*PI; u += 0.1) {
    for (double v = 0; v <= PI; v += 0.1) {
        sphere(gl, u, v);
        sphere(gl, u+0.1, v);
        sphere(gl, u+0.1, v+0.1);
        sphere(gl, u, v+0.1);
    }
}
gl.glEnd();

2) Parametric sphere

8 Animating a parametric surface The animation at right shows the linear interpolation between four parametric surface functions. Colors are by XYZ. The code is online, and pretty simple—please play with it

9 Behind the scenes Two players: The CPU, your processor and friend The GPU (Graphical Processing Unit) or equivalent software The CPU passes streams of vertices and of data to the GPU. The GPU processes the vertices according to the state that has been set; here, that state is "every four vertices is one quadrilateral polygon". The GPU takes in streams of vertices, colors, texture coordinates and other data; constructs polygons and other primitives; then draws the primitives to the screen pixel-by-pixel. This process is called the rendering pipeline.

10 Anatomy of a rendering pipeline 1) Geometry is defined in local space. The vertices and coordinates of a surface are specified relative to a local basis and origin. This encourages re-use and replication of geometry; it also saves the tedious math of storing rotations and other transformations within the vertices of the shape itself. This means that changing the position of a highly complex object requires only changing a 4x4 matrix instead of recalculating all vertex values.

11 Anatomy of a rendering pipeline 2) The pipeline transforms vertices and surface normals from local to world space. A series of matrices are concatenated together to form the single transformation which is applied to each vertex. The rendering engine (e.g., OpenGL) is responsible for associating the state that transforms each group of vertices with the actual vertex values themselves.
12 Anatomy of a rendering pipeline 3) Rotate and translate the geometry from world space to viewing or camera space. At this stage, all vertices are positioned relative to the point of view of the camera. (The world really does revolve around you!) For example, a cube at (10,000, 0, 0) viewed from a camera (9,999, 0, 0) would now have relative position (1, 0, 0). Rotations would have similar effect. This makes operations such as clipping and hidden-object removal much faster.

13 Anatomy of a rendering pipeline 4) Perspective: Transform the viewing frustrum into an axis-aligned box with the near clip plane at z=0 and the far clip plane at z=1. Coordinates are now in 3D screen space. This transformation is not affine: angles will distort and scales change. Hidden-surface removal can be accelerated here by clipping objects and primitives against the viewing frustrum. Depending on implementation this clipping could be before transformation or after or both.

14 Anatomy of a rendering pipeline 5) Collapse the box to a plane. Rasterize primitives using Z-axis information for depth-sorting and hidden-surface-removal. Clip primitives to the screen. Scale raster image to the final raster buffer and rasterize primitives.

15 Recap: sketch of a rendering pipeline

Local space (object definition)
  -> L2W ->
World space (scene composition, viewing frame definition, lighting definition)
  -> W2V ->
Viewing space (backface culling, viewing frustum culling, HUD definition)
  -> V2S ->
3D screen space (hidden-surface removal, scan conversion, shading)
  -> S2D ->
Display space (image)

P' = S2D V2S W2V L2W P_L

Each of these transforms can be represented by a 4x4 matrix.
16 OpenGL's matrix stacks OpenGL uses matrix stacks to store stacks of matrices, where the topmost matrix is (usually) the product of all matrices below. This allows you to build a local frame of reference - local space - and apply transforms within that space. Remember: matrix multiplication is associative but not commutative. ABC = A(BC) = (AB)C ≠ ACB ≠ BCA Pre-multiplying matrices that will be used more than once is faster than multiplying many matrices every time you render a primitive.

17 OpenGL's matrix stacks GL has three matrix stacks: Modelview – positioning things relative to other things Projection – camera transforms Texture – texture-mapping transformations You choose your current matrix with glMatrixMode(); this sets the state for all following matrix operations. Each time you call glTranslate(), glRotate(), etc., these commands modify the current topmost matrix on the current stack. If you want to make local changes that only have limited effect, you use glPushMatrix() to push a new copy of your current matrix onto the top of the stack; then you modify it freely and, when done, call glPopMatrix().

18 Matrix stacks and scene graphs Matrix stacks are designed for nested relative transforms.

glPushMatrix();
glTranslatef(0,0,-5);
glPushMatrix();
glRotatef(45,0,1,0);
renderSquare();
glPopMatrix();
glPushMatrix();
glRotatef(-45,0,1,0);
renderSquare();
glPopMatrix();

19 Rendering simple primitives GL's state machine applies its state to each vertex in sequence. To render simple primitives, tell GL what kind of primitive to render: glBegin(GL_LINES) glBegin(GL_LINE_STRIP) glBegin(GL_TRIANGLES) glBegin(GL_QUADS) glBegin(GL_TRIANGLE_STRIP) And several others After calling glBegin(), you can call glVertex() repeatedly, passing in triples (or quads) of floats (or doubles) which are interpreted as positions in the context of the current rendering state.
GL is very flexible about data sizes and data types. When you're done, call glEnd(). Your primitives will now be rasterized.

    glBegin(GL.GL_QUADS);
    glVertex3f(-1, -1, 0);
    glVertex3f( 1, -1, 0);
    glVertex3f( 1,  1, 0);
    glVertex3f(-1,  1, 0);
    glEnd();

20 Rendering primitives in a slightly less painfully inefficient manner

Instead of sending each vertex individually, send them en masse: using glDrawArrays() we can avoid the overhead of a huge number of glVertex() calls.

21 Rendering primitives in a way that's really quite efficient, actually

glDrawArrays() takes a bulk list of vertices, but it still sends every vertex to the GPU once for every triangle or quad that uses it. If your surface repeats the same vertex more than once, you can use glDrawElements() instead. glDrawElements() acts like glDrawArrays() but takes an additional list of indices into the array. Now you'll pass down each vertex exactly once, referencing its integer index multiple times.

22 Camera control in OpenGL

OpenGL has two stacks that apply to geometry being rendered: Modelview and Projection. The values atop these two stacks are concatenated to transform each vertex from local to world to screen space.

You set up perspective on the Projection stack. You position your scene in world co-ordinates on the Modelview stack. You can position your camera on either stack; it's just another transform.

GL's utility library, glu, provides several convenient utility methods to set up a perspective view: gluLookAt, gluPerspective, gluOrtho, etc. By default your camera sits at the origin, pointing down the negative Z axis, with an up vector of (0,1,0). I usually set my camera position on the Modelview matrix stack.

23 Scene graphs

A scene graph is a tree of scene elements where a child's transform is relative to its parent. The final transform of the child is the ordered product of all of its ancestors in the tree.
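The vertex-count saving that glDrawElements() offers over glDrawArrays() can be counted with plain lists. This is a Python sketch, not actual GL calls:

```python
# Two triangles forming a quad share two of their vertices.
quad = [(-1, -1), (1, -1), (1, 1), (-1, 1)]

# glDrawArrays style: every triangle carries its own copies of the vertices.
arrays_style = [quad[0], quad[1], quad[2],   # first triangle
                quad[0], quad[2], quad[3]]   # second triangle

# glDrawElements style: each unique vertex once, plus small integer indices.
elements_vertices = quad
elements_indices = [0, 1, 2, 0, 2, 3]

print(len(arrays_style))       # 6
print(len(elements_vertices))  # 4
```

For a large mesh where most vertices are shared by several triangles, the indexed form sends far less vertex data down to the GPU.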
OpenGL's matrix stack and depth-first traversal of your scene graph: two great tastes that go great together!

M_fingerToWorld = M_person · M_torso · M_arm · M_hand · M_finger

(Tree: Person → Torso → Arm → Hand → Finger, with Leg and other siblings elided.)

24 Your scene graph and you

Great for:
Collision detection between scene elements
Culling before rendering
Accelerating ray-tracing

A common optimization derived from the scene graph is the propagation of bounding volumes. These take many forms: bounding spheres, axis-aligned bounding boxes, oriented bounding boxes… Nested bounding volumes allow the rapid culling of large portions of geometry: test against the bounding volume of the top of the scene graph and then work down.

25 Your scene graph and you

Many 2D GUIs today favor an event model in which events 'bubble up' from child windows to parents. This is sometimes mirrored in a scene graph. Ex: a child changes size, which changes the size of the parent's bounding box. Ex: the user drags a movable control in the scene, triggering an update event.

If you do choose this approach, consider using the model/view/controller design pattern. 3D geometry objects are good for displaying data but they are not the proper place for control logic. For example, the class that stores the geometry of the rocket should not be the same class that stores the logic that moves the rocket. Always separate logic from representation.
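The ordered product of ancestor transforms can be sketched with a tiny node class. To keep it testable, the "transform" here is just a translation offset summed up the tree; the class and node names are illustrative, not from any scene-graph library:

```python
# Scene-graph sketch: a node's world transform is the ordered product of its
# ancestors' transforms; with translation offsets the product becomes a sum.

class Node:
    def __init__(self, name, offset, parent=None):
        self.name = name
        self.offset = offset      # stand-in for a local 4x4 matrix
        self.parent = parent

    def world_offset(self):
        # Walk up to the root, accumulating each ancestor's local offset.
        x, y, z = self.offset
        if self.parent is None:
            return (x, y, z)
        px, py, pz = self.parent.world_offset()
        return (px + x, py + y, pz + z)

person = Node("person", (10, 0, 0))
torso  = Node("torso", (0, 1, 0), person)
arm    = Node("arm", (0.5, 0, 0), torso)
hand   = Node("hand", (0.25, 0, 0), arm)
finger = Node("finger", (0.25, 0, 0), hand)
print(finger.world_offset())  # (11.0, 1, 0)
```

A depth-first render of this tree maps naturally onto glPushMatrix/glPopMatrix: push on the way down, pop on the way back up.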
26 Hierarchical modeling in action

    void renderLevel(GL gl, int level, float t) {
        gl.glPushMatrix();
        gl.glRotatef(t, 0, 1, 0);
        renderSphere(gl);
        if (level > 0) {
            gl.glScalef(0.75f, 0.75f, 0.75f);
            gl.glPushMatrix();
            gl.glTranslatef(1, -0.75f, 0);
            renderLevel(gl, level-1, t);
            gl.glPopMatrix();
            gl.glPushMatrix();
            gl.glTranslatef(-1, -0.75f, 0);
            renderLevel(gl, level-1, t);
            gl.glPopMatrix();
        }
        gl.glPopMatrix();
    }

27 Hierarchical modeling in action

28 Mobile OpenGL: OpenGL-ES

GL has been ported, slightly redux, to mobile platforms: Symbian, Android, iPhone, Windows Mobile. Currently two flavors: 1.x for 'fixed function' hardware, and 2.x for 'programmable' hardware (with shader support). Chips with built-in shader support are now available; effectively GPUs for cell phones.

29 Mobile OpenGL: OpenGL-ES

Key traits of OpenGL-ES:
Very small memory footprint
Very low power consumption
Smooth transitions from software rendering on low-end devices to hardware rendering on high-end; the developer should never have to worry
Surprisingly wide-spread industry adoption

OpenGL-ES 2.0+ emphasizes shaders over software running on the phone's processor. Shaders move processing from the device CPU to the peripheral GPU: mobile parallel processing.

30 The future of portable 3D

"ARhrrrr", an 'augmented reality' game concept from the Georgia Tech Augmented Reality Lab

31 Recommended reading

The OpenGL Programming Guide
Some folks also favor The OpenGL Superbible for code samples and demos
There's also an OpenGL-ES reference, same series
The Graphics Gems series by Glassner et al.
All the maths you've already forgotten
The NeonHelium online OpenGL tutorials
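One way to see the cost of the recursion in renderLevel: each call draws one sphere and, above level zero, makes two recursive calls, so level n draws 2^(n+1) - 1 spheres in total. A small Python sketch mirroring the structure of the Java code above:

```python
# Count how many spheres renderLevel(level) draws: one per call, plus two
# recursive calls per level above zero, giving 2**(level + 1) - 1 overall.

def sphere_count(level):
    count = 1                  # the renderSphere call at this node
    if level > 0:
        count += 2 * sphere_count(level - 1)
    return count

print([sphere_count(n) for n in range(4)])  # [1, 3, 7, 15]
```

The exponential growth is why recursive scene graphs usually cap the depth, as the `level` parameter does here.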
http://slideplayer.com/slide/3872995/
Practical .NET

ASP.NET provides a wealth of options for dynamically integrating JavaScript into your client-side pages. And by adding T4 into the mix, you can generate, at runtime, exactly the client-side code that your page needs.

Most of the JavaScript you put in your ASP.NET page is static: You write it at design time, either in a library or in the page itself. But there are at least three scenarios when dynamically generating JavaScript at runtime on the server can make your ASP.NET application simpler and faster.

For instance, if you have a UserControl that incorporates client-side code, you need to ensure that any page it's hosted on includes the JavaScript libraries your UserControl depends on. At the same time, you don't want your page to download any more JavaScript than necessary. The solution is to have each UserControl dynamically add the JavaScript libraries it needs at runtime, while not adding any library already added by the page.

And while AJAX is great, reaching back to the server from the client so you can retrieve information is necessarily slow. As long as the amount of data isn't huge, it's faster to just embed essential data in the page and skip the AJAX call (a practice that also reduces demands on your server).

Finally, if you have a page that handles several different scenarios, you could write some general-purpose JavaScript code that handles all the scenarios. Unfortunately, that code will have to contain If or Select Case statements, making it harder to test and debug than code dedicated to handling a specific scenario. Dynamically generating the JavaScript code you need on the server solves this problem by letting you create just the code needed for this request. If you use the runtime code-generation tools in the Microsoft Text Template Transformation Toolkit (T4), your JavaScript code-generation process can be extremely simple.
Supporting JavaScript in UserControls

The control point for inserting JavaScript code into your ASP.NET page on the server is the ClientScriptManager, available from the ClientScript property of the Page class. A good first step, then, is to retrieve that object in the Page_Load event and store it in a field so you can use it elsewhere in the page. That's what this code does:

    Dim csm As ClientScriptManager
    Protected Sub Page_Load(ByVal sender As Object,
                            ByVal e As System.EventArgs) Handles Me.Load
      csm = Me.ClientScript
    End Sub

Retrieving the ClientScriptManager in a UserControl is only slightly more complicated -- you have to go to the UserControl Page property to get to the ClientScript property:

    csm = Me.Page.ClientScript

Now that you have the ClientScriptManager in a UserControl, you can use its RegisterClientScriptInclude to add script tags for JavaScript libraries your control needs. You must pass the RegisterClientScriptInclude method two parameters: a key to mark the request and the URL of the script to be downloaded. The key allows you to use the ClientScriptManager's IsClientScriptIncludeRegistered method to check to see if a tag with that key has already been loaded.

Using IsClientScriptIncludeRegistered isn't required -- the ClientScriptManager is smart enough not to add the same library twice. However, using IsClientScriptIncludeRegistered is supposed to be faster than relying on the ClientScriptManager's internal checking (I've never bothered to test this). This code in a UserControl checks to see if the key "OverStock.js" has already been registered.
If it hasn't, the UserControl adds a script tag to download the OverStock.js script to the page it's a part of:

    If csm.IsClientScriptIncludeRegistered("OverStock.js") = False Then
      csm.RegisterClientScriptInclude("OverStock.js", "Scripts/Overstock.js")
    End If

The resulting HTML will look something like this:

    <body>
    <form method="post" action="InsertCode.aspx" id="OverStockForm">
    <div class="aspNetHidden">
    <input type="hidden" name="__VIEWSTATE" ...
    </div>
    <script src="Scripts/Overstock.js" type="text/javascript">
    </script>

By using RegisterClientScriptInclude in your UserControl, the developer building a page with your UserControl doesn't have to be aware that your UserControl needs the library: Your UserControl takes care of itself. And it doesn't matter if the developer building the page adds two copies of your UserControl to the page or if some other UserControl needs the OverStock library: The library will be added to the page exactly once. You do need to ensure that the same key is used for every request for a library -- using the library's file name is a good way to ensure this.

Embedding Data in the Page

When you need the absolutely most-recent data from the server in a page, use an AJAX call to retrieve it from the server. However, if it's good enough that the data is up-to-date when the page is created, using AJAX is overkill. If the data is part of the page UI (for instance, in a dropdown list), you can store the data in a control. However, if you have data that isn't in any control, you can still put it in the page by wrapping it up in a JavaScript array with the ClientScriptManager RegisterArrayDeclaration method.

When using RegisterArrayDeclaration, pass the name of your array as it will be used in your JavaScript code, and a string containing a set of comma-delimited values. That string can be ugly, though.
This example appears to create an array of company divisions to be used in code:

    Dim divisions As String = "Central, South, Overseas"
    csm.RegisterArrayDeclaration("Divisions", divisions)

However, the resulting JavaScript code looks like this:

    // <![CDATA[
    var Divisions = new Array(Central, South, Overseas);
    //]]>

JavaScript is going to treat the array members as variable names, rather than as strings. To get the array members treated as strings, you'll need a string like this:

    Dim divisions As String = """Central"", ""South"", ""Overseas"""

The resulting JavaScript will now look like what you want:

    // <![CDATA[
    var Divisions = new Array("Central", "South", "Overseas");
    //]]>

As with registering a script tag, ASP.NET won't add two arrays with the same name. However, when ASP.NET sees a second RegisterArrayDeclaration with the same array name, ASP.NET adds the new item to the existing array. You're probably going to be loading your array from a database, so you can take advantage of this to simplify your code. This example builds an array from the Northwind database Categories table, using the Entity Framework to retrieve the data:

    Dim ne As New northwndEntities
    For Each cat In ne.Categories
      csm.RegisterArrayDeclaration("Categories", """" & cat.CategoryName & """")
    Next

The resulting JavaScript array declaration looks like this:

    // <![CDATA[
    var Categories = new Array("Beverages", "Condiments", "Confections",
        "Dairy Products", "Grains/Cereals", "Meat/Poultry", "Produce", "Seafood");
    //]]>

Unfortunately, unlike registering script tags, RegisterArrayDeclaration doesn't stop you from adding duplicate items to the array.

Attaching Events Dynamically

Another use for server-side JavaScript generation is to pass server-side data as a parameter to the functions that need it by generating your function calls at runtime.
This function, for instance, expects to be passed a tax rate that, like my array data, can't be found elsewhere on the page:

    <script type="text/javascript">
    function CalculateTaxes(taxRate) {
      // Use taxRate
    }
    </script>

The following server-side code adds a call to the client-side CalculateTaxes function for the client-side click event of a button, using the button's OnClientClick property. This server-side code inserts the tax rate as the parameter to the function call:

    Me.CalcTaxesButton.OnClientClick = "CalculateTaxes(" & taxRate & ")"

The resulting HTML will look something like this (assuming that the taxRate variable on the server is set to 13 percent):

    <input type="button" name="CalcTaxesButton" value="Calculate Taxes"
           onclick="CalculateTaxes(0.13); ...

While it's easy to add code to the onclick event through the OnClientClick property, using other events is only slightly more complicated. A control's Attributes property lets you access any attribute on the associated HTML element by name, so you can use Attributes to access other events. This code, for instance, calls the CalculateTaxes method from a control's onblur event:

    Me.TotalSalesTextBox.Attributes("onblur") = "CalculateTaxes(" & taxRate & ")"

Accessing controls in a GridView to set their client-side events requires more finesse. A GridView that lists the order details for an order might have five columns: product name, price, discount, quantity and (in the fifth column) a button for calculating the taxes on the current row. As each row is built and the data is loaded into the row, the GridView fires the RowDataBound event. The e parameter in that event has a Row property that provides access to the row currently being created.
This code would access the button in the fifth column, and set its OnClientClick event to call the CalculateTaxes function:

    Protected Sub GridView1_RowDataBound(sender As Object,
        e As System.Web.UI.WebControls.GridViewRowEventArgs) _
        Handles GridView1.RowDataBound
      Dim but As Button
      If e.Row.RowIndex >= 0 Then
        but = CType(e.Row.Cells(4).Controls(0), Button)
        but.OnClientClick = "CalculateTaxes(" & taxRate & ")"
      End If
    End Sub

But why stop there? The code in the CalculateTaxes function would now have to find the matching row and retrieve the price, discount and quantity textboxes from controls in the row. Why not just pass all values from the row to the CalculateTaxes function so that the function doesn't have to retrieve data from the GridView at all, as the code in Listing 1 does?

Generating Code

My last example generated code for each row on the GridView to meet the needs for that row. You can take the same approach to the page as a whole: On the server, why not generate exactly the client-side code that the page needs? Your best tool for doing that is Microsoft T4 technology.

Assume that goods the company has too many of ("Overstock") are given an automatic 10 percent discount. The code in the CalculateTaxes function would have to look like this to handle that scenario:

    function CalculateTaxes(taxRate, ProductID, quantity, price, discount) {
      if (type === "Overstock") {
        discount = .10;
      }
      var extendedPrice = quantity * price * (1 - discount);

But in a page devoted to overstock items, the discount will always be 10 percent. In that case, for the Overstock page, the code could be much simpler:

    function CalculateTaxes(taxRate, ProductID, quantity, price) {
      var extendedPrice = quantity * price * .9;

Rather than write (and test) the more-complicated code, why not generate just the JavaScript code that's required? A T4 template to generate the two different versions of the code would look like Listing 2.
A template consists of control blocks (enclosed in <# #> delimiters) containing server-side code that controls the code-generation process. Between the control blocks is the text that will form the generated code. This example checks a StockType variable pulled from a T4 Session object (not the ASP.NET Session object) to decide what text to include in the generated code.

To create this T4 template, first select Project | Add New Item, and (in the Add New Item dialog) select the General category. In that category, select the Preprocessed Text Template and give it a name before clicking the Add button (I called mine CalcTaxes.tt). You'll get a couple of warnings -- just select the "Don't show this again" option and click the OK button. The Preprocessed Text Template added to your project consists of a .tt file that you put the template code in, and a code file (which you can safely ignore).

To generate the JavaScript code from the template at runtime, first instantiate your template's class (CalcTaxes, in my case), which will be in the My.Templates namespace in Visual Basic:

    Dim jsT4 As New My.Templates.CalcTaxes

To pass the StockType value as a parameter to the TransformText method, use the Preprocessed template Session property. You first load the Session property with a Dictionary that uses a string to store object values. You can then add any parameters you need to the Session object, as this code does:

    jsT4.Session = New Dictionary(Of String, Object)
    jsT4.Session.Add("StockType", "Overstock")

Finally, you call the TransformText method of the class, which returns a string containing the generated code. The code to get the code from CalcTaxes looks like this:

    Dim jsText As String = jsT4.TransformText()

Now you need to insert the generated JavaScript code into your page. The easiest way to do that is with the ClientScriptManager RegisterClientScriptBlock method.
As with the other Register methods, this method accepts a key that allows you to check to see if the script block has already been added. Unlike the other Register methods, it's essential that you use the IsClientScriptBlockRegistered method to make sure that you don't add the same script block twice. That's what this code does:

    If csm.IsClientScriptBlockRegistered("CalcTaxes") = False Then
      csm.RegisterClientScriptBlock(Page.GetType, "CalcTaxes", jsText, True)
    End If

An ASP.NET button's client-side click event will automatically call the ASP.NET __doPostBack function and post back to the server, even if you set the button's UseSubmitBehavior property to False. There are two ways to prevent triggering this post-back to the server.

One solution: when setting the OnClientClick property, add a statement that returns false after the call to your JavaScript function:

    Me.CalcTaxesButton.OnClientClick = "CalculateTaxes(" & taxRate & "); return false;"

That will suppress any subsequent processing by the button (for instance, the call to the __doPostBack function). If your function doesn't return any value or returns false, you can simply tack "return" onto your function call:

    Me.CalcTaxesButton.OnClientClick = "return CalculateTaxes(" & taxRate & ")"

The other solution is to add an HTML button of type input to your page and give it a runat attribute set to server, as in this example:

    <input id="CalcTaxesHTML" type="button" value="CalculateTaxes" runat="server"/>

Adding the runat attribute makes the HTML button accessible from your code. The HTML button won't have an OnClientClick property, but it will have an Attributes property. You can use it to set the button's onclick event, like this:

    Me.CalcTaxesHTML.Attributes("onclick") = "CalculateTaxesSimple(" & taxRate & ")"

—P.V.
The first parameter to the RegisterClientScriptBlock method specifies the kind of object that will be accessing the script (typically, the Page object); the second parameter is the key that identifies the block; the third parameter is the generated JavaScript code; and the final parameter causes the code to be wrapped in a script block. My page now has more focused code that does just what the page needs. There's nothing wrong with static JavaScript. But when using user controls that depend on JavaScript libraries, or when trying to improve performance by avoiding server-side calls -- or if you're just trying to simplify your client-side code through server-side code generation -- ASP.NET gives you the tools to dynamically add the code you need to your page.
http://visualstudiomagazine.com/articles/2012/11/01/javascript-generator.aspx
MetaCPAN search results:

- This-20170111 - 11 Jan 2017 16:02:54 GMT
  - RDF::NS::Trine - Popular RDF namespace prefixes from prefix.cc as RDF::Trine nodes
  - RDF::NS::URIS - Popular RDF namespace prefixes from prefix.cc as URI objects
  - RDF::SN - Short names for URIs with prefixes from prefix.cc

- KJETILK/RDF-NS-Curated-0.005 - 23 Jan 2017 15:30:22 GMT
  This contains a list of 58 prefix and URI pairs that are commonly used in RDF. The intention is that prefixes in this list can be safely used in code that has a long lifetime. The list has been derived mostly from W3C standards documents, but also so...

- EXIFTOOL/Image-ExifTool-10.55 - 05 Jun 2017 14:41:23 GMT
  This document contains a complete list of ExifTool tag names, organized into tables based on information type. Tag names are used to reference specific meta information extracted from or written to a file...

- VOJ/RDF-Trine-Exporter-GraphViz-0.141 - 20 Aug 2012 07:50:56 GMT
  ...command line client is installed with RDF::Trine::Exporter::GraphViz to create nice graph diagrams from RDF data. Namespace prefixes are taken from RDF::NS.
  - RDF::Trine::Exporter::GraphViz - Serialize RDF graphs as dot graph diagrams

- JOSEPHW/XML-Writer-0.625 - 05 Jun 2014 14:24:06 GMT
  XML::Writer is a helper module for Perl programs that write an XML document. The module handles all escaping for attribute values and character data and constructs different types of markup, such as tags, comments, and processing instructions. By def...

- ATRICKETT/XML-RSS-Tools-0.34 - 27 May 2014 14:48:39 GMT

- DMEGG/XMLNews-Meta-0.01 - 09 Dec 1999 19:27:36 GMT
  NOTE: This module requires the XML::Parser module, version 2.19 or higher. WARNING: This module is not re-entrant or thread-safe due to the use of static variables while importing XML. The XMLNews::Meta module handles the import, export, and programm...

- BBC/Pinwheel-0.2.7 - 19 Jan 2010 17:03:02 GMT

- MARKOV/Data-DublinCore-1.00 - 21 Jul 2015 15:40:52 GMT
  This module provides access to the DublinCore metadata specification, see. Actually, the dublin-core spec is rather empty: applications need to define the content of the supplied containers themselves. When this content is RDF d...

- RDQL::Parser - A simple top-down LL(1) RDQL and SPARQL parser - see and - 18 Nov 2005 18:33:44 - 19 Jun 2006 10:23:19 GMT

- ASCOPE/Net-Flickr-RDF-2.2 - 23 Dec 2010 02:12:31 GMT
  Describe Flickr photos as RDF. This package inherits from *Net::Flickr::API*...
https://metacpan.org/search?q=RDF-Prefixes
How to normalize an array in NumPy?

I would like to have the norm of one NumPy array. More specifically, I am looking for an equivalent version of this function

    def normalize(v):
        norm = np.linalg.norm(v)
        if norm == 0:
            return v
        return v / norm

Is there something like that in sklearn or numpy? This function works even in the situation where v is the zero vector.

If you're using scikit-learn you can use sklearn.preprocessing.normalize:

    import numpy as np
    from sklearn.preprocessing import normalize

    x = np.random.rand(1000) * 10
    norm1 = x / np.linalg.norm(x)
    norm2 = normalize(x[:, np.newaxis], axis=0).ravel()
    print(np.all(norm1 == norm2))
    # True

From: stackoverflow.com/q/21030391
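For completeness, here is the asker's function with its zero-vector guard, plus a quick check that the result really has unit length. This is a minimal sketch using only the NumPy calls already shown:

```python
import numpy as np

def normalize(v):
    # Scale v to unit length; return the zero vector unchanged.
    norm = np.linalg.norm(v)
    if norm == 0:
        return v
    return v / norm

unit = normalize(np.array([3.0, 4.0]))
print(unit)                    # [0.6 0.8]
print(np.linalg.norm(unit))    # ~1.0
print(normalize(np.zeros(3)))  # [0. 0. 0.] -- no division by zero
```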
https://python-decompiler.com/article/2014-01/how-to-normalize-an-array-in-numpy
Type: Posts; User: raptor88

Thanks for your info. Raptor

Thanks for that clarification. Raptor

Hi Paul, Could you explain the difference between using "extern" on a function versus a global function? Say I create two files named Globals.h and Globals.cpp. I code a function in...

Scratch this question. I found that VS "Express" 2012 is free. I was probably looking at the non-Express version before. Answers for the other questions still needed though. Thanks, Raptor

The "Windows 8 compatibility assistant" says that Visual C++ 2010 Express installed in my Vista laptop is not compatible with Windows 8 Pro. However, I've learned that many programs that Win8's...

AH HA MOMENT! I can see now that when the vector of structs is instantiated, space for the string is not allocated "within" the struct. Just a pointer to the string is the actual member in the...

This is what I'm trying to understand. How is the sizeof(MyStruct) the same for every struct when there is a variable length string in the struct? Without a variable length member in the struct...

Totally right. Being new to C++, I used the parenthesis for "myVect(8)" in my actual test code but used the [] in my example here. Thanks for catching that. My understanding is that a vector...

A new question regarding vectors and iterators. I declared a struct that contains integers and one string. Then I instantiated a vector to hold the structs and set the total number of structs. ...

I wrote a test program and yes, pre-increment doesn't skip the first element. Googling shows that a for loop works like a while loop with the increment occurring at the end of the loop. Also...

Understand. Going back to your previous post, why is pre-increment faster than post-increment? And won't pre-increment skip the first element in the for loop? Thanks, Raptor

Actually no, I'm not preferring method-3 because of less typing. Method-2 uses "vector<int>::size_type" in place of Method-3's "unsigned" which is not that much more typing. Method 2: for...

I put my question in that form to simplify what I was trying to ask. My basic question was whether Method-3 should always work under the conditions I set. Just trying to understand how vector...

C++ beginner learning how to use vectors. I see that there are 3 ways to iterate through a vector. Method 1: for(vector<int>::iterator it = myVect.begin(); it != myVect.end(); it++) ...

My bad Paul. In hindsight I can see why you think my plan is to code most everything using globals and not using classes. The code I posted was just extending your example to verify whether my...

Would that be moving the enum statement before the struct statement? Since I need to access the betInfoMap[4] in my "GetBets", "PayBets" and on every iteration of the crap table drawing routine...

Is this the correct way to use Paul's code of the map of structs in an array for 4 players?

    #include "stdafx.h"
    #include <string>
    #include <iostream>
    #include <map>
    using namespace std;

Thanks for clearing that up Paul. Learned a lot from this thread. I did plan to only use your method in the future since it's easier to type but now I know it also works better. Thanks, Raptor

Guess I need to do more research since I really don't understand why the code I showed works different from Paul's. Thanks for pointing that out.

For other beginning C++ learners, here's a substitute for Paul's struct code. Tested and it works identically. // The following code is equivalent to Paul's struct code and it works...

Yes, I did realize that but failed to say that "structs are like classes except they are always public by default". Thanks for clarifying it just in case I didn't realize it. Yes, once one...

Hi Paul, Thank you very much for taking the time to explain your code. Who would have guessed that even structs can have constructors. C++ books and web tutorials are so simplistic that they...

OK, I am more than willing to buy another C++ book. Please tell me the name of a book that explains what Paul did with his constructor statement within a struct. None of my 6 or 7 C++ books nor hours...

Hi Paul, Going over your code, that's an extremely clever method of setting up the structure so it can be initialized in the format you show in main. I Googled for over 2 hours and could never...

That is really helpful Paul. I'll try to figure out how to use that for 4 players. Maybe an array to hold the maps. Thanks! Raptor
http://forums.codeguru.com/search.php?s=22de28a9e772b91125217aebfecb9f71&searchid=6640169
BBC micro:bit Bit:Commander Neopixels

Introduction

The Bit:Commander has 6 Neopixels. They are at the top of the board and handily labelled 0 to 5. The Neopixels are connected to pin 13 on the micro:bit.

The Neopixels on the Bit:Commander don't need any special treatment. You program the Neopixels as you normally would. The following program cycles through the button colours, blinking them on and off.

    from microbit import *
    import neopixel

    # Initialise neopixels
    npix = neopixel.NeoPixel(pin13, 6)

    # light all neopixels with given colour
    def light_all(col):
        for pix in range(0, len(npix)):
            npix[pix] = col
        npix.show()

    red = (64,0,0)
    green = (0,64,0)
    blue = (0,0,64)
    yellow = (64,24,0)
    off = (0,0,0)
    cols = [red,green,blue,yellow,off]

    while True:
        for c in cols:
            light_all(c)
            sleep(1000)
            light_all(off)
            sleep(1000)

Simple Joystick/Button Example

This second program is a little cuter. You hold down one of the 4 pushbuttons and then move the joystick left or right. The button colour is 'wiped' across the LEDs in the direction indicated. Whilst it might not be the most useful of gadgets, it is quite a nice feeling to control the colours like this.
    # Hold down a button and move the joystick left or right to
    # wipe a colour across the LED chain in the direction of
    # choice

    from microbit import *
    import neopixel

    # Initialise neopixels
    npix = neopixel.NeoPixel(pin13, 6)

    red = (64,0,0)
    green = (0,64,0)
    blue = (0,0,64)
    yellow = (64,24,0)
    off = (0,0,0)

    button_pins = [pin12, pin15, pin14, pin16]
    ledcols = [red,blue,green,yellow]

    def wipe(col, delay, left):
        for pix in range(0, 6):
            if left:
                npix[pix] = col
            else:
                npix[5-pix] = col
            npix.show()
            sleep(delay)

    while True:
        # check button presses
        # no multitouch, priority is y,g,b,r
        chosen = off
        for i in range(4):
            if button_pins[i].read_digital():
                chosen = ledcols[i]
        # now check joystick X for wipe direction
        x = pin1.read_analog()
        if x<150:
            wipe(chosen,50,False)
            sleep(500)
        elif x>750:
            wipe(chosen,60,True)
            sleep(500)
        sleep(50)
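A common extension of the sketches above is a rainbow effect. This colour-wheel helper is pure arithmetic (the name `wheel` is a convention from Neopixel examples, not part of the micro:bit API), so it can be tested off-device and then dropped into the loops above:

```python
# Colour-wheel helper: map a position 0-255 onto a red -> green -> blue
# rainbow. Pure arithmetic, so it runs anywhere, not just on the micro:bit.

def wheel(pos):
    pos = pos % 256
    if pos < 85:
        return (255 - pos * 3, pos * 3, 0)
    if pos < 170:
        pos -= 85
        return (0, 255 - pos * 3, pos * 3)
    pos -= 170
    return (pos * 3, 0, 255 - pos * 3)

# Spread the wheel across the 6 Bit:Commander pixels:
rainbow = [wheel(i * 256 // 6) for i in range(6)]
print(rainbow[0])  # (255, 0, 0)
```

On the board you would then set `npix[i] = rainbow[i]` for each pixel and call `npix.show()`, stepping the wheel positions each loop to animate the rainbow.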
http://www.multiwingspan.co.uk/micro.php?page=bcneo
15 September 2008 10:02 [Source: ICIS news] (adds share price, updates throughout)

LONDON (ICIS news)--Shares in Ciba Specialty Chemicals rocketed by more than 25% in early Monday trading as investors reacted positively to a takeover bid from German chemicals giant BASF.

BASF offered Swfr6.1bn ($5.5bn/€3.9bn) to buy the Basel, Switzerland-based company, which recommended its shareholders accept the offer. The offer represented Swfr50 a share, a 32% premium over the closing price of Ciba shares on Friday 12 September, and a premium of 64.3% over the volume-weighted average price of Ciba shares over the last 60 trading days, BASF said.

According to media reports, there would certainly be job cuts in connection with the takeover. However, Ciba had already planned to cut around 1,200 jobs by the end of 2009 as part of its long-term restructuring programme.

Investment bank UBS, in ongoing calls with the companies, said BASF expected the deal to be earnings-accretive in the second year after the deal. Given the relatively small size of Ciba - BASF's 2008 earnings before interest, tax, depreciation and amortisation (EBITDA) was €11.2bn while Ciba's was €470m - the impact on BASF at group level would be small, it said. The bank added that there were some overlaps in paper chemicals, but argued that the main reason for the acquisition is to expand BASF's specialty chemicals portfolio.

At 10:05 local time (08:05 GMT), Ciba's shares surged to Swfr47.45, 25% up on Friday's close. Ciba's share value has nearly halved since the third quarter of 2006 as the company struggled with low margins. In August analysts cut ratings and earnings estimates for Ciba after the firm reported second-quarter operating profits 50% below consensus.

Ciba said integration into BASF would strengthen the businesses through access to BASF's global research, production and marketing platform, raw materials and intermediates.
"BASF and Ciba have reached a transaction agreement in which the board of directors of Ciba supports BASF's attractive offer and recommends its acceptance to Ciba's shareholders," Ciba said in a statement.

"Ciba strengthens BASF's strategy and operations in the field of specialised chemical engineering through its leading innovation capabilities and application expertise in plastics additives, coating effects and water and paper treatment," it added.

A spokeswoman at Ciba said the company would hold an extraordinary general meeting at the end of November to seek approval from its shareholders. The transaction further requires the approval of the relevant merger control authorities.

($1 = Swfr1.11)
http://www.icis.com/Articles/2008/09/15/9156139/ciba-shares-rocket-on-basf-takeover-offer.html
CC-MAIN-2015-22
refinedweb
432
51.99
I cannot believe it is so fragile ... is_devfsd_or_child() simplemindedly checks for pgrp:

    static int is_devfsd_or_child (struct fs_info *fs_info)
    {
        if (current == fs_info->devfsd_task) return (TRUE);
        if (current->pgrp == fs_info->devfsd_pgrp) return (TRUE);
        return (FALSE);
    }   /*  End Function is_devfsd_or_child  */

Unfortunately, bash (I do not know if it does it always or not) resets pgrp on startup. I.e., if your action is using a shell, it is no longer considered a devfsd descendant ... and the kernel will in turn attempt to start a devfsd action while devfsd is waiting for the first one to finish.

Thierry, I refer mostly to dynamic scripts currently. Every time I update devfsd it hangs in one of them. And actually it is enough to do "service devfsd restart" to trigger this. It may be 2.5-specific again, in that it is not as easily seen under 2.4.

I have no idea what can be done. Is there any way in the kernel to find out if one task is a descendant of another task? Even rewriting devfsd to use non-blocking calls and a request queue does not help, as it apparently just results in an endless loop (action triggering action triggering action ...)

-andrey
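The fragility Andrey describes can be illustrated with a toy model (Python, purely illustrative; the kernel code is C and the task table and field names here are hypothetical stand-ins for the kernel's task structures): a bare process-group check misclassifies a descendant that has started its own group, while a walk over parent pointers does not.

```python
# Toy model of the devfsd check: each "task" records its parent and its
# process group. The names and fields are illustrative stand-ins for the
# kernel's task structures, not real kernel data.
tasks = {
    "devfsd": {"parent": None, "pgrp": 100},
    "action": {"parent": "devfsd", "pgrp": 100},  # inherits devfsd's pgrp
    "bash":   {"parent": "action", "pgrp": 200},  # resets pgrp on startup
}

def is_child_by_pgrp(name, devfsd_pgrp=100):
    # Mirrors is_devfsd_or_child(): a bare pgrp comparison.
    return name == "devfsd" or tasks[name]["pgrp"] == devfsd_pgrp

def is_descendant(name, ancestor="devfsd"):
    # The ancestry walk Andrey asks about: follow parent pointers,
    # which survives a child calling setpgrp().
    while name is not None:
        if name == ancestor:
            return True
        name = tasks[name]["parent"]
    return False

print(is_child_by_pgrp("bash"))  # False: bash reset its pgrp
print(is_descendant("bash"))     # True: it is still a descendant
```

The pgrp check says "not a descendant" for exactly the shell-spawned actions that hang, while the parent walk classifies them correctly.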
https://lkml.org/lkml/2003/7/11/232
CC-MAIN-2017-22
refinedweb
216
56.86
decryption and padding in java cryptography
934529 May 2, 2012 3:18 PM

I have to decrypt a frame on my server. The encrypted frame is coming from a client device through GPRS on a socket. Encryption is done with "TripleDes" and with a given key. I am using the same algorithm and key on the server side. The frame is a combination of Hex and ASCII string. The problem is that when I decrypt this frame I get an error:

Here are the functions which I am using inside the code:

    if ((len = inputStream.read(mainBuffer)) > -1) {
        totalLength = len;
    }
    if (totalLength > 0) {
        byteToAscii = function.byteToAscii(mainBuffer, totalLength);
    }
    if (byteToAscii.length() > 0) {
        completeHexString = function.stringToHex(byteToAscii);
        debugInfo = "FRAME RECV.=" + completeHexString; /* FRAME RECV.*/
    }
    byte[] key = new byte[]{31, 30, 31, 36, 32, 11, 11, 11, 22, 26, 30, 30,
                            30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30};
    myKeySpec = new DESedeKeySpec(key);
    mySecretKeyFactory = SecretKeyFactory.getInstance("TripleDES");
    dekey = mySecretKeyFactory.generateSecret(myKeySpec);
    byte[] zeros = {0, 0, 0, 0, 0, 0, 0, 0};
    IvParameterSpec iv = new IvParameterSpec(zeros);
    Cipher c = Cipher.getInstance("TripleDES/CBC/PKCS5Padding");
    c.init(Cipher.DECRYPT_MODE, dekey, iv);
    byte[] decordedValue = new BASE64Decoder().decodeBuffer(completeHexString);
    byte[] decValue = c.doFinal(decordedValue);
    String decryptedValue = new String(decValue);
    System.out.println("decryptedValue= " + decryptedValue);

I am new in Java cryptography. Please tell me how to do it?
    public String stringToHex(String base) {
        StringBuffer buffer = new StringBuffer();
        int intValue = 0;
        for (int x = 0; x < base.length(); x++) {
            intValue = base.charAt(x);
            String hex = Integer.toHexString(intValue);
            if (hex.length() == 1) {
                buffer.append("0" + hex);
            } else {
                buffer.append(hex);
            }
        }
        return buffer.toString();
    }

    public String byteToAscii(byte[] b, int length) {
        String returnString = "";
        for (int i = 0; i < length; i++) {
            returnString += (char) (b[i] & 0xff);
        }
        return returnString;
    }

Thanks in advance.

1. Re: decryption and padding in java cryptography
sabre150 May 2, 2012 3:28 PM (in response to 934529)
Surely you should be hex decoding and not base64 decoding! Looking at it again, it is not clear to me why you are base64 decoding at all. If the 'completeHexString' value shown is typical then it does not represent a hex encoding of base64 encoded data or double hex encoded data at all. For both base64 and hex, the hex decoding of "41ed34a41" should be ASCII characters if the ciphertext is either hex or base64 encoded, but 'ed' most definitely is not an ASCII character.
Edited by: sabre150 on May 2, 2012 3:47 PM

2. Re: decryption and padding in java cryptography
934529 May 3, 2012 6:21 AM (in response to sabre150)
Thanks for your reply. I have modified my code. I am not decoding it with Base64 now, but now I am getting the exception "Given final block not properly padded". Can you post the code which can work properly to decrypt my frame?

    Cipher c = Cipher.getInstance("TripleDES");
    c.init(Cipher.DECRYPT_MODE, key);
    int l = completeHexStr.length();
    if (l % 8 == 1) completeHexStr = completeHexStr + "0000000";
    // ... up to ...
    else if (l % 8 == 7) completeHexStr = completeHexStr + "0";
    byte decordedValue[] = completeHexString.getBytes();
    byte[] decValue = c.doFinal(decordedValue);

3.
Re: decryption and padding in java cryptography
EJP May 3, 2012 7:03 AM (in response to 934529)
"I am not decoding it with Base64 now." You're not decoding it at all now. You need to write yourself an inverse function for stringToHex(), and use it.

4. Re: decryption and padding in java cryptography
934529 May 3, 2012 8:00 AM (in response to EJP)
This is the code in C used for encryption at the client side. Do I need to decode the data on my side?

    #include <svc_sec.h>

    const unsigned char fixed_key[] = {
        0x31, 0x30, 0x31, 0x36, 0x32, 0x11, 0x11, 0x11,
        0x22, 0x26, 0x30, 0x30, 0x30, 0x30, 0x30, 0x30,
        0x30, 0x30, 0x30, 0x30, 0x30, 0x30, 0x30, 0x30};

    int Comm_Encrypt_Data(unsigned char *Test_Input_Data, int Len_Input_Data)
    {
        int Count_Input_Data, Counter_Input_Data;
        unsigned long Timer_1;
        unsigned char Init_Vector[8];
        int Counter_Init_Vector, Temp_Byte_Count;
        unsigned char *Temp_Dst_Ptr, *Temp_Src_Ptr;
        unsigned char Temp_Input_Frame[9], Temp_Output_Frame[9];
        unsigned char Test_Output_Data[500];
        unsigned char Test_Key_Arr[9];

        memset(&Init_Vector[0], '\0', sizeof(Init_Vector));
        memset(Test_Key_Arr, '0', sizeof(Test_Key_Arr));
        memcpy(Test_Key_Arr, &fixed_key[0], 8);
        Test_Key_Arr[sizeof(Test_Key_Arr)-1] = '\0';
        Display_Data("KEY: ", Test_Key_Arr, sizeof(Test_Key_Arr)-1);
        memset(Test_Output_Data, '\0', sizeof(Test_Output_Data));
        memcpy(Test_Output_Data, Test_Input_Data, 48);
        Count_Input_Data = Len_Input_Data - 48 - 3; //minus Data before payload, 3 bytes of '|' and CRC
        Counter_Input_Data = 0;
        while (Counter_Input_Data < Count_Input_Data)
        {
            Temp_Byte_Count = Count_Input_Data - Counter_Input_Data;
            if (Temp_Byte_Count > 8)
                Temp_Byte_Count = 8;
            memcpy(Temp_Input_Frame, &Test_Input_Data[48+Counter_Input_Data], Temp_Byte_Count);
            //succeeding bytes to be 0
            if (Temp_Byte_Count < 8)
            {
                memset(&Temp_Input_Frame[Temp_Byte_Count], '0', (8-Temp_Byte_Count));
            }
            Display_Data("InPut Data Before Init", Temp_Input_Frame, Temp_Byte_Count);
            //============Initialize the data
            Temp_Dst_Ptr = (unsigned char *)Temp_Input_Frame;
            Temp_Src_Ptr = (unsigned char *)&Init_Vector[0];
            for (Counter_Init_Vector = 0; Counter_Init_Vector < 8; Counter_Init_Vector++)
                *Temp_Dst_Ptr++ ^= *Temp_Src_Ptr++;
            //============Initializing data ends
            DES(DESE, (unsigned char *)&Test_Key_Arr[0],
                (unsigned char *)&Temp_Input_Frame[0], (unsigned char *)&Temp_Output_Frame[0]);
            //DES(TDES3KE, (unsigned char *)&Test_Key_Arr[0],
            //    (unsigned char *)&Temp_Input_Frame[0], (unsigned char *)&Temp_Output_Frame[0]);
            Display_Data("AFTER DES::::", Temp_Output_Frame, Temp_Byte_Count);
            memcpy(&Test_Output_Data[48+Counter_Input_Data], Temp_Output_Frame, Temp_Byte_Count);
            Counter_Input_Data += Temp_Byte_Count;
            if (Counter_Input_Data < Count_Input_Data)
            {
                memcpy(Init_Vector, Temp_Output_Frame, 8);
            }
        }
        {
            memset(Test_Input_Data, '\0', Len_Input_Data);
            memcpy(&Test_Input_Data[0], &Test_Output_Data[48], Counter_Input_Data); //1 Separator + 2 CRCs
        }
        Display_Data("Final Output Frame", Test_Input_Data, Counter_Input_Data);
        return Counter_Input_Data;
    }

Edited by: sabre150 on May 3, 2012 8:59 AM
Moderator action: once again I have added [code] tags to make the code readable.

5. Re: decryption and padding in java cryptography
EJP May 3, 2012 7:46 AM (in response to 934529)
Actually you need to get rid of both byteToAscii() and stringToHex() altogether. The C code is just sending the ciphertext; you are receiving it in the byte[] buffer, and you should be decoding that directly instead of transforming it two or three times.

6. Re: decryption and padding in java cryptography
sabre150 May 3, 2012 8:24 AM (in response to 934529)
Cross posted. Points to note -
1) The C code does not encode either as Base64 or Hex.
2) The C code does not use PKCS5 padding; it pads with bytes of the character '0' ('0' and not '\0').
3) The ciphertext seems to have the unencrypted 48 byte header, so you would seem to need to remove that before decryption.
4) Your Java key bytes do not match the C key bytes.
The C code does seem to use DESede with CBC mode using an IV of all zeros, but you will need to decrypt using 'NoPadding' and then remove the padding bytes yourself. Presumably the 48 byte header has information that will allow the unambiguous removal of the padding bytes.

7. Re: decryption and padding in java cryptography
934529 May 3, 2012 8:12 AM (in response to sabre150)
Can you tell me, after seeing this C code at the client end, what I should do to decrypt this hex string at my side? The C code is not in my hands. Please post an example to decrypt this frame.

8. Re: decryption and padding in java cryptography
934529 May 3, 2012 8:15 AM (in response to sabre150)
Yes, you are right. I have already removed the unencrypted text. What remains is the encrypted part only.

9. Re: decryption and padding in java cryptography
sabre150 May 3, 2012 8:21 AM (in response to 934529)
Please note the edits to my previous reply.

10. Re: decryption and padding in java cryptography
sabre150 May 3, 2012 8:47 AM (in response to 934529)
931526 wrote: "can you tell me after seeing this c code at client end that what should i do to decrypt this hex string at my side. c code is not in my hand. pl post an example to decrypt this frame."
This site does not provide a free coding service. Even if I was willing to provide code, then since I do not know what the cleartext was that produced that ciphertext, I cannot sensibly test any code. If you update your Java code according to my suggestions, create an SSCCE that I can run, and provide the cleartext associated with the ciphertext, I will run the code and check it out.

11. Re: decryption and padding in java cryptography
934529 May 3, 2012 9:15 AM (in response to sabre150)
Thanks.
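The advice in the thread boils down to: hex-decode the frame yourself (the inverse of stringToHex()), decrypt with NoPadding, and strip the '0' padding manually. The decoding step can be sketched like this (Python for brevity; the thread's code is Java, and the function names here are my own):

```python
def string_to_hex(s):
    # Python rendering of the poster's stringToHex(): each character
    # becomes a two-digit lowercase hex pair.
    return "".join("%02x" % ord(c) for c in s)

def hex_to_bytes(hex_str):
    # The inverse EJP asks for: consume the string two hex digits at a time.
    return bytes(int(hex_str[i:i + 2], 16) for i in range(0, len(hex_str), 2))

def strip_zero_padding(plaintext):
    # Per sabre150's point 2: the C client pads with '0' characters
    # (0x30), not PKCS5, so padding must be removed manually after a
    # NoPadding decrypt. This naive strip assumes the real payload
    # does not itself end in '0'.
    return plaintext.rstrip(b"0")

ct_hex = string_to_hex("abc")  # "616263"
print(hex_to_bytes(ct_hex))    # b'abc' -- round-trips
```

In Java the same inverse is a short loop over character pairs using Integer.parseInt(pair, 16); the point is that the received bytes are raw ciphertext, so only one hex round-trip (if any) is needed.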
https://community.oracle.com/thread/2385391?tstart=150
CC-MAIN-2017-22
refinedweb
1,367
54.73
02 March 2011 17:12 [Source: ICIS news] LONDON (ICIS)--Here is Wednesday's end-of-day European oil and chemical market summary from ICIS.

CRUDE: April WTI: $101.53/bbl, up $1.92/bbl. April BRENT: $116.92/bbl, up $1.62/bbl. The continuing unrest in

NAPHTHA: $1,003-1,011/tonne, up $21/tonne. The cargo range made significant gains from earlier in the day, driven by higher crude oil prices and a stronger crack spread. March swaps were assessed at $1,000-1,001/tonne.

BENZENE: $1,360-1,380/tonne, down $10/tonne on the buy side. The market was quiet this afternoon, and the bid level fell by $10/tonne. April was backwardated at $1,315-1,335/tonne.

STYRENE: $1,525-1,535/tonne, down $10/tonne on the sell side. Afternoon offers came down to $1,535/tonne in a quiet market. There was no firm range discussed for April, and it was assessed marginally lower at $1,520-1,530/tonne.

TOLUENE: $1,080-1,100/tonne, steady. The market was quiet, with no firm business seen. The March range was steady.

MTBE: $1,149-1,150/tonne, up $24-25/tonne. Prices continued to push higher in line with similar gains made in the gasoline market. Eurobob gasoline traded at $984-990/tonne, putting the factor against cash barges at 1.16-1.17, down one point on the low side.

PARAXYLENE: $1,780-1,800/tonne, steady. The market remained quiet today, with no firm buy/sell interest
http://www.icis.com/Articles/2011/03/02/9439976/evening-snapshot-europe-markets-summary.html
CC-MAIN-2014-15
refinedweb
261
77.33
Building several parsing modules
Discussion in 'Python' started by Robert Neville, Mar 18, 2007.

Similar threads:
- 339 views - François Pinard - Jul 17, 2003
- Python and CVS: several modules, one namespace - Thomas Weholt, Jan 15, 2004, in forum: Python - Replies: 0 - Views: 260 - Thomas Weholt - Jan 15, 2004
- Several issues with building a user control - Shannon Cayze, Jun 30, 2003, in forum: ASP .Net Building Controls - Replies: 1 - Views: 139 - ctmhz - Jul 5, 2003
- RDOC: several related modules in several C files - Victor "Zverok" Shepelev, Mar 6, 2007, in forum: Ruby - Replies: 3 - Views: 188 - Max Lapshin - Mar 16, 2007
- Same problem building several modules on XP using Visual Studio .NET - kz, Feb 12, 2004, in forum: Perl Misc - Replies: 0 - Views: 134 - kz - Feb 12, 2004
http://www.thecodingforums.com/threads/building-several-parsing-modules.485245/
CC-MAIN-2014-42
refinedweb
123
58.45
Sometimes you'd like to write your own code for producing data to an Apache Kafka® topic and connecting to a Kafka cluster programmatically. Confluent provides client libraries for several different programming languages that make it easy to code your own Kafka clients in your favorite dev environment. One of the most popular dev environments is .NET and Visual Studio (VS) Code.

This blog post shows you step by step how to use .NET and C# to create a client application that streams Wikipedia edit events to a Kafka topic in Confluent Cloud. Also, the app consumes a materialized view from ksqlDB that aggregates edits per page. The application runs on Linux, macOS, and Windows, with no code changes. C# was chosen for cross-platform compatibility, but you can create clients by using a wide variety of programming languages, from C to Scala. For the latest list, see Code Examples for Apache Kafka®.

The app reads events from WikiMedia's EventStreams web service—which is built on Kafka! You can find the code here: WikiEdits on GitHub.

The following diagram shows the data pipeline, transformations, and the app's topology. It was created with the Confluent Cloud Data Lineage feature, currently in Early Access. On the left, the node labeled rdkafka represents the producer app, which produces messages to the recent_changes topic, shown in the second node. The EDITS_PER_PAGE query, shown in the third node, consumes from the recent_changes topic, aggregates messages, and saves them to the sink topic in the fourth node, which for this cluster is named pksql-gnponEDITS_PER_PAGE.

Prerequisites:

Follow these steps to create the WikiEditStream project. Because VS Code and .NET are completely cross-platform, the same steps work on Linux, WSL 2, PowerShell, and macOS.

cd repos
code .
dotnet new console --name WikiEditStream

The dotnet new command creates the WikiEditStream directory for you and adds two files: Program.cs and WikiEditStream.csproj.
In the WikiEditStream directory:

cd WikiEditStream
dotnet run

Your output should resemble:

Hello World!

The WikiEditStream project requires Confluent's client library for .NET, which is available as a NuGet package named Confluent.Kafka. In the VS Code terminal, add the Confluent.Kafka NuGet package:

dotnet add package Confluent.Kafka

When the package is installed, the project is ready for the producer and consumer code. Your client code needs API credentials to connect to Confluent Cloud.

The Produce method implements these steps: In VS Code, open Program.cs and include the following namespaces:

using Confluent.Kafka;
using System;
using System.IO;
using System.Net.Http;
using System.Text.Json;
using System.Threading;
using System.Threading.Tasks;

Copy the following code and paste it into Program.cs, after the Main method:

// Produce recent-change messages from Wikipedia to a Kafka topic.
// The messages are sent from the RCFeed
// to the topic with the specified name.
static async Task Produce(string topicName, ClientConfig config)
{
    Console.WriteLine($"{nameof(Produce)} starting");

    // The URL of the EventStreams service.
    string eventStreamsUrl = "";

    // Declare the producer reference here to enable calling the Flush
    // method in the finally block, when the app shuts down.
    IProducer<string, string> producer = null;

    try
    {
        // Build a producer based on the provided configuration.
        // It will be disposed in the finally block.
        producer = new ProducerBuilder<string, string>(config).Build();

        // Create an HTTP client and request the event stream.
        using (var httpClient = new HttpClient())
        // Get the RC stream.
        using (var stream = await httpClient.GetStreamAsync(eventStreamsUrl))
        // Open a reader to get the events from the service.
        using (var reader = new StreamReader(stream))
        {
            // Read continuously until interrupted by Ctrl+C.
            while (!reader.EndOfStream)
            {
                // Get the next line from the service.
                var line = reader.ReadLine();

                // The Wikimedia service sends a few lines, but the lines
                // of interest for this demo start with the "data:" prefix.
                if (!line.StartsWith("data:"))
                {
                    continue;
                }

                // Extract and deserialize the JSON payload.
                int openBraceIndex = line.IndexOf('{');
                string jsonData = line.Substring(openBraceIndex);
                Console.WriteLine($"Data string: {jsonData}");

                // Parse the JSON to extract the URI of the edited page.
                var jsonDoc = JsonDocument.Parse(jsonData);
                var metaElement = jsonDoc.RootElement.GetProperty("meta");
                var uriElement = metaElement.GetProperty("uri");
                var key = uriElement.GetString();

                // Use the URI as the message key.
                // For higher throughput, use the non-blocking Produce call
                // and handle delivery reports out-of-band, instead of awaiting
                // the result of a ProduceAsync call.
                producer.Produce(topicName,
                    new Message<string, string> { Key = key, Value = jsonData },
                    (deliveryReport) =>
                    {
                        if (deliveryReport.Error.Code != ErrorCode.NoError)
                        {
                            Console.WriteLine($"Failed to deliver message: {deliveryReport.Error.Reason}");
                        }
                        else
                        {
                            Console.WriteLine($"Produced message to: {deliveryReport.TopicPartitionOffset}");
                        }
                    });
            }
        }
    }
    finally
    {
        var queueSize = producer.Flush(TimeSpan.FromSeconds(5));
        if (queueSize > 0)
        {
            Console.WriteLine("WARNING: Producer event queue has " + queueSize + " pending events on exit.");
        }
        producer.Dispose();
    }
}

The Consume method implements these steps: Copy the following code and paste it into Program.cs, after the Produce method:

static void Consume(string topicName, ClientConfig config)
{
    Console.WriteLine($"{nameof(Consume)} starting");

    // Configure the consumer group based on the provided configuration.
    var consumerConfig = new ConsumerConfig(config);
    consumerConfig.GroupId = "wiki-edit-stream-group-1";

    // The offset to start reading from if there are no committed offsets
    // (or there was an error in retrieving offsets).
    consumerConfig.AutoOffsetReset = AutoOffsetReset.Earliest;

    // Do not commit offsets.
    consumerConfig.EnableAutoCommit = false;

    // Enable canceling the Consume loop with Ctrl+C.
    CancellationTokenSource cts = new CancellationTokenSource();
    Console.CancelKeyPress += (_, e) =>
    {
        e.Cancel = true; // prevent the process from terminating.
        cts.Cancel();
    };

    // Build a consumer that uses the provided configuration.
    using (var consumer = new ConsumerBuilder<string, string>(consumerConfig).Build())
    {
        // Subscribe to events from the topic.
        consumer.Subscribe(topicName);
        try
        {
            // Run until the terminal receives Ctrl+C.
            while (true)
            {
                // Consume and deserialize the next message.
                var cr = consumer.Consume(cts.Token);

                // Parse the JSON to extract the URI of the edited page.
                var jsonDoc = JsonDocument.Parse(cr.Message.Value);
                var metaElement = jsonDoc.RootElement.GetProperty("meta");
                var uriElement = metaElement.GetProperty("uri");
                var uri = uriElement.GetString();

                Console.WriteLine($"Consumed record with URI {uri}");
            }
        }
        catch (OperationCanceledException)
        {
            // Ctrl+C was pressed.
            Console.WriteLine($"Ctrl+C pressed, consumer exiting");
        }
        finally
        {
            consumer.Close();
        }
    }
}

All that's left is to set up the client app's configuration. In the Main method, create a ClientConfig instance and populate it with the credentials from your Confluent Cloud cluster. Replace the default Main method with the following code:

static async Task Main(string[] args)
{
    // Configure the client with credentials for connecting to Confluent.
    // Don't do this in production code.
    var clientConfig = new ClientConfig();
    clientConfig.BootstrapServers = "<bootstrap-host-port-pair>";
    clientConfig.SecurityProtocol = Confluent.Kafka.SecurityProtocol.SaslSsl;
    clientConfig.SaslMechanism = Confluent.Kafka.SaslMechanism.Plain;
    clientConfig.SaslUsername = "<api-key>";
    clientConfig.SaslPassword = "<api-secret>";
    clientConfig.SslCaLocation = "probe"; // /etc/ssl/certs

    await Produce("recent_changes", clientConfig);
    //Consume("recent_changes", clientConfig);
    Console.WriteLine("Exiting");
}

Replace the bootstrap-host-port-pair, api-key, and api-secret configs with the strings you copied from Confluent Cloud.
Production code must never have hardcoded keys and secrets. They're shown here only for convenience. For production code, see Safe storage of app secrets in development in ASP.NET Core. Also, depending on your platform, there may be complexity around the SslCaLocation config, which specifies the path to your SSL CA root certificates. Details vary by platform, but specifying probe may be sufficient for most cases. For more information, see SSL in librdkafka.

Your program is ready to run, but it needs a topic in your cluster. In Confluent Cloud, create a topic named recent_changes and click Create with defaults.

In the VS Code terminal, build and run the program.

dotnet run

You should see editing events printed to the console, followed by batches of production messages.

Produce starting
...
Data string: {"$schema":"/mediawiki/recentchange/1.0.0","meta":{"uri":"" …
Data string: {"$schema":"/mediawiki/recentchange/1.0.0","meta":{"uri":"" …
Produced message to: recent_changes [[1]] @191
Produced message to: recent_changes [[5]] @119
Produced message to: recent_changes [[3]] @202
...

While the producer is running, return to Confluent Cloud and open the recent_changes topic to watch the messages arrive.

Now that you've produced messages to the recent_changes topic, you can consume them. In the Main method, comment out the call to Produce and uncomment the call to Consume.

// await Produce("recent_changes", clientConfig);
Consume("recent_changes", clientConfig);

In the VS Code terminal, build and run the program.

dotnet run

Your output should resemble:

Consume starting
Consumed record with URI
Consumed record with URI
Consumed record with URI
...

Because the consumer is configured with AutoOffsetReset.Earliest, it reads from the first message in the recent_changes topic to the most recent message and waits for more messages. Press Ctrl+C to stop the consumer.

Producing messages and simply passing them through to a consumer isn't particularly useful, so you need to add some logic.
The following steps show how to create a ksqlDB app that aggregates recent_changes records by URI and counts the number of edits that occur per page, providing a materialized view on the stream. Create a new ksqlDB application, name it recent_changes_app, and click Launch application. Provisioning your new application starts and may take a few minutes to complete.

The first step for implementing aggregation logic is to register a stream on the recent_changes topic. Copy and paste the following SQL into the query editor and click Run query. It registers a stream, named recent_changes_stream, on the recent_changes topic. The CREATE STREAM statement specifies the schema of the records.

CREATE STREAM recent_changes_stream (
  schema VARCHAR,
  meta STRUCT<uri VARCHAR, request_id VARCHAR, id VARCHAR, dt VARCHAR,
              domain VARCHAR, wiki_stream VARCHAR, wiki_topic VARCHAR,
              wiki_partition BIGINT, offset BIGINT>,
  id BIGINT,
  edit_type VARCHAR,
  wiki_namespace INT,
  title VARCHAR,
  comment VARCHAR,
  edit_timestamp BIGINT,
  user VARCHAR,
  bot VARCHAR,
  server_url VARCHAR,
  server_name VARCHAR,
  server_script_path VARCHAR,
  wiki VARCHAR,
  parsedcomment VARCHAR)
WITH (KAFKA_TOPIC='recent_changes', VALUE_FORMAT='JSON', PARTITIONS=6);

Your output should resemble:

To define the schema of the recent_changes_stream, you can infer the fields from one of the messages. You must rename the stream, topic, and partition fields, because these are keywords in ksqlDB SQL. It's fun to see a bit of the Kafka infrastructure that powers the WikiMedia stream in these messages—for example, the stream is named "mediawiki.recentchange", and the underlying Kafka topic is named "eqiad.mediawiki.recentchange".
{
  "$schema": "/mediawiki/recentchange/1.0.0",
  "meta": {
    "uri": "",
    "request_id": "bd5cd656-eef7-44e6-ac7a-360dbe1f92bc",
    "id": "b6aa54cd-ea42-4ac0-9900-54670804f575",
    "dt": "2021-05-04T18:29:29Z",
    "domain": "commons.wikimedia.org",
    "stream": "mediawiki.recentchange",
    "topic": "eqiad.mediawiki.recentchange",
    "partition": 0,
    "offset": 3157257551
  },
  "id": 1674896597,
  "type": "categorize",
  "namespace": 14,
  "title": "Category:Botanists from Poland",
  "comment": "[[:File:Teofil Ciesielski (-1906).jpg]] added to category",
  "timestamp": 1620152969,
  "user": "2A01:C22:842E:7000:E8EE:6BD9:C640:FC09",
  "bot": false,
  "server_url": "",
  "server_name": "commons.wikimedia.org",
  "server_script_path": "/w",
  "wiki": "commonswiki",
  "parsedcomment": "<a href=\"/wiki/File:Teofil_Ciesielski_(-1906).jpg\" title=\"File:Teofil Ciesielski (-1906).jpg\">File:Teofil Ciesielski (-1906).jpg</a> added to category"
}

The official schema is in the Wikimedia schemas/event/primary repo.

With the stream registered, you can query the records as they're appended to the recent_changes topic. In VS Code, edit the Main method to comment out the Consume call and uncomment the Produce call.

await Produce("recent_changes", clientConfig);
//Consume("recent_changes", clientConfig);

Run the app to start producing messages.

dotnet run

In the ksqlDB query editor, copy and paste the following query and click Run query:

select META -> URI from RECENT_CHANGES_STREAM EMIT CHANGES;

Your output should resemble:

This is a transient, client-side query that selects the URI from each record. It runs continuously until you cancel it. Click Stop to cancel the query.

With the stream defined, you can derive a table from it that aggregates the data. In the Add query properties section, configure the commit.interval.ms query property, which sets the output frequency from a table. The default is 30000 ms, or 30 seconds. Set it to 1000, so you wait for one second only before table updates appear.
commit.interval.ms = 1000

In the query editor, run the following statement to create the edits_per_page table, which is derived from recent_changes_stream and shows an aggregated view of the number of edits per Wikipedia page:

CREATE TABLE edits_per_page AS
  SELECT meta->uri, COUNT(*) AS num_edits
  FROM recent_changes_stream
  GROUP BY meta->uri
  EMIT CHANGES;

Copy the following SELECT statement into the editor and click Run query:

SELECT * FROM edits_per_page EMIT CHANGES;

After a one-second delay, your output should resemble:

Your consumer code can access the output from the ksqlDB app by consuming from the sink topic that receives the results from the edits_per_page query. Use the Confluent Cloud UI to find the name of the table topic.

In the Main method, comment out the Produce call and uncomment the Consume call. Replace recent_changes with the topic name for your table.

//Produce("recent_changes", clientConfig).Wait();
Consume("<table-topic-name>", clientConfig);

In the Consume method, get the URI from the message's key, and get the number of edits from the NUM_EDITS field in the message's value. Comment out the code for the recent_changes topic and uncomment the code for the ksqlDB sink topic.

Run the consumer to view the aggregation results.

dotnet run

Your output should resemble:

Consume starting
Consumed record with URI, edits = 1
Consumed record with URI, edits = 2
Consumed record with URI, edits = 5
...

.NET provides a true "write once, run anywhere" experience. If you have access to two different platforms, for example, Linux and Windows, you can run the app as a producer on one platform and a consumer on the other.
For example, you can replace the Produce and Consume calls in Main with some conditional logic:

if (platform == System.PlatformID.Unix)
{
    await Produce("recent_changes", clientConfig);
    //Consume("recent_changes", clientConfig);
}
else if (platform == System.PlatformID.Win32NT)
{
    Consume("recent_changes", clientConfig);
    //await Produce("recent_changes", clientConfig);
}

Here's the app running as a producer on Linux (WSL 2) and as a consumer in Windows PowerShell:

In this tutorial, you created a simple cross-platform client that produces Wikipedia edit messages to a Kafka topic and consumes a table of aggregated records from a ksqlDB application. This is demo code, and it can be improved in a number of ways, for example by moving secrets into an appsettings.json configuration file: see Secrets Management in .NET Applications.

With Confluent Cloud, you can develop cross-platform Kafka applications rapidly by using VS Code alongside the pipeline visualization and management features in the Confluent Cloud UI. Get started with a free trial of Confluent Cloud and use the promo code CL60BLOG for an additional $60 of free Confluent Cloud usage.*
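As a recap of what the ksqlDB table in the tutorial computes: EDITS_PER_PAGE is a streaming group-by-count keyed on the page URI. A toy batch equivalent, written in Python purely for illustration (the event shape mirrors only the meta.uri field of recent_changes records, and the real work happens inside ksqlDB):

```python
from collections import Counter

def edits_per_page(events):
    # Group-by-count over page URIs, mirroring
    # SELECT meta->uri, COUNT(*) ... GROUP BY meta->uri.
    counts = Counter()
    for event in events:
        counts[event["meta"]["uri"]] += 1
    return counts

# Three toy "recent change" events for two hypothetical pages.
events = [
    {"meta": {"uri": "https://en.wikipedia.org/wiki/A"}},
    {"meta": {"uri": "https://en.wikipedia.org/wiki/B"}},
    {"meta": {"uri": "https://en.wikipedia.org/wiki/A"}},
]

counts = edits_per_page(events)
print(counts["https://en.wikipedia.org/wiki/A"])  # 2
print(counts["https://en.wikipedia.org/wiki/B"])  # 1
```

The difference in the streaming setting is that ksqlDB maintains this counter incrementally and emits changed rows to the sink topic as new events arrive, rather than recomputing over a batch.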
https://www.confluent.io/en-gb/blog/build-cross-platform-kafka-applications-using-c-and-dotnet-5/
CC-MAIN-2022-27
refinedweb
2,335
50.53
Azure File Sync is now GA! Please learn more by reading our Azure blog post here!

Extend your on-premises file servers to Azure Files with Azure File Sync, which enables you to get the best of both the cloud and on-premises worlds.

Since Azure File Sync is a multi-master sync solution, it makes it easy to solve global access problems introduced by having a single point of access on-premises or in Azure, by replicating data between Azure File shares and servers anywhere in the world.

With Azure File Sync, we've introduced a very simple concept, the Sync Group, to help you manage the locations that should be kept in sync with each other. Every Sync Group has one cloud endpoint, which represents an Azure File share, and one or more server endpoints, each of which represents a path on a Windows Server. That's it! Everything within a Sync Group will be automatically kept in sync!

Azure File Sync also helps you leverage Azure to get control over your on-premises data. Since cloud tiering moves old and infrequently accessed files to Azure, it effectively helps you make unpredictable storage growth predictable. When disasters strike, Azure File Sync can help: simply set up a new Windows Server, install Azure File Sync, and the namespace is nearly instantly synced down as your cache is rebuilt.

Azure File Sync will be available, as a preview offering, this week (week of 9/25) - try it out! Please see our documentation for additional information about how to set up and configure Azure File Sync. If you are attending Ignite, come to our great sessions on Azure Files and Azure File Sync:
https://azure.microsoft.com/es-es/blog/announcing-the-public-preview-for-azure-file-sync/
CC-MAIN-2020-34
refinedweb
282
68.5
On Tue, Sep 30, 2008 at 12:42 PM, Terry Reedy tjreedy@udel.edu wrote:

Guido van Rossum wrote:
On Tue, Sep 30, 2008 at 11:13 AM, Georg Brandl g.brandl@gmx.net wrote:
Victor Stinner schrieb:
On Windows, we might reject bytes filenames for all file operations: open(), unlink(), os.path.join(), etc. (raise a TypeError or UnicodeError)

Since I've seen no objections to this yet: please no. If we offer a "lower-level" bytes filename API, it should work for all platforms.

In 3.0rc1, the listdir doc needs updating:

"os.listdir(path) Return a list containing the names of the entries in the directory. The list is in arbitrary order. It does not include the special entries '.' and '..' even if they are present in the directory. Availability: Unix, Windows. On Windows NT/2k/XP and Unix, if path is a Unicode object, the result will be a list of Unicode objects."

s/Unicode/bytes/ at least for Windows.

>>> os.listdir(b'.')
[b'countries.txt', b'multeetest.py', b't1.py', b't1.pyc', b't2.py', b'tem', b'temp.py', b'temp.pyc', b'temp2.py', b'temp3.py', b'temp4.py', b'test.py', b'z', b'z.txt']

The bytes names do not work however:

>>> t = open(b'tem')
Traceback (most recent call last):
  File "<pyshell#23>", line 1, in <module>
    t = open(b'tem')
  File "C:\Programs\Python30\lib\io.py", line 284, in __new__
    return open(*args, **kwargs)
  File "C:\Programs\Python30\lib\io.py", line 184, in open
    raise TypeError("invalid file: %r" % file)
TypeError: invalid file: b'tem'

Is this what you were asking?

No, that's because bytes is missing from the explicit list of allowable types in io.open. Victor has a one-line trivial patch for this. Could you try this though?

import _fileio
_fileio._FileIO(b'tem')

Guido van Rossum wrote:
No, that's because bytes is missing from the explicit list of allowable types in io.open. Victor has a one-line trivial patch for this. Could you try this though?

import _fileio
_fileio._FileIO(b'tem')

>>> import _fileio
>>> _fileio._FileIO(b'tem')
>>> _fileio._FileIO(3, 'r')
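As a footnote to the thread, the behaviour under discussion is easy to check in current Python 3, where bytes paths are accepted throughout: a bytes argument to os.listdir() yields bytes entries, and open() accepts a bytes path directly (the TypeError shown above was the io.open type-check gap that Victor's one-line patch addressed). A quick self-contained check:

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    # Create one file, named like the one in the transcript.
    path = os.path.join(d, "tem")
    with open(path, "w") as f:
        f.write("hello")

    # A bytes argument to listdir yields bytes entries...
    names = os.listdir(os.fsencode(d))

    # ...and open() accepts a bytes path directly.
    with open(os.fsencode(path)) as f:
        content = f.read()

print(names)    # [b'tem']
print(content)  # hello
```

os.fsencode() converts a str path to bytes using the filesystem encoding, which keeps the example portable across platforms.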
https://mail.python.org/archives/list/python-dev@python.org/thread/IV4EMWZDQXGSGXJQKUK5IXE6K3SNWSB4/?sort=date
Launchers and Choosers for Windows Phone

This article demonstrates how to use Launchers and Choosers in Windows Phone (Windows Phone 8 and Windows Phone 7.5).

Introduction

Windows Phone uses launchers and choosers to provide access to common device functionality and services. When a particular function is required, a launcher/chooser is called to start the built-in app and perform the required task; on completion, control returns to the calling app. The main difference between launchers and choosers is that a chooser usually returns data back to the app (for example, taking a photo) while a launcher does not (for example, sending an email).

Note: This architecture means that if you want to use a picture in your app, you fire off a chooser to take the photo rather than embedding camera functionality in your own app. This is a bit frustrating if you want to write your own camera, but it provides a consistent experience for users and makes device memory management a lot more controllable.

The example app described here uses a launcher to start the Media Player and play a video, and uses a chooser to launch the Camera and take a photo. Each of the Launchers and Choosers has its own set of properties. After setting any of those properties, we need to call the Show() method. For Choosers, we also need to implement an event handler for when the user has taken a picture, in order to get the image back to handle.

Implementation

First, create a project with the Windows Phone Application template. Once the project is created, add the namespace Microsoft.Phone.Tasks (needed for both choosers and launchers):

using Microsoft.Phone.Tasks;

Launch Media Player

To launch the Media Player using the Launcher API, first create an instance of MediaPlayerLauncher, then set the media file to be launched, and then call the Show() function. Calling Show() launches the device's default media player and starts playing the file.
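The original code listing did not survive extraction; the following is a reconstruction from the public MediaPlayerLauncher API, not the article's own code, and the video file name is an invented example:

```csharp
// Hypothetical sketch: play a video file bundled with the app package.
// "video.wmv" is an assumed file name, not taken from the original article.
MediaPlayerLauncher mediaPlayerLauncher = new MediaPlayerLauncher();
mediaPlayerLauncher.Media = new Uri("video.wmv", UriKind.Relative);
mediaPlayerLauncher.Location = MediaLocationType.Install;  // file ships inside the app
mediaPlayerLauncher.Controls = MediaPlaybackControls.Pause | MediaPlaybackControls.Stop;
mediaPlayerLauncher.Show();  // control passes to the built-in Media Player app
```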
Launch Camera chooser

For the Chooser API, we launch the built-in Camera and take a photo. When the user completes the task, an event is raised, and the event handler cameraCaptureTask_Completed() receives the photo in its result, which is then displayed on the screen.

Source Code

The full source code of the example is available here: File:LauncherAndChooser.zip
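The chooser wiring described above is not shown in the extracted text; this sketch follows the public CameraCaptureTask API, and the Image control name is an assumption for illustration:

```csharp
// Hypothetical sketch of the chooser flow described in the article.
// In a real app the task is typically a page-level field so it survives tombstoning.
CameraCaptureTask cameraCaptureTask = new CameraCaptureTask();
cameraCaptureTask.Completed += new EventHandler<PhotoResult>(cameraCaptureTask_Completed);
cameraCaptureTask.Show();  // launches the built-in Camera; control returns on completion

void cameraCaptureTask_Completed(object sender, PhotoResult e)
{
    if (e.TaskResult == TaskResult.OK)
    {
        // The captured photo comes back as a stream in e.ChosenPhoto.
        BitmapImage bitmap = new BitmapImage();
        bitmap.SetSource(e.ChosenPhoto);
        myImage.Source = bitmap;  // "myImage" is an assumed Image control on the page
    }
}
```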
http://developer.nokia.com/community/wiki/Launchers_and_Choosers_for_Windows_Phone
Get The Drop On ASP.NET MVC DropDownLists

DropDownList, ComboBox, call it what you like, but it always renders as an html select element. It has an opening <select> tag, and a closing </select> tag. In between, each "ListItem" is housed within an <option> tag. Optionally, they may be subdivided into <optgroup> elements for logical separation of related options. If you provide a value attribute to an option, that is the value that gets posted back when a form housing the select element is submitted. If you omit the value attribute, the text value of the option gets posted back. At its simplest, for example if you have a static list of items that needs to appear in a DropDown, you can simply put them in your View as html:

<select name="year">
  <option>2010</option>
  <option>2011</option>
  <option>2012</option>
  <option>2013</option>
  <option>2014</option>
  <option>2015</option>
</select>

Or, if the list is a little more dynamic - say you need to ensure that the starting year is incremented by 1 each New Year's Day - you can generate the options in a loop:

[WebForms]
<select name="year">
<% for (var i = 0; i < 6; i++){%>
  <option><%= DateTime.Now.AddYears(i).Year %></option>
<%}%>
</select>

[Razor]
<select name="year">
@for (var i = 0; i < 6; i++){
  <option>@(DateTime.Now.AddYears(i).Year)</option>
}
</select>

All of the above will render exactly the same html and end result. If your data comes from a database, you will more likely use one of the 8 overloads of the Html.DropDownList() extension method to create your DropDown. I won't cover all overloads, but it is worth looking at the main ones. The first one - public static string DropDownList(this HtmlHelper htmlHelper, string name) - simply accepts a string. Now the documentation currently says that the string should be the name of the form field, which isn't particularly helpful.
In fact, not only does it provide the resulting select element with a name and an id, but it also acts as the look-up for an item in the ViewBag having the same dynamic property as the string provided. This ViewBag property is then bound to the helper to create the <option> items. Consequently, the ViewBag property must be a collection of SelectListItems. Here's how to get the Categories from the Northwind sample database using LINQ to SQL to pass to a DropDownList using the first overload:

public ActionResult Index()
{
    var db = new NorthwindEntities();
    IEnumerable<SelectListItem> items = db.Categories
        .Select(c => new SelectListItem
        {
            Value = c.CategoryID.ToString(),
            Text = c.CategoryName
        });
    ViewBag.CategoryID = items;
    return View();
}

Notice that each SelectListItem must have a Value and a Text property assigned. These are bound at run-time to the value attribute of the option elements and the actual text value for the option. Notice also the odd name given to the ViewBag dynamic property "CategoryID". The reason for this is that the CategoryID is the value that will be passed when the form is submitted, so it makes sense to name it like this. In the View, the overload is used:

[WebForms]
<%= Html.DropDownList("CategoryID") %>

[Razor]
@Html.DropDownList("CategoryID")

And that's all that's needed to render the following HTML:

<select id="CategoryID" name="CategoryID">
  <option value="1">Beverages</option>
  <option value="2">Condiments</option>
  <option value="3">Confections</option>
  <option value="4">Dairy Products</option>
  <option value="5">Grains/Cereals</option>
  <option value="6">Meat/Poultry</option>
  <option value="7">Produce</option>
  <option value="8">Seafood</option>
</select>

The second overload - public static string DropDownList(this HtmlHelper htmlHelper, string name, IEnumerable<SelectListItem> selectList) - is one you quite often see in examples. With this overload, you can return an IEnumerable<SelectListItem> collection or a SelectList object.
We'll have a look at the View first, before seeing two methods of populating the ViewData with alternative objects:

[WebForms]
<%= Html.DropDownList("CategoryID", (IEnumerable<SelectListItem>) ViewBag.Categories) %>

[Razor]
@Html.DropDownList("CategoryID", (IEnumerable<SelectListItem>) ViewBag.Categories)

The first item to go into ViewBag will be the IEnumerable<SelectListItem> object. The code is pretty well identical to the previous example:

public ActionResult Index()
{
    var db = new NorthwindDataContext();
    IEnumerable<SelectListItem> items = db.Categories
        .Select(c => new SelectListItem
        {
            Value = c.CategoryID.ToString(),
            Text = c.CategoryName
        });
    ViewBag.Categories = items;
    return View();
}

The second passes a SelectList object to ViewBag:

public ActionResult Index()
{
    var db = new NorthwindDataContext();
    var query = db.Categories.Select(c => new { c.CategoryID, c.CategoryName });
    ViewBag.Categories = new SelectList(query.AsEnumerable(), "CategoryID", "CategoryName");
    return View();
}

Using a SelectList is slightly tidier in the Controller, and arguably in the View. The SelectList constructor has a couple of overloads which accept an object representing the selected value:

public ActionResult Index()
{
    var db = new NorthwindDataContext();
    var query = db.Categories.Select(c => new { c.CategoryID, c.CategoryName });
    ViewBag.CategoryId = new SelectList(query.AsEnumerable(), "CategoryID", "CategoryName", 3);
    return View();
}

The above will ensure that "Confections" is selected when the list is rendered.

Default Values

All of the examples so far show the first selectable option visible when the page loads. Most often, however, a default value is desirable, whether this is a blank value or a prompt to the user to "--Select One--" or similar. Another overload takes care of adding this - public static string DropDownList(this HtmlHelper htmlHelper, string name, IEnumerable<SelectListItem> selectList, string optionLabel).
[WebForms]
<%= Html.DropDownList("CategoryID", (SelectList) ViewBag.CategoryId, "--Select One--") %>

[Razor]
@Html.DropDownList("CategoryID", (SelectList) ViewBag.CategoryId, "--Select One--")

CSS and HTML attributes

Four of the overloads accept parameters for applying HTML attributes to the DropDownList when it is rendered. Two of them accept IDictionary<string, object> while the other two take an object. The object is an anonymous type. The following examples will both render identical html, applying a css class selector and a client-side onchange() event:

[WebForms]
<%= Html.DropDownList("CategoryID", (SelectList) ViewBag.CategoryId, "--Select One--",
      new Dictionary<string, object> { { "class", "myCssClass" }, { "onchange", "someFunction();" } }) %>

[Razor]
@Html.DropDownList("CategoryID", (SelectList) ViewBag.CategoryId, "--Select One--",
      new { @class = "myCssClass", onchange = "someFunction();" })

You should notice that the second version (the one using the anonymous type) has a property called "@class". This will render as a literal "class", but needs the @ sign in front of "class" because class is obviously a C# keyword. You might also wonder why there are two ways to add attributes. The second option, using the anonymous object, is a lot cleaner and surely would be the sensible choice. However, for one thing, the HTML5 specification includes the ability to add custom attributes to your html mark-up. Each attribute must be prefixed with "data-". If you attempt to create a property in a C# object with a hyphen in its name, you will receive a compiler error. The Dictionary<string, object> approach will solve that problem.

Where's My AutoPostBack?

One of the most common questions from developers used to the Web Forms model concerns AutoPostBack for DropDownLists in MVC. In Web Forms, it's easy enough to select your DropDownList in design view, head over to the Properties panel in your IDE and set AutoPostBack to true, or to tick the Use AutoPostBack option on the control's smart tag. Quite often, since it is that easy, developers give little thought to what happens behind the scenes when AutoPostBack is used.
In fact, an onchange attribute is added to the rendered DropDownList, which fires a javascript event handler, causing the form in which the DropDownList is housed to be submitted. This process has to be done manually within MVC. But it's quite simple. I'll show two ways of achieving this. One will use the most recent overload (above) which takes an object for htmlAttributes, and the other one will show how the same thing can be done using jQuery, unobtrusively. I haven't actually shown DropDownLists within a form element so far, but of course a DropDownList is useless outside of one. Here's the first alternative:

[WebForms]
<% using (Html.BeginForm("Index", "Home", FormMethod.Post, new { id = "TheForm" })){%>
  <%= Html.DropDownList(
        "CategoryID",
        (SelectList) ViewData["Categories"],
        "--Select One--",
        new{ onchange = "document.getElementById('TheForm').submit();" })%>
<%}%>

[Razor]
@using (Html.BeginForm("Index", "Home", FormMethod.Post, new { id = "TheForm" })){
  @Html.DropDownList(
      "CategoryID",
      (SelectList) ViewData["Categories"],
      "--Select One--",
      new{ onchange = "document.getElementById('TheForm').submit();" })
}

And the second that uses jQuery:

<script type="text/javascript">
  $(function() {
    $("#CategoryID").change(function() {
      $('#TheForm').submit();
    });
  });
</script>

[WebForms]
<%using (Html.BeginForm("Index", "Home", FormMethod.Post, new { id = "TheForm" })){%>
  <%=Html.DropDownList("CategoryID", (SelectList) ViewBag.CategoryId, "--Select One--") %>
<%}%>

[Razor]
@using (Html.BeginForm("Index", "Home", FormMethod.Post, new { id = "TheForm" })){
  @Html.DropDownList("CategoryID", (SelectList) ViewBag.CategoryId, "--Select One--")
}

ToolTips

Nothing in the existing set of HtmlHelpers for DropDownLists provides for adding tool tips to select list options at the moment. Tool tips are generated by adding a "title" attribute to each option in the list.
Now, this could be achieved by creating your own extension methods that allow you to specify that each option element should have a title, and then apply a value to the title attribute as each option is added to the select list. But that's a fair amount of work... Or, you could use jQuery to do this really easily:

<script type="text/javascript">
  $(function() {
    $("#CategoryID option").each(function() {
      $(this).attr({'title': $(this).html()});
    });
  });
</script>

Strongly Typed Helper

All of the examples so far have illustrated the use of the dynamic ViewBag collection to pass values from Controllers to Views. There are strongly typed Html Helpers for the DropDownList to cater for strongly typed views - with all that Intellisense support and compile-time checking. The following example shows a very simple ViewModel class called SelectViewModel:

using System.Collections.Generic;
using System.Web.Mvc;

public class SelectViewModel
{
    public string CategoryId { get; set; }
    public IEnumerable<SelectListItem> List { get; set; }
}

Here's a sample controller action that instantiates a SelectViewModel instance and passes it to the strongly typed view:

public ActionResult Index()
{
    var db = new NorthwindDataContext();
    var query = db.Categories.Select(c => new SelectListItem
    {
        Value = c.CategoryID.ToString(),
        Text = c.CategoryName,
        Selected = c.CategoryID.Equals(3)
    });
    var model = new SelectViewModel { List = query.AsEnumerable() };
    return View(model);
}

Notice how the selected item is identified as the IEnumerable collection of SelectListItems is built from the database query. Finally, in the (Razor) view, the dropdown is rendered with a default option added:

@model SelectViewModel
@Html.DropDownListFor(m => m.CategoryId, Model.List, "--Select One--")

Currently rated 4.60 by 160 people
Date Posted: Thursday, January 7, 2010 9:27 PM
Last Updated: Saturday, April 30, 2011 12:00 PM
Posted by: Mikesdotnetting
Total Views to date: 2489

January 8, 2010 4:22 AM from Mike Sharp
Hi, Mike, Could you provide a printable version of the article? And my name is Mike too :)

Saturday, January 9, 2010 8:37 AM from Mike
@Mike That's something I've been meaning to do. Watch this space. @ali62b I was thinking about looking at that separately.

Monday, January 18, 2010 3:26 PM from ali62b
Today I found a post which covers editing scenarios with DropDownList. However, I wanted to know if this is a preferred way in your opinion or not.

Sunday, March 14, 2010 4:06 PM from Mike J
Perhaps I am thick. Is the mechanism you described for choosing a default value the approach to take when editing and matching the selection to the user's previous choice?

Sunday, March 14, 2010 4:40 PM from Mike
@Mike J The default value is the one you normally see as the first one. What you need is one of the overloads that accepts a value for the Selected value.

Wednesday, May 19, 2010 12:36 AM from Jason
Mike, Is there a way for the client to enter text into the ddl instead of just selecting from the static list?

Thursday, May 20, 2010 3:27 PM from Mike
@Jason No. You are referring to something equivalent to a VB style Combo Box. In order to make one of those, you need to fiddle around with a Text box and a div, plus some CSS. There is no HTML Combo box as such.

Friday, January 21, 2011 7:58 PM from San
Thanks for the Article. It helped me a lot to understand.
Friday, May 6, 2011 6:31 AM from Gregg
Thank you very much... I was having trouble applying Html Attributes to a dropdown list but this article really helped!

Friday, April 27, 2012 3:11 PM from Araik
Thanks!

Monday, May 14, 2012 3:29 PM from Gift White
This is what I was looking for :) thank you

Sunday, May 20, 2012 9:13 AM from seyhan bakır
Thank you.. very useful post

Sunday, May 27, 2012 1:33 PM from Jade Tibbitts
Thanks for this. After pissing around for hours on end, your article was the only one which could help me. So big thanks.

Wednesday, June 13, 2012 1:36 PM from Stanislava Pavlova
Hi :) I want to ask something... Is this working in MVC 2? Because I'm trying but it doesn't work... I want to know if it's my mistake :) Thanks in advance!

Thursday, June 14, 2012 9:52 PM from Mike
@Stanislava, Yes, the code will work with MVC 2. You should post a question at

Friday, August 31, 2012 8:32 AM from flytomylife
Really thanks a lot.

Thursday, September 13, 2012 5:27 PM from Dave Stuart
Great article! I'm gonna use this as a reference.

Wednesday, September 19, 2012 10:14 PM from Ahmet
Really very useful. Thanks for sharing ;)

Monday, April 29, 2013 2:37 PM from sam
You rock man...

Wednesday, July 17, 2013 4:02 AM from Jatinder Kumar
Very great article. This article helped me a lot. Thank you very much.

Monday, October 7, 2013 1:34 PM from Upen
Thanks

Friday, November 29, 2013 11:14 PM from sanam
I saw your posts, they are so good. I think you may have solved my problem: I created a dropdownlist through a viewmodel using SelectList, and now I want to attach a link to each option of the dropdown, so that when I choose a name from the dropdown the page redirects according to that link. I hope that you understand my problem.

Saturday, December 7, 2013 9:13 PM from Mike
@sanam No I don't understand your problem.
http://www.mikesdotnetting.com/Article/128/Get-The-Drop-On-ASP.NET-MVC-DropDownLists
EVP_OpenInit, EVP_OpenUpdate, EVP_OpenFinal - EVP envelope decryption

#include <openssl/evp.h>

int EVP_OpenInit(EVP_CIPHER_CTX *ctx, EVP_CIPHER *type, unsigned char *ek,
                 int ekl, unsigned char *iv, EVP_PKEY *priv);
int EVP_OpenUpdate(EVP_CIPHER_CTX *ctx, unsigned char *out, int *outl,
                   unsigned char *in, int inl);
int EVP_OpenFinal(EVP_CIPHER_CTX *ctx, unsigned char *out, int *outl);

The EVP envelope routines are a high level interface to envelope decryption. They decrypt a public key encrypted symmetric key and then decrypt data using it.

EVP_OpenInit() initializes a cipher context ctx for decryption with cipher type. It decrypts the encrypted symmetric key of length ekl bytes passed in the ek parameter using the private key priv. The IV is supplied in the iv parameter.

EVP_OpenUpdate() and EVP_OpenFinal() have exactly the same properties as the EVP_DecryptUpdate() and EVP_DecryptFinal() routines, as documented on the EVP_EncryptInit(3) manual page.

It is possible to call EVP_OpenInit() twice in the same way as EVP_DecryptInit(). The first call should have priv set to NULL and (after setting any cipher parameters) it should be called again with type set to NULL.

If the cipher passed in the type parameter is a variable length cipher then the key length will be set to the value of the recovered key length. If the cipher is a fixed length cipher then the recovered key length must match the fixed cipher length.
http://www.squarebox.co.uk/cgi-squarebox/manServer/usr/share/man/man3/EVP_OpenFinal.3ssl
Visualize Airflow Workflows Without Airflow

During development, it's hard to visualize the connections mentioned in Python code. We take a look at how to validate the DAG without deploying it on Airflow.

Apache Airflow has gained a lot of traction in the data processing world. It is a Python-based orchestration tool. When I say "Python-based", it is not just that the application has been developed using Python. The directed acyclic graphs (DAGs), Airflow's term for workflows, are also written in Python. In other words, workflows are code. Many of the popular workflow tools like Informatica and Talend have visual tools that allow developers to lay out the workflow visually. As Airflow workflows are Python code, we are able to visualize the workflow only after uploading it. While this is an acceptable situation, in some cases it can become problematic because Airflow refuses to load the workflow due to errors. Additionally, during development, it is difficult to visualize all the connections mentioned in Python code.

While looking for a way to visualize the workflow, I came across the Sankey diagram. Not just that, I also came across a gist where the Python code to draw one has been conveniently packaged into a function. All I had to do was download the gist and include it in my program. Once I had the drawing code handy, I was left with the task of parsing the Airflow DAG and populating a data structure as needed to draw the chart. Below is the function I wrote. The function looks for the set_upstream function in the code. As set_upstream is used to connect the child task to the parent task, I had to place the parent and child properly in the list. I hope you also enjoy validating the DAG without having to deploy it on Airflow.
import os

import pandas as pd
import plotly.graph_objects as go

from gen_sankey import genSankey  # module name assumed; genSankey comes from the gist


def process_file(input_directory, filename, output_directory):
    input_path = os.path.join(input_directory, filename)
    output_path = os.path.join(output_directory, filename).replace(".py", ".html")
    data = list()
    with open(input_path, "r") as dag_file:
        for line in dag_file:
            line = line.strip()
            # child.set_upstream(parent) connects the child task to its parent
            if "set_upstream" in line:
                names = line.split(".")
                child = names[0]
                parent = names[1].replace("set_upstream(", "").replace(")", "")
                data.append([parent, child, 1])
    dataframe = pd.DataFrame(data, columns=["source", "target", "count"])
    fig = genSankey(dataframe, cat_cols=["source", "target"],
                    value_cols="count", title=input_path)
    go.Figure(fig).write_html(output_path)

Published at DZone with permission of Bipin Patwardhan. Opinions expressed by DZone contributors are their own.
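To make the parsing step concrete without Airflow, pandas, or plotly installed, here is a self-contained sketch of the same idea; the task names and DAG snippet are invented for illustration:

```python
import re

def extract_edges(dag_source):
    """Collect (parent, child) pairs from lines like child.set_upstream(parent)."""
    edges = []
    for line in dag_source.splitlines():
        match = re.match(r"(\w+)\.set_upstream\((\w+)\)", line.strip())
        if match:
            child, parent = match.groups()
            edges.append((parent, child))
    return edges

sample_dag = """
extract = BashOperator(task_id='extract')
transform.set_upstream(extract)
load.set_upstream(transform)
"""

print(extract_edges(sample_dag))  # [('extract', 'transform'), ('transform', 'load')]
```

Each pair is one link in the Sankey diagram, so feeding the list into a DataFrame with a constant count column reproduces the structure the article builds.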
https://dzone.com/articles/visualize-airflow-workflows-without-airflow?fromrel=true
This is the mail archive of the docbook@lists.oasis-open.org mailing list for the DocBook project. Jirka Kosek wrote: > Norman Walsh wrote: > >> | But SGML has no namespaces AFAIK. I think instead of >> | <sgmltag namespace="">... >> | I'd like to see >> | <xmltag namespace="">... >> | or >> | <elementname namespace="">... >> >> Yes, we should have added xmltag or renamed sgmltag years ago. But we >> didn't. And that's a separate RFE :-) > > > I was reading XML-REC yesterday to refresh it in my memory and it > reminded me of this "xmltag" issue... > > Unfortunately, "xmltag" is not a valid name for an element according to > the XML Recommendation: > > > [...] > I'm personally OK with the "sgmltag" element name, but if there is need > for something more XML related, a different name must be chosen: mltag, > tagxml, tag? --
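For readers checking the claim: by the XML 1.0 Name production, "xmltag" is syntactically a well-formed name; the obstacle is the separate rule that names beginning with the letters "xml" (in any case combination) are reserved for standardization, which is presumably what the poster had in mind. A rough sketch of both checks, using an ASCII-only simplification of the Name production:

```python
import re

# Simplified, ASCII-only approximation of the XML 1.0 Name production.
NAME = re.compile(r"[A-Za-z_:][A-Za-z0-9._:\-]*$")

def is_name(s):
    """Does s match the (simplified) Name production?"""
    return NAME.match(s) is not None

def is_reserved(s):
    """Names beginning with 'xml' (case-insensitive) are reserved by the spec."""
    return s[:3].lower() == "xml"

for candidate in ["xmltag", "sgmltag", "mltag", "tag"]:
    print(candidate, is_name(candidate), is_reserved(candidate))
```

So all of the alternatives floated at the end of the thread (mltag, tagxml, tag) pass both checks, while xmltag fails only the reservation rule.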
https://sourceware.org/legacy-ml/docbook/2003-10/msg00059.html
Note that there is a difference between "python file" and "python <file". In the latter case, input requests from the program, such as calls to input() and raw_input(), are satisfied from file.

When the interpreter is started in interactive mode, it prints a welcome message stating its version number and a copyright notice before printing the first prompt, e.g.:

python
Python 1.5.2b2 (#1, Feb 28 1999, 00:02:06) [GCC 2.8.1] on sunos5
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
>>>

Note that the hash, or pound, character, "#", is used to start a comment in Python.

If you want to read an additional start-up file from the current directory, you can program this in the global start-up file, e.g. "if os.path.isfile('.pythonrc.py'): execfile('.pythonrc.py')". If you want to use the startup file in a script, you must do this explicitly in the script:

import os
filename = os.environ.get('PYTHONSTARTUP')
if filename and os.path.isfile(filename):
    execfile(filename)
http://docs.python.org/release/2.1.2/tut/node4.html
You could try using the AST module. ...

You can use the exponentiation operator or ...

Hello @kartik, import operator. To sort the list of ...

lst = [{'price': 99, 'barcode': '2342355'}, {'price': ...

Suppose you have a string with a ...

You can also use the random library's ...

Syntax: list.count(value). Code: colors = ['red', 'green', ...

Can you give an example using a ...

n = [1, 2, 3, 4, 5, 6, 7, 8, 9]
print(len(n))
# 9

You probably want to use np.ravel_multi_index: import numpy ...
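The thread's title asks how to find the minimum value of a field common to every dictionary in a list; only the first entry of the example data survived the crawl, so the second and third entries below are invented to round out a self-contained sketch of the standard answer:

```python
# Find the dictionary with the minimum value of a shared field.
lst = [{'price': 99, 'barcode': '2342355'},
       {'price': 42, 'barcode': '1234567'},   # invented entry
       {'price': 77, 'barcode': '7654321'}]   # invented entry

# min() with a key function compares entries by the common 'price' field.
cheapest = min(lst, key=lambda d: d['price'])
print(cheapest)  # {'price': 42, 'barcode': '1234567'}

# The bare minimum value, without the rest of the record:
print(min(d['price'] for d in lst))  # 42
```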
https://www.edureka.co/community/1052/list-dictionaries-find-minimum-calue-common-dictionary-field
/* rtlinux.c: Copyright (C) 1995 Jonathan Moh */
/* RTAUDIO.C for Linux */

/* This module is included when RTAUDIO is defined at compile time.
   It provides an interface between Csound realtime record/play calls
   and the device-driver code that controls the actual hardware. */

#include "cs.h"
#include "soundio.h"
#include <unistd.h>
#include <fcntl.h>
#include <errno.h>

#define DSP_NAME "/dev/dsp"

static int dspfd_in = -1, dspfd_out = -1;

void setsndparms(int, int, int, float, unsigned);
void setvolume(unsigned);

#ifdef never
static int ishift = 0, oshift = 0;
#endif
static int oMaxLag;
extern OPARMS O;

#ifdef PIPES
# define _pclose pclose
#endif

#ifdef never
static int getshift(int dsize)  /* turn sample- or frame-size into shiftsize */
{
    switch (dsize) {
    case 1:  return(0);
    case 2:  return(1);
    case 4:  return(2);
    case 8:  return(3);
    default: die(Str(X_1169,"rtaudio: illegal dsize"));
             return(-1);        /* Not reached */
    }
}
#endif

void recopen_(int nchanls, int dsize, float sr, int scale)
    /* open for audio input */
{
    oMaxLag = O.oMaxLag;        /* import DAC setting from command line   */
    if (oMaxLag <= 0)           /* if DAC sampframes ndef in command line */
      oMaxLag = IODACSAMPS;     /*   use the default value                */

    /* Jonathan Mohr 1995 Oct 17 */
    /* open DSP device for reading */
    if ((dspfd_in = open(DSP_NAME, O_RDONLY)) == -1) {
      perror("unable to open soundcard for audio input\n");
      die(Str(X_1307,"unable to open soundcard for audio input"));
    }
    /* initialize data format, channels, sample rate, and buffer size */
    setsndparms(dspfd_in, O.informat, nchanls, sr, oMaxLag * O.insampsiz);
    /* ishift = getshift(dsize); */
}

void playopen_(int nchanls, int dsize, float sr, int scale)
    /* open for audio output */
{
    oMaxLag = O.oMaxLag;        /* import DAC setting from command line   */
    if (oMaxLag <= 0)           /* if DAC sampframes ndef in command line */
      oMaxLag = IODACSAMPS;     /*   use the default value                */

    /* J. Mohr 1995 Oct 17 */
    /* The following code not only opens the DSP device (soundcard and
       driver) for writing and initializes it for the proper sample size,
       rate, and channels, but allows the user to set the output volume. */
    {
      /* open DSP device for writing */
      if ((dspfd_out = open(DSP_NAME, O_WRONLY)) == -1)
        die(Str(X_1308,"unable to open soundcard for audio output"));
      /* set sample size/format, rate, channels, and DMA buffer size */
      setsndparms(dspfd_out, O.outformat, nchanls, sr,
                  O.outbufsamps * O.outsampsiz);
      /* check if volume was specified as command line parameter */
      if (O.Volume) {
        /* check range of value specified */
        if (O.Volume > 100 || O.Volume < 0)
          die(Str(X_524,"Volume must be between 0 and 100"));
        setvolume(O.Volume);
      }
      /* 'oshift' is not currently used by the Linux driver, but ... */
      /* oshift = getshift(nchanls * dsize); */
    }
}

int rtrecord_(char *inbuf, int nbytes)      /* get samples from ADC */
{
    /* J. Mohr 1995 Oct 17 */
    if ((nbytes = read(dspfd_in, inbuf, nbytes)) == -1)
      die(Str(X_736,"error while reading DSP device for audio input"));
    return(nbytes);
}

void rtplay_(char *outbuf, int nbytes)      /* put samples to DAC */
  /* N.B. This routine serves as a THROTTLE in Csound Realtime Performance, */
  /* delaying the actual writes and return until the hardware output buffer */
  /* passes a sample-specific THRESHOLD. If the I/O BLOCKING functionality  */
  /* is implemented ACCURATELY by the vendor-supplied audio-library write,  */
  /* that is sufficient. Otherwise, requires some kind of IOCTL from here.  */
  /* This functionality is IMPORTANT when other realtime I/O is occurring,  */
  /* such as when external MIDI data is being collected from a serial port. */
  /* Since Csound polls for MIDI input at the software synthesis K-rate     */
  /* (the resolution of all software-synthesized events), the user can      */
  /* eliminate MIDI jitter by requesting that both be made synchronous with */
  /* the above audio I/O blocks, i.e. by setting -b to some 1 or 2 K-prds.  */
{
    /* J. Mohr 1995 Oct 17 */
    if (write(dspfd_out, outbuf, nbytes) < nbytes)
      printf(Str(X_177,"/dev/dsp: could not write all bytes requested\n"));
    nrecs++;
}

void rtclose_(void)             /* close the I/O device entirely  */
{                               /* called only when both complete */
    /* J. Mohr 1995 Oct 17 */
    if (dspfd_in >= 0 && (close(dspfd_in) == -1))
      die(Str(X_1306,"unable to close DSP device"));
    if (dspfd_out >= 0 && (close(dspfd_out) == -1))
      die(Str(X_1306,"unable to close DSP device"));
    dspfd_in = dspfd_out = -1;
    if (O.Linein) {
#ifdef PIPES
      if (O.Linename[0]=='|') _pclose(Linepipe);
      else
#endif
      if (strcmp(O.Linename, "stdin")!=0) close(Linefd);
    }
}
http://csound.sourcearchive.com/documentation/1:4.23f12-3/rtlinux_8c-source.html
recv()

Receive a message from a socket

Synopsis:

#include <sys/types.h>
#include <sys/socket.h>

ssize_t recv( int s,
              void * buf,
              size_t len,
              int flags );

Library:

libsocket

Use the -l socket option to qcc to link against this library.

Description:

The recv() function receives a message from a socket. It's normally used only on a connected socket -- see connect() -- and is identical to recvfrom() with a zero from parameter.

This routine returns the length of the message on successful completion. If a message is too long for the supplied buffer, buf, then excess bytes might be discarded, depending on the type of socket that the message is received from; see socket().

If no messages are available at the socket, the receive call waits for a message to arrive, unless the socket is nonblocking -- see ioctl() -- in which case -1 is returned and the external variable errno is set to EWOULDBLOCK.

See also:

connect(), ioctl(), getsockopt(), read(), recvfrom(), recvmsg(), select(), socket()
http://www.qnx.com/developers/docs/6.3.2/neutrino/lib_ref/r/recv.html
hashed comments are incorrect. They read:

# add i as a friend of j
# add j as a friend of i

They should read:

# add j as a friend of i
# add i as a friend of j

Note from the Author or Editor: agreed, those two comments should be switched

The statement "For example, Thor (id 4) has no friends in common with Devin (id 7), . . ." is incorrect. They share the friend Clive (id 5).

Note from the Author or Editor: good point, change that sentence to: For example, Hero (id 0) has no friends in common with Klein (id 9), but they share interests in Java and big data.

In the histogram code, please add the following to resolve Counter():

from collections import Counter

Note from the Author or Editor: I don't care that much either way, I sort of assumed importing Counter was implied, but I don't mind adding a from collections import Counter to the start of the example

The following sentence at the top of the page: "The dot product measures how far the vector v extends in the w direction." is usually false, but can be true if w is a unit vector. Alternatively, the following correction would make the statement true: "The dot product of v and w, divided by the magnitude of w, measures how far the vector v extends in the w direction." Or alternatively: "Given two vectors w and v, if w is a unit vector, then the dot product measures how far the vector v extends in the w direction."

Note from the Author or Editor: yeah, this is a fair criticism. I would simply change the first sentence on the page to: If _w_ has magnitude 1, the dot product measures how far the vector _v_ extends in the _w_ direction.

The list x contains second value 1, whereas it should contain second value -1.

Note from the Author or Editor: yes, this is a mistake, x should be [-2, -1, 0, 1, 2]

In the first sentence of paragraph #2, it says: "For our purposes you should think of probability as a way of quantifying the uncertainty associated with events chosen from a some universe of events."
('universe' is italicized) Does 'a some <i>universe</i>' include an extra word, or does it have a special meaning in this context? Note from the Author or Editor:the "a" should not be there, it should just say "chosen from some universe" The sentence "It has the distribution function:" would be improved by substituting "probability density" in place of "distribution". In the preceding section the author introduced the "probability density function" and the "cumulative distribution function". Given that context the reader might incorrectly infer that the equation following the second paragraph is the Normal cumulative distribution function. Note from the Author or Editor:agree, should change to It has the probability density function: Values are assigned to low_p and hi_p, but these are never used. Statements that refer to low_p and hi_p should be simplified. Note from the Author or Editor:agree with this, revised version at In: " X should be distributed approximately normally with mean 50 and standard deviation 15.8:" mean should be 500 not 50 Note from the Author or Editor:confirmed, the mean should be 500 both 50 should be 500 Note from the Author or Editor:agreed, change both 50 to 500 The title of a section has disappeared between the 2nd and 3rd paragraphs. It should be p-values. This title appears at the end of the 2nd paragraph with its markup before: === Note from the Author or Editor:yes, looks like the markup wasn't quite right. If beta_pdf is called with [x=0 and alpha<1] or with [x=1 and beta<1], the function crashes, because python does not permit 0 to be raised to a negative power. 
An easy fix is to change the 1st statement to if x <= 0 or x >= 1: Note from the Author or Editor:I agree, change the first line of the beta_pdf function to if x <= 0 or x >= 1: for line in file: should be for line in f: Note from the Author or Editor:agree, should be for line in f: # look at each line in the file The script in lines 1 through 9 on page 108 is incorrect, because it does not produce the results printed on lines 11 through 14. Instead, the lines of text in bad_csv.txt get merged into a single line of text - as if the f.write("\n") were missing. One way to correct the error is to change the 6th line to "with open('bad_csv.txt', 'w') as f:" (omitting 'b' from open's 2nd argument). This script does not need 'wb', because it does not use the CSV module. Another (less elegant) resolution is to replace line 9 with f.write("\r\n"). Note from the Author or Editor:I cannot reproduce this (the code as written works for me); however, I agree that the 'b' parameter does not need to be there, and I am ok with getting rid of the b and changing the line of code to `with open('bad_csv.txt', 'w') as f:` figure out what do should be figure out what to do Note from the Author or Editor:just like the errata says all 'spam' in this paragraph should be 'non-spam' Note from the Author or Editor:agree, in that paragraph both "additional spams" should instead be "additional non-spams" (the spam in "spam probabilities" can stay as is) The predictions would tend to be too small for users who work many hours and too large for users who work few hours, because Beta(2) > 0 and we "forgot" to include it. In the actual model, Beta(2) is < 0 (on page 182 in the example, this is confirmed), since in the example "people who work more hours spend less time on the site" Note from the Author or Editor:yes, that whole paragraph is wrong, it should be Think about what would happen if we made predictions using the single variable model with the "actual" value of beta_1. 
(That is, the value that arises from minimizing the errors of what we called the "actual" model.) The predictions would tend to be way too large for users who work many hours and a little too large for users who work few hours, because beta_2 < 0 and we "forgot" to include it. Because work hours is positively correlated with number of friends, this means the predictions tend to be way too large for users with many friends, and only slightly too large for users with few friends. 'unemployed' should be 'work_hours' Note from the Author or Editor:agree, the comment on the third line should say work hours # 0.131, # work hours, actual error = 0.127 ``` # back-propagate errors to hidden layer hidden_deltas = [hidden_output * (1 - hidden_output) * dot(output_deltas, [n[i] for n in output_layer]) for i, hidden_output in enumerate(hidden_outputs)] ``` `output_layer` is not defined in the `backpropagate` function, and not passed in, hence running the example produces `NameError: global name 'output_layer' is not defined` Note from the Author or Editor:the line of code dot(output_deltas, [n[i] for n in output_layer]) should be replaced with dot(output_deltas, [n[i] for n in network[-1]]) For the function of matrix_multiply_mapper, two matrix indexes should be passed: the row number of A and column number of B For any nonzero A_ij, all C_ik may be affected, with k being any column index of B Similarly, for any nonzero B_ij, all C_kj may be affected, with k being any row index of A In the text, the common dimension was used, which is wrong. Also, for the function of matrix_multiply_reducer, m is not used. Note from the Author or Editor:yes, the code is wrong. here is a fixed version of the functions and then at the very bottom of the page you need to change the definition of mapper mapper = partial(matrix_multiply_mapper, 2, 3) and at the top of the next page change the definition of reducer reducer = matrix_multiply_reducer © 2017, O’Reilly Media, Inc. 
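The beta_pdf guard fix mentioned in the errata above can be sketched as follows. This is a sketch, not the book's verbatim code; the normalizing helper B(alpha, beta) is reconstructed here from the standard definition of the Beta distribution:

```python
import math

def B(alpha, beta):
    """Normalizing constant so that the density integrates to 1."""
    return math.gamma(alpha) * math.gamma(beta) / math.gamma(alpha + beta)

def beta_pdf(x, alpha, beta):
    # Guard changed from 'x < 0 or x > 1' to 'x <= 0 or x >= 1' so that
    # 0 is never raised to a negative power (e.g. x=0 with alpha < 1).
    if x <= 0 or x >= 1:
        return 0
    return x ** (alpha - 1) * (1 - x) ** (beta - 1) / B(alpha, beta)

print(beta_pdf(0, 0.5, 1))    # 0 (previously crashed on 0 ** -0.5)
print(beta_pdf(0.5, 1, 1))    # 1.0 (Beta(1, 1) is the uniform density)
```

With the original strict inequalities, the last expression would be evaluated at x = 0, raising a ZeroDivisionError from `0 ** -0.5`; the widened guard simply returns a density of 0 at the endpoints.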
On Mon, Feb 27, 2012 at 08:50:00PM +0000, Thorsten Glaser wrote:
> reassign 317466 src:eglibc
> forcemerge 534521 317466
> thanks
>
> This is not a bug in pax, but eglibc _really_ should do something.

I disagree. Pax isn't built with large file support (because <fts.h> doesn't allow that), so even if eglibc is fixed, pax would need to be fixed. A blocking bug should be used instead.

> How about something like this (for all affected functions)?
>
> #ifdef __USE_FILE_OFFSET64
> FTSENT *fts_read (FTS *) __asm__("fts_read64");
> #else
> FTSENT *fts_read (FTS *);
> #endif
>
> Then just compile the *.c twice.

The problem is not having fts_read64, but having a different FTSENT structure. This means changing the glibc ABI, not something we want to do only in Debian. This has to be dealt with upstream.

--
Aurelien Jarno  GPG: 1024D/F1BCDB73
aurelien@aurel32.net
Note: While writing this post, a new proposal has been published that uses a different syntax for lambdas, removes function types and uses more type inference. I'll post a short description of the changes soon. The new proposal would change some of the examples in this post, but the examples are still essentially valid and useful.

The Collections API is one of the most used APIs in Java. Among its most used collection types are lists, sets and maps. The Collections class, which provides utility methods for operating on these collections, is also commonly used. The reason I chose the Collections API as an example of how lambdas might influence existing APIs is that certain kinds of operations on collections can be written much more naturally in a functional programming style than in an imperative style. This is also one of the reasons for the growing number of projects that aim to bring functional elements to Java in the form of libraries, e.g. Google Collections or lambdaj.

Status quo

One of the common operations on lists is filtering a list for elements matching a certain condition, say for example filtering a list of integers for the even ones. We could implement a method findEven like this:

List<Integer> findEven(List<Integer> list) {
    List<Integer> result = new ArrayList<Integer>();
    for (Integer n : list) {
        if (n % 2 == 0) result.add(n);
    }
    return result;
}

We can call it with a list of integers and it will return a list containing the even integers. Works just fine, but what if we also wanted a method that returns only the odd elements? There is no straightforward way of building an abstraction for that, so we'd write a method findOdd which would look 99.3865% the same as findEven.

Google Collections to the rescue: we can use the Iterables.filter method, which takes a list and filters it by a given Predicate.

Iterable<Integer> even = Iterables.filter(list, new Predicate<Integer>() {
    @Override
    public boolean apply(final Integer n) {
        return n % 2 == 0;
    }
});

This filter method is way more flexible as it can take any predicate you like to filter by. But it really doesn't look nice, at least unless you build your own library of reusable predicates.

Simplification through lambdas

Now, if we have a look at it, the essential part of the predicate is just the expression n % 2 == 0. The rest is just boilerplate. With lambdas we can actually remove all(!) of this boilerplate. To do that, we implement a filter method quite similar to that of the Google Collections, but instead of taking a predicate of type Predicate as the second argument, it takes the predicate as a lambda:

public static <T> List<T> filter(List<T> list, #boolean(T) predicate) {
    List<T> result = new ArrayList<T>();
    for (T t : list) {
        if (predicate.(t)) result.add(t);
    }
    return result;
}

Say we have implemented this filter method in a class called FCollections; calling it looks like this:

List<Integer> list = Arrays.asList(4, 3, 2, 6, 4);
List<Integer> even = FCollections.filter(list, #(Integer n)(n % 2 == 0));

Compare that to calling the Google Collections filter method. It's much more concise, readable and expressive.

Extension methods

Before we have a look at more operations on collections of this sort, I want to mention a feature that will probably also be introduced in JDK7 - extension methods. Notice that we have put our filter method in a utility class FCollections. But filter should probably rather be a method of List or Collection itself. Unfortunately, adding a new method to these interfaces would break existing subclasses of List or Collection, something we definitely wouldn't want.
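As an aside, the `#boolean(T)` function-type syntax above is from the then-current proposal and was never compilable. For comparison, here is a sketch of the same filter in the shape Java ultimately adopted (java.util.function.Predicate plus the `->` lambda syntax), written as a standalone class:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;

public class FilterDemo {
    // Same shape as FCollections.filter above; Predicate<T> plays the
    // role of the proposal's #boolean(T) function type.
    static <T> List<T> filter(List<T> list, Predicate<T> predicate) {
        List<T> result = new ArrayList<>();
        for (T t : list) {
            if (predicate.test(t)) {
                result.add(t);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<Integer> list = Arrays.asList(4, 3, 2, 6, 4);
        List<Integer> even = filter(list, n -> n % 2 == 0);
        System.out.println(even); // prints [4, 2, 6]
    }
}
```

The essential point of the post survives in the final design: the lambda carries only the expression n % 2 == 0, and all of the anonymous-class boilerplate disappears.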
Now, extension methods allow for adding a new method to an interface and specifying a default implementation of this method:

public interface List<E> extends Collection<E> {
    List<E> filter(#Boolean(E) predicate) import static Collections<E>.filter;
    ...
}

Now, if a subclass of List doesn't implement a filter method itself, Collections.filter is called by default. Collections.filter would look like our FCollections.filter method - it takes a list as the first argument followed by the predicate lambda, and it must return a List. So with extension methods it would be possible to add new operations to collection types like those shown in this post without breaking existing subclasses (at least as long as these subclasses didn't implement an incompatible filter method before). But extension methods are not the primary topic of this post (search the web for extension methods or public defender methods for more details), so let's go back and see some more useful operations on collections.

More operations

There's a bunch of operations that can be implemented in a more functional style, e.g. iterating a list and applying a function to each element:

public class MyList<T> extends ArrayList<T> {
    public void each(#void(T) f) {
        for (T e : this) {
            f.(e);
        }
    }
}

Instead of iterating with a for loop, we can call the each method like this:

list.each(#(Integer n)(System.out.println(n)));

Another example is the map function, which takes a function, applies it to each element of the list and returns the result as a list:

public <S> List<S> map(#S(T) f) {
    List<S> result = new ArrayList<S>();
    for (T e : this) {
        result.add(f.(e));
    }
    return result;
}

We could use it to create a list of squares from a list of integers:

List<Integer> squares = list.map(#(Integer n)(n*n));

foldLeft is another method that combines the elements of a list using a function f from left to right (foldRight does the same from right to left) starting with a value z, i.e. the result is f(f(f(f(f(z,a1),a2),a3),a4),a5):

public <S> S foldLeft(S z, #S(S, T) f) {
    S acc = z;
    for (T e : this) {
        acc = f.(acc, e);
    }
    return acc;
}

A simple way to use it would be to calculate the sum of the elements:

Integer sum = list.foldLeft(new Integer(0), #(Integer a, Integer b)(a+b));

reduceLeft does the same, but without an initial value:

public T reduceLeft(#T(T, T) f) {
    if (isEmpty()) throw new UnsupportedOperationException("reduce on empty list");
    if (size() == 1) return get(0);
    T acc = get(0);
    for (T e : subList(1, size())) {
        acc = f.(acc, e);
    }
    return acc;
}

Integer sum = list.reduceLeft(#(Integer a, Integer b)(a+b));

The find method searches for the first element matching a given predicate and returns it:

public T find(#Boolean(T) predicate) {
    for (T e : this) {
        if (predicate.(e)) return e;
    }
    return null;
}

Integer found = list.find(#(Integer n)(n>5));

There are many more possibilities for using predicates in this way; here are some short examples:

boolean hasPrime = list.exists(#(Integer n)(isPrime(n)));
boolean allPrimes = list.forall(#(Integer n)(isPrime(n)));
int countPrimes = list.count(#(Integer n)(isPrime(n)));
List<Integer> withoutPrimes = list.remove(#(Integer n)(isPrime(n)));
Pair<List<Integer>> oddAndEven = list.partition(#(Integer n)(n%2==0));

To summarize, as you can see, the Collections API could benefit in a lot of ways from the addition of lambdas. For some sorts of operations, lambdas would increase readability, conciseness and expressiveness. Extension methods could make it possible to have some of the new operations directly in the collection classes instead of in utility classes like Collections.

3 comments:

Why use the predicates directly? Why would you want to re-implement all that when this would work:

List list = ...
Iterables.filter(list, #(Integer)(n%2==0))

This will work because it will result in the automatic conversion of the lambda into a predicate (since Predicate has only a single "pure virtual" method (sorry, I come from C++), and it will fill that in).

Yes, re-implementing any existing interfaces wouldn't make much sense. Same with the FileFilter example in part 2 (as mentioned). The Predicate interface is part of the Google Collections library though. I'd rather see it as part of the Collections API itself. Currently I'm working on the 4th part, where this will also be a topic, again. So, stay tuned.

The sheer amount of extra method calls is overwhelming. It looks like lambdas rely heavily on the JVM's ability to inline code. I hope this is taken into account and that lambda expression optimization is in place.
10000 Update() calls

Unity has a so-called Messaging system which allows you to define a bunch of magic methods in your scripts which will be called at specific events while your game is running. This is a very simple and easy to understand concept, especially good for new users. Just define an Update method like this and it will be called once a frame!

void Update() {
    transform.Translate(0, 0, Time.deltaTime);
}

For an experienced developer this code is a bit odd.

- It's not clear how exactly this method is called.
- It's not clear in what order these methods are called if you have several objects in a scene.
- This code style doesn't work with intellisense.

How Update is called

No, Unity doesn't use System.Reflection to find a magic method every time it needs to call one. Instead, the first time a MonoBehaviour of a given type is accessed, the underlying script is inspected through the scripting runtime (either Mono or IL2CPP) to see whether it has any magic methods defined, and this information is cached. If a MonoBehaviour has a specific method, it is added to the proper list; for example, if a script has an Update method defined, it is added to a list of scripts which need to be updated every frame. During the game Unity just iterates through these lists and executes methods from them, that simple.

In what order Updates are executed

The order is specified by the Script Execution Order settings (menu: Edit > Project Settings > Script Execution Order). It might not be the best way to manually set the order of 1000 scripts, but if you want one script to be executed after all other ones this way is acceptable. Of course, in the future we want to have a more convenient way to specify execution order, using an attribute in code for example.
It doesn't work with intellisense

We all use an IDE of some sort to edit our C# scripts in Unity, and most of them don't like magic methods for which they can't figure out where they are called, if at all. This leads to warnings and makes it harder to navigate the code. Sometimes developers add an abstract class extending MonoBehaviour, call it BaseMonoBehaviour or alike, and make every script in their project extend this class. They put some basic useful functionality in it along with a bunch of virtual magic methods like so:

public abstract class BaseMonobehaviour : MonoBehaviour {
    protected virtual void Awake() {}
    protected virtual void Start() {}
    protected virtual void OnEnable() {}
    protected virtual void OnDisable() {}
    protected virtual void Update() {}
    protected virtual void LateUpdate() {}
    protected virtual void FixedUpdate() {}
}

This structure makes using MonoBehaviours in your code more logical but has one little flaw. I bet you already figured it out... All your MonoBehaviours will be in all the update lists Unity uses internally, and all these methods will be called each frame for all your scripts, mostly doing nothing at all!

One might ask why anyone should care about an empty method. The thing is that these are calls from native C++ land to managed C# land; they have a cost. Let's see what this cost is.

Calling 10000 Updates

For this post I created a small example project which is available on Github.
It has 2 scenes which can be changed by tapping on a device or pressing any key in the editor:

(1) In the first scene 10000 MonoBehaviours are created with this code inside:

private void Update() {
    i++;
}

(2) In the second scene another 10000 MonoBehaviours are created, but instead of having an Update they have a custom UpdateMe method which is called by a manager script every frame like so:

private void Update() {
    var count = list.Count;
    for (var i = 0; i < count; i++)
        list[i].UpdateMe();
}

The test project was run on 2 iOS devices compiled to Mono and IL2CPP in non-Development mode in Release configuration. Time was measured as follows:

- Set up a Stopwatch in the first Update called (configured in Script Execution Order),
- Stop the Stopwatch at LateUpdate,
- Average the timings over a few minutes.

Unity version: 5.2.2f1
iOS version: 9.0

Mono

WOW! This is a lot! There must be something wrong with the test! Actually, I just forgot to set Script Call Optimization to Fast but no Exceptions, but now we can see what impact on performance this particular setting has... not that anyone cares anymore with IL2CPP.

Mono (fast but no exceptions)

OK, this is better. Let's switch to IL2CPP.

IL2CPP

Here we see two things:

- This particular optimization still makes sense in IL2CPP.
- IL2CPP still has room for improvement, and as I'm writing this post the Scripting and IL2CPP teams are working hard to increase performance. For example, the latest Scripting branch contains optimizations making the test run 35% faster.

I'll explain what Unity is doing under the hood in a few moments. But right now let's change our Manager code to make it 5 times faster!

Interface calls, virtual calls and array access

If you haven't read this great series of posts about IL2CPP internals, you should do it right after you finish reading this one!
It turns out that if you want to iterate through a list of 10000 elements every frame, you'd better use an array instead of a List, because in this case the generated C++ code is simpler and array access is just faster. In the next test I changed List<ManagedUpdateBehavior> to ManagedUpdateBehavior[]. This looks much better!

Update: I ran the test with the array on Mono and got 0.23ms.

Instruments to the rescue!

We figured out that calling functions from C++ to C# is not fast, but let's find out what Unity is actually doing when calling Updates on all these objects. The easiest way to do this is to use Time Profiler from Apple Instruments. Note that this is not a Mono vs. IL2CPP test: most of the things described further are also true for a Mono iOS build. I launched the test on iPhone 6 with Time Profiler, recorded a few minutes of data and selected a one minute interval to inspect. We are interested in everything starting from this line:

void BaseBehaviourManager::CommonUpdate<BehaviourManager>()

If you haven't used Instruments before: on the right you see functions sorted by execution time together with the other functions they call. The leftmost column is the CPU time in ms and % of these functions and the functions they call combined; the second column is the self execution time of the function. Note that since the CPU wasn't fully used by Unity during this experiment, we see 10 seconds of CPU time spent on our Updates in a 60 second interval. Obviously we are interested in the functions taking the most time to execute. I used my mad Photoshop skills and color coded a few areas for you to better understand what's going on.

UpdateBehavior.Update()

In the middle you see our Update method, or how IL2CPP calls it: UpdateBehavior_Update_m18. But before getting there Unity does a lot of other things.

Iterate over all Behaviours

Unity goes over all Behaviours to update them. A special iterator class, SafeIterator, ensures that nothing breaks if someone decides to delete the next item on the list.
Just iterating over all registered Behaviours takes 1517ms out of the total 9979ms.

Check if the call is valid

Next, Unity does a bunch of checks to make sure that it is calling a valid existing method on an active GameObject which has been initialized and had its Start method called. You don't want your game to crash if you destroy a GameObject during Update, do you? These checks take another 2188ms out of the total 9979ms.

Prepare to invoke the method

Unity creates an instance of ScriptingInvocationNoArgs (which represents a call from the native side to the managed side) together with ScriptingArguments and orders the IL2CPP virtual machine to invoke the method (the scripting_method_invoke function). This step takes 2061ms out of the total 9979ms.

Call the method

The scripting_method_invoke function checks that the passed arguments are valid (900ms) and then calls the Runtime::Invoke method of the IL2CPP virtual machine (1520ms). First, Runtime::Invoke checks if such a method exists (1018ms). Next, it calls a generated RuntimeInvoker function for the method signature (283ms). It in turn calls our Update function, which according to Time Profiler takes 42ms to execute. And a nice colorful table.

Managed Updates

Now let's use Time Profiler with the manager test. You can see on the screenshot that there are the same methods (some of them take less than 1ms total, so they are not even shown), but most of the execution time actually goes to the UpdateMe function (or how IL2CPP calls it: ManagedUpdateBehavior_UpdateMe_m14). Plus, there's a null check inserted by IL2CPP to make sure that the array we are iterating over is not null. The next image uses the same colors.

So, what do you think now: should one care about a little method call?

A few words about the test

To be honest, this test is not completely fair. Unity does a great job guarding you and your game from unintended behavior and crashes: Is this GameObject active? Wasn't it destroyed during this update loop? Does the Update method exist on the object?
What to do with a MonoBehaviour created during this update loop? My manager script doesn't handle any of that; it just iterates through a list of objects to update. In the real world a manager script would probably be more complicated and slower to execute. But in this case I am the developer: I know what my code is supposed to do, and I architect my manager class knowing what behavior is possible and what isn't in my game. Unity unfortunately doesn't possess such knowledge.

What should you do?

Of course it all depends on your project, but in the field it's not rare to see a game using a large number of GameObjects in the scene, each executing some logic every frame. Usually it's a little bit of code which doesn't seem to affect anything, but when the number grows very large the overhead of calling thousands of Update methods starts to be noticeable. At this point it might already be too late to change the game's architecture and refactor all these objects into the manager pattern. You have the data now; think about it at the beginning of your next project.

boris February 11, 2016 at 4:06 am / Hey i don't understand this managed update concept. If i have 50 gameobjects with a script component that has update() function and inside the update it accesses gameobject position for example. In this case the gameobject position is unique to each gameobject. I need to have 50 update() calls, 1 for each gameobject. How do I implement an approach where i have only 1 update() in some manager and can access gameobject position of all objects??? i don't understand how i would do this properly? Would i have to keep all 50 gameobjects in an array and access their positions in a loop and then pass data back into the gameobject? Is this better than having 50 update() functions?

Valentin Simonov February 11, 2016 at 3:14 pm / I think you are mixing two tasks together: 1.
A manager script which has an array of your script components on gameobjects which calls UpdateMe() function on each component every frame (as it was done in the example from the post). 2. A manager script which has an array of gameobjects which does some logic (involving transforms) on all gameobjects in turn. The difference is that in (1) each script component does the work and accesses its own transform, while in (2) the manager does something with each transform it gets from array. In both cases every transform belongs to individual gameobject and is unique. Jonney Shih January 31, 2016 at 10:24 pm / Why not implement an in-house callback manager, that’s what a game engine is supposed to do! Johan M January 24, 2016 at 2:46 pm / As an old-style developer I highly prefer to have my own handled managers, and only have one Update() on the top-level manager, that I then cascade down to the other classes as needed, only when needed. This blog post proves that this is a sane choice to make, and not implement Update() in each and every class, and have guard-clauses to return for every case when we don’t want it to actually run for that instance etc. David January 27, 2016 at 5:58 am / This ^^^ I’ve been building games this way for something around 20 years. It’s the correct approach for any game, not just unity games. This whole ‘attach every script to an object and use its Update()’ method really strikes me as the wrong way to do things. Aaron January 12, 2016 at 9:40 pm / Just please make sure to provide plenty of tutorials and update the old ones accordingly to demonstrate. I promise you there are plenty of newer “developers” like myself that haven’t made it that far into UNITY yet that we wont have to re-train ourselves to this new approach. I don’t have a horse in this race yet so it doesn’t effect my interests yet. However I am all for improving Unity and making it better. 
Re-emphasizing – update old tutorials (re do them, please take the time to do this without simply adding annotations as Unity progresses (it only helps old tutorials so much before it isn’t anything like the original version) / this wont scare off potential new developers when something breaks from an old tutorial. I may be putting too much stock into what changes / approach the scripting team is doing though. Imi January 7, 2016 at 5:19 pm / So, after this article… Is Unity finally going to change the default Script template that ships to remove the empty Start() and Update() functions? Please? (I know that the templates can be changed, but srsly… why?) FrkTheMantis January 7, 2016 at 8:38 pm / Still useful for newcomers ;) adsamcik January 11, 2016 at 12:20 pm / I agree, it really helped me when I started with Unity because there are lots of things/functions you still need to know. It doesn’t take that much time to remove them and also, how often do you create a script? Every hour? I doubt it. Adam Tuliper January 15, 2016 at 11:30 pm / Aren’t empty methods typically optimized out by any compiler though? Rodrigo January 4, 2016 at 2:00 pm / Has anyone tried this on PC? Is it normal that I’m getting 12ish ms on an i7 4770k, for the Update scene? focus January 3, 2016 at 12:16 am / Thanks for your article, Valentin! =) Going to bookmark it for colleagues who likes to leave empty Start() and Update() methods from the new C# script template even if they don’t need them. Mahan Moghaddam January 2, 2016 at 7:10 pm / Unity is a very good tool, I don’t know why some people complain. Compared to other tools created in the past decade, Unity is one of the best ones. Go ahead and run Unreal 3, it’s like you’re working with something from the 90’s. Unity has issues, but not that big, plus it’s a good practice for engine developers to follow Unity’s architecture, it worked fine for me. 
Name January 1, 2016 at 5:43 am / Interesting that unity posts things like this on their blog. The post basically says “the framework is old, bad, and slow, the new framework is even slower, here’s a half finished hack you can use instead”. Robert January 5, 2016 at 9:21 am / +1 Manny December 29, 2015 at 10:42 pm / First of all — great article. I’m always thrilled to learn something new, especially when its something that has a potentially huge impact. Second, has this “Managed Update” concept been some sort of Open Secret? This is the first I’ve read about it. All the articles I’ve read about Unity Performance Optimizations did not mention this, and I’ve read quite a few… or did I just miss it? I’ve just tried to search for more articles on the topic but there appears to be very little. I guess better late to the party than never. And last, (this is the part where I put on my fireproof pants), does this hold true if we’re developing using Javascript/Unityscript? My guess would be “yes” since for the final device build, I’m using IL2CPP. Just confirming. Cheers, Manny Valentin Simonov December 30, 2015 at 12:53 pm / Correct. C# and UnityScript are compiled to the same bytecode. Also it is worth mentioning that we’d like everyone to use C# instead of UnityScript. Pogo January 6, 2016 at 9:38 am / Stop writing documentation examples in Uscript and you’ll see newcomers using c# by default. Vincent January 13, 2016 at 9:56 pm / Hi Valentin, While it look like you will drop support for UnityScript, do you have a plan in the future to support newer version of javascript instead? (es2015 and more) Or you just want a single language used to build a game with Unity? Kind Regards. BlackCyrax December 26, 2015 at 3:46 pm / I try to always use unity ‘tools’ instead of my customized ones … so, I don’t need to face problems because of my “scapes” LOL. Actually, I found messaging system really useful and clear to understand. 
SomeDude December 25, 2015 at 4:58 am / “The scripting team are designing a whole new approach” Unity team: don’t change anything. Implementing managers in managed code – that’s what many developers already doing. In our case we have whole framework on top of unity api implemented in managed code: for fsm, messaging, etc. Unity is already great! Most important – it’s currently free! Bob December 25, 2015 at 9:06 am / Collision events are completely out of our control. So yes, they DO have to fix this SomeDude December 25, 2015 at 9:57 am / Collision detection already performed by physics engine. You want to perform polling yourself (in managed code)? To avoid undesired collision/trigger events – just disable that object/component when you don’t need it. Bob December 25, 2015 at 6:09 pm / I mean collision events also suffer a severe performance loss due to being called by the messaging system. And we can’t make a “manager” for that like we can with Update calls. This is something only unity can fix. (Also sorry for accidental double post) Bob December 25, 2015 at 9:12 am / We have no control over OnCollisionStay events, for example. So they DO have to fix this Bob December 24, 2015 at 10:13 pm / This is extremely depressing. Depressing enough to make me want to switch to another game engine and tell all my acquaintances to avoid Unity. Completely replacing this atrocious messaging system should be Unity’s #1 priority right now, even if it pushes back everything everything else and/or isn’t backwards compatible. Can you at least reassure us that the team is working on a fix? Richard Fine December 25, 2015 at 3:08 am / The scripting team are designing a whole new approach that will (amongst other things) allow for faster scripting invocations, yes. Pablo December 25, 2015 at 3:45 am / Can i say something? 
This is not so bad… think about it… when you profile an application you see it… if you have profiled some of your games, you already know that the Update calls aren't a bottleneck or something that decreases performance deeply. It's just consuming a little bit of processor time. So it's not so chaotic.

Christian December 25, 2015 at 9:13 am / Please, if you make such a core change, make it optional! For many people who already work smartly the current version is perfectly fine, and the new version will for sure introduce new issues. So please, if you do make such a big change, make it completely separated from the main code, like IL2CPP, so that it won't break existing systems. I couldn't stand waiting god knows how many versions for a stable release because of a feature I don't need.

Trigve January 5, 2016 at 7:17 pm / I know that folks here wouldn't agree with me, but I would really like to have a C API (not C++ but pure C) exposed, too. We're using Unity but only as a "backend", and all the game core (90%) is done in C++ (as a native plugin) using the Mono embedded API. Another advantage of a C API would be that one could implement their own scripting subsystem. I don't/can't/won't use Unreal 4 because of some missing features/complications.

David January 10, 2016 at 4:43 am / Please don't do that. Seriously. Don't. All you're going to do is introduce bugs and other issues into games that are already working perfectly fine with the existing infrastructure, just to accommodate a very small minority. If you want to do something, redesign Unity's device input system. That needs a massive overhaul. How about adding the ability to call gameObject.Find() on objects that are inactive? Lots of things need improving, but the core architecture of your system works just fine. While I obviously can't speak for everyone, I personally have no interest in rewriting my whole game to accommodate a massive change that isn't being asked for.
SomeDude December 25, 2015 at 4:48 am / You can use a single MonoBehaviour for game logic (not talking about triggers/collision detection). Treat its Update() like WinMain(). Manually update your subsystems, like:

//not real code, just an example
void Update() {
    FSM.update();
    EntityManager.update();
    …
}

David January 10, 2016 at 4:35 am / This is what I'm doing. I have 1 MonoBehaviour script attached to 1 empty game object in the scene. That passes its Update() call to the various loaded managers and the tasks the characters have to perform. Only the loaded classes get the calls. So there's never a problem with 10000 Update calls. There's 1, and it calls the methods it needs to. I'm getting outstanding performance even at the highest rendering settings (I never drop below Unity's capped 60 frames a second). There is **zero** reason to completely rewrite the core of Unity. It would very likely do nothing but break many millions of lines of code.

Kumo December 29, 2015 at 9:54 am / Don't forget that a lot of game studios made successful games even with these "awful" limitations. Unity is already pretty good as it is; any improvements will just make it better, not "more acceptable".

Pete December 24, 2015 at 6:05 pm / Thanks a lot for this post since it clarifies a lot of things. Having said that, I don't think the comparison with Mono is fair, for the following reasons:

1. The version of Mono that Unity is using is rather old. The newest one, 4.2.1, has lots of improvements. These include, inter alia: better GC implementations, optimizations in AOT compilation, SIMD, and the inclusion of MSFT's open-sourced .NET code.

2. Any interop call (either from .NET to native or vice versa) has a penalty, due to, for example, the overhead of mapping unmanaged datatypes to managed ones. So it's not a surprise that updating behaviours from within a managed handler turns out to be faster than an interop call between the native part of the engine and the behaviours.

3.
C# itself, as a language, has been and still is improved over time, which in turn means that its methods are periodically reviewed and refactored to increase performance. What is more, the current trend is to go "Universal" with .NET Native so that, marginally, performance differences between "pure" native code and "converted" code tend to zero. Something similar to what you're doing with IL2CPP (and, as said before, Mono has AOT compilation).

4. Let me remind you that Unity also has its own version of the .NET assemblies (based on C# 3.5) which, compared to .NET's and Mono's, could introduce unwanted overhead, especially in comparison to the latest version of the language, C# 6.

5. UnityEngine was conceived and developed to do interop with languages other than C# (that is, UnityScript and Boo); I don't know whether the fact that the first versions of the Unity Editor were meant for Mac OS X strongly influenced that decision or not, but its architecture wasn't tailored to fit C# practices. As a matter of fact, in the attempt to avoid deep hierarchies in favor of an entity-component-system approach, it does not fully comply with OOP. The fact that you have to query behaviours in search of certain operations (like Update), as well as approaches like SendMessage/BroadcastMessage, seem like hacks that had to be used as a result of that, which in turn doesn't bring the best UX out of the box in any IDE.

6. A corollary of 5 above is how Unity handles events. Not only does it not use the C# approach to events, but it also avoids the use of the Observer pattern entirely. In fact, it goes the other way around: in the editor, for an event like a button press, you have to specify not only the objects but the operations of those objects to call, instead of just registering as an observer and sending the same customized event args to all observers when the event happens. The result?
A cumbersome way of handling events in the editor, in particular when classes and operations change during game development. And finally,

7. Lookups on arrays and lists shouldn't differ in a way that makes using lists over arrays lead to overheads when traversing lists, since the only moment such penalties should exist is when adding elements obliges the list to expand its inner array. If they differ, then you should check the version of the Mono compiler you are using, since lookups on lists are optimized in .NET to avoid such things (and I guess also in current versions of Mono).

Again, this post is valuable since it helps us deal with performance in Unity, but it also exposes some issues that could be considered design flaws. I don't want to offend anyone with this. On the contrary, I want it to become a better engine. In this sense, imvho, performance of the UnityEngine could be improved not only with native compilation (IL2CPP) but also by a full redesign so that on the behaviours side it fully complies with OOP and C# standards, avoiding interop calls as much as possible. In short: a) get rid of the SendMessage approach, b) implement managed handlers for calls to Update, LateUpdate, and so on and so forth, c) either include Update (and other) methods where appropriate in MonoBehaviours or create interfaces for them, d) re-implement how you handle events, e) support C# 6, f) upgrade your Mono backend to 4.x, and g) try not to use your own versions of the .NET assemblies unless really necessary and optimized. I hope you guys consider/are considering this. Thanks again.

Brendan December 24, 2015 at 4:47 pm / Valentin, I know back at one of the Unite roadmap sessions this year they mentioned that work was being started on a MonoBehaviour replacement/alternative where they might move away from magic methods and make some new choices. Do you think that we will hear more about it in the near future?
Valentin Simonov December 25, 2015 at 7:04 am / Sure, as Richard said one comment above, the scripting team are designing a whole new approach to MonoBehaviours. Expect to hear about it soon.

haim December 24, 2015 at 11:22 am / OK, I read about this somewhere else, so in my current project, instead of using Update() as the main loop for my characters I start a coroutine in Start(); the coroutine stops when the character is dead. Should this improve performance?

Seneral December 24, 2015 at 12:09 pm / I took a look at StartCoroutine; it's actually a call into the native code. That means it's handled on the unmanaged side, so it should be about the same system as Update, only temporarily. Maybe the overhead is a bit lower, as it does not need to fetch Update from your MonoBehaviour explicitly, but idk. I wouldn't count on it, but you could actually perform a similar test ;)

Valentin Simonov December 24, 2015 at 12:55 pm / I updated the test on GitHub and it looks like this coroutines approach is 5 times slower than using Updates. Logically it should be slower, at least because the engine has to make 2 calls on the returned enumerator: MoveNext and Current.

Teal Rogers January 3, 2016 at 9:24 pm / There's also the fact that coroutines have a per-frame GC alloc, whereas Update does not.

Seb December 24, 2015 at 10:54 am / I wouldn't expect this kind of article on the Unity blog. It's like admitting that the framework is old and inefficient but, instead of doing something about it, letting you know that there are alternatives. Luckily there are alternatives.

Valentin Simonov December 25, 2015 at 7:20 am / Developers want to make beautiful and successful games; we help them by providing the tools and support services so they'd use the tool right. Meanwhile we are working on improving various subsystems of the engine.

Kailas Dierk December 24, 2015 at 4:33 am /

Kailas Dierk December 24, 2015 at 4:36 am / Screwed up the formatting on that. I made an asset which does just that.
It would be pretty cool if the attribute they implement could be used on individual methods. So you could have a script Awake before everything else, but Update after.

JMellow December 24, 2015 at 3:06 am / Nice. These are the types of posts I'd like to see more of.

Pablo December 24, 2015 at 12:30 am / What happens with the Update methods from built-in components that we cannot edit (the Camera component, Collider component, MeshRenderer, etc.)?

Richard Fine December 24, 2015 at 3:50 am / They're not scripts, so they don't work like MonoScripts do. Instead, all components of a particular type are added to a 'manager' when they're enabled, and the manager is called directly at the appropriate point in the main game loop; it then does the work for the components in the most efficient way possible (looping through them one at a time, dispatching them to worker threads, or whatever makes sense for the subsystem in question).

Pablo December 24, 2015 at 4:27 am / And what about other MonoBehaviour functions that are called repeatedly, like LateUpdate (must we manage them too)? Thank you, Richard.

Valentin Simonov December 24, 2015 at 9:03 am / Yes. Update, LateUpdate, FixedUpdate — these go through the same code path.

Tristan December 24, 2015 at 12:09 am / Wow! Thanks for the tip!

Tomer Barkan December 23, 2015 at 10:35 pm / Unless I'm missing something, it seems that having a manager in managed code that calls all the Update functions is significantly more efficient than letting the engine call them all from C++ code. In that case, I ask myself: why not create this manager, in managed code, as part of Unity's engine, and have it always be the one that calls Update? It sounds to me like this should be an architecture decision by Unity: Update calls (and other calls that happen a lot) should be handled from a central part of the engine that runs in managed code to start with. Is this something you guys in Unity are thinking about?
Valentin Simonov December 25, 2015 at 7:13 am / See the previous reply.

Lior Tal December 23, 2015 at 10:15 pm / Nice article, although I personally still haven't hit this limit myself. A couple of questions:

1. You mention that you cache the method reference and then invoke it from a list; in the case of an empty method, can't you discard the method so it won't even be considered for execution? (You're already using Mono.Cecil for a bunch of internal tools; you could use it to check if the method body is empty.)

2. Did you consider creating a managed-side Update manager that would avoid most of these costs of native-to-managed calls? (Or are those checks expensive for other reasons too?)

3. The title of one of the sections is "INTERFACE CALLS, VIRTUAL CALLS AND ARRAY ACCESS", but nothing is mentioned regarding those topics… how do interface/virtual calls affect the performance?

Again, nice post, and keep 'em coming.

Valentin Simonov December 25, 2015 at 7:11 am / 1. I suspect that nobody did this because why would you have an empty Update method in the first place?! o.O But this seems to be a good candidate for the next Hack Week. 2. I don't know all the details, but it's not that simple to make it easy and fast. The scripting team is designing the whole new approach as we speak. 3. This was meant to be a reference to the fact that list[i] is an interface call.

Florian December 28, 2015 at 5:43 pm / Regarding #1, I added that to the base ruleset in "Unity Gendarme" (a fork of the fantastic Gendarme analysis tool). I did that a while ago while between jobs, so the code is probably a bit crap, but apparently relevant to the discussion. Funnily enough, empty methods are more widespread than you might think at first. Like having a "create new script" template with all the methods by default, so you don't have to look them up. One of the projects I worked on had several empty methods, and I regularly check some released Unity games to verify that. You'd be surprised.
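A managed-side update manager of the kind Lior and Tomer describe might be sketched as follows. This is a rough illustration, not Unity API or the approach Unity's scripting team is building: UpdateManager, IUpdatable and Tick are invented names, and only the single manager's Update() would cross the native/managed boundary each frame.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical interface for objects that want per-frame ticks.
public interface IUpdatable
{
    void Tick(float deltaTime);
}

// The one MonoBehaviour whose Update() the engine actually calls.
public class UpdateManager : MonoBehaviour
{
    private readonly List<IUpdatable> _items = new List<IUpdatable>();

    public void Register(IUpdatable item)   { _items.Add(item); }
    public void Unregister(IUpdatable item) { _items.Remove(item); }

    private void Update()
    {
        float dt = Time.deltaTime;
        // One native->managed call per frame; everything else is a
        // plain managed loop over registered objects.
        for (int i = 0; i < _items.Count; i++)
            _items[i].Tick(dt);
    }
}
```

Objects would typically register in OnEnable and unregister in OnDisable; removal during iteration needs extra care (e.g. deferred removal), which this sketch omits.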
Cameron Bonde January 7, 2016 at 4:05 am / If you did detect empty functions, you could actually do the virtual-methods interface without punishment.

anonymized to prevent reprisals January 19, 2016 at 11:24 am / There might be an empty Update() method because every single Unity-generated script contains Start() and Update() definitions by default; this was actually mentioned previously in the thread. Not to be rude, but your question seems nonsensical, condescending, asinine, and a bit obtuse. Unity3D might do well to reconsider having you in a customer-facing position, or at least moderate your comments to make sure you aren't alienating new developers with your inconsiderate and thoughtless remarks.

Pablo December 23, 2015 at 9:39 pm / Very interesting article. Am I wrong in concluding that if we have a lot of Update calls on MonoBehaviours it is better to use our own implementation of the Update calls (iterate through MonoBehaviours with simple arrays and call UpdateMe, and so on)? I am about to check the project that Valentin uploaded, but first I have this doubt. And on the other hand, I think that this is something that Unity has to solve as fast as possible. Thank you, and happy holidays.

Adam December 23, 2015 at 8:15 pm / I realize this post isn't meant to compare Mono and IL2CPP, but man, it sure does make IL2CPP look bad. It's supposed to be innately faster than Mono, yet months into its existence it's 100% slower in this scenario? Calling MonoBehaviour methods isn't exactly an edge case.

Valentin Simonov December 24, 2015 at 9:54 am / Well, IL2CPP is faster than Mono in a lot of things. In this particular case it might not be very well optimized yet. I ran the test with an array on Mono and got 0.23ms vs. 0.22ms in IL2CPP. But once again, the post is not meant to compare virtual machines.
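The array-vs-List comparison Valentin mentions comes down to two loops that differ only in the container. This sketch is illustrative (names invented, timings not claimed): arr[i] is a direct bounds-checked element load, while list[i] goes through the List<T> indexer's getter, which reads the list's private backing array, and whether that call gets inlined depends on the compiler and runtime in use.

```csharp
using System.Collections.Generic;

public static class IterationSketch
{
    // Direct element load; the length is read once per iteration check.
    public static long SumArray(int[] arr)
    {
        long sum = 0;
        for (int i = 0; i < arr.Length; i++)
            sum += arr[i];
        return sum;
    }

    // Each list[i] is a property-getter call into List<T>'s internals;
    // old Mono, IL2CPP and the .NET JIT may optimize it differently.
    public static long SumList(List<int> list)
    {
        long sum = 0;
        for (int i = 0; i < list.Count; i++)
            sum += list[i];
        return sum;
    }
}
```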
Arthur Brussee December 23, 2015 at 6:45 pm / Great to see Unity taking a look at this – I've been reporting the issue of slow callbacks through bugs, the feedback site, and talking directly to Unity devs. The Update calls are bad enough, but everything using SendMessages (OnEnable etc.) is even worse, to the point where we had to seriously re-engineer stuff to stop streaming from hiccupping on Unity calling OnEnable. Not the code in the OnEnables themselves, just _calling_ them. This was for about 2000 objects on PC-level hardware, and ~10 ms was spent on calling those. Yikes.

The last paragraph slightly concerns me. Are we really expected to avoid using Unity callbacks for performance? 10K is not a lot; 2K OnEnables is nothing. We can do some special-case handling, but it feels like it really shouldn't be a concern to use these callbacks. It feels like Unity really needs to take a hard look at how to improve the performance of these callbacks. Looking at the numbers, there's a lot of stuff in there that seems duplicate (checking if the call is valid, then if it exists, then if the arguments are valid…), redundant (lots of time spent in argument handling, when there'll never be arguments to Update) or cacheable (whether the method exists, etc.). It's good to hear that the IL2CPP runtime is being optimized for this case, but it seems that generally it needs to be considered how to structure the engine to make this faster. And good to see it's at least on the radar that it's slow now :)

Valentin Simonov December 24, 2015 at 9:19 am / OnEnable actually uses the same mechanism as Update. Have you profiled the code and witnessed that it was really just calling this method and not awaking other components on your game objects? Physics components, Animators and other components are pretty costly to create, enable and disable.

> Are we really expected to avoid using Unity callbacks for performance?

No, you need to think about this limitation when architecting your game, and vigorously profile.
If this particular issue affects your game, you know what to do. Also, as I said, recent scripting optimizations remove some of the overhead you mentioned.

Nico December 23, 2015 at 5:38 pm / "It turns out that if you'd wanted to iterate through a list of 10000 elements every frame you'd better use an array instead of a List because in this case generated C++ code is simpler and array access is just faster."

This kind of comment irks me to no end. On the one hand, Unity is saying "hey, you can program your game with C#, which is an easy-to-use, safe, productive and powerful language!". But then it's like "oh yeah, all those features that make the language so easy and productive, yeah, you shouldn't use those because they're slow". It sends a conflicting message and leads to a lot of unnecessary arguments about what 'good' code is. At this point everyone would be better off if we could just write our games directly in C++.

Not only that, but a List in C# is nothing more than a wrapper around an array with automatic array-resizing features. If the IL2CPP compiler had a special case to recognize and deal with Lists specifically (which is what a lot of Microsoft's .NET compiler tools do too), then there's no reason why iterating over a List should be any slower than iterating over an array.

Arthur Brussee December 23, 2015 at 6:56 pm / I'm guessing it's because indexing a list is technically a member function, which means IL2CPP will do a null check before each call, and maybe not properly inline this call, etc. However, I agree that it's really bad that we have to worry about that. Unity mentioned they'd like to special-case optimize IL2CPP for Unity code. Seems list access should definitely be one of them.

Nico December 23, 2015 at 7:59 pm / Yes, indexing a List invokes a call to the getter of the Item property, which in turn indexes the _items array field inside the List object.
And since the Item property is an implementation of the IList interface, that call is a virtual method call too, bringing with it all sorts of overhead. Null checks don't even make the difference here; C#, being a safe language, will also null-check a plain array when indexing it. This is precisely why you would want the IL2CPP compiler to know about Lists and create a shortcut for the indexing operator that accesses the internal array directly. From a puristic standpoint, it's not nice to give a compiler knowledge about classes inside the framework, but it does open up a lot of possibilities for optimization. And the .NET compiler does it too, e.g. by translating large switch statements into a Dictionary-based lookup table.

My bigger issue, however, is that with all these edge cases, people end up writing C# code for Unity as if it were plain old C, which basically means throwing out everything that makes C# worthwhile as a language in the first place. Writing idiomatic C# code (using LINQ, foreach statements, relying on the GC for memory management, etc.) in Unity is considered by many as 'wrong', leading to a lot of micro-optimization work, and you kind of end up with the worst of both worlds. It makes you wonder why we're still using Mono at all.

Valentin Simonov December 24, 2015 at 9:33 am / Nico, I understand your concerns, and I am the person usually talking to customers explaining the things you mentioned, advising them to "make their code dumber". There's nothing inherently bad in Lists, lambdas, LINQ, foreach and other .NET features, but considering mobile devices' limitations one has to use the most CPU- and GC-efficient code they can write. This is how mobile devices and Unity as a platform work right now, and developers need to know that. Not to mention that to write efficient and error-free code in C++ you need to have a lot of knowledge and experience as well.
We are working on improving the current situation, on the Mono/.NET upgrade, and on implementing various scripting and IL2CPP optimizations. But scripting is only a small part of the engine; we also have a lot of other really important things to do, and unfortunately not enough people. Patience, my friend (8

Valentin Simonov December 24, 2015 at 9:57 am / Another thing is that Lists are just slower, or more precisely, our Mono compiler generates slower code for them. I ran the test with an array on Mono and got 0.23ms (was 0.52ms with List) vs. 0.22ms in IL2CPP.

Chris Sinclair December 23, 2015 at 5:29 pm / Instead of using an abstract "BaseMonoBehaviour" class which implements all these methods, I recommend creating granular, individual interfaces for the particular behaviours you want your objects to implement. For example:

public interface IUpdate { void Update(); }
public interface IAwake { void Awake(); }

Any "MonoBehaviour" you create implements only the interfaces you intend for it to:

public class MyBehaviour : MonoBehaviour, IUpdate {
    public void IUpdate() { //do something }
}

Of course, at this point you have compile-time safety to catch those nasty typo'd case-sensitive names. (Anyone out there spent too much time trying to figure out and fix a "void update()"?) Regarding intellisense/tooling, with Visual Studio (I can't speak for the other IDEs), you can auto-implement the interface to deposit the placeholder method automatically.
Furthermore, this also gives you a maintenance benefit in that it becomes trivial to find all objects that are "Updating" or "Awaking" or have "FixedUpdate" (just "Find All References" on the particular interface). Finally, if you find you keep implementing the same combination of interfaces, you can always combine them into a common interface:

public interface IStandardBehaviour : IAwake, IStart, IUpdate, IOnEnable { }

public class MyBehaviour : MonoBehaviour, IStandardBehaviour {
    //implement all 4 interfaces inherited by IStandardBehaviour
}

(Sorry if the code formatting above got botched.)

Chris Sinclair December 23, 2015 at 5:31 pm / Woops, typo'd my first "MyBehaviour" class implementation's "Update" method name. It should read:

public class MyBehaviour : MonoBehaviour, IUpdate {
    public void Update() { //do something }
}

John December 23, 2015 at 5:20 pm / Very interesting. Is there any way to increase performance for OnCollision events? I have a project with a large number (10000+) of GameObjects that have no Update() but all have OnCollisionEnter(), and a performance boost would be amazing.

Valentin Simonov December 23, 2015 at 5:47 pm / OnCollisionEnter and its friends are implemented through SendMessage, so it should be even worse. Of course, Unity doesn't try to send these messages to scripts which don't have such callbacks; also, one rarely has 10000 objects colliding with each other EVERY frame. When working with 2D and 3D physics you should look out for other things which reduce performance, like moving static colliders, or moving 2D colliders inside a Rigidbody2D.

Robert Cummings December 23, 2015 at 6:53 pm / Very interesting article. But regarding physics OnCollision/OnStay (especially Stay), it needs optimising on Unity's side. It is grotesquely inefficient, to the point where we had to change the gameplay, and there weren't alternatives for it (think physical interactions between characters). We really needed Stay, but it crapped out with just 30 characters on an i7.
So going forward, if you're using SendMessage internally for collision, I don't think that's very optimal, do you? Perhaps a better pattern can be achieved? Thanks for the hard work, regardless :)

Morten Skaaning December 24, 2015 at 12:51 pm / Hi, can you open a bug with a test scene in it, so we can get to work on it? Also post the bug ID here. Regards, Morten

Nick Clark December 24, 2015 at 1:29 am / "2D colliders inside a Rigidbody2D": can you elaborate on this? Don't colliders need to have a rigidbody to register collisions?

Valentin Simonov December 24, 2015 at 9:37 am / Right, dynamic 2D colliders must be children of a Rigidbody2D. And if you move objects with 2D colliders, make sure that you move the objects with the Rigidbody2D components and not the colliders inside the Rigidbody2D themselves. If you move a collider manually, Box2D has to destroy it and create it again every time, since there's no way to move colliders (or fixtures, as they are called) in Box2D. This will give you a very noticeable overhead in the Profiler if done every frame.
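Valentin's last point about moving the Rigidbody2D rather than its child colliders can be sketched as below. The setup is assumed (a Rigidbody2D on the parent object, colliders on its children), and the class and field names are illustrative:

```csharp
using UnityEngine;

public class Mover2D : MonoBehaviour
{
    public float speed = 2f;
    private Rigidbody2D _rb;

    private void Awake()
    {
        _rb = GetComponent<Rigidbody2D>();
    }

    private void FixedUpdate()
    {
        // Move the Rigidbody2D, not a child collider's Transform.
        // Moving a collider's transform directly forces Box2D to
        // destroy and recreate the fixture every frame, which shows
        // up as noticeable overhead in the Profiler.
        _rb.MovePosition(_rb.position + Vector2.right * (speed * Time.fixedDeltaTime));
    }
}
```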
http://blogs.unity3d.com/2015/12/23/1k-update-calls/
Odoo Help

No module named q on account_transfer module

Hi all, I have Odoo running on a server and my installation includes the account_transfer module. It works fine. Since the addons, server and web are on a very old version (about one year), I have decided to upgrade the addons/server/web addons to the newest versions, and before doing it on the server itself, I am testing everything on a virtual machine on another computer.

I have everything set up with the new addons/server/web addons, and copied the additional addons I use, including the account_transfer module, but when I try to access the restored database I get the error "No module named q" on the login page. This error was raised while loading the account_transfer module. After analysing the log, I found that account_transfer has the following lines in account_transfer.py:

from osv import osv, fields
from tools.translate import _
import decimal_precision as dp
import time
import q

For testing purposes I commented out the line (#import q) and tested again, and it worked. I was logged into the system and could use all the modules and applications I have. The problem is that when I try to use the account_transfer module, I get the error:

NameError: global name 'q' is not defined

So, I found that "q" is not a module. Instead, it is a global name. How can I fix this, since it works fine on the server? The problem is only on the new virtual machine I am testing.

Thank you all. Regards, Paulo

Dear Mariusz Mizgier, you saved the day. I installed the package q according to your instructions and it's working like a charm. Best regards, Paulo Matos
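A defensive pattern for the stray `import q` (q is a third-party debug-logging helper, not part of Odoo or the Python standard library) is to fall back to a no-op stub when the package is missing, so the module still loads on machines where q was never installed. This is a sketch, not the official fix; the clean solutions remain removing the import entirely or installing the package (`pip install q`), as the accepted answer suggests:

```python
# 'q' is a third-party debug-logging helper (it writes to /tmp/q).
# If it's not installed, substitute a do-nothing stand-in so that
# account_transfer.py still imports cleanly.
try:
    import q
except ImportError:
    class _NoopQ:
        def __call__(self, *args, **kwargs):
            # the real q() returns its argument so it can wrap expressions
            return args[0] if args else None

        def __getattr__(self, name):
            return self

    q = _NoopQ()
```

With this in place, calls like `q(some_value)` keep working as pass-throughs even when the debugging package is absent.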
https://www.odoo.com/forum/help-1/question/no-module-named-q-on-account-transfer-module-59726
Getting Started

Note: Make sure you have at least version 3.5 of Python. TBone uses the async/await syntax, so earlier versions of Python will not work.

Selecting a web server

TBone works with either Sanic or Aiohttp. It can be extended to work with other nonblocking web servers. If you're new to both libraries and want to know which to use, please consult their respective documentation to learn which is more suited to your project. Either way, this decision won't affect the way you will use TBone.

Model Definition

TBone provides an ODM layer which makes it easy to define data schemas, validate them, and control their serialization. Unlike ORMs or other MongoDB ODMs, such as MongoEngine, the model definition is decoupled from the data persistency layer, allowing you to use the same ODM with persistency layers on different document stores. Defining a model looks like this:

from tbone.data.fields import *
from tbone.data.models import *

class Person(Model):
    first_name = StringField(required=True)
    last_name = StringField(required=True)
    age = IntegerField()
https://tbone.readthedocs.io/en/latest/source/getting_started.html
Topic:Classical studies

Welcome to the Department of Greek Classics.

Department description

The Department of Greek Classics is a Wikiversity content development project where participants create, organize and develop learning resources for Greek Classics. The Department of Greek Classics is devoted to offering courses on the Ancient Greek language, history, religion, art, and literature. The courses will generally be divided between Language and History. Most conceivable courses can easily be reconciled with these groups. These courses only serve as an outline and have been arranged in an order from beginner onward. They are by no means written in stone, and if better lesson plans or names arise, feel free to edit. The "topic" namespace contains pages that are for management and organization of small academic units at Wikiversity such as departments (see: Wikiversity:Topics).

Department news

- 14 October.

Ancient Greek Language

- Introductory Ancient Greek Language - This course will attempt to familiarize the student with the Greek alphabet, voices, cases, noun declensions, and verb conjugations.
- Intermediate Ancient Greek Language - This course will focus primarily on the moods and the more complex sentence structuring, such as clauses.
- Advanced Ancient Greek Language - The intent of this course is to expand the student's studies beyond this website by encouraging a self-taught battery of the more obscure verb forms and translations of actual Greek texts.
- Homeric Greek - The previous courses, having dealt with Attic Greek, will have prepared the student for Homeric Greek.

Ancient Greek History

- Survey of Ancient Greek and Near Eastern History - This course will hopefully offer a very broad historical outline of the region, to be delved into more fully in the later courses.
- Pre-Hellenic Aegean Civilizations - This will focus on the civilizations of the Mycenaeans and Minoans, generally granting the student handy knowledge of events leading up to the era known as the classical period of Greece.
- The Persian War - This course and the next may be combined into one and could be called Ancient Greek Military History
- The Peloponnesian War
- Ancient Greek Religion
- Life of Alexander - This course will hopefully deal with his entire life, from his birth and relationship with Philip right up to his death and the aftermath. It will deal with all aspects of his life, including social, political, military, and religious.
- The Hellenistic Era - This course will deal with the different kingdoms that arose after the death of Alexander and their relationships with each other and the other cultures around them. This course will hopefully cover material up to the acquisition of Egypt by Rome.

Remember, Wikiversity has adopted the "learning by doing" model for education. Lessons should center on learning activities for Wikiversity participants. We learn by doing. Select a descriptive name for each learning project.

Active Members/Current Projects

This is only necessary as long as the Department is being created, in order to more easily contact each other for ideas and such.

JManning 00:08, 15 October 2006 (UTC) - I plan to begin writing a basic lesson plan for the Introductory Greek course ASAP
Strothatynhe 03:48, 3 July 2007 (UTC) - Have created a course outline for Alexander the Great
PoBoy321 03:37, 19 May 2008 (UTC) - I will soon begin creating outlines for Intermediate Greek courses.
http://en.wikiversity.org/wiki/Topic:Greek_Classics
EAP7: how to expose both JAX-RS and JAX-WS service with CXF
Néstor Almeida Jul 7, 2017 8:46 AM

Hi, I'm trying to expose a service using both the RESTful and SOAP web service APIs (JAX-RS and JAX-WS). Currently I made it work using only a few JAX-* annotations:

@Path("/alertas") @WebService public class AlertasApi { ... @GET @Consumes({ "application/json", "application/xml" }) @Produces({ "application/json", "application/xml" }) @WebMethod public Alertas listarAlertas() { return alertasService.listarAlertas(); }; ...

I can run both services without writing any kind of configuration schema (I only need a bunch of microservices and to serve a Maven site webpage for their documentation). I know that JBoss EAP has an Apache CXF based API called JBossWS-CXF. My next step is to secure these services with authentication and authorization, so I want to use CXF to have a single security context. I want to keep it as simple as possible, using annotation or API (method) based configuration, but my major problem is that I can't use the JBossWS-CXF module. I specified the EAP server BOM in my Maven pom file:

<dependency> <groupId>org.jboss.spec</groupId> <artifactId>jboss-javaee-7.0</artifactId> <version>${version.jboss.spec.javaee.7.0}</version> <type>pom</type> <scope>import</scope> </dependency>

I can't see any reference to CXF or JBossWS in my project's imported libraries. I'm going crazy because I looked for any reference and didn't find anything. In summary:

JBoss EAP 7.0.0
WAR deployment for REST and SOAP microservices
Both use the same Java method and are configured by JAX annotations
Using Maven for project dependencies
I want to secure the services, so I want to use an API that implements security on both (REST and SOAP) exposed services
I want to make it as simple as possible

Which is the best approach to make it work? CXF? In that case, how can I use the native JBoss CXF (JBossWS-CXF) in my Maven project? Thank you in advance. 1.
Re: EAP7 how to expose both JAX-RS and JAX-WS service with CXF
AnupKumar Dey Jul 11, 2017 7:54 AM (in response to Néstor Almeida)

See the documentation: Developing Web Services Applications - Red Hat Customer Portal

2. Re: EAP7 how to expose both JAX-RS and JAX-WS service with CXF
Néstor Almeida Jul 11, 2017 8:41 AM (in response to AnupKumar Dey)

I was reading about it over the last few days and I only found this:

2.11.2. Securing JAX-RS Web Services Using Annotations
- Enable role-based security.
- Add security annotations to the JAX-RS web service.

And

3.8.3. Security Token Service (STS)
The Security Token Service (STS) is the core of the WS-Trust specification. It is a standards-based mechanism for authentication and authorization.

But my goal is to have a single shared security configuration. I expected that Apache CXF/JBossWS would provide it, but I can't find anything like this. I want an EJB approach, not a Servlet, to delegate as much as possible to the JBoss EAP server. Regards.
https://developer.jboss.org/thread/275444
* Lars Marius Garshol | | IMHO we should use the namespace support that is built-in to expat. | Anything else is bound to slow us down. * Paul Prescod | | Unfortunately expat's namespaces support is broken from the point of | view of SAX and DOM. I know, but it's much better to simply modify the output from expat (preferably in C source) than to implement namespaces in Python. Remember: we have to map from the 'uri localname' to a tuple for every single tag in the entire XML document. That is going to have an appreciable performance hit if you implement it in Python no matter how well you implement it. If you do something once for every element it has a performance impact. This is done twice, and it's rather complex. * Lars Marius Garshol | | Whoops! parseFile() no longer exists! We now use the InputSource class | instead. * Paul Prescod | | InputSource seemed like overkill to me. More of a Java-ish type | safety thing. I'd appreciate your opinion. This was what I thought initially as well, but it turns out that InputSource is in fact extremely useful. The trouble is that getting a stream is not enough in the general case. You need to know the base URI. You may want to know the public id. You may need to know the encoding. InputSource is very handy in that it bundles all that information in a single object, making both parse(...) and resolveEntity(...) much more elegant than they would otherwise be. | In my opinion, parse() should accept a string or a stream. If a | string, it should be treated as a URL or filename and opened. Accepting a string is what it does right now. Streams I think should not be directly accepted, but a convenience function or method for them is OK. | We will also provide a convenience method parseString() that parses an | XML string (probably by wrapping it in a cStringIO. Sounds good, as did the rest of the mail. --Lars M.
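The per-element cost being discussed comes from converting expat's single-string element names into the (uri, localname) pairs that SAX and DOM expect: with namespace processing enabled, expat reports an element as "uri localname" in one string. A minimal sketch of that mapping — illustrative only, not the actual xml.sax source:

```python
# expat reports a namespaced element name as "uri localname" in a single
# string (separated by a space); SAX wants a (uri, localname) tuple.
# This conversion must run once per element, which is why doing it in
# Python rather than C worried the posters.
def split_name(expat_name):
    if " " in expat_name:
        uri, local = expat_name.split(" ", 1)
        return (uri, local)
    return (None, expat_name)  # element outside any namespace
```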
https://mail.python.org/pipermail/xml-sig/2000-June/002853.html
The MEfftoffH class is the base class for vector boson fusion type processes in Herwig. More...

#include <MEfftoffH.h>

The MEfftoffH class is the base class for vector boson fusion type processes in Herwig.
Definition at line 25 of file MEfftoffH.h.
Reimplemented in Herwig::MEPP2HiggsVBF, Herwig::MEPP2HiggsVBFPowheg, and Herwig::MEee2HiggsVBF.

Access to the vector ParticleData objects.
Definition at line 173 of file MEfftoffH.h.

The vertices for the calculation of the matrix element. Vertex for fermion-fermion-W.
Definition at line 313 of file MEfftoffH.h.

The intermediate vector bosons.
Definition at line 288 of file MEfftoffH.h.

The static object used to initialize the description of this class. Indicates that this is an abstract class with persistent data.
Definition at line 251 of file MEfftoffH.h.
https://herwig.hepforge.org/doxygen/classHerwig_1_1MEfftoffH.html
Register fork handlers

#include <process.h>

int pthread_atfork( void (*prepare)(void),
                    void (*parent)(void),
                    void (*child)(void) );

Library: libc. Use the -l c option to qcc to link against this library. This library is usually included automatically.

The pthread_atfork() function registers fork handler functions to be called before and after a fork(), in the context of the thread that called fork(). You can set one or more of the arguments to NULL to indicate no handler. You can register multiple prepare, parent, and child fork handlers by making additional calls to pthread_atfork(). In this case, the parent and child handlers are called in the order they were registered, and the prepare handlers are called in the reverse order.
http://www.qnx.com/developers/docs/qnxcar2/topic/com.qnx.doc.neutrino.lib_ref/topic/p/pthread_atfork.html
Connect the device to the PC and open the device manager to install the FTDI driver for the device. For macOS users, please go to System Preferences -> Security and Privacy -> General and allow apps downloaded from the App Store and identified developers. Please click the button below to download the appropriate M5Burner firmware burning tool for the operating system you are using, and open the application after decompression. Note for macOS users: after completing the installation, please move the application into Applications, as shown in the following figure. For Linux users, please switch to the decompressed file path and run ./M5Burner in the terminal to start the application. Double-click to open the Burner burning tool, ① select the corresponding device class in the left menu, ② select the firmware version you need, and ③ click the download button to download. Connect the M5 device to the computer through the Type-C data cable, ④ select the corresponding COM port (the baud rate can use the default configuration in M5Burner), and ⑤ after the configuration is complete, click "Burn" to burn. You can fill in the WIFI information that the device will connect to later during the firmware burning phase. (This information will be used by the device to connect to the network; in this tutorial we will program in USB mode, so it is not required.) When the burning log prompts "Successfully", it means that the firmware has been burned. For the first burning, or when the firmware program runs abnormally, you can erase the flash memory by clicking "Erase" in the upper right corner. In subsequent firmware updates, you do not need to erase it again. If you need to modify the configuration file, please connect your M5 device to the computer through the Type-C cable and select the corresponding COM port, ⑦ then you can click Configuration to modify it. APIKey: communication credentials of M5 devices when programming with the UIFlow web IDE.
Start Mode: configurable mode to enter after startup. Quick Start: you can choose Quick Start to skip the startup interface. Server: server selection. Wifi: configure the SSID and password for Wi-Fi. Download the VSCode IDE and install the M5Stack plug-in: search the plug-in market for M5Stack and install the plug-in, as shown below. Click the power button on the left side of the device to restart; after entering the menu, quickly click the right button to switch mode and select USB mode. Click the Add M5Stack option in the lower left corner and select the corresponding device port to complete the connection. After completing the above steps, let's implement a simple lighting case program: open the M5Stack file tree and type in the following program. Click Run in M5Stack to easily light the built-in LED. If the device is reset, click the refresh button to reopen the file tree.

from m5stack import *
from m5ui import *
from uiflow import *

M5Led.on()
https://docs.m5stack.com/en/quick_start/m5stickc_plus/mpy
The homework is to find the 10 errors, fix them, and add a comment saying how each one was fixed. It compiles now; I just have to find the 10 errors. import java.util.Scanner; import java.util.regex.Pattern; /** * A class designed to help a teacher processes grades. It provides methods * for converting a numeric grade into a letter grade, for generation an * appropriate message to the student based on the letter grade and a report * that includes all three items. */ public class GradeReporter { // The limit is the inclusive lower limit for each letter // grade -- this means that 89.5 is an 'A' not a 'B' public static final int A_LIMIT = 90; public static final int B_LIMIT = 80; public static final int C_LIMIT = 70; public static final int D_LIMIT = 70; /** Converts a numeric grade into a letter grade. Grades should be rounded to * nearest whole number * * @param a numeric grade in the range of 0 to 100 * @returns a letter grade based on the numeric grade, possible grades are A, B, C, D and F. */ public char letterGrade(double numberGrade) { char letterGrade = 'A' ; int grade = (int)numberGrade; if (grade >= A_LIMIT) letterGrade = 'A'; else if (grade >= B_LIMIT) letterGrade = 'B'; else if (grade > C_LIMIT) letterGrade = 'C'; else if (grade >= D_LIMIT) letterGrade = 'D'; return letterGrade; } /** Prepares a message appropriate to a particular letter grade * @param letterGrade a letter grade, known grades are A, B, C, D and F. * @returns a message to the student based on the grade.
*/ public String message(char letterGrade) { char grade= 'A'; String message = ""; switch(grade) { case 'A': message = "Excellent!"; break; case 'B': message = "Very Good."; case 'C': message = "Average."; case 'D': message = "You are in danger of failing the course."; break; case 'F': message = "Please see me."; break; default: message = "unknown grade"; } if (letterGrade == 'A' && letterGrade == 'B') message += " Keep up the good work."; if (letterGrade == 'C' && letterGrade == 'D') message += " Please try harder."; message += " You have room to improve."; return message; } /** Prepares a report from a numeric grade. * @param numGrade a numeric grade in the range of 0 to 100 * @returns the numeric grade, its letter grade and a message */ public String report(double numGrade) { return "Grade: " + numGrade + " " + letterGrade(numGrade) + " " + message(letterGrade(numGrade)); } /** User repeatedly inputs numeric grades and the program prints out the * grade, the letter grade and a comment until the user enters input * beginning with the letter Q. * * It uses something called a "regular expression" to confirm that * the input is numeric before attempting to convert the String input * into a double. This is a powerful tool, and althought not covered * in this class, it can be very useful to learn. */ public static void main(String[] args) { GradeReporter reporter = new GradeReporter(); Scanner console = new Scanner(System.in); String input = ""; String NUMERIC_ONLY = "[+-]?\\d*\\.?\\d+"; // a regular expression pattern // you would read this as: an optional sign ([+-]?), followed by 0 or more digits (\\d*), // followed by an optional decimal point(.?), followed by one or more digits(\\d+). //Keep repeating until the user enters a q. 
do { System.out.print("Enter grade(Q to quit): "); input = console.nextLine(); // if input is numeric, convert it to double and print the // report that reporter generates if (Pattern.matches(NUMERIC_ONLY, input)) { double numGrade = Double.parseDouble(input); System.out.println(reporter.report(numGrade)); } } while (input.toLowerCase().startsWith("q") ); System.out.println("Good bye."); } }
http://forums.codeguru.com/printthread.php?t=502622&pp=15&page=1
Windows Communication Foundation From the Inside

Back to errors and faults for a bit with this two-part series on modifying the HTTP status code used for fault messages. First, we'll need some background. What happens at the HTTP level when a web service encounters a problem? That's a good question because it's not clear at all from programming the service what's going to happen under the covers. Let's build a service and find out. Here's the simplest web service I could think of that has a slight problem. I'll be hosting this in IIS so there's no other code needed to get going. You can imagine the configuration that goes along with this service, but I'm going to omit any discussion of that because it won't be relevant to anything we have to look at.

[ServiceContract]
public interface IService
{
   [OperationContract(Action = "*", ReplyAction = "*")]
   Message Action(Message m);
}

public class Service : IService
{
   public Message Action(Message m)
   {
      throw new FaultException("!");
   }
}

We can cut all the crud out of the messages by using a POX binding: a text encoder with MessageVersion set to None and a normal HTTP transport. Now, we can run this service and see what happens. I'm just going to telnet to the service address so that we can easily see all of the HTTP headers.

HTTP/1.1 500 Internal Server Error
Content-Type: application/xml; charset=utf-8
Server: Microsoft-IIS/7.0
X-Powered-By: ASP.NET
Date: Tue, 10 Jan 2007 06:25:16 GMT
Connection: close
Content-Length: 159

<Fault xmlns=""><Code><Value>Sender</Value></Code><Reason><Text xml:lang="en-US">!</Text></Reason></Fault>

This shows us that our service fault exception results in an HTTP status code of 500 for the response. The body of the message is something that looks a lot like a SOAP fault, but smaller because we said we weren't going to use SOAP. Inside the fault message you can see the fault elements that we talked about in past articles, and it's clear that we can modify anything inside the fault.
It's not clear though what we modify to alter the framing of the HTTP response. That brings us to the actual question for this pair of articles. How do I modify the HTTP status code that gets sent back with a fault? We'll answer that question next time, which is going to require writing a bit more code.

Next time: Modifying HTTP Error Codes, Part 2
http://blogs.msdn.com/drnick/archive/2007/01/23/modifying-http-error-codes-part-1.aspx
{-# LANGUAGE GeneralizedNewtypeDeriving, TypeOperators, TemplateHaskell, GADTs, DeriveDataTypeable, TupleSections, MultiParamTypeClasses, TypeFamilies, FlexibleContexts, ExistentialQuantification #-} {- | A zipper for navigating and modifying arbitrary algebraic data types, built from "fclabels" lenses. -} module Data.Label.Zipper ( -- * Zipper() {- | /A note on failure in zipper operations:/ Most operations on a 'Zipper' return a result in a 'Failure' class monad, throwing various types of failures. Here is a list of failure scenarios: - a 'move' Up arrives at a type that could not be cast to the type expected - a @move (Up 1)@ when already 'atTop', i.e. we cannot ascend anymore - a @move@ to a label (e.g. @foo :: FooBar :~> FooBar@) causes a failure in the getter function of the lens, usually because the 'focus' was the wrong constructor for the lens - a 'close' cannot re-build the structure because some setter failed, as above. Again, this does not occur for TH-generated lenses. See the "failure" package for details. -} -- ** Creating and closing Zippers , zipper , close -- ** Moving around , Motion(..) , Up(..) , UpCasting(..) , To() , to --, Flatten(..) -- *** Error types {- | Every defined 'Motion' has an associated error type, thrown in a 'Failure' class monad (see "failure"). These types are also part of a small 'Exception' hierarchy. -} , ZipperException() , UpErrors(..) , ToErrors(..) -- *** Repeating movements , moveWhile , moveUntil , moveFloor -- ** The zipper focus -- | a "fclabels" lens for setting, getting, and modifying the zipper's -- focus. Note: a zipper may fail to 'close' if the lens used to reach the -- current focus performed some validation. , focus , viewf , setf , modf -- ** Querying Zippers and Motions , atTop , level , LevelDelta(..) -- ** Saving and recalling positions in a Zipper , save , closeSaving , restore , flatten -- * Convenience operators, types, and exports , Zipper1 -- ** Re-exports {- | These re-exported functions should be sufficient for the most common - zipper functionality -} , Data.Typeable.Typeable(..)
, Data.Label.mkLabels , (M.:~>) , Control.Failure.Failure(..) , Control.Exception.Exception(..) ) where {- - IMPLEMENTATION NOTES: - - NEXT: - ---------- - - complete code coverage - - make error types return more useful info: height above where constructor - failed, typeRep of the failure, - - implement focusValid, or a better solution. - - can we define appropriate instances to allow, e.g. `move -2` ? - - pure move functionality (either separate module/namespace or new - function) - - pureMove :: (PureMotion m)=> - - re. above: also see note under CONVENIENCE: can we use a mechanism - similar to what fclabels uses on generated zippers to force the use of - e.g. focusSafe on a zipper where we have used 'To' with a failable lens, - forcing a close function that would return Maybe, etc. - - We should provide a function validate :: FallibleZipper -> ClosableZipper, which allows validation at any one time - Then, moveFallible :: z -> FallibleZipper, move :: z -> z - - But there is a real question with fclabels that has come up: - 1) basic lenses that can fail only ever fail (because of multiple - constructors) on the getter, yet underlying type can fail in setter - too. This adds needless fallability to our close function - 2) we might like (as we want in focusValid below) to have a lens - that ONLY fails on a setter (does validation), but which always - succeeds in a getter (has a single constructor for instance) - - - - conversion from motions to fclabels (:~>) - - add Flatten motion down that collapses history? - - doesn't make sense for motion from top level. return Nothing? - - other motion ideas: - - Up to the nth level of specified type - - up to the level of a specified type with focus matching predicate - - Up to topmost level matching type: - - repeat descend a :~> a (ToLast?) - - motion down a :~> a, until matching pred. 
- - look at Arrow instance for thrist (in module yet) - - make To an instance if Iso (if possible) - - Kleisli-wrapped arrow interface that works nicely with proc notation - - PERFORMANCE TODO - ----------------- - - consider instead of using section, use head form of parent with - the child node set to undefined. Any performance difference? - - actually look at how this performs in terms of space/time - - ROADMAP: - Particularly Elegant - Pink Elephant - Placebo Effect - Patiently Expectant - Probably ?? - -} -- this is where the magic happens: import Data.Label import qualified Data.Label.Maybe as M import Data.Typeable import Data.Thrist -- for our accessors, which are a category: import Control.Category import Prelude hiding ((.), id) import Control.Applicative import Control.Arrow(Kleisli(..)) import Control.Monad import Control.Failure import Control.Exception ------------------------- -- TYPES: the real heros ------------------------ -- ZIPPER TYPE -- ----------------- {- * - It's interesting to note in our :~> lenses the setter also can fail, and can - fail based not only on the constructor 'f' but also for certain values of 'a' - This is kind of interesting; it lets lenses enforce constraints on a type - that the type system cannot, e.g. Foo Int, where Int must always be odd. - - So a module might export a type with hidden constructors and only lenses for - an interface. Our zipper could navigate around in the type, and all the - constraints would still be enforced on the unzippered type. Cool! -} -- We store our history in a type-threaded list of pairs of lenses and -- continuations (parent data-types with a "hole" where the child fits), the -- lenses are kept around so that we can extract the "path" to the current -- focus and apply it to other data types. Use GADT to enforce Typeable. 
data HistPair b a where H :: (Typeable a, Typeable b)=> { hLens :: (a M.:~> b) , hCont :: Kleisli Maybe b a -- see above } -> HistPair b a type ZipperStack b a = Thrist HistPair b a -- TODO: this could be a contravariant functor, no?: -- | Encapsulates a data type @a@ at a focus @b@, supporting various 'Motion' -- operations data Zipper a b = Z { stack :: ZipperStack b a , _focus :: b } deriving (Typeable) $(mkLabels [''Zipper]) -- MOTION CLASSES -- -------------------- --TODO NOTE: this is the class we would like, however this causes a cycle --because of superclass declaration of Motion. see this thread: -- --class (Exception (ThrownBy mot), Motion (Returning mot))=> Motion mot where -- | The class of "motions": operations that move the focus of a 'Zipper'. class (Exception (ThrownBy mot))=> Motion mot where type ThrownBy mot :: * type Returning mot :: * -> * -> * -- | Move a 'Zipper' according to the given motion, throwing in a 'Failure' class monad if the motion fails. move :: (Typeable b, Typeable c, Failure (ThrownBy mot) m) => mot b c -> Zipper a b -> m (Zipper a c) move mot z = moveSaving mot z >>= return . snd -- | like 'move' but saves the @Motion@ that will return us back to the -- location we started from in the passed zipper. moveSaving :: (Typeable b, Typeable c, Failure (ThrownBy mot) m) => mot b c -> Zipper a b -> m ((Returning mot) c b, Zipper a c) -- MOTIONS ------------- -- | a 'Motion' upwards in the data type, the given number of levels. newtype Up c b = Up { upLevel :: Int } deriving (Show,Num,Integral,Eq,Ord,Bounded,Enum,Real) data UpErrors = CastFailed | LensSetterFailed | MovePastTop deriving (Show,Typeable,Eq) {- --TODO: THIS IS PROBABLY NOT A GOOD IDEA UNLESS WE CAN DO IT RIGHT. AT THE --MOMENT I DON'T UNDERSTAND HOW GHC DOES SOMETHING LIKE: -- [-1,-2..-3] :: [ Up Int Int] -- BUT THE FOLLOWING CODE ISN'T ENOUGH. FOR NOW DERIVE NUMERIC CLASSES ABOVE AND -- DO NOT DOCUMENT USING `move 3`. -- | 'fromInteger' gets defined as @Up . abs@, so @move (Up 2)@ is equivalent to -- @move (-2)@.
instance Num (Up a b) where (Up a) + (Up b) = Up $ a+b (Up a) - (Up b) = Up $ a-b (Up a) * (Up b) = Up $ a*b abs (Up n) = Up $ abs n signum (Up n) = Up $ signum n fromInteger n = Up $ fromInteger $ abs n instance Integral (Up a b) where toInteger (Up n) = toInteger $ negate $ abs n quotRem (Up a) (Up b) = (Up $ quot a b, Up $ rem a b) -- also need fromEnum and fromIntegral? -} instance Category Up where (Up m) . (Up n) = Up (m+n) id = 0 instance Motion Up where type ThrownBy Up = UpErrors type Returning Up = To move (Up 0) z = maybeThrow CastFailed $ gcast z move (Up n) (Z (Cons (H _ k) stck) c) = maybeThrow LensSetterFailed (runKleisli k c) >>= move (Up (n-1)) . Z stck move _ _ = failure MovePastTop -- TODO: it makes more sense to define 'move' and 'saveFromAbove' in terms -- of moveSaving below, but we ran into some type weirdness, so... moveSaving p z = liftM2 (,) (saveFromAbove p z) (move p z) -- | indicates a 'Motion' upwards in the zipper until we arrive at a type which -- we can cast to @b@, otherwise throwing 'UpErrors' data UpCasting c b = UpCasting deriving(Show,Typeable,Eq) instance Motion UpCasting where type ThrownBy UpCasting = UpErrors type Returning UpCasting = To moveSaving _ z = do when (atTop z) $ failure MovePastTop firstSuccess $ map (flip ms z) [Up 1 ..] where ms = moveSaving :: (Typeable b, Typeable c)=>Up c b -> Zipper a c -> Either UpErrors (To b c, Zipper a b) firstSuccess [] = failure CastFailed -- this would be raised on each of its ancestors: firstSuccess ((Left LensSetterFailed):_) = failure LensSetterFailed -- if cast failed, skip: firstSuccess ((Left CastFailed):zms) = firstSuccess zms firstSuccess ((Right (m,z')):_) = return (m,z') firstSuccess _ = error "bug in move UpCasting" -- | a 'Motion' down into a data type along a path built up from one or more "fclabels" lenses with 'to'. newtype To a b = S { savedLenses :: Thrist TypeableLens a b } deriving (Typeable, Category) data TypeableLens a b where TL :: (Typeable a, Typeable b)=> { tLens :: (a M.:~> b) } -> TypeableLens a b -- TODO: we might store some info here re.
-- at what level the error occurred:
data ToErrors = LensGetterFailed
    deriving (Show, Typeable, Eq)

instance Motion To where
    type ThrownBy To = ToErrors
    type Returning To = Up

    move mot z = maybeThrow LensGetterFailed $ foldMThrist pivot z $ savedLenses mot

    moveSaving p z = do
        z' <- move p z
        let motS = Up $ lengthThrist $ savedLenses p
        return (motS, z')

-- | use a "fclabels" label to define a Motion \"down\" into a data type.
to :: (Typeable a, Typeable b) => (a M.:~> b) -> To a b
to = S . flip Cons Nil . TL

{- TODO for next version
-- | a 'Motion' \"down\" that squashes the saved history of the motion, so for
-- instance:
--
-- > level $ move (Flatten l) z == level z
--
-- and:
--
-- > move (Up 1) z == move (Up 1) $ move (Flatten l) z
newtype Flatten a b = Flatten (To a b)
    deriving (Typeable, Category)

instance Motion Flatten where
    move m z = undefined --flip (foldMThrist pivot) . savedLenses
-}

--------------- REPEATED MOTIONS -----------------

-- | Apply the given Motion to a zipper until the Motion fails, returning the
-- last location visited. For instance @moveFloor (to left) z@ might return
-- the left-most node of a 'zipper'ed tree @z@.
--
-- > moveFloor m z = maybe z (moveFloor m) $ move m z
moveFloor :: (Motion m, Typeable a, Typeable b) => m b b -> Zipper a b -> Zipper a b
moveFloor m z = maybe z (moveFloor m) (move m z)

-- | Apply a motion each time the focus matches the predicate, raising an error
-- in @m@ otherwise
moveWhile :: (Failure (ThrownBy mot) m, Motion mot, Typeable c)
          => (c -> Bool) -> mot c c -> Zipper a c -> m (Zipper a c)
moveWhile p m z
    | p $ viewf z = move m z >>= moveWhile p m
    | otherwise   = return z

{- -- THIS SEEMS NOT TERRIBLY USEFUL, AND WAS CONFUSING EVEN ME --
-- | Apply a motion one or more times until the predicate applied to the focus
-- returns @True@, otherwise raising an error in @m@ if a 'move' fails before
-- we reach a focus that matches.
moveUntil :: (Failure (ThrownBy mot) m, Motion mot, Typeable c)
          => (c -> Bool) -> mot c c -> Zipper a c -> m (Zipper a c)
moveUntil p m z = move m z >>= maybeLoop
    where maybeLoop z'
            | p $ viewf z' = return z'
            | otherwise    = moveUntil p m z'
-}

-- | Apply a motion zero or more times until the focus matches the predicate
--
-- > moveUntil p = moveWhile (not . p)
moveUntil :: (Failure (ThrownBy mot) m, Motion mot, Typeable c)
          => (c -> Bool) -> mot c c -> Zipper a c -> m (Zipper a c)
moveUntil p = moveWhile (not . p)

-- TODO: consider:
-- moveWhen
-- moveUnless

---------------

-- | create a zipper with the focus on the top level.
zipper :: a -> Zipper a a
zipper = Z Nil

------------------------------
-- ADVANCED ZIPPER FUNCTIONS:
------------------------------

data ZipperLenses a c b = ZL { zlStack :: ZipperStack b a
                             , zLenses :: Thrist TypeableLens b c }

-- INTERNAL FOR NOW:
saveFromAbove :: (Typeable c, Typeable b, Failure UpErrors m) => Up c b -> Zipper a c -> m (To b c)
saveFromAbove n = liftM (S . zLenses) . mvUpSavingL (upLevel n) . flip ZL Nil . stack
    where
        mvUpSavingL :: (Typeable b', Typeable b, Failure UpErrors m) => Int -> ZipperLenses a c b -> m (ZipperLenses a c b')
        mvUpSavingL 0 z = maybeThrow CastFailed $ gcast z
        mvUpSavingL n' (ZL (Cons (H l _) stck) ls) = mvUpSavingL (n'-1) (ZL stck $ Cons (TL l) ls)
        mvUpSavingL _ _ = failure MovePastTop

-- | Close the zipper, returning the saved path back down to the zipper\'s
-- focus. See 'close'
closeSaving :: Zipper a b -> (To a b, Maybe a)
closeSaving (Z stck b) = (S ls, ma)
    where ls    = getReverseLensStack stck
          kCont = compStack $ mapThrist hCont stck
          ma    = runKleisli kCont b

-- TODO: consider that if we stick with fclabels-generated lenses here, there
-- isn't any conceptual reason why such lenses would have to fail on their
-- setters, and why 'close' should have to fail here:
-- I guess this would require an implementation of M.lens like:
--
--     lens :: (f -> Maybe a) -> (f -> Maybe (a -> f)) -> f :~> a
--
-- e.g. lLeft = lens lGet lSet where
--     lGet (Left a) = Just a
--     lGet _ = Nothing
--     lSet (Left a) = Just (\a'-> Left a') -- if the type had multiple params they would be preserved of course
--     lSet _ = Nothing
--
-- ...so is (Just $ \a-> Left a) an arrow at this point?

-- |
close :: Zipper a b -> Maybe a
close = snd . closeSaving

-- | Return a path 'To' the current location in the 'Zipper'.
-- This lets you return to a location in your data type with 'restore'.
--
-- > save = fst . closeSaving
save :: Zipper a b -> To a b
save = fst . closeSaving

-- TODO: consider making flatten polymorphic over: To, Zipper, etc. and change name to toLens

-- | Extract a composed lens that points to the location we saved. This lets
-- us modify, set or get a location that we visited with our 'Zipper', after
-- closing the Zipper, using "fclabels" @get@ and @set@.
flatten :: (Typeable a, Typeable b) => To a b -> (a M.:~> b)
flatten = compStack . mapThrist tLens . savedLenses

-- |
restore :: (Typeable a, Typeable b, Failure ToErrors m) => To a b -> a -> m (Zipper a b)
restore s = move s . zipper

-- | returns 'True' if 'Zipper' is at the top level of the data structure:
atTop :: Zipper a b -> Bool
atTop = nullThrist . stack

-- | Return our zero-indexed depth in the 'Zipper'.
-- if 'atTop' zipper then @'level' zipper == 0@
level :: Zipper a b -> Int
level = lengthThrist . stack

-- | Motion types which alter a Zipper by a knowable integer quantity.
-- Concretely, the following should hold:
--
-- > level (move m z) == level z + delta m
--
-- For motions upwards this returns a negative value.
class (Motion m) => LevelDelta m where
    delta :: (Typeable a, Typeable b) => m a b -> Int

instance LevelDelta Up where
    delta = negate . upLevel

instance LevelDelta To where
    delta = lengthThrist . savedLenses

{- TODO maybe in next version
instance LevelDelta Flatten where
    delta = const 0
-}

----------------------------------------------------------------------------

----------------
-- CONVENIENCE
----------------

-- TODO: we should at least export a lens 'focusM' or 'focusSafe' that fails
-- when the zipper fails validation (i.e. can't be closed). There are probably
-- some clever polymorphic solutions similar to what fclabels itself does to
-- force use of focusSafe when we've moved with a failable lens, vs. a zipper
-- untainted by failable lenses in history (in which case 'close' will never
-- fail).

-- | a view function for a Zipper\'s 'focus'.
--
-- > viewf = get focus
viewf :: Zipper a b -> b
viewf = get focus

-- | modify the Zipper\'s 'focus'.
--
-- > modf = modify focus
modf :: (b -> b) -> Zipper a b -> Zipper a b
modf = modify focus

-- | set the Zipper\'s 'focus'.
--
-- > setf = set focus
setf :: b -> Zipper a b -> Zipper a b
setf = set focus

-- | a simple type synonym for a 'Zipper' where the type at the focus is the
-- same as the type of the outer (unzippered) type. Cleans up type signatures
-- for simple recursive types:
type Zipper1 a = Zipper a a

------------
-- HELPERS
------------

-- The core of move To
pivot :: forall t t1 t2. Zipper t t1 -> TypeableLens t1 t2 -> Maybe (Zipper t t2)
pivot (Z t a) (TL l) = Z (Cons h t) <$> mb
    where h  = H l (Kleisli c)
          c  = flip (M.set l) a
          mb = M.get l a

-- MAKING THIS GLOBAL SHOULD PLEASE GHC 7.0 WITHOUT EXTRA EXTENSIONS. SEE:
--
revLocal :: forall t t1 t2. Flipped (Thrist TypeableLens) t t1 -> HistPair t1 t2 -> Flipped (Thrist TypeableLens) t t2
revLocal (Flipped t) (H l _) = Flipped $ Cons (TL l) t

-- this would be useful in thrist
newtype IntB a b = IntB { getInt :: Int }

plusB :: IntB a b -> IntB b c -> IntB a c
plusB a b = IntB (getInt a + getInt b)

lengthThrist :: Thrist (+>) a b -> Int
lengthThrist = getInt . foldrThrist plusB (IntB 0) . mapThrist (const $ IntB 1)

maybeThrow :: (Failure e m) => e -> Maybe a -> m a
maybeThrow e = maybe (failure e) return

----------------------
-- EXCEPTION HIERARCHY
----------------------

-- NOTE: a 'Throws' hierarchy must be defined manually for c-m-e. Perhaps we
-- should create a separate package with those instances defined

-- | The root of the exception hierarchy for Zipper 'move' operations:
data ZipperException = forall e . Exception e => ZipperException e
    deriving (Typeable)

instance Show ZipperException where
    show (ZipperException e) = show e

instance Exception ZipperException

instance Exception UpErrors where
    toException = toException . ZipperException
    fromException x = do
        ZipperException a <- fromException x
        cast a

instance Exception ToErrors where
    toException = toException . ZipperException
    fromException x = do
        ZipperException a <- fromException x
        cast a
http://hackage.haskell.org/package/pez-0.1.0/docs/src/Data-Label-Zipper.html
Working Programmer - Multiparadigmatic .NET, Part 8: Dynamic Programming

By Ted Neward | June 2011

In last month's article, we finished off the third of the three meta-programming facilities supported by the Microsoft .NET Framework languages, that of parametric polymorphism (generics), and talked about how it provided variability in both structural and behavioral manners. In as far as it goes, parametric metaprogramming provides some powerful solutions. But it's not the be-all, end-all answer to every design problem—no single programming paradigm is.

As an example, consider the Money<> class that served as the last example and testbed (see Figure 1). Recall, from last time, that the main reason we use the currency as a type parameter is to avoid accidental compiler-permitted conversion of euros to dollars without going through an official conversion rate to do so.

    class USD { }
    class EUR { }
    class Money<C> {() }; } }

As was pointed out last time, being able to do that conversion is important, though, and that's what the Convert<> routine is intended to do—give us the ability to convert dollars to euros, or to pesos, or to Canadian dollars, or whatever other currency we might need or want to convert to. But that means some kind of currency-conversion code, which is obviously missing from the implementation in Figure 1—right now, we just do a 1-1 conversion, simply changing the Currency property over to the new C2 currency type, and that's just not going to cut it.

My Money, Your Money, It's All Funny Money

Fixing this means we need some kind of conversion routine to do the calculation of one to the other, and that can take on a lot of different solutions. One approach might be to leverage the inheritance axis again and make USD and EUR into ICurrency types with routines designed to do that exact conversion. Doing so begins with the definition of an ICurrency type and marking USD and EUR as implementors of that interface, as shown in Figure 2.
    interface ICurrency { }
    class USD : ICurrency { }
    class EUR : ICurrency { }
    class Money<C> where C : ICurrency {() }; } }

This strategy works great, so far. In fact, the additional type constraint on the type parameter in Money<> is a useful enhancement to make sure we can't have Money<string> or Money<Button>. It looks unusual, but this trick—known as the "marker interface" idiom in Java—serves an interesting and important purpose. In Java, prior to Java 5 getting its equivalent of custom attributes, we used this trick to put static declarations on types. Just as the .NET Framework uses [Serializable] to indicate that a class can be serialized into a stream of bytes, Java classes implemented (inherited) from the Serializable interface, which had no members. Much as we'd love to use custom attributes to mark USD and EUR as [Currency], type constraints can't key off of custom attributes, and having that type constraint on C is an important enhancement, so we resort to the marker interface. It's a bit unusual, but if you think about interfaces as a way of making declarative statements about what this type is, rather than just about what it can do, it makes sense. (While we're at it, we'll add constructors to make it a bit easier to instantiate Money<>.)

But trying to declare a currency conversion in ICurrency runs into an immediate snag: ICurrency has no knowledge of any subtype (concrete currency type), thus we can't really declare a method here that takes Money<USD> and converts it to Money<EUR> through some kind of automatically adjusting conversion calculation. (Some kind of Internet-based lookup or Web service would be the actual implementation here, but for now, let's assume static ratios.) But even if we could, trying to write said methods would be extremely tricky, because we'd need to dispatch based on two types (the currency we're converting from and the currency we're converting to), along with the single parameter (the amount we're converting).
Given that we like keeping the currency as a type, it means that we might take a first stab at writing this method like so: Then, it might seem like we could write something like this as a way of specializing the Convert method in derived types: Alas, this would be wrong. The compiler interprets USD and EUR as type parameters just like C1 and C2. Next, we might try something like this: But again, the compiler complains: C1 is a "type parameter" but is used like a "variable." In other words, we can't use C1 as if it were a type itself. It's just a placeholder. Yikes—this is going nowhere fast.

One potential solution is to resort to simply passing the types as Reflection-based Type parameters, which creates something like the code shown in Figure 3.

    interface ICurrency {
      float Convert(Type src, Type dest, float from);
    }

    class USD : ICurrency {
      public float Convert(Type src, Type dest, float from) {
        if (src.Name == "USD" && dest.Name == "EUR")
          return from / 1.2f;
        else if (src.Name == "EUR" && dest.Name == "USD")
          return from * 1.2f;
        else
          throw new Exception("Illegal currency conversion");
      }
    }

    class EUR : ICurrency {
      public float Convert(Type src, Type dest, float from) {
        if (src.Name == "USD" && dest.Name == "EUR")
          return from / 1.2f;
        else if (src.Name == "EUR" && dest.Name == "USD")
          return from * 1.2f;
        else
          throw new Exception("Illegal currency conversion");
      }
    }() {
      return new Money<C2>(
        Currency.Convert(typeof(C), typeof(C2), this.Quantity));
    }
  }

And it works, in that the code compiles and runs, but numerous traps lie in wait: the conversion code has to be duplicated between both the USD and EUR classes, and when new currencies are added, such as British pounds (GBP), not only will a new GBP class be needed—as would be expected—but both USD and EUR will also need to be modified to include GBP. This is going to get really messy before long.

What's in a Name?
In traditional object-oriented programming (OOP) languages, developers have been able to dispatch based on a single type by use of virtual methods. The compiler sends the request to the appropriate method implementation depending on the actual type behind the reference on which the method is invoked. (This is the classic ToString scenario, for example.) In this situation, however, we want to dispatch based on two types (C1 and C2)—what’s sometimes called double dispatch. Traditional OOP has no great solution for it other than the Visitor design pattern and, frankly, to many developers that’s not a great solution at all. It requires the creation of a single-purpose hierarchy of classes, of sorts. As new types are introduced, methods start exploding all over the hierarchy to accommodate each newcomer. But taking a step back affords us a chance to look at the problem anew. While the type-safety was necessary to ensure that Money<USD> and Money<EUR> instances couldn’t be comingled, we don’t really need the types USD and EUR for much beyond their places as type parameters. In other words, it’s not their types that we care about for currency-conversion purposes, but simply their names. And their names permit another form of variability, sometimes referred to as name-bound or dynamic programming. Dynamic Languages vs. Dynamic Programming At first blush, it may seem like there’s an intrinsic relationship between dynamic languages and dynamic programming—and to some degree there is, but only in that dynamic languages take the concept of name-bound execution to its highest degree. Rather than ascertain at compile time whether target methods or classes exist, dynamic languages like Ruby, Python or JavaScript simply assume that they exist and look them up at the last moment possible. It turns out, of course, that the .NET Framework affords the savvy designer the same kind of flexibility in binding using Reflection. 
You can create a static class that contains the names of the currencies, then invoke it using Reflection, as shown in Figure 4.

    static class Conversions {
      public static Money<EUR> USDToEUR(Money<USD> usd) {
        return new Money<EUR>(usd.Quantity * 1.2f);
      }
      public static Money<USD> EURToUSD(Money<EUR> eur) {
        return new Money<USD>(eur.Quantity / 1.2f);
      }
    }() {
      MethodBase converter = typeof(Conversions).GetMethod(
        typeof(C).Name + "To" + typeof(C2).Name);
      return (Money<C2>)converter.Invoke(null, new object[] { this });
    }
  }

Adding in a new currency, such as British pounds, means simply creating the empty GBP class (implementing ICurrency), and adding the necessary conversion routines to Conversions.

Of course, C# 4 (and just about every version of Visual Basic before this) provides built-in facilities to make this easier, assuming we know the name at compile time. C# provides the dynamic type and Visual Basic has had Option Strict Off and Option Explicit Off for decades. In fact, as Apple Objective-C shows, dynamic programming isn't necessarily limited to interpreted languages. Objective-C is a compiled language that uses dynamic programming all over the place in its frameworks, particularly for event-handling binding. Clients that wish to receive events simply provide the event-handling method, named correctly. When the sender wants to inform the client of something interesting, it looks up the method by name and invokes the method if it's present. (For those who remember back that far, this is also exactly how Smalltalk works.)

Of course, name-bound resolution has its faults, too, most of which come up in error-handling. What should happen when a method or class that you expect to be present isn't? Some languages (such as Smalltalk and the Apple implementation of Objective-C) hold that simply nothing should happen. Others (Ruby, for example) suggest that an error or exception should be thrown. Much of the right answer will depend on the domain itself.
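The same name-bound lookup the article builds with Reflection can be seen in one of the dynamic languages it mentions. Here is a minimal Python sketch (the function names, the flat 1.2 rate and the LookupError policy are illustrative choices, not from the article):

```python
# Name-bound dispatch: assemble the routine name from the two type
# names, then look it up at the last possible moment.
RATE = 1.2  # illustrative USD/EUR ratio, mirroring the article's figures

def USDToEUR(quantity):
    return quantity / RATE

def EURToUSD(quantity):
    return quantity * RATE

def convert(src, dest, quantity):
    # globals() plays the role that typeof(Conversions).GetMethod
    # plays in Figure 4.
    converter = globals().get(src + "To" + dest)
    if converter is None:
        # What to do here is domain-specific, as the article notes;
        # this sketch chooses to raise.
        raise LookupError("Illegal currency conversion")
    return converter(quantity)

print(convert("EUR", "USD", 100.0))  # 120.0
```

As in the C# version, adding a new currency means adding routines with the right names; nothing else has to change in convert.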
In the Money<> example, if it's reasonable to expect that certain currencies cannot be converted, then a missing conversion routine should trigger some kind of message to the user. If all currencies within the system should be convertible, though, then obviously there's some kind of developer mistake here and it should be caught during unit tests. In fact, it's a fair statement to suggest that no dynamic programming solution should ever be released to an unsuspecting public without a significant set of unit tests successfully passing first.

Creating Commonality

Name-bound variability represents a powerful mechanism for commonality/variability analysis, but dynamic programming hardly ends there. Using the full-fidelity metadata capabilities present within the CLR, it becomes possible to start looking at creating commonality through other criteria beyond name: method return types, method parameter types and so on. In fact, it could be argued that attributive metaprogramming is really just an offshoot of dynamic programming based on custom attributes. What's more, name-bound variability doesn't have to be tied to the entire name. Early builds of the NUnit unit testing framework assumed a test method was any method that began with the characters "test."

In my next column, we'll examine the last of the paradigms in common .NET Framework languages: that of functional programming, and how it provides yet another way to view commonality/variability analysis, which happens to be almost diametrically opposite to that viewed by traditional object-ists.

Ted Neward is a Principal with Neward & Associates, an independent firm specializing in enterprise .NET Framework and Java platform systems. He has written more than 100 articles, is a C# MVP, INETA speaker, and has authored and coauthored a dozen books, including "Professional F# 2.0" (Wrox, 2010). He consults and mentors regularly—reach him at ted@tedneward.com, or read his blog at blogs.tedneward.com.
Thanks to the following technical expert for reviewing this article: Mircea Trofin
https://msdn.microsoft.com/magazine/8410de93-b923-4af6-b70c-8d365abdc5c7
Hi! I'm learning python in conjunction with zope/plone and I would like to use eclipse/pydev because of its great features. The major problem I have is that each time I start a python run the console tells me for example "ImportError: No module named Products.MyProduct.config" which is triggered by the corresponding "from Products.MyProduct.config import *". Similar happens if I try to import something from Products.CMFCore.utils for example. I tried to modify the project's PYTHONPATH the way I added `$INSTANCE`/Products and, as an alternative, only `$INSTANCE`. Also I tried to place the `$INSTANCE`/Products directory in a source folder beneath the project but without success. I would like to know if I am making a general mistake and how I can overcome it. Thank you very much.

Fabio Zadrozny 2006-11-30

Have you already checked the getting started manual on how to configure your project pythonpath? ()

Jaap Reitsma 2007-01-18

Hi, I don't have import errors, but the code completion did not work at first, due to a peculiarity of Zope: all products are added to the 'Products' package namespace, so you cannot simply add a product to the PYTHONPATH. To cite the zope.conf file (products directive): "Each directory identified will be added to the __path__ of the Products package. All Products are initialized in ascending alphabetical order by product name. If two products with the same name exist in two Products directories, the order in which the packages appear here defines the load order. The master Products directory exists in Zope's software home, and cannot be removed from the products path (and should not be added to it here)."

One workaround for Pydev is to add a Products folder that contains the actual products; the Products folder does not need to contain an __init__.py to be recognized as a Python package by Pydev. The parent of the Products folder must be added to the PYTHONPATH. Note: you need to clean (and rebuild) the project to get the code completion to work.
A better solution would be the ability to add a package prefix to each python path source folder. For Zope/Plone products the prefix would then be 'Products'. Is that possible Fabio? Kind regards, Jaap
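The workaround Jaap describes can be sketched as a directory layout (the "zope-src" folder and "MyProduct" names are made up for illustration):

```shell
# Create a product package under a Products folder.
mkdir -p zope-src/Products/MyProduct
touch zope-src/Products/MyProduct/__init__.py
# Note: zope-src/Products itself deliberately has NO __init__.py;
# per the workaround, Pydev still treats it as a package as long as
# its PARENT folder (zope-src) is what you add to the PYTHONPATH.
ls zope-src/Products
```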
http://sourceforge.net/p/pydev/discussion/293649/thread/903ee9df
glibmm: Glib::QueryQuark Class Reference

Quarks are unique IDs in Glib for strings for use in hash table lookups. More...

#include <glibmm/quark.h>

Inheritance diagram for Glib::QueryQuark:

Detailed Description

Quarks are unique IDs in Glib for strings for use in hash table lookups. Each Quark is unique but may change between runs.

QueryQuark is a converter class for looking up but not allocating an ID. An id of 0 means the quark lookup failed.

Quark is used for actions for which the id should live on, while QueryQuark should be used for queries, i.e.:

    void set_data (const Quark&, void* data);
    void* get_data (const QueryQuark&);
http://developer.gnome.org/glibmm/unstable/classGlib_1_1QueryQuark.html
Python 3 Language Gotcha -- and a short reminisce

Written by Barry Warsaw in technology on Thu 18 April 2013. Tags: python, python3, ubuntu

There's a lot of Python nostalgia going around today, from Brett Cannon's 10 year anniversary of becoming a core developer, to Guido reminding us that he came to the USA 18 years ago. Despite my stolen time machine keys, I don't want to dwell in the past, except to say that I echo much of what Brett says. I had no idea how life changing it would be -- on both a personal and professional level -- when Roger Masse and I met Guido at NIST at the first Python workshop back in November 1994. The lyric goes: what a long strange trip it's been, and that's for sure. There were about 20 people at that first workshop, and 2500 at Pycon 2013.

And Python continues to hold little surprises. Just today, I solved a bug in an Ubuntu package!

    import sys

    def bar(i):
        if i == 1:
            raise KeyError(1)
        if i == 2:
            raise ValueError(2)

    def bad():
        e = None
        try:
            bar(int(sys.argv[1]))
        except KeyError as e:
            print('ke')
        except ValueError as e:
            print('ve')
        print(e)

    bad()

Here's a hint: this works under Python 2, but gives you an UnboundLocalError on the e variable under Python 3. Why?

The reason is that in Python 3, the targets of except clauses are del'd from the current namespace after the try...except clause executes. This is to prevent circular references that occur when the exception is bound to the target. What is surprising and non-obvious is that the name is deleted from the namespace even if it was bound to a variable before the exception handler! So really, setting e = None did nothing useful!

Python 2 doesn't have this behavior, so in some sense it's less surprising, but at the expense of creating circular references. The solution is simple. Just use a different name to capture and use the exception outside of the try...except clause. Here's a fixed example:

    def good():
        exception = None
        try:
            bar(int(sys.argv[1]))
        except KeyError as e:
            exception = e
            print('ke')
        except ValueError as e:
            exception = e
            print('ve')
        print(exception)

So even after almost 20 years of hacking Python, you can still experience the thrill of discovering something new.
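The implicit deletion described above can be observed directly. This small demonstration (not from the original post) shows that the pre-bound name really is gone after the handler runs:

```python
def except_target_deleted():
    e = "pre-bound"  # bind the name before the handler
    try:
        raise KeyError(1)
    except KeyError as e:
        pass  # the handler ends with an implicit "del e"
    # The pre-bound value is not restored; the name is simply gone:
    return "e" in locals()

result = except_target_deleted()
print(result)  # False under Python 3
```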
https://www.wefearchange.org/2013/04/python-3-language-gotcha-and-short.html
A class is a template used to create objects. They are made up of members, the main two of which are fields and methods. Fields are variables that hold the state of the object, while methods define what the object can do. An object is also called an instance. Each object contains its own set of fields, which can hold values that are different from those of other instances of the class.

    package javaapplication19;

    public class JavaApplication19 {

        static class MyRectangle {
            int x, y;

            int getArea() {
                return x * y;
            }
        }

        public static void main(String[] args) {
            MyRectangle r = new MyRectangle();
            r.x = 10;
            r.y = 5;
            int area = r.getArea();
            System.out.println(area);
        }
    }
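To see that each instance keeps its own independent state, here is a short follow-on sketch (the class name and field values are illustrative, not part of the original example):

```java
public class InstanceDemo {

    static class MyRectangle {
        int x, y;

        int getArea() {
            return x * y;
        }
    }

    // Two objects created from the same class template; assigning
    // fields on one does not affect the other.
    static MyRectangle a = new MyRectangle();
    static MyRectangle b = new MyRectangle();

    static {
        a.x = 10; a.y = 5;
        b.x = 3;  b.y = 4;
    }

    public static void main(String[] args) {
        System.out.println(a.getArea()); // prints 50
        System.out.println(b.getArea()); // prints 12
    }
}
```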
https://codecrawl.com/2014/11/19/java-class/
Can someone further explain the proof of the formula given in this blog? Thanks in advance! Can someone further explain the proof of the formula given in this blog? Thanks in advance! "An alternative is to consider all O(2^B) valid values for A in an outer loop. If one indexes not by the full signature but by a 64-bit hash of it, then the runtime becomes O(2^BN), but in the unlikely event of two different signatures hashing to the same 64-bit value, the answer may be incorrect or must be verified. Many who wrote exponential solutions in B received time outs; on occasion a low constraint is a red herring and hides an easily implemented more efficient solution." I tried this approach in my solution here, but I get TLE with a complexity that has an added factor of B * log N (due to the map and hash computation). How can I prune my current solution down to the intended complexity? Please help, and thanks in advance! When. I tried using the logic in this blog post, but I keep getting WA. My code uses the identity 0*nC0+1*nC1+2*nC2+...+n*nCn = n*2^(n-1). Can someone please find the flaw in my counting method (or even better, provide a test case that breaks my code)? Thanks in advance! Given an undirected connected graph, what is the minimum number of edges that must be added to it to make it 2-vertex-connected, and what are those edges? Is the methodology for this problem similar to the same situation of making a graph 2-edge-connected (adding ceil(numPendants/2) edges)? Please help, and thanks in advance! More Detailed Problem Description I understand the theory for this problem (fast matrix exponentiation), but I need a problem to test my code on, as I was not able to find one after multiple Internet searches. Could someone please provide a problem that asks for a solution along these lines? Thanks in advance! 
My strategy is to compute the number of palindromic substrings from index 0 to index i in a prefix sum array, and then compute the number of palindromic substrings from i+1 to strlen(N)-1 in a suffix sum array. We can then sum the products of prefix[i]*suffix[i+1] and somehow account for overcounting to get our answer. My main issue is finding the number of palindromic substrings in O(N) time (O(N^2) is not feasible as N could be up to 10^5). Is my method reasonable, and if so, how can I solve the subproblem of finding the number of palindromic substrings in a prefix? Please help, and thanks in advance! I was wondering if there is a possible solution that uses a segment tree rather than a BBST (balanced binary search tree), as I find that segment trees are easier to implement. Please help, and thanks in advance! EDIT: Triple Bump! Problem Statement Solution Could someone please provide an alternate solution with an explanation to this problem, or explain the official solution with more clarity (I only understand the first two lines)? I do not understand the logic behind precomputing "cost" or how the recurrence works. Thanks in advance! I tried to solve the problem above using a Segment Tree for each plane. The idea is that I find the maximum interference level between each pair of intersection points, and do a Range-Maximum-Query on the segments the query overlaps. It seems that I have the correct idea/complexity since my code passes the cases of Batch 2 (consisting of max case), but I WA on the Batch 1 Cases. I think the error I have lies in how I determine the indices for querying in lines 98-120, but I have not been able to find/fix it. Where did I go wrong in solving the problem? Please help, and thanks in advance! EDIT: Bump? I would appreciate if someone commented what additional information they would need to help me out instead of relentlessly pushing the downvote button... EDIT 2: Ended up solving it (YAY!) 
by changing the custom comparator for "Intersection", but I don't know why it works. If someone could explain that to me... (Solution) The gist of the problem is to find the area that a rubber band around N circles of equal radius encloses. My thought process is as follows: 1) Find the convex hull of all the circle centers. 2) Use the Shoelace Theorem to find the area of the convex hull. 3) Add the area of the convex hull and the area of one circle of radius K to a running total. 4) Add the area of augmenting rectangles on the convex hull to the running total; for each rectangle, the area is found by multiplying the distance between two adjacent vertices on the hull by K (essentially the hull perimeter times K) My implementation of these ideas is here, which gets WA on all hidden cases. Is my logic incorrect, or is there a bug in my implementation? Please help, and thanks in advance! Problem: Official Solution: I was able to reduce the problem to solving it on a weighted DAG after compressing the graph into strongly connected components. However, I am not able to handle the caveat of being able to traverse a reversed edge at most once. Is there a way to solve this final step of the problem without dynamic programming? If not, can someone explain what exactly is going on in the "solve" function and calls to it? Please help, and thanks in advance! Can someone prove the correctness of the approach described in the first answer on the thread? I understand why the first step is crucial as a starting point. Thanks in advance! Given a positive integer N, calculate the sum of inversions in every bitmask of length N. For example, if N = 2, our masks are 00 (0 inversions), 01 (0 inversions), 10 (1 inversion), and 11 (0 inversions). We output 1. For example, if N = 3, our masks are 000 (0 inversions), 001 (0 inversions), 010 (1 inversion), 011 (0 inversions), 100 (2 inversions), 101 (1 inversion), 110 (2 inversions), and 111 (0 inversions). We output 0+0+1+0+2+1+2+0 = 6. 
How can I do this efficiently? Please help, and thanks in advance! Given a set {a, b, c, d}, its non-empty subsets in lexicographic order are as follows: {a, ab, abc, abcd, abd, ac, acd, ad, b, bc, bcd, bd, c, cd, d} I found an O((sizeofset)^2) method of finding the Nth lexicographically smallest subset here, but I was wondering if there was a faster way to do this. I know that finding the Nth lexicographically smallest string can be reduced from O((sizeofset)^2) time to O(sizeofset). time, which motivated me to ask this question about a similar reduction for subsets. Please help, and thanks in advance! Given a graph with at most 2*10^5 nodes and 2*10^5 edges, bicolor the graph such that the difference between the number of nodes with each color is minimized, and print out that difference. I am able to bicolor each of the connected components, and find the number of each color in each component. I tried to make the final step of finding the minimum difference into the subset sum problem before realizing that there are restrictions on what numbers can go together. E.g. I have components (Red, Blue) as (1, 5) and (2, 4); the optimal subset sum solution would normally be to put the 1 and 5 together, and the 2 and 4 together. However, the (1, 5) pair and (2, 4) pair are component pairs, which is not allowed. Please help, and thanks in advance! I know how to find the diameter of a tree in linear time, prove the validity of the algorithm used to do so (successive BFS's), and prove why it doesn't work for non-tree connected graphs. However, I need an algorithm/approach that has better complexity than O(N^2) to find the diameter of a relatively large WEIGHTED pseudotree (a tree with one extra edge). Please help, and thanks in advance! Let P(x) be a function that returns the product of digits of x. For example P(935) = 9*3*5 = 135. 
Now let us define another function P_repeat(x):

    int P_repeat(int x){
        if(x < 10) return x;
        return P_repeat(P(x));
    }

For a value v, find the number of numbers x less than N such that P_repeat(x) = v. How would I make a transition between states in a digit DP for this problem? Please help, and thanks in advance!

UPD: The bounds are N <= 10^13.

Given two non-empty strings A and B composed of lowercase Latin letters, what is the minimum number of substrings of A needed to form string B? The lengths of A and B are at most 100000. If the task is not possible for a given input, output a rogue value (a.k.a. -1). I was thinking about solving this with an O(N^2) DP method, but that does not fit into the time limit of 5 seconds. Please help, and thanks in advance!

EDIT: Note that chosen substrings can overlap. I put some cases below.

Input #1: abcbcd abcd
Output #1: 2

Input #2: iamsmart iamdumb
Output #2: -1

Input #3: asmallmallinmalta atallmallinlima
Output #3: 5

Explanations: "abcd" = "ab" + "cd", no "d"s in the first string of Input 2, "atallmallinlima" = "a" + "ta" + "llmallin" + "li" + "ma"

Chinese Solution, since USACO does not have an official solution for the problem above. I had a lot of trouble with the implementation of this question, so I looked up this solution online. Google Translate could only do so much for the page (the only solution I could find), but I think I was able to discern the following from the code:

1) The array "per" reverses the permutation process, and length of 31 represents a bitmask of that length,
2) The idea behind the "per" is that the result of a number of consecutive permutations can be assembled in logarithmic time, and
3) The variable "num" in the third main loop functions as a mask.

However, I do not fully understand the purpose of "bot", "now", and "k" in the third main loop, and the mathematics in the first and third main loops. I would appreciate an explanation for these parts of the solution. Thanks in advance! We.
Problem: For whatever reason, implementing any of the three official editorial solutions is not working out for me, so I am changing tact. My idea is to store the depths relative to the root for each node, run a DFS and BIT similar to what I used in this problem, add/subtract values to/from the BIT according to a sliding window (e.g. when L = 3 and a node with depth 5 is at the bottom of the sliding window, nodes with depth of at most 8 are at the top), and query all nodes in the window that are ancestors of the node in question. I would appreciate comments on my method. Please help, and thanks in advance! UPD: I coded my fourth attempted solution, only for it to fail the same cases (7, 8, and 9). I would appreciate it if someone could find out what is wrong with it, as I have wasted 8+ hours trying to upsolve this problem to get no variance/change in results. #include <iostream> #include <stdio.h> #include <stdlib.h> #include <algorithm> #include <vector> #include <unordered_set> using namespace std; int N; vector<int> children [200001]; pair<long long, int> pis [200001]; int tree [200001], id [200001], mx [200001], ret [200001], curr = 1, index = 1; long long L, depth [200001]; void add(int pos, long long x){ while(pos < 200001){ tree[pos] += x; pos += (pos&-pos); } } int query(int pos){ int sum = 0; while(pos > 0){ sum += tree[pos]; pos -= (pos&-pos); } return sum; } int dfs(int x){ id[x] = curr++; mx[x] = id[x]; for(int i = 0; i < children[x].size(); i++){ int next = children[x][i]; mx[x] = max(mx[x], dfs(next)); } return mx[x]; } int main(){ //freopen("runaway.in", "r", stdin); freopen("runaway.out", "w", stdout); scanf("%d %d", &N, &L); depth[0] = 0ll; for(int i = 2; i <= N; i++){ int x; long long y; scanf("%d %I64d", &x, &y); children[x].push_back(i); depth[i] = depth[x]+y; } dfs(1); for(int i = 1; i <= N; i++) pis[i] = make_pair(depth[i], i); sort(pis+1, pis+N+1); for(int i = 1; i <= N; i++){ long long curDepth = pis[i].first; int now = pis[i].second; int 
from = id[now]; int to = mx[now]; while(index <= N && curDepth+L >= depth[pis[index].second]){ add(id[pis[index].second], 1); index++; } ret[now] = query(to)-query(from-1); } for(int i = 1; i <= N; i++) cout << ret[i] << '\n'; return 0; } USACO 2012 Gold December Contest: "Running Away From the Barn" Solution Almost Identical to the Judge's Fails Problem: My code below is my attempt to solve this problem, which keeps failing cases 7 through 9. I do not see a significant difference between my logic and the logic of the judge solution (the last one in the editorial here Your text to link here...). Can someone explain why my solution keeps producing a WA? I even resorted to changing my code to 0 based indexing after 3 hours of trying to perfectly match the judge solution, with no change in output. I sincerely apologize for posting a wall of code, but I have not found another way to resolve the issue after privately asking other users to look over it for me. #include <iostream> #include <stdio.h> #include <stdlib.h> #include <algorithm> #include <vector> using namespace std; int id = 1; struct Node{ Node *parent; vector<Node*> children; long long depth; int last, label; Node(){ parent = NULL; depth = 0ll; last = -1; } void preorder(){ label = id++; for(int i = 0; i < children.size(); i++) children[i]->preorder(); if(children.size() == 0) last = label; else last = children.back()->last; } }; struct Event{ int a, b, index; long long len; bool operator<(const Event &other) const{ if(len != other.len) return len < other.len; else return a < other.a; } }; int N; Node tree [400001]; long long L, fenwick [400001], ret [400001]; vector<Event> events; void add(int pos, long long x){ while(pos < 400001){ fenwick[pos] += x; pos += (pos&-pos); } } long long query(int pos){ long long sum = 0; while(pos > 0){ sum += fenwick[pos]; pos -= (pos&-pos); } return sum; } int main(){ freopen("runaway.in", "r", stdin); freopen("runaway.out", "w", stdout); scanf("%d %d", &N, &L); for(int i = 
2; i <= N; i++){ int x; long long y; scanf("%d %lld", &x, &y); Node *par = tree+x; tree[i].parent = par; tree[i].depth = (par->depth)+y; par->children.push_back(tree+i); } tree[1].preorder(); for(int i = 1; i <= N; i++){ Event c; c.a = -1; c.b = -1; c.len = tree[i].depth; c.index = tree[i].label; Event d; d.a = tree[i].label; d.b = tree[i].last; d.len = tree[i].depth+L; d.index = i; events.push_back(c); events.push_back(d); } sort(events.begin(), events.end()); for(int i = 0; i < events.size(); i++){ Event e = events[i]; if(e.a == -1) add(e.index, 1ll); else ret[e.index] = query(e.b)-query(e.a-1); } for(int i = 1; i <= N; i++) cout << ret[i] << '\n'; return 0; } Please help, and thanks in advance! Problem: Another user pointed out to me the similarity between this problem and one on the most recent January contest (). The latter problem can be solved using the combination of a preorder traversal and BIT. I was wondering if it is possible to solve the former problem with a combination of a Segment Tree and DFS (closest possible method to a preorder traversal). The segment tree would use lazy propagation to update the range of edges, but I am not sure how to use a DFS in this situation. Am I on a right track? If so, I would appreciate input on how to continue. If not, please point me to another possible solution. Please help, and thanks in advance! Codeforces Round #104 Div. 1 E (Lucky Queries): Lazy Propagation Does Not Print Out Answers to All Queries I keep missing Test Case #58 on this problem because my answer does not have enough elements. Does my program RTE before printing out the last 2 elements, or is my IO flawed in some way that prevents it from reading in the last few queries? Please help, and thanks in advance! vamaddur
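Both C++ programs above lean on the same primitive: a Fenwick tree (BIT) with point updates and prefix-sum queries, where counting the nodes inside an Euler-tour interval is `query(to) - query(from - 1)`. A minimal stand-alone sketch of that structure (in Python rather than the thread's C++):

```python
class Fenwick:
    """1-indexed binary indexed tree: point add, prefix-sum query."""

    def __init__(self, n: int):
        self.n = n
        self.tree = [0] * (n + 1)

    def add(self, pos: int, delta: int) -> None:
        while pos <= self.n:
            self.tree[pos] += delta
            pos += pos & -pos      # climb to the next covering node

    def query(self, pos: int) -> int:
        s = 0
        while pos > 0:
            s += self.tree[pos]
            pos -= pos & -pos      # drop the lowest set bit
        return s

    def range_query(self, lo: int, hi: int) -> int:
        # count/sum over [lo, hi], as in the posted solutions
        return self.query(hi) - self.query(lo - 1)
```

Both operations are O(log n), which is what makes the sort-plus-sliding-window approach in the posts fit the limits.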
http://codeforces.com/blog/vamaddur
Thursday 23 June 2005. I'll present a simplified example of using a lookup table embedded in the stylesheet itself. I'll describe each hunk of code as we go along, and then show the whole thing put together. My lookup table is going to be embedded in the stylesheet. So that it won't be interpreted by the XSLT engine, we'll put the table in a different namespace. The stylesheet element defines the namespace ("lookup:"), and declares it as an extension namespace so that it won't appear in the output. You should use a different URL than "yourdomain", and remember, it doesn't have to actually resolve to something:

<xsl:stylesheet version='1.0'
    xmlns:xsl='http://www.w3.org/1999/XSL/Transform'
    xmlns:lookup='http://www.yourdomain.com/lookup'
    extension-element-prefixes='lookup'>

Then I create the lookup table. It's an ad-hoc XML structure, at the top level of the stylesheet. Here I'm going to look up strings by an id, and I'll use an id= attribute, with the string in the text of the element:

<lookup:strings>
    <string id='foo'>Fooey</string>
    <string id='bar'>Barbie</string>
</lookup:strings>

I use a <key> element to declare the key. This is where it starts becoming non-intuitive. The only way to understand the key feature of XSLT is to look at two parts at once: the key is declared with a <key> element, and then accessed with the key() function. The part that always throws me is that I expect the key definition to specify some source data: it does not. The key definition specifies what I think of as a hypothetical set of nodes, and a way to index into them. Later, when you use the key() function, you apply this definition to a real chunk of data. The parts of a <key> element are: a name, which the key() function will use to refer to the key; a match pattern, which selects the nodes that act as records in the table; and a use expression, which computes the indexed value for each matched node. Here's my key definition:

<xsl:key name='string' match='lookup:strings/string' use='@id'/>

The name is "string", the "match" attribute says to consider any <string> element that is a child of a <lookup:strings> element, and the "use" attribute says that for each such <string> element, we'll use its "id" attribute as its tag. Think of the nodes selected by the "match" attribute as the records in the table, and the value on each selected by the "use" attribute as the indexed value in the record.
Now the key is defined, and we can actually use it with the key() function. It takes two arguments: the name of the key (from the name attribute of the <key> definitions), and the value to actually look up in the table. Remember we were going to specify the actual table data with the key() function, right? Well, not really. The table data is actually the current context node. That is, the records in the table are found by applying the <key>'s "match" attribute as a pattern against the current node. Here's where the match attribute on the <key> element becomes so important. You have to carefully consider what your current context is, and design the key declaration to work within it. In this case, we'll use the document("") function to read the current stylesheet, finding the <lookup:strings> element in it. A <for-each> element changes the current context to the table. Normally, <for-each> is used to apply a template to a number of nodes. Here, we know there is only one, but <for-each> has the handy property of setting the current node. Then the key() function can apply the <key> match pattern to find the candidate records, using our supplied value ("foo") to find a record with an id attribute of "foo":

<xsl:template match='/'>
    <!-- Look up the string "foo" and use it. -->
    <xsl:for-each select="document('')/*/lookup:strings">
        <xsl:value-of select="key('string', 'foo')"/>
    </xsl:for-each>
</xsl:template>

For repetitive use, you can define a variable to hold the table, and then use it from the variable each time:

<xsl:variable name='lookup.strings' select="document('')/*/lookup:strings"/>

<xsl:template match='/'>
    <xsl:for-each select='$lookup.strings'>
        <xsl:value-of select="key('string', 'foo')"/>
    </xsl:for-each>
</xsl:template>

Finally, here's a complete example:

<xsl:stylesheet version='1.0'
    xmlns:xsl='http://www.w3.org/1999/XSL/Transform'
    xmlns:lookup='http://www.yourdomain.com/lookup'
    extension-element-prefixes='lookup'>

<lookup:strings>
    <string id='foo'>Fooey</string>
    <string id='bar'>Barbie</string>
</lookup:strings>

<xsl:key name='string' match='lookup:strings/string' use='@id'/>

<xsl:variable name='lookup.strings' select="document('')/*/lookup:strings"/>

<xsl:template match='/'>
    <xsl:for-each select='$lookup.strings'>
        <xsl:value-of select="key('string', 'foo')"/>
    </xsl:for-each>
</xsl:template>

</xsl:stylesheet>

When run on any input, this produces: Fooey Whew! No one ever claimed XSLT was succinct! You might also want to look at: This is good stuff Ned. Whenever I am using XSLT, I always have some spec.
is going to come along and lay waste to what I already know and how I use it, which I never feel elsewhere... Hi, I tried similar (beginner with xsl) for taking a specified value (countryID) and lookup some countryCode for that out of another xml file containing of ID elements and assigned names. I tried using xsl:key... and always failed. Finally, after trying I found the following. Kindly asking you for dropping me a note if this is fine in your eyes: <xsl:stylesheet [...]> <xsl:variable [...] <xsl:template <countryCode> <xsl:value-of </countryCode> </xsl:template> </xsl:stylesheet> well, it works!! With only one line for loading the external file, and one single line for finding the lookup-value. Greetings..... 2005, Ned Batchelder
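Comments aside, the xsl:key mechanism in the post is essentially a precomputed index: declare once how to derive a lookup value from each record, then query in constant time. Purely as an illustration (not part of the original post), here is the same idea applied to the same lookup table using Python's standard library:

```python
import xml.etree.ElementTree as ET

LOOKUP_XML = """
<lookup:strings xmlns:lookup='http://www.yourdomain.com/lookup'>
    <string id='foo'>Fooey</string>
    <string id='bar'>Barbie</string>
</lookup:strings>
"""

def build_key(root, match: str, use: str) -> dict:
    """Mimic <xsl:key>: index every node selected by `match`
    under the value of its `use` attribute."""
    return {el.get(use): el.text for el in root.findall(match)}

root = ET.fromstring(LOOKUP_XML)
# Like <xsl:key name='string' match='...' use='@id'/>
strings_key = build_key(root, "string", "id")
```

As in XSLT, the index is built once; `strings_key["foo"]` then plays the role of `key('string', 'foo')`.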
http://nedbatchelder.com/blog/200506/keyed_lookups_in_xslt_10.html
XRX/Configuration File Editor

Motivation

You have an XML configuration file that only one person at a time will be editing. You want to make it easy to edit the configuration file so that non-technical users without knowledge of XML or occasional users will not make any mistakes editing the file. Note that this method may not be appropriate if multiple people might edit the same file or record simultaneously. See the record locking section for more detail on this process.

Method

We will load the entire file into a single XForms instance, edit it, and save the entire file. This can be done with a single submission in the XForms application and a single store() operation in the eXist (or similar) database. This can be done even if the configuration file is very complex, and has many sections and many repeating elements. Regardless of the complexity of the file, you only need a single function call on the server to store the file.

Program Outline

Let's assume that the file is stored as a single file, such as my-config.xml, in an eXist collection. To load this file into an XForms application, you simply put the document into an instance within the XForms model:

<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:xf="http://www.w3.org/2002/xforms">
   <head>
      <xf:model>
         <xf:instance src="my-config.xml"/>
      </xf:model>
   </head>
   <body>
   ...
   </body>
</html>

You can then save the file by adding a submission element to the model that will save the file back to the database once the data has changed in the client.

<xf:submission id="save-config" method="post" action="save-config.xq" replace="all"/>
...
<xf:submit submission="save-config">
   <xf:label>Save</xf:label>
</xf:submit>

Note that you will need to use "post" not "put" in this example. The submit element creates a button in your form with the label "Save".

Building the Forms Application for the Configuration File

There are many ways to "autogenerate" the XForms application that will edit the configuration file in a browser, even if your configuration file is complex and has many repeating elements. One way is to use a transformation program that transforms an XML Schema to an XForms file.
One example of this is the NIEM transform. Although there are other examples of this type of transform, most of these require you to have an XML Schema for your configuration file. If you do not have an XML Schema there are tools that can generate an XML Schema from one or more instance documents. If you do not have an XML Schema, you can "draw" your XForms client using commercial XForms tools such as IBM Workplace Forms Designer. This drag-and-drop environment makes it easy for non-programmers to build and maintain complex forms. If you are building a form on a budget, then another option is to use the Orbeon XForms Builder, which is an XForms application that will build the form for you.

Client Side Save

If you have a secure Intranet you can use the HTTP PUT operator to save your configuration file directly to the web file system. Sometimes you will need to be able to authenticate a user before you permit the save. This can be done with a traditional login and session management system, or you can create a single script that has the correct permissions.

Sample Save XQuery

xquery version "1.0";
declare namespace request="http://exist-db.org/xquery/request";
declare namespace xmldb="http://exist-db.org/xquery/xmldb";

(: put the collection we want to save our data to here :)
let $my-collection := '/db/config-files'
let $my-config-file-name := 'my-config.xml'

(: get the data we want to update from the HTTP POST :)
let $my-posted-data := request:get-data()

(: make sure we have write access to this collection :)
let $login := xmldb:collection($my-collection, 'my-userid', 'my-password')

let $store-return-status := xmldb:store($my-collection, $my-config-file-name, $my-posted-data)

(: this is what we return. You can also return an XHTML file. :)
return
<return>
   <message>File {$my-config-file-name} saved in collection {$my-collection}</message>
   <result-code>{$store-return-status}</result-code>
</return>
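The save path above boils down to "POST the whole document, store it under a fixed name, report where it went." The sketch below models that contract in Python so the round trip can be checked without a server; it is an illustration only, not eXist code, with the collection and file name taken from the XQuery above and a plain dict standing in for the database:

```python
import xml.etree.ElementTree as ET

COLLECTION = "/db/config-files"
FILE_NAME = "my-config.xml"
database = {}   # stands in for the eXist collection store

def save_config(posted_xml: bytes) -> str:
    """Mimic the save XQuery: check the POSTed body is well-formed XML,
    store the whole document, and return its path (like xmldb:store)."""
    ET.fromstring(posted_xml)            # reject malformed submissions
    database[(COLLECTION, FILE_NAME)] = posted_xml
    return f"{COLLECTION}/{FILE_NAME}"

def load_config() -> ET.Element:
    """Mimic loading the stored document back into an XForms instance."""
    return ET.fromstring(database[(COLLECTION, FILE_NAME)])

path = save_config(b"<config><timeout>30</timeout></config>")
```

Because the whole file is replaced on every save, there is no per-field merge logic anywhere — which is exactly the simplicity the article is advocating.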
https://en.wikibooks.org/wiki/XRX/Configuration_File_Editor
New to BeanShell and I have a question. It appears that I can't use any constructor (other than a default "super") constructor in a class that I load. I'm not sure I'm asking this the right way, but for example if I create the classes: public class foo { public void hello() { System.out.println("Hello there"); } } public class foo2 { foo2(String s) { // Ignore string } public void hello() { System.out.println("Hello there"); } } Then, I can use foo but not foo2: bsh % f = new foo(); bsh % f.hello(); Hello there bsh % bsh % f = new foo2("garbage"); // Error: Constructor error: we don't have permission to create an instance bsh % Am I doing something wrong? -Mike. I do something like JConsole console = new JConsole(); Interpreter interpreter = new Interpreter( console ); interpreter.run(); But I would like to know how to use my JTextArea like the desktop beanshell? On Mon, Apr 08, 2002 at 03:15:41PM +0200, florence bernad wrote: [...] If I understand correctly, you want to take the input text from your JTextArea and interpret it? You can simply call the interpreter using the eval( String text ) method. Something like: String text = textArea.getText(); Object result = interpreter.eval( text ); Then you can print the output in any form you want... If you want to capture the output you can create a ByteArrayOutputStream and set it as the output stream of the interpreter. Here is an example pasted (and slightly modified) from the bsh.servlet.BshServlet test servlet: Object evalScript( String script, StringBuffer scriptOutput, boolean captureOutErr ) throws EvalError { // Create a PrintStream to capture output ByteArrayOutputStream baos = new ByteArrayOutputStream(); PrintStream pout = new PrintStream( baos ); // Create an interpreter instance with a null inputstream, // the capture out/err stream, non-interactive Interpreter bsh = new Interpreter( null, pout, pout, false ); // Eval the text, gathering the return value or any error.
Object result = null; String error = null; PrintStream sout = System.out; PrintStream serr = System.err; if ( captureOutErr ) { System.setOut( pout ); System.setErr( pout ); } try { // Eval the user text result = bsh.eval( script ); } finally { if ( captureOutErr ) { System.setOut( sout ); System.setErr( serr ); } } pout.flush(); scriptOutput.append( baos.toString() ); return result; } Or perhaps you can explain a little more what you want to do. Thanks, Pat On Fri, Dec 08, 2000 at 05:43:40PM -0800, Witt, MichaelX J wrote: > > bsh % f = new foo(); > bsh % f.hello(); > Hello there bsh % > bsh % f = new foo2("garbage"); > // Error: Constructor error: we don't have permission to create an instance > bsh % > > Am I doing something wrong? Currently bsh is limited to the same access permissions as regular Java code... You must make sure that your method or constructor is declared public and is in a publicly visible class... In an upcoming release we'll use the accessibility API to loosen permissions and allow us privileged access. Thanks, Pat
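Pat's evalScript pattern — swap the process-wide output streams, evaluate the user's text, restore the streams, and hand back both the return value and the captured output — translates directly to other embedded interpreters. A Python illustration of the same pattern (not BeanShell code):

```python
import io
from contextlib import redirect_stdout, redirect_stderr

def eval_script(script: str):
    """Evaluate `script`, capturing anything it prints, like Pat's
    evalScript: returns (result, captured_output)."""
    buf = io.StringIO()
    with redirect_stdout(buf), redirect_stderr(buf):
        # eval() yields an expression's value; a full statement script
        # would need exec() instead.
        result = eval(script)
    return result, buf.getvalue()

result, out = eval_script("print('Hello there') or 21 * 2")
```

The context managers play the role of the try/finally around System.setOut/System.setErr: the real streams are restored even if the user script throws.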
https://sourceforge.net/p/beanshell/mailman/message/4138296/
Army wrote: > [snip] > > DISCUSSION: > > The question is now this: what's the best/preferred way to propagate > this ODBC/JDBC duality from the "SystemProcedures.java" file to the > corresponding methods in > org.apache.derby.impl.jdbc.EmbedDatabaseMetadata.java (hereafter > referred to as "EDM")? > > Option I: > > Add new SQL statements, such as "getColumnsForODBC", to the existing > metadata.properties file, as described in the proposal for DERBY-107. > Then, since EDM has to know which version of a given SQL statement to > execute--for example, should it call the regular "getColumns" version, > or should it call the new "getColumnsForODBC" version?--we could add > new methods (such as "setForODBC()") to EDM that could be used by > SystemProcedures to indicate (to EDM) that ODBC metadata should be > returned, instead of JDBC metadata. Note that, since SystemProcedures > is in a different package than EDM, the new methods on EDM would (I > think) have to be _public_. > > Regarding this approach, one must ask: > > [ #1 **** COMMUNITY INPUT? **** ] > > What's the general attitude toward adding public methods to a Derby > class that is implementing a specific JDBC class? In the context of > this discussion, is it or is it not acceptable/desirable to add > Derby-specific public methods to a class like > EmbedDatabaseMetadata.java, which is an implementation of > java.sql.DatabaseMetaData? Technically speaking, I don't think the > addition of public methods breaks the JDBC standard (so long as we > aren't telling people that they can import EmbedDatabaseMetadata in > their apps--which we aren't), but I'm curious as to whether there's a > "good programming practice" here that the Derby community would like > to (or already does?) hold to? > > [ #1 **** End **** ] I would prefer that the ODBC support not be put in the EmbedDatabaseMetadata class. Then applications that do not use ODBC do not have to load ODBC support into their JVM.
> > Option II: > > Add new SQL statements, such as "getColumnsForODBC", to the existing > metadata.properties file, as described in the proposal for DERBY-107. > Then we could extend the EDM class with a new, simple class that sets > ODBC-related state, and modify EDM to check the state and execute the > appropriate statements. For example, we could add a protected > variable "forODBC" to EDM, default it to "false", and then set it to > true in the extended class for ODBC. EDM would then check the flag > and execute the corresponding metadata statement. The presumption > here is that SystemProcedures would check for the ODBC indicator and, > if found, use an instance of the new subclass for the metadata calls, > instead of using an instance of the existing EDM. > > This approach allows us to avoid adding new (non-JDBC) public classes > to EDM, at the cost of creating another (albeit fairly simple) > metadata class. > > With this approach, we could even go further and add another file, say > "odbc_metadata.properties" that holds the ODBC metadata statements > (instead of adding them to the existing metadata.properties file). > The new subclass could then load _that_ file instead of the current > metadata.properties file, which gives us a nice separation of > functionality: all of the ODBC code cleanly separated from the JDBC > code. Of course, that could be a bad thing, too, since 1) we'd then > have TWO metadata files to worry about in the codeline, instead of > just one, which introduces room for error if/when metadata-related > processing changes occur in Derby, and 2) we'd have to duplicate any > SQL statements that are the same for ODBC and JDBC (ex. several of the > "getBestRowIdentifier" queries) in both files. So I'm guessing we > wouldn't want to create another metadata file...but I thought I'd > bring it up, just in case. > > Option III: > > Create some kind of internal VTI for ODBC metadata and use that. 
I > have to admit that I don't know too much about how VTIs work, but > Kathey gave me some places to look, so I'm going to read up. > Apparently, we can execute the same metadata SQL statements that > already exist for JDBC, then use a VTI to "massage" the result set > into something that complies with ODBC specifications. This might be > a good choice given that most of the differences between ODBC and JDBC > are in the types of the columns returned. For example, JDBC might say > that any String will do, whereas ODBC will say it has to be VARCHAR. > In that case, a literal value ' ' will be fine for JDBC, but since > it's treated as a CHAR value by Derby, it would be breaking ODBC > standard. With a VTI, we could theoretically accomplish the same > things that we'd be doing with new SQL statements--such as casting ' ' > to VARCHAR in this particular case. Other modifications we'd have to > implement include casting certain integer columns to smallints, and > replacing null values in JDBC (such as for "sql_data_type" and > "buffer_length" columns) to legal values (neither column is supposed > to be null for ODBC). > > Upside to this is that we still only have a single metadata.properties > file, which (theoretically) makes maintenance of metadata procedures > easier. As I don't know much about VTIs, I can't say what else this > approach would require, but it seems safe to say that it would at > least require another class to serve as the ODBC VTI. How that would > tie into the SystemProcedures and EmbedDatabaseMetadata classes, I > don't know yet... > > So... > > [ #2 **** COMMUNITY INPUT? **** ] > I think that the VTI option offers the most flexibility. VTIs are not limited to massaging the results of standard JDBC metadata queries, they can query the system tables in an entirely different way if necessary. They are not loaded until necessary so JDBC only servers are not burdened with ODBC support. [snip] Jack
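Option II in the thread — a thin subclass that flips a protected flag so the metadata layer picks the ODBC-flavored queries instead of the JDBC ones — is a small, language-independent shape. A hypothetical sketch of the proposal (class and query names invented for illustration; Derby itself is Java, not Python):

```python
class EmbedDatabaseMetaData:
    """JDBC-flavored metadata: picks query names from metadata.properties."""
    for_odbc = False    # the protected flag proposed in Option II

    def statement_key(self, name: str) -> str:
        # e.g. "getColumns" vs. "getColumnsForODBC"
        return name + "ForODBC" if self.for_odbc else name

class ODBCDatabaseMetaData(EmbedDatabaseMetaData):
    """Thin subclass that only sets ODBC-related state."""
    for_odbc = True
```

The appeal Jack raises for the VTI alternative still applies here: with the subclass, SystemProcedures decides once which object to instantiate, and the base JDBC path never sees ODBC code.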
http://mail-archives.apache.org/mod_mbox/db-derby-dev/200501.mbox/%3C41F55095.8060801@Mutagen.Net%3E
[FNR] Filters. Bug and inheritance problem. Hi. I have found a bug in Date Filter (On same date comparison doesn't work as expected) In file: DateFilter.java: Replace: if (afterItem.isChecked() && onMenu.getDate() != null) { by: if (onItem.isChecked() && onMenu.getDate() != null) { Moreover: Is it possible to modify method validateModel: public class DateFilter extends Filter { .... @Override public boolean validateModel(ModelData model) { Date d = model.get(dataIndex); .. } ... } by something like: public class DateFilter extends Filter { .... public Object getValue(ModelData model) { return model.get(dataIndex); } @Override public boolean validateModel(ModelData model) { Date d = getValue(); .. } ... } My model doesn't contain a Date but a type which can return a Date. With this solution I can write a class which extends DateFilter like: public class MyOwnTypeFilter extends DateFilter { public Object getValue(ModelData model) { MyOwnType obj = (MyOwnType)model.get(dataIndex); return (Date) obj.getMyDate(); } } Please modify all objects which inherit from Filter like this one. Without this, we must duplicate a lot of your code. Thanx Fixed in SVN as of revision 2155. Your suggestion is not a real bug, but we will take a look at it to add a helper method. But it won't be a public one as you suggested. Thank you for your quick answer/fix. Can you let me know when the "helper method" will be implemented? I added the helper method in SVN at revision 2195 Hi Sven, Does GXT 2.2.5 include the fixes you are talking about? I think I found a bug in the use of Filter, unless I'm missing something. If you add a DateFilter, then selecting a date for the "before" will check the "before" menu and apply the filter. If you come back to the filter and only check the menu, it throws a NullPointerException. Looking at the code, I understand that I missed the initialisation to set the initial value for the different calendar views associated with each menu.
But using setValue on the DateFilter requires a list of FilterConfig; doing this will check (enable) every menu used by the DateFilter, therefore activating the filtering. updateMenuState is called when onCheckChange is fired. Inside: beforeItem.setChecked(beforeMenu.getDate() != null && beforeMenu.getDate().after(afterMenu.getDate()), true); if you never visited the date picker "afterMenu", getDate() returns null. The same goes for beforeMenu: beforeItem.setChecked(beforeMenu.getDate() != null && beforeMenu.getDate().after(afterMenu.getDate()), true); A NullPointerException happens when you check one item (before or after) and then check the other one.
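The crash described above is a classic unguarded comparison: the left-hand date is null-checked before calling after(), but the date passed as the argument can still be null. A minimal illustration of the missing guard (a Python stand-in, not the GXT source; names mirror the forum post):

```python
from datetime import date

def before_item_checked(before_date, after_date):
    """Safe version of the check in updateMenuState: only compare
    when BOTH dates have actually been picked."""
    return (before_date is not None
            and after_date is not None       # the guard the report asks for
            and before_date > after_date)    # stands in for Java's Date.after()
```

With the extra `is not None` test, checking one menu before the other date picker has ever been visited simply yields an unchecked item instead of an exception.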
https://www.sencha.com/forum/showthread.php?104986-FNR-Filters.-Bug-and-inheritance-problem.&mode=hybrid
Fundulopanchax puerzli (Radda & Scheel 1974) A specimen I maintained around 1982. After Eduard Pürzl, collector, author, aquarist & photographer. Radda A.C. & Scheel J.J. 1974. (Aphyosemion puerzli). Aphyosemion puerzli nov. spec. und Aphyosemion robertsoni nov. spec., zwei neue Rivulinen aus Kamerun. Aquarium Journal 4 (3): p 33-37, 3 figures. 7 cm D = 13-14, A = 16, D/A = +2-3, ll = 33-34 +3-4 (Radda & Scheel 1974) n = 20 (2n = 37-38), A = 21 (Radda & Scheel 1974, 1975). Paraphyosemion ndianus (formerly gardneri) Wild male collected 27 km northeast from the crossing of the Douala - Edea - Yabassi towards Yabassi, Cameroon. (type locality). Photo courtesy of Ed Pürzl Wild female collected 27 km northeast from the crossing of the Douala - Edea - Yabassi towards Yabassi, Cameroon. (type locality). Photo courtesy of Ed Pürzl Dibeng wild male from commercial import 2000. Photo courtesy of Roger Gladwell Dibeng. From commercial import 2000. This male was the first generation from the wild male above/right. Photo courtesy of Roger Gladwell Dibeng female from commercial import 2000. Photo courtesy of Roger Gladwell Dibeng male. First generation (F1) from wild fish imported into the AKA in late 2001 / early 2002. Subsequent generations have been reportedly not as colourful. Photo courtesy of Tony Terceira Taken at the 2004 BKA convention. How not to handle a Fuji. Male imported into the BKA in the '70's. BKA photo. Female to the male shown on the left imported into the BKA in the '70's. BKA photo. Photo courtesy of Monty Lehmann 27 km northeast of the Douala - Edea - Yabassi road junction in the direction of Yabassi. The biotope was a small stream near its entry into a larger tributary of the Wuri River. Western Cameroon where they inhabit the Henda, Nkwoh & Wuri River drainage systems. Rainforest swamps & shallow swampy parts of brooks & streams. Sympatric sp. include A.riggenbachi which inhabit open water areas & are found in considerable numbers. 
Fp.puerzli is found 'under heavily shaded retreats in shallow, still inlets of small streams & in sections of backwater'. Also found are E.sexfasciatus, also in quantity in open water, & Procatopus similis, males of which were found to be spotted in deep red. Non killie sp. include Barbus sp. & Pelmatachromis (probably Pelvicachromis) sp. Also, a fish called 'Grundel' was found but I don't know what this is. Clausen & Scheel took the following measurements- On the 8th February 1966 - pH 6·8, DH 1. On the 23rd January 1969 the measurements were pH 6·4, DH 1. Electrical conductivity was 100 micro-siemens at 12.30 hrs & the water temperature was 25·8°C. On the 29th November 1973 at 13.20 hrs the water temperature was 24·8°C in the fast flowing water of the main stream. (Aphyosemion puerzli. A New Rivulin from Cameroon. A.C.Radda & J.J.Scheel - BKA newsletter No.111, November 1974). Described from specimens collected in a brook in the Vouri (Wuri) River drainage 27 km north of the intersection of the Douala, Edea & Yabassi roads, Cameroon. Fairly easy to breed. I used bottom mops & incubated the eggs on a layer of wet peat. Peat can also be used in the tank & taken out to dry every few weeks. Dry incubation takes 6-8 weeks with sexual maturity being attained around 2-3 months. I did find they went through periods where no eggs were laid. A lowering of the temperature can sometimes trigger spawning. First breeding attempts by Ed Pürzl report that the fish were spawned in water of 4-6 DH. Eggs were kept in moist peat & wet after 2 weeks with the result that no fry hatched. After 4 weeks a very few fry hatched. He found that after a drying period of 7-8 weeks a large number of eggs were ripe for hatching. Longer periods of dry storage produced fewer numbers of fry. Adrian Burge in BKA Killinews No.276, August 1988 had the following breeding account: The pair were spawned in a small tank (12x8x8") which received sunlight.
He found the fish were shy although the male did display to the female, which was generally ignored. The tank was kept bare except for a floating mop which extended to the tank base. Live foods consisting of white worm, Daphnia & glass larvae were fed, with flake food being offered also. A water change of 25% was undertaken each month. Eggs were picked off the mop daily but 50% were found to fungus before hatching in 30 days. Newly hatched fry were fed on infusoria, microworm & newly hatched brine shrimp. Tom Soper in the same newsletter states: I set the fish up in a 12x8x8" tank with 50% freshly drawn water & 50% mature water. The base was covered with a half inch layer of peat moss & a clump of Java moss in the corner. An airstone was added. Temperature ranged from 74-76°F with dim lighting. The pair were fed daily on Tubifex, white worm & Daphnia. After a week the peat was removed & dried. After 8-9 weeks the peat was wet & a good number of fry were hatched although many turned out to be males. He had no success with water incubation. P.K.Webber in the same article as above noted that success in water incubation was determined by the age of the brood stock. He found eggs from young pairs easier to incubate in water. He found a 'definite cut off point after which only dry storage became viable'. I have had this sp. a number of times & found they go through periods of no breeding activity despite the usual shock tactic of reducing the water temperature, which can work when the fish are waiting for it to happen. It should be noted that all attempts were made with fish from auctions without collection data & these were most likely many generations removed from wild blood.
http://www.killifish.f9.co.uk/Killifish/Killifish%20Website/Ref_Library/Fundulopanchax/Fp.puerzli.htm
IRC log of webapps on 2009-09-16 Timestamps are in UTC. 21:05:52 [RRSAgent] RRSAgent has joined #webapps 21:05:53 [RRSAgent] logging to 21:05:54 [trackbot] RRSAgent, make logs public 21:05:54 [Zakim] Zakim has joined #webapps 21:05:56 [trackbot] Zakim, this will be WAPP 21:05:56 [Zakim] I do not see a conference matching that name scheduled within the next hour, trackbot 21:05:57 [trackbot] Meeting: Web Applications Working Group Teleconference 21:05:57 [trackbot] Date: 16 September 2009 21:06:09 [smaug] ok, so if we could have less than 2 hours long telcon this time ;) 21:06:15 [smaug] I need to get up early 21:06:17 [Zakim] +Shepazu 21:06:39 [shepazu] let's keep this short, then... 21:06:41 [Zakim] +??P1 21:07:44 [Travis] scribenick Travis 21:08:05 [shepazu] Topic: listen/unlisten 21:08:14 [Travis] scribeNick: Travis 21:08:24 [Travis] scribe: Travis 21:08:33 [Travis] topic: listen/unlisten events 21:08:49 [Travis] (methods) 21:09:01 [Travis] shepazu: All major browser venders said 'no' to this proposal. 21:09:16 [shepazu] s/events/methods 21:10:01 [Travis] Resolution: remove listen/unlisten from the spec. 21:10:43 [shepazu] Topic: key identifiers 21:10:51 [Travis] shepazu: Since it's just an alias, it's not really needed. If it added functionality then that would be a different story. 21:11:10 [shepazu] 21:11:28 [Travis] shepazu: most recent draft- added explanitory text about key identifiers. 21:11:38 [Travis] ... please review (members of the working group) 21:11:47 [Travis] ... Q: What _is_ a key identifier? 21:12:28 [Travis] ... Finally realized that a key identifier is not a unique id for a key, it's the value of the key at the given moment with contributions from modifiers, etc. 21:12:38 [Travis] ... It's the input key character. 21:12:58 [Travis] ... A key can have multiple key identifiers. Could be unicode name/value or character 21:14:59 [Travis] Travis: Yes, I like the clarity provided in the new draft. 
21:15:18 [Travis] shepazu: Yes, need to put a section that says that multiple literal keys may have the same mapping. 21:16:09 [Travis] ... recently added all lowercase key identifiers to the spec. Looking for comments. 21:16:17 [Travis] ... With that, perhaps close to last call? 21:16:35 [Travis] smaug: I need to review the entire spec again. Can you send mail to list to review as well? 21:16:40 [Travis] shepazu: sure. 21:17:15 [Travis] ... might be appropriate to put out a new draft for review. *could* be a last call draft. 21:17:20 [shepazu] topic: namespaced events 21:18:07 [Travis] shepazu: Marked these as "at risk" 21:18:33 [Travis] ... I'm concerned about the dependencies that others may have 21:19:06 [Travis] ... During discussion with SVG group recently... was talking about namespace events. 21:19:26 [Travis] ... Can see the complexity in namespace event implementations 21:19:48 [Travis] ... Also having the init* methods for NS adds complexity. 21:20:24 [Travis] ... Is it really common to init events? 21:20:33 [Travis] travis: Have heard of use cases for this... 21:20:51 [Travis] shepazu: Script libraries may use them more... 21:22:34 [Travis] ... Cameron suggested an event initializer that could allow event properties to be read/write until the event was dispatched. 21:23:12 [Travis] Travis: A little weird to declare (since the read/write changes dynamically). 21:23:55 [Travis] shepazu: One advantage is that namespaceURI becomes much less overhead (don't need an separate init* method). 21:24:13 [Travis] Travis: would we then drop the init*NS methods? 21:24:47 [Travis] shepazu: Don't know... could drop them, could even drop all the custom initializers. Will see. Waiting on Cameron's proposal to the lsit. 21:25:17 [shepazu] topic: focus 21:26:05 [Travis] shepazu: Travis asked why you need the relatedTarget... 21:26:46 [Travis] ... It's a pain to track state (in webpage code). 
Also use cases don't always use both events 21:29:15 [Travis] Travis: I'm not really opposed to the FocusEvent interface proposal. 21:31:21 [Travis] Travis: I think it's a good idea if we need to have new properties. 21:32:10 [Travis] Resolution: Add a new FocusEvent interface to support the focusin/focusout/focus/blur such that they gain a relatedTarget property. 21:32:36 [Travis] shepazu: There might be an issue with adding new properties... 21:32:53 [Travis] smaug: I don't recall any bugs with our own added properties to MouseEvent 21:33:27 [Travis] shepazu: +DOMFocusIn, +DOMFocusOut (to FocusEvent interface) 21:34:04 [Travis] ... relatedTarget will be the node that the event is going to or coming from. Might also be null. 21:34:45 [Travis] ... for blur/focusout : the relatedTarget is the element that they are going to (opposite of focus/focusin) 21:34:59 [Travis] topic: back to key identifiers (briefly) 21:35:10 [Travis] shepazu: Does anyone implement key identifiers? 21:35:14 [Travis] smaug: Don't think so. 21:35:40 [Travis] shepazu: What if you needed the unicode value? Or the character name (not it's value)...? 21:36:25 [Travis] ... Have some ideas for that: expose it on the event itself with the keyIdentifier (like an array?) or a ConvertTo(keyidentier, newFormat). 21:36:59 [Travis] ... What do you think? 21:38:04 [Travis] Travis: I like the convertTo api. Put it in a new spec (DOM L4 events?). 21:38:08 [Travis] smaug: +1 21:38:20 [Travis] shepazu: Also good to have available when events are not in use. 21:39:19 [shepazu] topic: default actions 21:39:20 [Travis] ... Do implementations have a huge list of character<->codes<->identifiers? 21:39:34 [shepazu] topic: default actions 21:40:55 [Travis] shepazu: Changed the events table to organize around default actions... it raised some questions. 
21:41:53 [smaug] btw, scroll event isn't the default action of mouwewheel I believe 21:42:14 [smaug] scroll event certainly isn't the default action of wheel event 21:43:45 [Travis] ... For events without default actions but are cancellable, does that mean that an implemention has a default action anyway? 21:44:15 [Travis] smaug: No direct correlation between wheel event and scroll event. 21:44:24 [shepazu] topic: on-* attributes as event listeners? 21:44:24 [Travis] shepazu: OK, will update the spec. 21:45:07 [Travis] shepazu: Talked about onfoo attributes as an implicit addEventListener. 21:45:27 [Travis] ... hixie wasn't too keen on the idea initially, but has recently asked about it. 21:45:50 [Travis] ... hixie may have wanted more detail than we had in the spec at the time. 21:46:00 [Travis] ... Two benefits (I think): 21:46:06 [Hixie] onfoo isn't quite implicitly addEventListener(), it's a lot more complicated. HTML5 defines it all in detail though. 21:46:25 [Hixie] basically each onfoo registers an event listener when the element is created 21:46:35 [Hixie] and changing the value of onfoo doesn't affect that 21:46:50 [Hixie] the listener itself then uses the value of onfoo as part of its processing 21:47:17 [Hixie] html5 also defines some hooks so other specs can make use of these definitions easily 21:47:48 [Hixie] a number of specs make use of this, including at least xhr, eventsource, web sockets, web workers, web storage 21:47:48 [Travis] shepazu: Seems like HTML5 has it covered. 21:47:51 [annevk] I think what HTML5 says might not match implementations. removeAttribute("onfoo", ...) should actually work. Hallvord filed a bug on that. 21:48:14 [Hixie] annevk, that's a separate issue from addEventListener 21:48:28 [annevk] true 21:49:22 [Travis] shepazu: I should say something about this in the spec. (need to define if removeEventListener removes these events, or if they are just special). 
21:49:33 [Travis] Travis: (Thinks they are special) 21:49:46 [Travis] ... but what order to the fire in relation to the other eventes? 21:50:10 [Hixie] removeEventListener can't remove them because you can't get a handle to the event handler 21:50:13 [Travis] shepazu: Will probably just reference HTML5 on this issue. (For context within the D3E spec.) 21:50:19 [annevk] order should be registration order 21:50:22 [Travis] True. 21:50:41 [Hixie] order is defined in html5, i believe. at least i intended to define it there. file a bug if it's not defined :-) 21:50:45 [Travis] shepazu: This simplifies the D3E spec. Good. 21:51:03 [Hixie] html5 basically hooks all this into the dom3 events model, so dom3 events shouldn't need to say anything special about it 21:51:14 [Hixie] so long as order is defined in general, i just hook into that 21:51:35 [Travis] ... Also talked awhile ago about providing an enumerator of event listeners. Seems too complicated for D3E. We could pursue in D4E... 21:54:08 [Travis] shepazu: Are folks committed to implementing the features of D3E? 21:54:47 [Travis] Travis: Yes. Will want to extend with experiemental stuff (like faster mutation event API, etc.) but should conform to the spec. 21:54:50 [Travis] smaug: Yes. 21:54:54 [shepazu] Topic: mouse x/y coordinates 21:55:54 [annevk] oh yeah, I define most of those in 21:56:07 [annevk] it would be nice if we figure out a way which spec defines what 21:56:52 [Travis] I don't mind them being in CSS-OM Views. (Planning to graft them in.) 21:57:06 [Travis] (that is...privately) 21:57:25 [Travis] shepazu: Two things in SVG that prevent mouse-location based events: 21:58:07 [Travis] ... 1) transformation - warps the coordinate space (expands/distorts/shifts). Nice for transforming element appearance, but bad for hit-detection, bounding boxes, reverse transformations... 21:58:26 [Travis] ... script libraries do this, but they're implicitly slow. 21:59:04 [Travis] ... 
2) view box - grows or shrinks the rendered content (sets a scale--also changes the coordinate space). 21:59:16 [annevk] at some point Apple told me they'd come up with a proposal for a getClientRects() that's transform aware, but it hasn't happened yet 21:59:29 [Travis] ... SVG needs to add some math functions that specifiy how to get the absolute x/y within the current viewport. 21:59:47 [Travis] ... CSS transformations will also need this... 22:00:00 [Travis] .... Thought it would be nice to put it into D3E. 22:00:12 [Travis] Travis: Why this spec? 22:00:25 [Travis] shepazu: Because it has to do with the x/y of various events. 22:02:33 [Travis] ... SVG has backcompat issues with current client*/screen* APIs (can't repurpose to include transformations). 22:03:21 [Travis] ... basically a viewX, viewY to unravel the transformation automagically. 22:03:49 [Travis] ... could put it in the SVG working group, but would not prefer to because it applies to CSS transforms too. 22:04:12 [Travis] ... I'd like to make a proposal to the list about this. 22:04:37 [Travis] Travis: Will also want to run this by some IE folks. 22:06:21 [Travis] shepazu: For non transformed elements, this is just clientX/Y... 22:07:30 [Travis] Travis: Web authors may want their coordinate in pageX/Y, clientX/Y, pageX/Y, etc. May want a function that does the transformation instead. 22:08:09 [Travis] smaug: And a function also prevents us from having to add another parameter to the init* methods. 22:08:44 [jrossi] jrossi has joined #webapps 22:10:41 [jrossi] jrossi has joined #webapps 22:12:43 [jrossi] Hello from Atlanta.....telecon still going on or did I miss it? 22:13:29 [shepazu] jrossi: just finishing up 22:14:34 [jrossi] ah ok 22:14:58 [Travis] Action: Travis to send proposal on element-level reize events to be put on the wishlist. 22:14:58 [trackbot] Created ACTION-406 - Send proposal on element-level reize events to be put on the wishlist. [on Travis Leithead - due 2009-09-23]. 
22:15:42 [Travis] smaug: Why did some events become UIEvents? 22:16:09 [Travis] shepazu: (also wondering why) 22:16:17 [Travis] ... will look into this. 22:19:38 [Travis] shepazu: back to resize event... 22:19:56 [Travis] smaug: resize event only fires on Window (never on Iframe) 22:21:31 [Zakim] -??P1 22:21:34 [Zakim] -[Microsoft] 22:21:40 [Zakim] -Shepazu 22:21:41 [Zakim] IA_WebApps(DOM3)5:00PM has ended 22:21:43 [Zakim] Attendees were [Microsoft], Shepazu 22:22:14 [shepazu] trackbot, end telcon 22:22:14 [trackbot] Zakim, list attendees 22:22:14 [Zakim] sorry, trackbot, I don't know what conference this is 22:22:15 [trackbot] RRSAgent, please draft minutes 22:22:15 [RRSAgent] I have made the request to generate trackbot 22:22:16 [trackbot] RRSAgent, bye 22:22:16 [RRSAgent] I see 1 open action item saved in : 22:22:16 [RRSAgent] ACTION: Travis to send proposal on element-level reize events to be put on the wishlist. [1] 22:22:16 [RRSAgent] recorded in
http://www.w3.org/2009/09/16-webapps-irc
A flowchart is a graphical depiction of decisions and the results of those decisions. Flowcharts are used to analyze, design, document or manage a process in many different fields. Like other kinds of diagram, they help visualize what is going on, which in turn helps in understanding a process and finding flaws and bottlenecks in it. The program flowchart is analogous to the blueprint of a building. A designer draws a blueprint before beginning to construct a building; in the same manner, a programmer draws a flowchart before writing a computer program based on that flowchart. Just like drawing a blueprint, the flowchart is drawn according to defined rules, which include the standard flowchart symbols given by the American National Standards Institute (ANSI). You can easily make flowcharts in MS Word or even MS PowerPoint (you can learn more about how to use PowerPoint in this course). A flowchart illustrates the sequence of operations to be performed to arrive at the solution of a particular problem. It enables communication between programmers and clients, and once a flowchart is drawn, it becomes comparatively easy to write the program in any high-level language. In other words, flow charts are mandatory for good documentation of a complex program.

Types of Flow Charts

- High Level Flowchart: This flowchart illustrates the major steps in a process, along with the intermediate outputs of each step and the sub-steps involved. It provides a basic picture of the process and identifies the changes taking place within it.
- Detailed Flowchart: This flowchart gives a detailed picture of a process by including all of the steps and activities that occur in the process. It is useful for examining areas of the process in detail and pinpointing problems or areas of inefficiency.
- Deployment or Matrix Flowchart: This flowchart is in the form of a matrix which shows the various participants and the flow of steps between those participants.
Flowchart Symbols

- Start and End Symbols: These are usually depicted by circles, ovals or rounded rectangles. Normally the start symbol has the word "START" in it and the end symbol has the word "END" in it.
- Arrows: These show the flow of control. An arrow running from one symbol to another means that program control passes to the symbol the arrow points to.
- Rectangles: These represent an action, operation or process.
- Subroutines: These are usually depicted by rectangles with double-struck vertical edges.
- Input/Output: These are usually depicted by parallelograms.
- Conditional or Decision: This is depicted as a diamond. It has two arrows coming out of it: one arrow corresponds to "Yes" or "True", the other to "No" or "False".
- Labeled Connectors: These are depicted by an identifying label inside a circle. Labeled connectors are used in multi-sheet diagrams in place of arrows. Note that for every label, the outflow connector should always be unique; however, there is no restriction on the number of inflow connectors.
- Junction Symbol: This has more than one arrow as input but only one as output.
- Concurrency Symbol: This is depicted by a double transverse line, with no restriction on the number of entry and exit arrows. Concurrency symbols are used when 2 or more control flows must operate simultaneously. Note that a concurrency symbol with a single entry flow is termed a fork, and one with a single exit flow is termed a join.

You can see a simple flow chart with these symbols below. Note that most MS Office tools have built-in shapes for creating simple flow charts; you may want to use something like Illustrator or similar tools for more complex ones. (Take a quick tour of how to use Illustrator with this course.)

Additional Symbols Used in Data Flow Diagrams

A number of symbols have been changed for data flow diagrams to represent data flow rather than control flow.
- A document is depicted as a rectangle with a wavy base.
- A manual input is depicted as a quadrilateral with the top irregularly sloping up from left to right.
- A data file is depicted as a cylinder.
- A manual operation is depicted as a trapezoid with the longest parallel side at the top.

When to Use Flow Charts

Flowcharts find steps that are redundant or not in the right place. They also identify the appropriate team members who should provide inputs or resources, and to whom. Flowcharts can be leveraged to identify areas for improvement and increase efficiency. Brainstorming is useful in creating flowcharts, as no single individual has knowledge of the entire process. The creation of the flowchart results in a common understanding of the process and enhances communication among participants.

Example 1: C Program to Find the Sum of the First 50 Natural Numbers

The loop easily translates to a simple C for loop. In this program, we initialize the variables sum = 0 and n = 0. In each step, we increment n by 1 and assign sum = sum + n. We check if n = 50 and, if not, increment n by 1 again; otherwise we just print the sum. You may want to check out our beginners C course before moving ahead.

#include <stdio.h>

int main(void)
{
    int sum = 0, n = 0, i;
    for (i = 1; i <= 50; i++) {
        n = n + 1;        /* increment n by 1 */
        sum = sum + n;    /* add it to the running sum */
    }
    printf("sum of the first 50 natural numbers is %d\n", sum);
    return 0;
}

Example 2: C Program to Find the Factorial Value of a Number

Now let's do a bit more complex example. First we draw out the flowchart for the factorial of n, and now let's see how this translates to the code:

#include <stdio.h>

int main(void)
{
    int num1, i, new_fact = 1;
    printf("Enter any number : ");
    scanf("%d", &num1);
    for (i = 1; i <= num1; i = i + 1)
        new_fact = new_fact * i;
    printf("Factorial value of %d = %d\n", num1, new_fact);
    return 0;
}

Initially we declare the variables num1, i and new_fact. We assign the value 1 to new_fact. Then we accept the value of the number whose factorial has to be determined into the variable num1. In the 'for' loop we initialize i to the value 1. Then we check whether i <= num1.
Then we do the operation new_fact * i. After this, in the 'for' loop, i is incremented by 1 and the condition is checked again. The 'for' loop repeats till the condition evaluates to false. In the end we print the desired factorial value. Phew! If you had trouble following that program, you may want to check out this course on C programming to help you out.

If Statement Introduction in Java

An if statement makes a decision and executes different code depending upon whether its condition is true or not. For your information, nearly 99% of flow decisions are made with if statements. Let's take a look at an example.

if ((i >= 0) && (i <= 10)) {
    System.out.println("i is an " + "integer between 0 and 10");
}

The double-and operator "&&" returns true only if both its operands are true. This is displayed in the flowchart representation of the if statement. To learn more about Java programming, you can take this beginners course.

JavaScript else if Condition Statement Syntax

Let's take up another example for flowcharts. What does the if condition look like?

if (conditional expression 1) {
    Statement1 block executed if condition 1 is true (satisfied).
} else if (conditional expression 2) {
    Statement2 block executed if condition 2 is true (satisfied).
} else {
    Statement3 block executed if conditions 1 and 2 are false (not satisfied).
}

Here is a simple program for the same in JavaScript.

<script type="text/javascript">
var my_var = 5;
if (my_var < 3) {
    alert(my_var + " is less than 3");
} else if (my_var == 5) {
    alert(my_var + " is equal to 5");
} else {
    alert(my_var + " is greater than 3");
}
</script>

In this program, the script is first declared as type javascript. A variable is declared and initialised to a value. Then an if / else if statement is executed: if the variable value is less than 3, the code under the if statement runs; otherwise control branches to the else if statement, which is evaluated next.
If the evaluation returns true, then the statement under the else if is executed. If the evaluation returns false, then the statement under else is executed. In the end, the script is closed. You can take this course to see how to write your own JavaScript programs.

Analyzing the Detailed Flowchart to Identify Problem Areas

After the flowchart has been created, you need to look out for areas of improvement and areas prone to problems. Here are the steps that you need to follow.

- First, examine each decision symbol to see if it is effective and not redundant.
- Check whether the rework loop prevents the problem or issue from reoccurring.
- Examine each activity symbol to see whether it is error prone and provides significant value addition.
- Each document or database symbol needs to be examined to see whether it is necessary, up to date and useful.
- Each wait symbol needs to be examined to see how long the wait is and whether its length could be reduced.
- Another important thing is to study each transition. It is important to know whether the intermediate service or product meets the requirements of the next person in the process.
- Finally, the overall process needs to be studied. Find out whether the flow is logical, whether there are control flows that lead nowhere, and whether there are parallel tracks.

Hope this article on flowcharts gets you up and running in creating your own customized flowcharts. Flowcharts help you visualize a process and give you a path to creating useful and powerful programs.
https://blog.udemy.com/flowchart-examples/
The QLayoutIterator class provides iterators over QLayoutItem. More...

#include <qlayout.h>

List of all member functions.

Constructor: Constructs an iterator based on gi. The constructed iterator takes ownership of gi and will delete it. This constructor is provided for layout implementors. Application programmers should use QLayoutItem::iterator() to create an iterator over a layout.

Copy constructor: Creates a shallow copy of i, i.e. if the copy is modified, then the original will also be modified.

Destructor: Destroys the iterator.

current(): Returns the current item, or 0 if there is no current item.

deleteCurrent(): Removes and deletes the current child item from the layout and moves the iterator to the next item. This iterator will still be valid, but any other iterator over the same layout may become invalid.

next(): Moves the iterator to the next child item and returns that item, or 0 if there is no such item.

operator=: Assigns i to this iterator and returns a reference to this iterator.

takeCurrent(): Removes the current child item from the layout without deleting it, and moves the iterator to the next item. Returns the removed item, or 0 if there was no item to be removed. This iterator will still be valid, but any other iterator over the same layout may become invalid.

This file is part of the Qt toolkit. Copyright © 1995-2005 Trolltech. All Rights Reserved.
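A minimal usage sketch based on the member descriptions above (the names layout and unwantedWidget are hypothetical; in Qt 3, QLayout inherits the iterator() factory from QLayoutItem):

```cpp
// Walk a layout's items with QLayoutIterator (Qt 3 API sketch).
// 'layout' and 'unwantedWidget' are hypothetical names for illustration.
QLayoutIterator it = layout->iterator();
QLayoutItem *child;
while ((child = it.current()) != 0) {
    if (child->widget() == unwantedWidget) {
        // Removes and deletes the current item; the iterator stays valid
        // and moves on to the next item by itself.
        it.deleteCurrent();
    } else {
        it.next(); // move on to the next child item
    }
}
```

Note that after deleteCurrent() or takeCurrent() the loop must not call next(), since the iterator has already advanced.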
https://doc.qt.io/archives/3.3/qlayoutiterator.html
On the namespace, since Yonik seems concerned about it, and others aren't (I think?), why don't we leave everything factored out of Solr under the org.apache.solr namespace? Anyone object to that approach? My only concern is that this sends the message that the module depends on Solr.... but, this turns into a non-issue once Solr is well factored into modules, because by the time we arrive at that future, "depending on Solr" just means "depending on Solr modules", which resolves my concern!

Mike

On Mon, May 2, 2011 at 6:11 PM, Grant Ingersoll <gsingers@apache.org> wrote:
>
> On Apr 27, 2011, at 11:45 PM, Greg Stein wrote:
>>.
>
> At the risk of speaking for someone else, I think it has to do w/ wanting to maintain brand awareness for Solr. We, as the PMC, currently produce two products: Apache Lucene and Apache Solr. I believe Yonik's concern is that if everything is just labeled Lucene, then Solr is just seen as a very thin shell around Lucene (which, IMO, would still not be the case, since wiring together a server app like Solr is non-trivial, but that is my opinion and I'm not sure if Yonik shares it). Solr has never been a thin shell around Lucene and never will be. However, in some ways, this gets at why I believe Yonik was interested in a Solr TLP: so that Solr could stand on its own as a brand and as a first class Apache product steered by a PMC that is aligned solely w/ producing the Solr (i.e. as a TLP) product as opposed to the two products we produce now. (Note, my vote on such a TLP was -1, so please don't confuse me as arguing for the point, I'm just trying to, hopefully, explain it)
>
>> What does "fairness" have to do with the codebase?
>
> I can't speak to this, but perhaps it's just the wrong choice of words and would have been better said: please don't take this as a reason to gut Solr and call everything Lucene.
>
>> Isn't the whole
>> point of the Lucene project to create the best code possible, for the
>> benefit of our worldwide users?
>
> It is. We do that primarily through the release of two products: Lucene and Solr. Lucene is a Java class library. A good deal of programming is required to create anything meaningful in terms of a production ready search server. Solr is a server that turns most things that are programming tasks in Lucene into configuration tasks, as well as adding a fair bit of functionality (distributed search, replication, faceting, auto-suggest, etc.), and is thus that much easier to put in production (I've seen people be in production on Solr in a matter of days/weeks; I've never seen that in Lucene). The crux of this debate is whether these additional pieces are better served as modules (I think they are) or tightly coupled inside of Solr (which does have a few benefits from a dev. point of view, even though I firmly believe they are outweighed by the positives of modularization). And, while I think most of us agree that modularization makes sense, that doesn't mean there aren't reasons against it. I also believe we need to take it on a case by case basis. I also don't think every patch has to be in its final place on first commit. As Otis so often says, it's just software. If it doesn't work, change it. Thus, if people contribute and it lands in Solr, the committer who commits it need not immediately move it (although, hopefully they will) or ask the contributor to do so, as that will likely dampen contributions. Likewise for Lucene. Along with that, if and when others wish to refactor, then they should by all means be allowed to do so, assuming of course that all tests across both products still pass.
> > In short, I believe people should still contribute where they see they can add the most value and according to their time schedules. Additionally, others who have more time or the ability to refactor for reusability should be free to do so as well. > > I don't know what the outcome of this thread should be, so I guess we need to just move forward and keep coding away and working to make things better. Do others see anything broader here? A vote? That would be symbolic, I guess, but doesn't force anyone to do anything since there isn't a specific issue at hand other than a broad concept that is seen as "good". > >
http://mail-archives.apache.org/mod_mbox/lucene-dev/201105.mbox/%3CBANLkTikB=n4sYs-j=Q1FN51bFhv=0=DJ1A@mail.gmail.com%3E
I'm looking to plot a point on a map when its latitude and longitude are passed into a function. I am using the Mercator projection code to map the position, and I have this working fine. But my problem comes when I want to add more than one point to the map. My knowledge of Flash/ActionScript is limited.

Example: the below code shows my document class. It calls the plot_point function, which figures out the position on the map, then calls the draw_point function and places a red square on the stage. But when this function is called again, the previous red square is removed from the stage and the new one is placed. I need to show multiple points on a map as they appear, so the map could have say 100+ points on it, but as I mentioned, the previous point gets removed when the function is run again. If you need any more information please let me know; I am working with AS3.

MercatorProjection.as

package {
    import flash.display.*;
    import flash.events.*;
    import flash.net.*;
    import flash.text.*;
    import flash.utils.*;
    //import LoadText;

    public class MercatorProjection extends Sprite {

        var dot_size = 6;
        var longitude_shift = 0; // number of pixels your map's prime meridian is off-center.
        var x_pos = 54;
        var y_pos = 0;
        var map_width = 774;
        var map_height = 600;
        var half_dot = dot_size / 2;

        public function MercatorProjection() {
            plot_point(54, -2);
        }

        public function draw_point(x, y) {
            trace("draw_point function reached");
            var dot:Shape = new Shape;
            dot.graphics.beginFill(0xFF0000);
            dot.graphics.drawRect(0, 0, dot_size, dot_size);
            dot.graphics.endFill();
            addChild(dot);
        }

        public function plot_point(lat, lng) {
            trace("plot_point function reached");
            // Mercator projection
            // longitude: just scale and shift
            x = (map_width * (180 + lng) / 360 + longitude_shift) % map_width;
            // latitude: using the Mercator projection
            lat = lat * Math.PI / 180; // convert from degrees to radians
            y = Math.log(Math.tan((lat/2) + (Math.PI/4))); // do the Mercator projection (w/ equator of 2pi units)
            y = (map_height / 2) - (map_width * y / (2 * Math.PI)) + y_pos; // fit it to our map
            draw_point(x - half_dot, y - half_dot);
        }
    }
}

Thanks in advance.
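As a cross-check of the projection math in plot_point (a sketch in Python mirroring the AS3 constants; the function name is mine, and this reproduces only the coordinate math, not the display logic):

```python
import math

def mercator_xy(lat, lng, map_width=774, map_height=600,
                longitude_shift=0, y_pos=0):
    """Return pixel (x, y) for a lat/lng using the same math as plot_point."""
    # longitude: just scale and shift
    x = (map_width * (180 + lng) / 360 + longitude_shift) % map_width
    # latitude: Mercator projection (equator of 2*pi units, fitted to the map)
    lat_rad = lat * math.pi / 180
    y = math.log(math.tan(lat_rad / 2 + math.pi / 4))
    y = (map_height / 2) - (map_width * y / (2 * math.pi)) + y_pos
    return x, y
```

For lat = 0, lng = 0 this gives approximately (387, 300), the centre of the 774x600 map, and points north of the equator land above the centre line.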
http://www.dreamincode.net/forums/topic/227429-mercator-projection-plotting-point-onto-stage/
Subject: [OMPI users] LDBL_MANT_DIG declaration trouble
From: Ray Sheppard (rsheppar_at_[hidden])
Date: 2013-04-12 10:09:09

Hi,
I am sorry to bother everyone. I have had no trouble building 1.6.3 with the Intel compiler. Now I am having to repeat the exercise for GNU. In opal/util/arch.h (about line 260) is the function below. I am getting an error that LDBL_MANT_DIG is not declared. I can not seem to find where it is declared. Any hints would be appreciated.
Thanks.
Ray

static inline int32_t opal_arch_ldisintel( void )
{
    long double ld = 2.0;
    int i, j;
    uint32_t* pui = (uint32_t*)(void*)&ld;

    j = LDBL_MANT_DIG / 32;
    i = (LDBL_MANT_DIG % 32) - 1;
    if( opal_arch_isbigendian() ) { /* big endian */
        j = (sizeof(long double) / sizeof(unsigned int)) - j;
        if( i < 0 ) {
            i = 31;
            j = j+1;
        }
    } else {
        if( i < 0 ) {
            i = 31;
            j = j-1;
        }
    }
    return (pui[j] & (1 << i) ? 1 : 0);
}

The function is described:

/* we must find which representation of long double is used
 * intel or sparc. Both of them represent the long doubles using a close to
 * IEEE representation (seeeeeee..emmm...m) where the mantissa look like
 * 1.????. For the intel representaion the 1 is explicit, and for the sparc
 * the first one is implicit. If we take the number 2.0 the exponent is 1
 * and the mantissa is 1.0 (the sign of course should be 0). So if we check
 * for the first one in the binary representation of the number, we will
 * find the bit from the exponent, so the next one should be the begining
 * of the mantissa. If it's 1 then we have an intel representaion, if not
 * we have a sparc one. QED */
https://www.open-mpi.org/community/lists/users/2013/04/21716.php
I have the following code:

import schedule
import time

def job(t):
    print "I'm working...", t
    return

schedule.every().sunday.at("01:00").do(job, 'It is sonday 01:00')

while True:
    schedule.run_pending()
    time.sleep(60)  # wait one minute

How can I make sure the code works when my PC is off or when the code is not running? I don't know if my question is weird, but with schedule we should be able to repeat the event, and I think that is only the case as long as the code is running. Put simply: what happens when the code is not running and/or my PC is off?

Answer

Use the OS scheduler. In case of windows – go with
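The asker's doubt ("what happens when the code is not running?") comes down to how schedule works: run_pending() just compares each job's scheduled time against the clock on every loop iteration, so nothing can fire while the process is stopped. A minimal, dependency-free sketch of that polling idea (the Job and run_pending names here are illustrative, not the schedule library's internals):

```python
import datetime

class Job:
    """A job with a single scheduled run time."""
    def __init__(self, run_at, func, *args):
        self.run_at = run_at  # datetime at which the job becomes due
        self.func = func
        self.args = args

def run_pending(jobs, now):
    """Fire every job whose run time has passed; return their results."""
    fired = []
    for job in jobs:
        if now >= job.run_at:
            fired.append(job.func(*job.args))
    return fired

# 2021-01-03 is a Sunday; the job fires once the clock reaches 01:00.
results = run_pending(
    [Job(datetime.datetime(2021, 1, 3, 1, 0), lambda t: t, "It is Sunday 01:00")],
    now=datetime.datetime(2021, 1, 3, 1, 0),
)
```

Because the check only happens when run_pending() is called, a job can only fire while the loop is alive; if the machine is off at the scheduled moment, nothing runs until the script is started again. That is why the accepted advice is to hand the schedule to the OS scheduler instead.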
https://www.tutorialguruji.com/python/how-do-i-schedule-tasks-on-windows-using-python/
how to use nested for to print char in java language?

Hello Friend,
Try the following code:

class UseNestedLoops {
    public static void main(String[] args) {
        char[][] letters = { {'A', 'B'}, {'C', 'D'} };

Java StringTokenizer Source Code: In this tutorial we will discuss about String Tokenizer in java. String Tokenizer allows an application to break a string into tokens.

Java source code: How are Java source code files named?

java source code - Java Beginners: hello Sir, Im a beginner in java. im greatly confused, so plz send me the source code for the following concept: Telephone... code: import java.io.*; import java.util.*; class Directory implements

calendra.css source code - Java Beginners: hello i need the source code... and year are getting displayed. Hi Friend, Try the following code...; cursor: pointer; } You can also download the code from the following link
http://roseindia.net/discussion/18735-Making-Tokens-of-a-Java-Source-Code.html
notch.agent 0.5

The Network Operator's Toolkit for Command-line Hacking

Notch brings a programmatic interface to all your network equipment like switches, routers and firewalls. Its Agent manages the command-line interfaces of network equipment in your network so that you can write powerful network management applications using the included Python library or from other languages by using the JSON-RPC interface.

Note: This package provides just the Notch Agent. A basic installation on a single machine also requires the client library, available in the notch.client package.

For example, to get the version information from every cisco-ish device on your network (via a Notch Agent running at localhost:8080):

#!/usr/bin/env python
import notch.client

def callback(request):
    """Called by the Notch Client when a request is complete."""
    if request.error:
        print 'Exception: ', request.error.__class__.__name__
        print str(request.error)
    else:
        print request.result

nc = notch.client.Connection('localhost:8080')

# Gets a list of devices from the Notch Agent.
try:
    all_devices = nc.devices_matching('^.*$')
except notch.client.Error:
    print 'Error querying for devices.'
    all_devices = []

# Send the command to each device, asynchronously receiving results.
for device in all_devices:
    nc.command(device, 'show version', callback=callback)

# Wait for all outstanding requests to complete.
nc.wait_all()

Installation

Note: As of version 0.5, Notch is split into separate notch.client and notch.agent pypi packages sharing a namespace package notch. Users upgrading from earlier versions must remove all existing Notch packages before proceeding with installation:

$ pip uninstall notch
$ pip uninstall notch.client
$ pip uninstall notch.agent

Also check your Python site-packages directories to ensure you do not have any notch* files or directories. Use pip to install both the Notch Agent and Client library.
You'll need both packages to start with, but in larger installations, only machines acting as Agents require the notch.agent package.

$ pip install notch.agent
$ pip install notch.client

This will install all but one dependency, which can then be installed using:

$ pip install -e git+

You can also use easy_install, but we don't recommend that. If you don't have pip, install it with easy_install first.

Configuration

The Notch Agent requires some configuration to get started, and things are easiest if you already use the RANCID system, as the Notch Agent will read its router.db configuration file to populate its inventory. Then, you can start a Notch Agent using the built-in testing server:

$ notch-agent --config=/path/to/your/notch.yaml

The built-in testing server does not support parallel operation, so you must use a WSGI-compatible server for production operation. Apache2 with mod_wsgi is used for many installations and an example configuration can be found in config/notch-mod_wsgi.sample.conf. The WSGI application object itself is defined in wsgi/notch.wsgi.
http://pypi.python.org/pypi/notch.agent/0.5
sem_unlink(2)

NAME
sem_unlink - unlink a named POSIX semaphore

SYNOPSIS
#include <sys/semaphore.h>

int sem_unlink(const char *name);

DESCRIPTION
sem_unlink() is used to unlink named semaphores. A successful call to sem_unlink() marks the semaphore, specified by name, for removal. Calling sem_unlink() does not affect processes, including the calling process, which currently have a descriptor, obtained from a call to sem_open(). Named semaphores are uniquely identified by character strings. All character string names will be pre-processed to ensure variations of a pathname resolve to the same semaphore name. If the semaphore is successfully marked for removal by a call to sem_unlink(), the semaphore will be removed when all processes remove their descriptors to the specified semaphore by calling sem_close(). Subsequent calls to sem_open() using the string name will refer to a new semaphore. To use this function, link in the realtime library by specifying -lrt on the compiler or linker command line.

EXAMPLES
The following call to sem_unlink() will remove the named semaphore named by the string name. If the semaphore is currently referenced by one or more processes, the semaphore will be marked for removal and removed when there are no more processes referencing it.

sem_unlink(name);

RETURN VALUE
If the semaphore was unlinked successfully, sem_unlink() returns 0. If the semaphore could not be unlinked, the call returns -1 and sets errno to indicate the error.

ERRORS
sem_unlink() fails and does not perform the requested operation if any of the following conditions are encountered:

[EACCES] The named semaphore exists and the process does not have the permissions to unlink the semaphore.

[ENAMETOOLONG] The name string is longer than {PATH_MAX}.

[ENOENT] The flag O_CREAT is not set in oflag (see sem_open(2)) and the named semaphore does not exist.
Hewlett-Packard Company        HP-UX 11i Version 2: August 2003

SEE ALSO
sem_close(2), sem_open(2), <semaphore.h>.

STANDARDS CONFORMANCE
sem_unlink(): POSIX
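The unlink-then-close lifetime rule described above can be watched from Python using the standard library's named shared-memory objects, which follow the same POSIX naming convention. This is an analogy for the semantics, not a binding for sem_unlink() itself, and the segment name is generated just for the demo:

```python
import uuid
from multiprocessing import shared_memory

# Create a named segment, as sem_open(..., O_CREAT) would for a semaphore.
name = "demo_" + uuid.uuid4().hex[:8]
seg = shared_memory.SharedMemory(name=name, create=True, size=4)
seg.buf[:4] = b"data"

# unlink() marks the name for removal, but the already-open handle keeps
# working until it is closed, exactly the lifetime rule sem_unlink(2) gives.
seg.unlink()
value = bytes(seg.buf[:4])  # existing descriptor still usable after unlink
seg.close()                 # last close actually releases the object
```

After unlink(), opening the same name again would create a fresh object rather than reach the old one, which mirrors the man page's "subsequent calls to sem_open() using the string name will refer to a new semaphore."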
http://nixdoc.net/man-pages/HP-UX/man2/sem_unlink.2.html
i wrote a code to search for palindromes

import re
word = "hannah"
pattern = r"(\w+)\1"
if re.search(pattern, word):
    print(word)

then i realised that the pattern i wrote can identify "hanhan", not "hannah"... somebody plz tell me how can i search for a reversed group using regular expressions

2 Answers

I'm pretty sure there is no actual way of doing that with pure regex, at least not for strings of arbitrary length (there might be ways of finding palindromes of given length, though). There are, of course, much more efficient ways of doing this, with just simple string slicing techniques, like:

I know it doesn't help you with your issue in Python. But maybe it is interesting that there exists a solution for this in .NET, which might find its way into other languages... Have a look at balancing groups:
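The slicing approach the first answer alludes to can be spelled out, and it also shows why the backreference pattern describes doubled strings rather than palindromes: \1 matches the same text as the group, never its reverse.

```python
import re

def is_palindrome(word):
    # A string is a palindrome when it equals its own reverse.
    return word == word[::-1]

print(is_palindrome("hannah"))                     # True
print(is_palindrome("hanhan"))                     # False
print(bool(re.fullmatch(r"(\w+)\1", "hanhan")))    # True: a doubled string
print(bool(re.fullmatch(r"(\w+)\1", "hannah")))    # False
```

Note that fullmatch is used here on purpose: a bare re.search with r"(\w+)\1" would still report "hannah", because the doubled substring "nn" inside it matches.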
https://www.sololearn.com/Discuss/2306576/i-wrote-a-code-to-search-for-palindromes
talk about how filter classes work internally. As we know, each data provider is 100% responsible for taking filters into account and changing the data it returns based on them. So, filtering happens inside each data provider, not via some magic system that runs after them. Let's look at how this is done in the core Doctrine data provider. Hit Shift+Shift, search for doctrinedataprovider and include non project items. There it is: CollectionDataProvider from Orm\. Here is the getCollection() method. The Doctrine data provider has a system called "collection extensions": these are hook points that allow you to modify the query in any way you want. And... we actually created one of these extensions in the last tutorial: CheeseListingIsPublishedExtension: This modifies the query so that we don't return unpublished listings, unless you're the owner of the listing or an admin. Why are we talking about these extension classes? Because one of the core Doctrine extensions is called FilterExtension. Let's open it up: Shift+Shift and look for FilterExtension.php making sure to include all non-project items. Get the one from Orm\. I love this. It loops over all of the filters that have been activated for this resource class, calls apply() on each one and passes it the QueryBuilder! Thanks to this, in CheeseSearchFilter, all we needed to do was extend AbstractFilter and fill in the filterProperty() method: The apply() method lives in AbstractFilter, which does some work and then ultimately calls filterProperty(). The point is: the Doctrine filter system works via a Doctrine extension, which knows to call a method on each filter object. But all of this stuff will not happen in our situation... because we are not using the Doctrine data provider. However, because we made our filter implement the FilterInterface from the core Serializer\ namespace, API Platform will help us a bit:
How? By automatically calling our apply() method on every request for an API resource where our filter has been activated: What I mean is, in DailyStats we added @ApiFilter(DailyStatsDateFilter::class): Thanks to this, whenever we make a request to a DailyStats operation, API Platform will automatically call the DailyStatsDateFilter::apply() method: This works via a context builder in the core of API Platform that loops over all of the filters for the current resource class, checks to see if they implement the FilterInterface that we're using and, if they do, calls apply(). So... this is huge! It means that API Platform is smart enough to automatically call our filter's apply() method but only when needed. This means that we can get down to work. Our first job is to read the query parameter from the URL. And... hey! We get the Request object as an argument: Schweet! Let's dd($request->query->all()): Back at your browser, refresh and... there it is: the from query param. Grab that with $from = $request->query->get('from'). And, if not $from, it means we should do no filtering. Return without doing anything. After, dd($from): Refresh now and... yay! We have a date string. So... what do we do with that string? I mean, we're not inside DailyStatsProvider where we actually need this info: we're way over here in the filter class. The answer is that we're going to pass this info from the filter to the data provider via the $context. Check it out: one of the arguments to apply() is the $context array and it's passed by reference: That means we can modify it. Head to the top of this class and add a new public constant, how about: FROM_FILTER_CONTEXT set to daily_stats_from. This will be the key we set on $context: Before we do that, let's convert the string into a DateTime object: $fromDate = \DateTimeImmutable::createFromFormat() passing Y-m-d as the format and then $from: We're using createFromFormat() because if the $from string is in an invalid format, it will return false.
We can use that to code defensively: if $fromDate, then we know we have a valid date. Also add $fromDate = $fromDate->setTime() and pass zero, zero, zero to normalize all the dates to midnight: Finally, set this on the context: $context[self::FROM_FILTER_CONTEXT] = $fromDate: So the job of the apply() method in a custom, non-Doctrine filter is not actually to apply the filtering logic: it's to pass some filtering info into the context. And now, we're dangerous. Well... we're almost dangerous. If we can get access to the $context from inside DailyStatsProvider, then we can read that key off and set the from date. Unfortunately, we do not have the context yet: But fortunately, we know how to get it! Instead of CollectionDataProviderInterface, implement ContextAwareCollectionDataProviderInterface: The only difference is that getCollection() now has an extra array $context = [] argument: To start, let's dd($context) and see if the filter info is there: Ok, refresh. And... we got it! The daily_stats_from is there! And if we take the from query param off, it still works, but the key is gone. Let's finally use this. Remove the dd() and, down here, add $fromDate = $context[DailyStatsDateFilter::FROM_FILTER_CONTEXT] with a ?? null so that it defaults to null if the key doesn't exist: Then, if we have a $fromDate, call $paginator->setFromDate() and pass it there: Testing time! The query parameter should filter from 09-01. Refresh and... it does! We only get three results starting from that date! If we take off the query param... we get everything. We just built a completely custom filter. Great work team! Next, in DailyStatsDateFilter, if the from date is in an invalid format, we decided to ignore it: But we could also decide that we want to return a 400 error instead. Let's see how to do that and how we could even make that behavior configurable. This will lead us down a path towards true filter enlightenment and uncovering a hidden secret.
Basically, we're going to learn even more about the power behind filters.
https://symfonycasts.com/screencast/api-platform-extending/apply-filter
Provided by: liblfc-dev_1.13.0-1_amd64

NAME
lfc_getcwd - get LFC current directory used by the name server

SYNOPSIS
#include <sys/types.h>
#include "lfc_api.h"

char *lfc_getcwd (char *buf, int size)

DESCRIPTION
lfc_getcwd gets the LFC current directory used by the name server. This current working directory is stored in a thread-safe variable in the client. If buf is not NULL, the current directory name will be stored there. If buf is NULL, lfc_getcwd allocates a buffer of size bytes using malloc. size must be at least the length of the directory name to be returned plus one byte.

RETURN VALUE
This routine returns buf if the operation was successful or NULL if the operation failed. In the latter case, serrno is set appropriately.

ERRORS.

SEE ALSO
lfc_chdir(3)

AUTHOR
LCG Grid Deployment Team
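The buf/size calling convention described above is the same one libc's getcwd(3) uses, and that contract (caller supplies a buffer and its size, the function returns the buffer on success or NULL on failure) can be demonstrated from Python with ctypes. Note this drives the C library's getcwd, not the LFC client library itself:

```python
import ctypes
import os

# Load the C library symbols available to the current process.
libc = ctypes.CDLL(None, use_errno=True)
libc.getcwd.restype = ctypes.c_char_p  # returns buf (char *) or NULL

# Caller-supplied buffer, sized to hold the directory name plus the NUL byte.
buf = ctypes.create_string_buffer(4096)
result = libc.getcwd(buf, len(buf))  # the buffer on success, None on failure
```

If the buffer were too small, getcwd would return NULL (seen as None in Python) and set errno, the same error-reporting pattern the lfc_getcwd man page describes with serrno.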
http://manpages.ubuntu.com/manpages/eoan/man3/lfc_getcwd.3.html
Introduction

In the past, I've come down hard on ASP.NET for trying to be an all-in-one solution. Rather, I've come down hard on ASP.NET developers trying to make it into an all-in-one solution. ASP.NET is an unsurpassed language for enterprise development and other medium-to-large scale projects. Despite significant improvements in 2.0, the simpler your project gets the more you begin to notice .NET's overhead (no, I'm not talking about performance). In fact, everything that makes ASP.NET a great toolset for large-scale development is what makes it such a bad choice for anything small. On the flip side, PHP excels at this type of small/medium work. It's extremely easy and quick to put something together and supports an incredible amount of kludging before falling apart. Of course, everything that makes PHP such a great toolset for small-scale development is also what makes it such a horrid choice for any large-scale projects or any sized enterprise development.

PHP is NOT Object Oriented

As a language, PHP's object-support is quite rich. Since the introduction of PHP5, developers have had the same level of OO support as most other languages. Of course, there's nothing forcing developers to use OO principles. This is a mixed blessing. For projects with a small domain, I've found that OO can slow down development without bringing any substantial benefits. For projects with a medium/large domain, developers are left on their own to do the right thing. I've heard it said more than once, but PHP doesn't have a culture of objects. The PHP community hasn't bought into the concept of OO the way the Java, Rails and .NET communities have. Whether that's an issue with PHP or an issue with PHP developers is hard to say. VB6 had decent OO support; most VB6 developers just didn't buy it (or maybe they didn't grok it). As a framework, PHP fails to leverage OO in any meaningful way. The fact that PHP itself isn't OO explains why developers don't use PHP's OO features.
In .NET everything's an object. Want a textbox? Declaratively create a new textbox object, programmatically set properties and execute member functions. Most PHP, VB6 or classic ASP programmers just don't get it. Think of it this way: in .NET you create a server-side DOM, manipulate it with a very rich API and an event model, then render it out. This is particularly useful when you want to extend the built-in objects and achieve greater re-use (within the same project or with completely separate projects). There are countless other examples, such as user controls, HttpModules, HttpHandlers and the page lifecycle, which are completely foreign concepts to PHP developers. Beyond the presentation layer, PHP's vast library is also mostly procedural. The contrast between the MySQL and MySQLi libraries is a good example of how PHP is (MySQL) and how it should be (MySQLi). However, almost all of PHP's libraries are procedural: string manipulation, regular expressions, sessions, ftp, sockets, images, database access (MySQLi is the exception) and so on. With PHP's own miserable use of OO, it's not hard to understand why PHP developers don't leverage OO as much as they could. OO is hard, and PHP neither demands it nor does it bother promoting it.

Layering

The only support for layering offered by PHP is the include/require functions. Like OO, proper layering can greatly increase code readability and maintainability. Much like in classic ASP, most developers tightly intertwine their PHP and HTML. The resulting code is truly worthy of being labeled spaghetti for its incomprehensibility and the difficulty to maintain. "echo" is core to PHP development, while the equivalent Response.Write is scarcely found in ASP.NET. Despite being easily abused, ASP.NET's CodeBehind model and DataBound controls are well ahead of any PHP offering. The lack of OO buy-in often leads to anemic domains. On the data access front, the .NET community is as far ahead of PHP as Java is ahead of .NET.
The introduction of LINQ in the next version of .NET will put it squarely ahead of all, if developers adopt it. The first example of the MySQLi documentation in the PHP manual highlights the problem perfectly:

<?php
$mysqli = new mysqli("localhost", "my_user", "my_password", "world");
if (mysqli_connect_errno()) {
    printf("Connect failed: %s\n", mysqli_connect_error());
    exit();
}

$query = "SELECT Name, CountryCode FROM City ORDER by ID DESC LIMIT 50,5";
if ($result = $mysqli->query($query)) {
    /* fetch associative array */
    while ($row = $result->fetch_assoc()) {
        printf("%s (%s)\n", $row["Name"], $row["CountryCode"]);
    }
    /* free result set */
    $result->close();
}

/* close connection */
$mysqli->close();
?>

I can't really think of a worse way to write this. True, this is only a reference document, but most PHP code follows the same pattern.

Exception Handling

Exception handling in PHP is like object oriented programming – it's supported but not used. The PHP framework scarcely makes use of structured exception handling, as do most of the online samples. The example which always gets under my skin is the MySQL library. Check the return value to see if a connection failed and echo out mysql_error(). Anyone who thinks using "or die" is any better just doesn't get it. Even the much improved MySQLi library didn't get it right. It doesn't really get any simpler than this: a failed database connection should throw an exception and in almost every case developers should let it bubble up to the global handler. I'm sure they exist, but I couldn't find a single standard PHP library which actually uses exceptions. There's a base exception class which you can extend, great! But why bother when you'll still have to do that oh-so-dated return value check all the time. PHP developers are quick to point out that PHP 5 has first class exception handling but slow to actually use it.
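The return-value-versus-exception contrast being argued here is language-independent. A small Python sketch of the two styles (both connect helpers are hypothetical, not a real driver API):

```python
# Hypothetical connect() helpers illustrating the two error-handling styles;
# neither corresponds to a real database driver.

def connect_with_return_code(host):
    """mysql_connect-style: failure is a sentinel the caller must check."""
    if host is None:
        return None  # silently ignorable; forgetting the check hides the bug
    return {"host": host}

def connect_with_exception(host):
    """Exception style: failure is impossible to ignore."""
    if host is None:
        # Bubbles up to a global handler unless deliberately caught.
        raise ConnectionError("connect failed")
    return {"host": host}
```

With the first style, every call site needs its own check (or an "or die" equivalent); with the second, a single top-level handler covers the common case, which is the article's point about letting connection failures bubble up.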
Other Issues

The fact that PHP is a loosely typed language can lead to significant difficulties for larger projects. Strongly-typed languages like C#, VB.NET and Java can take full advantage of compile-time checking and design-time tools (IDEs) while loosely-typed ones generally can't. (I have little experience with PHP IDEs, so maybe this isn't as much of a problem as I think it is). Like everything else we've covered, the lack of strong typing generally makes PHP ideal for smaller projects and problematic for larger ones. .NET also has a clear edge when it comes to tool support. Visual Studio has long been considered one of the best IDEs around. It does cost a fair chunk of change (we are talking about medium/large apps though), but there's the free Express line which is quite adequate. For very large projects, PHP doesn't have anything like Team System. When it comes to profiling, debugging and refactoring, ASP.NET is ahead by at least a couple years. VS.NET's projects, solutions and references are all integral to code re-use. PHP largely goes by the copy-n-paste approach. Although both PHP and .NET lag behind Java when it comes to Agile methodologies, there's considerably more awareness and support for Test Driven Development, unit testing and Continuous Integration within the .NET community. .NET's tools are more mature and widely used. Searching for information on TDD, unit testing and CI for PHP returns only a handful of useful/relevant hits.

Conclusion

VB6 and VBA share a lot more in common with PHP than with either VB.NET or C#. More pointedly, VB6 developers share a lot in common with PHP developers. Despite having some of the necessary tools, PHP developers (including those working on the actual PHP codebase itself) seem unwilling or unable to make use of core software engineering methodologies. This isn't an anti-PHP rant – it's a using the right tool for the job rant.
There are frameworks out there for PHP which help solve a lot of the inherent issues, but like ASP.NET some of the limitations are built-in (you can't get around C#'s strong-typing and you can't get around PHP's loose-typing). There are MVC frameworks, O/R mapping frameworks and Agile tools. Despite those, with the 5 year anniversary of ASP.NET a couple months away, and significant innovation in the next release of .NET, PHP is starting to lag far behind.

Hey thanks it was good article to read. And here my rant PHP was web framework of C and C++ (with apache of course). In web, every request processed in 1 second and guess what ? Zend going to wrtie full scale OO based framework for framework (read as zend_framework for php) because some Desktop programmers argues the way of PHP. Bahhh… Problem is not PHP, Problem is there where too many unemployment Desktop programmers around here to trying find web programmers job and when they found a old php code in their new job, they start to crying. Oh s***t this code unmaintainable. Then they create new mission for themselves. Lets start to teach proper programming way to those poor php programmers. And because of those desktop programmers. Zend start to write a framework. Look mate, this is web, things get old in a year. Today I see a messages in Zend_Framework mailing list to refactoring Zend_Framework for version 2.0. Oh Yesss… So ? if you stick to desktop programming ways, you found yourself the continuous update back end because ways are constantly changes in the web. And remember we are here to writing new web pages Regards
You’re right Alan, I hadn’t considered the [unfortunate] need for MySQLi (and the rest of the functions) to be backwards compatible for 4.0. I guess the move from ASP –> ASP.NET softened me to these kind of forced/breaking changes, but it’s a very good argument. You might be interested in a piece I wrote bitching about ASP.NET You might find more balance in my point of view when you put the two articles together: Karl: Good article. I am a die hard PHP programmer but agree with most of what you said. The parts I don’t I think you clearly showed both sides of the argument, thank you for that. However, I came away from the article feeling one of you biggest contentions is OO in the PHP libraries especially in your MySQL and MySQLi examples. Let me clarify at bit on why the developers of those chose to write them the way they did instead of using the power of PHP5 OO. Those two libraries are both written in PHP and are PHP4 compatible and PHP4 has very little OO functionality. So in writing them the have used PHP4. If you would like to see a PHP5 version of the mysql libraries look to PDO (). It is a extension written in C for PHP5 that is fully OO PHP library. I felt many of your contentions that you gave no argument for on PHP’s side resides from areas of misunderstanding like this. Again though, good article, Alan Too much of the .NET community is stuck on DataSets. They are a fine tool, but like any tool, need to be used in the right situation. .NET developers tend to use them in all situations. The Java community is ahead with it comes to O/R Mappers and other database< -->domain techniques. The good news though is that the .NET community is quickly catching up. Good article and i enjoyed it though Karl you wrote a statement below “On the data access front, the .NET community is as far ahead of PHP as Java is ahead of .NET” Can you please elaborate on that and explain why Java is ahead of .Net on the data access front? 
Thanks AC and Dan: You both make valid points. I don’t agree with AC’s “language vs Framework” argument though. The .NET framework is nothing more than a bunch of class libraries which let you do a bunch of stuff. They are neatly organized in assemblies and namespaces, but that’s no different than all the PHP functions (except there are probably a lot more in .NET). PHP is a set of libraries, a language, and a web module (for lack of a better word). This, to me, is no different than what people mean when they say ASP.NET. It’s a language (C# or VB.NET), a framework (all the class libraries) and a web module for IIS. The fact that ASP.NET is a further ahead than the out-of-the-box PHP offering doesn’t mean you aren’t comparing apples to apples – it just means one apple is a lot shinier than the other. But PHP *IS* a language, it *IS* a set of libraries and it *IS* a web framework. That said, I agree that there are a lot of PHP “frameworks” which greatly enhance PHP. I don’t know much (anything?) about them, but I’m readily willing to admit they bring PHP very close to ASP.NET. I have no hard data, but I think their use is limited (I’m happy to accept any corrections). And, as I’ve said before, it’s the programmer not the language (or framework). The point I agree with the most is Dan’s last one – any popular language (which both really are) is bound to have terrible programmers because most programmers are terrible. But that doesn’t explain why the PHP framework itself is terrible (which I realize is a near impossible pill to swallow). That the PHP framework is terrible though doesn’t help create better programmers. I actually think that .NET is creating better programmers – at a snail’s pace. I can put all of my methods in a single class (or to be even more evil, put all my logic in a single method) in any language with “enforced OO.” I don’t think this argument holds any water. 
And, any insanely popular language is bound to have terrible programmers, because most people are terrible programmers. But yeah, PHP sucks. I think one thing that many, many people get wrong is comparing a language to a framework. How many people can write a large or medium application in VB or C# easily and quickly without using the .NET framework? Not Many. There are frameworks available for PHP (CakePHP, CodeIgnitor, Zend Platform, etc) and those make PHP programming much easier for larger projects. They are not as robust as .NET (yet), but they do enforce good programming principles and design patterns (MVC, etc). I am not saying that PHP with a framework would be better than .NET, I am just saying that it would be closer to an apples-apples comparison. Also, I do not agree that PHP programmers do not understand programming. Giving an example of the worst coding possible in PHP does not support your argument. [e]lementar two points. First, even if everything you say is true, it doesn’t address the original point, which is, put bluntly: PHP developers don’t know how to program – they don’t understand OO, don’t understand layering, design patterns or simple structured exception handling. Of course it’s a generalization…just like it was for VB developers back in the day. As for the cost…I said it about 5 times, but I’ll say it again, I’m talking about medium to large software projects or enterprise development. This is the type of project where the cost of software and tools is less than 5% of the TOC. Your view is incredibly shortsighted. hm …come on php is free, fast, flexible and fashinates everybody it’s also easy to understand and build anything on da web . Php code exists everywhere and free spreads the personal education of the future programmer to the level that he gonna simply love it . Hence that you like ASP then how you can work with it from da moment that you need to buy everything even the addons cause is not all included there.. 
php if free and runs on Linux.Wins,solaris etc ASP do not We well all know that if you like c,c++ then you gonna love php . Whether it be object oriented or procedural it wouldn’t matter to me. I can do both. Bascially it should be whatever needs to get the job finished. Sometimes you can/can’t have both. But, whatever. To each his/her own. Hi, I’ve worked a lot VBA as well as with PHP. I like to specify two mayor differences. 1.) VB has a drag and drop IDE where you write code for events. A lot is magic is done in the background. All kinds of objects are automaticly created and the developer has to look into the manual or use the debugger to see which objects you can use. PHP starts with a blank sheet. The only variables/objects there are, are those you defined in the sourcecode. 2.) VBA supports something like OO, but it has no inheritance and therefor you can never create any sort of OO framework. In PHP this works as a charm. Differences like these (and a lot more) make the feel of the programming languages are as different as ANSI C and LISP. Conclusion: Both PHP and VB are exelent to quickly build small apps. But PHP is also an exelent programming language for large projects. The fact that you see little examples showing the power of PHP5 is that is has yet to be discovered by most. I think we’re both arguing for the same thing. I don’t disagree that ASP.NET is much more appropriate for large businesses that require web presence. Just pointing out that SMEs are more likely to go for the cheaper *startup* option which is a LAMP stack, whether or not it is the ideal choice. If there’s one benefit of ASP.NET over PHP that I had to pick it’s that the view is fairly well abstracted from the model, which can be particularly difficult in PHP without some discipline. Geeezz Scott can I steel your wisdom and pass it off as my own plz? 
As I've said, it's always more about the developer than the language (and I frequently use the word discipline also), but some "features" are abused so often that at some point the language has to stand up and be accountable. Ala'a A. Ibrahim, thanks for the feedback. Fregas responded to you better than I could have. Although I don't think you missed the point, I do think you just happen to be the kind of PHP programmer I WASN'T talking about – and I truly believe you to be in the minority… Barry, on the free point…ignoring Mono since I know little about it, I just can't agree with your line of reasoning – it seems incredibly shortsighted. You are absolutely right for a huge number of small-to-medium sites out there, but at the risk of repeating myself: as your site grows in complexity, or you are dealing with even a remotely complex business, the cost of commercial products is only going to be a fraction of the total cost of ownership. I know Microsoft people throw around TCO like it was candy, but many studies have pegged code maintenance at 80-90% of the total costs of software. Even if development tools and platform magically accounted for that other 10-20% (which would mean planning and development would be 0%), you'd still be better off trying to optimize the 80-90% than cut down on the 10-20%. I do agree though that it's still a compelling point in PHP's favor. Just like the fact that it's cross-platform. Even though only very few large projects require it, it's certainly a plus in its favor that MS offerings don't have. I don't see anything wrong with the statement. I would imagine that an experienced programmer wouldn't think of re-purposing a variable just because he could. A less-experienced programmer would require a tool to keep him in line. It really has more to do with discipline than experience, but discipline usually shows up only when it's got a good reason, and good reasons usually come with experience.
——–
Now, PHP being a loosely typed language, well for me I don't see it a problem, I like to consider it a feature, an experienced programmer can make a very good use of such feature, but newbies would could fall in it.
——–
Quick survey: How many people see exactly what's wrong with the above statement? I'm in the middle of rebuilding a project from the ground up because the idiots before me "repurposed" literally hundreds of variables because they could. Personally, if you need to change either the underlying type of a variable or its reason to exist, then you should create a brand new one and move on. If you pull this garbage then you need to go back to school and learn some new words like D-E-S-I-G-N and R-O-B-U-S-T. I think you've missed a couple of fairly important points about PHP. Firstly, the syntax and style owe much to Unix, Perl, and shell. This lowers the barrier to entry, as it is a familiar style and syntax, much like Basic is, well, basic. Secondly, it is free. "This isn't an anti-PHP rant – it's a using-the-right-tool-for-the-job rant." – well, unfortunately there are a lot of people in this world; some can't, and some won't, pay licensing fees to Microsoft either directly or via their ISP. The same goes for the development environment. So consequently some people have to use the *only* tool available. Both points combined, and you get a very mixed range of developer ability, with developers who are very unlikely to ever try to use OO principles when designing web sites. Finally, PHP is originally from 1994, when people didn't know any better about writing web languages (OO was fairly new as well). It's unlikely to ever undergo the massive re-engineering that ASP received as part of .NET, and besides, there's Mono's implementation if people want ASP.NET-style development for free. Ala'a A. Ibrahim, I think you missed the point. I understand being defensive of your favorite language.
Karl isn't saying that PHP is bad necessarily, but just that it fits a particular niche, that is, small sites and hobbyist programmers, the same way VB6, classic ASP and similar languages did. Allow me to respond to some of your comments. "yes, it's not fully OOP, but wrappers exist for all the old procedural functions, if you couldn't find one, you can easily build one." Yes, but what incentive is there for you to build such wrappers? Who has time to do that? If they are not already OO, then most developers are just going to use the plain old procedural framework that's already there. And as Karl mentioned, there isn't much of a cultural push among the PHP community to build ANYTHING in an OO fashion, much less convert the existing framework. Part of it is the language, and part of it is the developers using it. Since you don't have that experience, it is hard to know what you are missing. I've used ColdFusion, VB6 and classic ASP, and I can tell you I don't want to go back to them having used .NET. Strongly typed languages tend to make refactoring and bug fixing a lot easier on midsize to large projects. "Also PHP is an interpreted language, which turns agile methodologies a much simpler task than compiled languages, you don't have to compile each time you run a test, you just run, and fix immediately." I don't think being interpreted necessarily makes something agile. I kind of understand your point, how being able to save and refresh the page gives you "immediate feedback", which is an agile principle. Also, you can do that in ASP.NET…you don't necessarily have to recompile the whole project. It depends on how you like to work. However, interpreted languages have this disadvantage: you don't know that an error exists until you actually run that page or screen or piece of code. This can come back to bite you later. Strongly typed languages aren't perfect, but at least when you compile the whole project, you know that there are not any syntax or type related errors.
With interpreted languages, you get no such feedback unless you run every piece of code. Nice article. I've written many web apps using either PHP or ASP.NET. I must agree with all the stuff you've described. Writing something "bigger" in PHP is a real pain. The omnipresent procedural approach is not so convenient and it doesn't speed up writing the code. The worse thing is that every major release of PHP differs so much from its predecessor. I started learning PHP when the third release was "on top". The fourth version of PHP was different, and the current, fifth version presents yet another approach to creating applications. For somebody who starts learning PHP now it doesn't matter, but many older programmers have problems when they must learn new features in a totally different way. Well, thank you for this nice article, but let me disagree with you. PHP is a relatively easy language to program in; this is what brought in the inexperienced programmers (so you see a lot of spaghetti code around), but it's a very powerful language for the experienced. Yes, it's not fully OOP, but wrappers exist for all the old procedural functions; if you can't find one, you can easily build one. Tools: well, if you take a look at Zend Studio or PHPEclipse, I guess they are very powerful IDEs that can make your day much easier. Now, one of the most important features of PHP is the manual; you can never get lost about something built into the language. Unlike other languages, PHP has a great manual, with good examples and descriptions. Well, I don't know which community you are talking about (maybe the PHP newbies community) but unit testing and patterns and other agile stuff are widely known in the PHP community. Also, searching the Internet for a PHP script to do something is so easy, and you always get the source code; you can find almost any script that does things for you in PHP.
Also, PHP is an interpreted language, which makes agile methodologies a much simpler task than with compiled languages: you don't have to compile each time you run a test; you just run, and fix immediately. Thanks for the article, it was really nice.
http://codebetter.com/karlseguin/2006/11/27/is-php-the-new-vb6/
Back. Apple's presentation didn't disappoint the hungry crowd. We hoped for a modern filesystem, optimized for next-generation hardware, rich with features that have become the norm for data centers and professionals. With APFS, Apple showed a path to meeting those expectations. Dominic Giampaolo and Eric Tamura, leaders of the APFS team, shared performance optimizations, data integrity design, volume management, efficient storage of copied data, and snapshots—arguably the feature of APFS most directly in the user's control.

Far from vaporware, Apple made APFS available to registered developers that day. The company included it in macOS Sierra as a technology preview. You can play with APFS today, and a lot of the features are there. You can use space sharing to carve up a single disk into multiple volumes. You can see the speed of its directory size calculation—nearly instantaneous—compared with the slow process on HFS+. You can use clones to make constant-time copies of files or directories. At WWDC, Apple demonstrated the feature folks were the most eager to play with: snapshots. Tamura used snapshotUtil to create, list, and mount snapshots. But early adopters quickly discovered that snapshotUtil wasn't part of the APFS technology preview. Apple promised delivery in 2017. We all double-checked our HFS backups and waited.

A brand new day

It's 2017, and Apple already appears to be making good on its promise with the revelation that the forthcoming iOS 10.3 will use APFS. The number of APFS tinkerers using it for their personal data has instantly gone from a few hundred to a few million. Beta users of iOS 10.3 have already made the switch, apparently without incident. They have even ascribed unscientifically-significant performance improvements to APFS. With APFS taking the next step, I decided to check back in on snapshots.
There had been no news from Apple and nothing obviously new in macOS updates, but back in June I wrote about a clue Apple had left in macOS Sierra:

I used DTrace (technology I'm increasingly amazed that Apple ported from OpenSolaris) to find a tantalizingly named new system call, fs_snapshot; I'll leave it to others to reverse engineer its proper use.

With its proper use still, apparently, a mystery, and APFS freshly of interest, I dove back in.

The game is afoot

First, a little background. An operating system roughly divides the world into the kernel and user processes. The kernel can, for the most part, do anything. It can talk to hardware devices; it can access all memory; it can execute privileged instructions. In short, it has unfettered access. The kernel provides abstractions and imposes security for regular user processes. Have you ever seen 'kernel_task' in Activity Monitor? That's the kernel using CPU, memory, or other resources. User programs are everything else: applications you run, the Finder, the windowing system, even the Dock or other pieces that modern parlance includes as part of the "operating system."

A system call is simply a way for a user process to communicate with the kernel. If a program wants to write data to disk or get a larger memory allocation, it needs the kernel to verify permissions and execute those tasks; the system call is the mechanism that the user process uses. Note that the root user (or "sudo") still relates to user processes, just ones that the kernel imbues with greater privileges.

I used DTrace to find the system call. DTrace is the dynamic tracing facility I co-authored at Sun with Bryan Cantrill and Mike Shapiro. It provides visibility into the whole system, from the kernel and device I/O to Java or Swift function calls. Naturally, DTrace includes visibility into system calls.
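As a rough illustration of the user/kernel boundary described above (in Python rather than C, and not from the article), the os module's low-level file functions are thin wrappers over exactly these system calls, so every line below crosses into the kernel and back:

```python
import os
import tempfile

# Each os.* call here is a thin wrapper around a system call:
# open(2), write(2), read(2), close(2), unlink(2).
path = os.path.join(tempfile.gettempdir(), "syscall_demo.txt")

fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC, 0o644)
os.write(fd, b"hello, kernel\n")
os.close(fd)

fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 1024)
os.close(fd)
os.remove(path)

print(data)  # b'hello, kernel\n'
```

Tools like DTrace (or strace on Linux) sit at exactly this boundary, which is why they can see everything a process asks the kernel to do.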
Apple ported DTrace from Solaris in 2006; a typical Mac has hundreds of thousands of probes, discrete points of instrumentation; we can list them with dtrace -l:

$ sudo dtrace -l | wc -l
  415636

(Note that some parts of DTrace are protected by SIP and need to be disabled before you can use them!)

I found the system call of interest by looking through DTrace system-call probes:

$ sudo dtrace -l -n syscall:::entry | grep snapshot
 1129    syscall    stack_snapshot_with_config    entry
 1183    syscall    fs_snapshot                   entry

DTrace is an incredibly powerful tool for understanding how a system is behaving. Here, however, we're just taking advantage of how DTrace can show us a definitive list of system calls. We can also see the fs_snapshot system call in the file /usr/include/sys/syscall.h (you'll need the Xcode developer tools installed to do this):

$ grep fs_snapshot /usr/include/sys/syscall.h
#define SYS_fs_snapshot    518

It's a little more straightforward, but less definitive, since there's no guarantee that code in a header file matches the running kernel.

A simple Google search for fs_snapshot immediately pointed me in the right direction, turning up a file in XNU on Apple's open source website. XNU is the macOS kernel that came over from NeXT. Run uname -v and you'll see the specific XNU version that your computer is running. For well over a decade, Apple has made XNU available as open source (and has done the same for many other macOS components). For a company known for its secrecy, it's commendable that Apple has built such a tradition of transparency with at least some subset of their software. Commendable and quite the boon for anyone trying to enable an unpublished feature!

The first snapshot

Learning from XNU and making some educated guesses, I wrote my first C program to create an APFS snapshot.
This section has a bit of code, which you can find in this Github repo:

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/syscall.h>

int
main(int argc, char **argv)
{
	int ret;
	int dirfd = open(argv[1], O_RDONLY, 0);

	if (dirfd < 0) {
		perror("open");
		exit(1);
	}

	ret = syscall(SYS_fs_snapshot, 0x01, dirfd, argv[2], NULL, NULL, 0);
	if (ret != 0)
		perror("fs_snapshot");

	return (0);
}

Now to test it. First, I created an APFS volume and mounted it:

$ hdiutil create -size 1g -fs APFS -volname "APFS" apfs.dmg
...................................................................
created: /Users/ahl/src/apfs_snap/apfs.dmg
$ hdiutil mount apfs.dmg
/dev/disk2              GUID_partition_scheme
/dev/disk2s1            Apple_APFS
/dev/disk2s1s1          41504653-0000-11AA-AA11-0030654    /Volumes/APFS
$ mount | grep /Volumes/APFS
/dev/disk2s1s1 on /Volumes/APFS (apfs, local, nodev, nosuid, journaled, noowners, mounted by ahl)

Then, I tried to take the first APFS snapshot outside of Apple (that we know of, at least):

$ ./firstSnap /Volumes/APFS first_snap
fs_snapshot: Operation not permitted

Anticlimactic. The "Operation not permitted" error message corresponds to the error code EPERM, whose value is 1. We need to find out where that error is coming from. Fortunately, DTrace can help us figure out what's going on. DTrace uses its own language to describe probes and actions; here's a simple script with comments about what each clause does:

#!/usr/sbin/dtrace -s

#pragma D option flowindent

/*
 * When a thread calls the fs_snapshot system call, set a
 * thread-local variable called 'follow' to 1.
 */
syscall::fs_snapshot:entry
{
	self->follow = 1;
}

/*
 * For every function entry and return in the kernel (of
 * which there are many!), if the thread has its 'follow'
 * value set, print out the first two arguments (or
 * the offset and return value for a return probe).
 */
fbt:::
/self->follow/
{
	printf("%x %x", arg0, arg1);
}

/*
 * When the thread returns from the fs_snapshot system
 * call, set follow to 0 and exit this DTrace invocation
 * (thus removing all instrumentation).
 */
syscall::fs_snapshot:return
/self->follow/
{
	self->follow = 0;
	exit(0);
}

Running this DTrace script in one terminal while running the snapshot program in another shows the code flow through the kernel as the program executes:

$ sudo ./fs_snapshot.d
dtrace: script './fs_snapshot.d' matched 137082 probes
CPU FUNCTION
  6  -> fs_snapshot              ffffff8034f8b5a0 ffffff805c685330
  6    -> vfs_context_current    ffffff8034f8b5a0 ffffff805c685330
  6    -> priv_check_cred        ffffff80575447f0 36b2
  6      -> mac_priv_check       cd 1
  6    <- priv_check_cred        56 1
  6  <- fs_snapshot              def 1
  6  <= fs_snapshot

Note first that DTrace enabled 137,082 discrete points of instrumentation, and that when the script exits it removes all of that instrumentation, restoring the system to its normal state. In the code flow, the priv_check_cred() function jumps out as a good place to continue because of its name, the fact that fs_snapshot calls it directly, and the fact that it returns 1, which corresponds with EPERM, the error we were getting. Looking again at the XNU source code, we find this delightful comment:

/*
 * Check a credential for privilege. Lots of good reasons to deny privilege;
 * only a few to grant it.
 */
int
priv_check_cred(kauth_cred_t cred, int priv, __unused int flags)
{

Apple engineers aren't without their own particular brand of humor. Walking through the function, it becomes clear that fs_snapshot expects to be run with sudo. World's first non-Apple snapshot, take two!

$ sudo ./firstSnap /Volumes/APFS first_snap

No output. Did it work? Let's try again:

$ sudo ./firstSnap /Volumes/APFS first_snap
fs_snapshot: File exists

By "File exists" let's assume that it means that the snapshot named first_snap already exists. Success!
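The errno convention used above (symbolic name EPERM, numeric value 1, and a fixed message string formatted by the C library) can be checked from any language. A quick Python sketch, purely illustrative and not part of the article:

```python
import errno
import os

# EPERM is the same error code the fs_snapshot call returned:
# symbolic name, numeric value 1, and the C library's message for it.
print(errno.EPERM)               # 1
print(os.strerror(errno.EPERM))  # Operation not permitted
```

This is exactly why perror("fs_snapshot") in the C program printed "fs_snapshot: Operation not permitted": perror looks up the message for the current errno value.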
https://arstechnica.com/gadgets/2017/02/testing-out-snapshots-in-apples-next-generation-apfs-file-system/
NAME

Chj::ruse - reload modules

SYNOPSIS

 use FP::Repl;
 use Foo;
 use Chj::ruse;
 use Bar qw(biz bim);
 repl;
 # edit the Foo.pm or Bar.pm files, then (possibly from the repl):
 > ruse;
 # reloads all changed modules, and re-does all imports
 # which have happened for those modules since Chj::ruse has
 # been loaded.

DESCRIPTION

Extended copy of Module::Reload which modifies Exporter.pm so that exports are tracked, so that these are redone as well. One function is exported: ruse. It does the equivalent of Module::Reload->check, and re-exports stuff as far as possible.

The function takes an optional argument: a temporary new debug level, which shadows the default one stored in $Chj::ruse::DEBUG. 0 means no debugging info. -1 means be very silent (set $^W to false, to prevent warnings about redefinitions of subroutines *which are NOT in the namespace being reloaded*; subroutines in the namespace being reloaded are deleted first, so a warning is never given in this 'normal' case).

BUGS

Each time import is called on a particular module ("use Foo qw(biz baz)"), the import arguments from the previous call are forgotten. Thus if a module is use'd/import'ed multiple times in a row from the same source file, only part of the import list is remembered, and not everything is re-exported by ruse. (Can this be solved?)

Hm, if an error prevented a module from having been loaded, somehow reload doesn't (always?) work. Why?

This module might have problems with threads - I don't know whether other threads might try to run subroutines which have been deleted but not yet defined again.
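For readers more familiar with Python than Perl, a rough analogue of what ruse does (minus the re-export tracking, which is the hard part) is importlib.reload: re-execute a module's source in place after editing it on disk. This is an assumed illustration, not part of Chj::ruse; the module name ruse_demo_mod is invented here:

```python
import importlib
import pathlib
import sys
import tempfile

# Write a throwaway module, import it, edit it on disk, then reload it.
tmpdir = tempfile.mkdtemp()
mod_path = pathlib.Path(tmpdir) / "ruse_demo_mod.py"
mod_path.write_text("VERSION = 1\n")

sys.path.insert(0, tmpdir)
import ruse_demo_mod
print(ruse_demo_mod.VERSION)  # 1

# Simulate editing the file (different length, so the bytecode
# cache is invalidated), then reload the already-imported module.
mod_path.write_text("VERSION = 2\nPATCHED = True\n")
importlib.reload(ruse_demo_mod)
print(ruse_demo_mod.VERSION)  # 2
```

As with ruse, names already bound elsewhere (the Python analogue of Exporter's imports) are not automatically rebound by reload, which is precisely the gap Chj::ruse tries to close for Perl.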
https://metacpan.org/pod/release/PFLANZE/FunctionalPerl-0.72/lib/Chj/ruse.pm
I am trying to build a cartogram like here. Installation from the link does not work:

install.packages('Rcartogram', repos = '', type = 'source')

Installing package into 'C:/Users/Milena/Documents/R/win-library/3.2'
(as 'lib' is unspecified)
Warning in install.packages :
  package 'Rcartogram' is not available (for R version 3.2.0)

install.packages("C:/Users/Milena/Downloads/Rcartogram_0.2-2.tar.gz", repos = NULL, type = "source")

Installing package into 'C:/Users/Milena/Documents/R/win-library/3.2'
(as 'lib' is unspecified)
* installing *source* package 'Rcartogram' ...

   **********************************************
   WARNING: this package has a configure script
         It probably needs manual configuration
   **********************************************

** libs

*** arch - i386
Warning: running command 'make -f "Makevars" -f "C:/PROGRA~1/R/R-3.2.0/etc/i386/Makeconf" -f "C:/PROGRA~1/R/R-3.2.0/share/make/winshlib.mk" SHLIB="Rcartogram.dll" OBJECTS="Rcart.o cart.o"' had status 127
ERROR: compilation failed for package 'Rcartogram'
* removing 'C:/Users/Milena/Documents/R/win-library/3.2/Rcartogram'
Warning in install.packages :
  running command '"C:/PROGRA~1/R/R-3.2.0/bin/x64/R" CMD INSTALL -l "C:\Users\Milena\Documents\R\win-library\3.2" "C:/Users/Milena/Downloads/Rcartogram_0.2-2.tar.gz"' had status 1
Warning in install.packages :
  installation of package 'C:/Users/Milena/Downloads/Rcartogram_0.2-2.tar.gz' had non-zero exit status

sessionInfo()

R version 3.2.0 (2015-04-16)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 8 x64 (build 9200)

other attached packages:
[1] fftw_1.0-3

loaded via a namespace (and not attached):
[1] tools_3.2.0

Rcartogram is an R package (by Duncan Temple Lang) whose main purpose is to provide an R wrapper for some C code (written by Mark Newman) which actually does the work of constructing a cartogram (aka morphing a map). The C code written by Mark Newman makes use of the FFTW (Fastest Fourier Transform in the West) compiled library.
The link in your question by Truc Viet Le describes how to install Rcartogram on a Unix system. There are a few extra tricks and traps involved in getting Rcartogram onto a Windows system, even though at its heart it is pretty much the same process. To install Rcartogram on a Windows system you need first to install all the pre-requisites, namely Rtools and the FFTW library, discussed below. You then need to tell R where to find the FFTW library when you are first installing Rcartogram, and you will almost certainly need to let R know where to find the FFTW library whenever you load Rcartogram, eg via library(Rcartogram) in an R session. I found I also needed to make a few very small changes to the Rcartogram R code (mostly to bring it into line with changes to R syntax since it was written) in order to get it to install happily, and run correctly, on Windows. So the full answer involves several steps.

I suspect you need to install Rtools in order to get past the status 127 error. The official instructions on how to do that are here. There are user-friendly explanations of how to install Rtools on a Windows system elsewhere on the web as well. (The official instructions tell you how to install lots of other stuff too, that you need if you want to build R itself from source on Windows, or produce package documentation using LaTeX, but you don't need all that stuff to get Rcartogram working).

A bit longer answer: I can now replicate your status 127 error, by removing the references to the directories where Rtools lives from my PATH. When I do that, the Windows cmd shell (where you might type R CMD INSTALL …) can't find the Rtools executables, and that results in the 127 error message. Likewise, trying to run install.packages() from within R also fails the same way, since under the hood install.packages calls the Windows cmd shell.

Why do you need Rtools? Well, Rcartogram is a package which includes C code, as well as pure R code. In fact it is mostly C code – from Mark Newman.
Installing from source for a package which includes C code requires a C compiler. In fact it is best (almost essential) that it is the same C compiler that R itself was built from. That is what Rtools mostly is – an installable on Windows version of a C compiler. Running a C compiler in Windows needs a few extra shell commands (aka small programs) as well and that is what the rest of Rtools is. Most of the (open source) C community seem to work in a Unix (or variant thereof) world and those extra commands (and indeed the C compiler itself) are part of the “standard” system in Unix. It’s only those of us who work in Windows who need to install Rtools, which is a port of the necessary tools from Unix to Windows. Initially I got the FFTW library from here . There are two versions, a 32 bit version and a 64 bit version. On a Windows 64 bit machine you need both versions. (Aside, well there may be a way you can get away with only one, by setting flags when you install Rcartogram, but I haven't tested that route myself yet). Unzip the 32 bit version into a sub-directory /i386, and the 64 bit version into a subdirectory /x64. In my case (see below), I made these as subdirectories of "C:/msys/fftwtest". (Aside these subdirectories are conventions that R uses - you could theoretically put them elsewhere, but why make extra complications!). One trap that stumped me for quite a while was that these libraries are dynamic libraries (ie .dll s) and so - later on - I needed to make sure that when I installed them on my pc I put them in locations that were on my PATH (or alternatively I altered my PATH by adding in the locations - aka directories - where they were installed) since otherwise I got very unhelpful error messages in the final checks that R does before it finishes installing a package. Both the 32 bit and 64 bit (sub) directories should be included in your PATH. 
The trick to telling R (on a Windows machine) where to find the FFTW libraries when it is trying to install Rcartogram is to add a src/Makevars.win file into the src subdirectory of the Rcartogram package. That means you will have to unzip and untar the Rcartogram tar.gz file before you can make this change. (Aside: I use 7zip to uncompress these types of files on my machine). My src/Makevars.win file (it is a text file) has 2 lines:

PKG_CPPFLAGS=-I"C:\msys\fftwtest\x64\include" -DNOPROGRESS
PKG_LIBS=-L"C:\msys\fftwtest\x64\lib" -L"C:\msys\fftwtest\i386\lib" -lfftw3 -lm

The file names in quotes are where I put my versions of the FFTW library. (These aren't exactly the ones I downloaded; along the way I learnt how to compile FFTW from source and made my own copies, but explaining how to do that is a looong story so I won't attempt it here). The directory mentioned in the PKG_CPPFLAGS line is the one containing a header file called fftw3.h that the C pre-processor needs. It doesn't matter whether you point at the 32 bit (\i386 subdirectory) or the 64 bit (\x64 subdirectory) version - the fftw3.h file is a C source file and is the same no matter what architecture R is installing for. The 2 directories mentioned in the PKG_LIBS line are the ones where files called libfftw3.something can be found, and that the linker needs when it is putting everything together at the end of the compilation step. something might be ".dll" (in which case the subdirectory might be \bin instead of \lib), or it might be ".a" or ".la" (which is what R looks for when it uses the static FFTW libraries which I created once I had learnt how to compile FFTW from source). 2 directories are needed because R by default tries to install both 32 bit and 64 bit versions of Rcartogram on Windows machines.
If you supply FFTW library files in .dll format, then these are the exact same libraries that must be on your PATH (because when you try to do library(Rcartogram), R needs to find the FFTW dll libraries again while it is loading the installed Rcartogram package). (Aside: that's why in the end I compiled my own static FFTW libraries, so I didn't have to mess with my PATH variable in the Windows environment). If you are using the downloaded binaries from the link above, the fftw3.h and the libfftw3.dll files are all in the same (sub) directory, and the libfftw3.dll file is actually called libfftw3-3.dll, so in this case your src/Makevars.win file would need to be:

PKG_CPPFLAGS=-I"main libfftw directory\x64" -DNOPROGRESS
PKG_LIBS=-L"main libfftw directory\x64" -L"main libfftw directory\i386" -lfftw3-3 -lm

The key differences from my src/Makevars.win are:

- "main libfftw directory", ie the parent directory under which you created the /i386 and /x64 subdirectories when you unzipped the downloaded FFTW binaries,
- there are no \include and \lib sub-sub-directories, and
- -lfftw3 changes to -lfftw3-3 (note also that there must be a space in front of each - (minus) sign at the start of the -L and -l flags).

What is the Makevars.win file doing? It is telling the R install process the flags that it will need when it tries to preprocess, compile and link the C code in Rcartogram's src subdirectory. The value of PKG_CPPFLAGS is a set of flags for the C pre-processor, and the value of PKG_LIBS is a set of flags for the link step. -I is a flag that says 'try looking in the following directory when the C pre-processor is looking for include files', so in the example above it says to look in "main libfftw directory\x64".
The include file it seeks is fftw3.h (that filename is buried in the C code inside Rcartogram). The -L flag says 'try looking in the following directory when the linker is looking for files from any library that you expect to use', so -L"main libfftw directory\x64" says try looking in the "main libfftw directory\x64" directory. You can (and need to) have more than 1 directory on that search path - the linker just keeps looking till it finds what it is looking for (or runs out of places to look and gives an error message). And the -l flag gives the name of the library file that the linker should look for, but not verbatim --- instead the name is constructed from what you enter, following a (slightly crazy to me) convention from the unix world. Because the file name of the library always begins with "lib", the first part of the convention is that you leave "lib" out of the name you put in the flag. The file name of the library can have several different extensions (eg ".dll" or ".a"), so the second part of the convention is that you leave the file extension out of the -l flag value as well, and let the linker sort out what it wants. So -lfftw3 says look for a file called either libfftw3.dll or one called libfftw3.a (there may be other possible extensions as well, I'm not sure). The downloaded dlls are actually called libfftw3-3.dll (unlike the ones I compiled myself, which are called libfftw3.a), hence the need to change the -l flag to -lfftw3-3. NB: If you are using the downloaded FFTW library which uses .dlls, make sure you have put them on your PATH (see the last para of step 2) as well.
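The -l naming convention described above is mechanical enough to write down. The helper below is purely illustrative (it is not a real build tool, and lib_candidates is an invented name); it just expands a -l flag value into the filenames the linker would consider:

```python
def lib_candidates(flag_value):
    """Expand a `-l<name>` linker flag into candidate filenames.

    Convention: prefix with "lib", try each platform extension.
    (Real linkers consider more extensions, e.g. .so on Unix.)
    """
    base = "lib" + flag_value
    return [base + ext for ext in (".dll", ".a")]

print(lib_candidates("fftw3"))    # ['libfftw3.dll', 'libfftw3.a']
print(lib_candidates("fftw3-3"))  # ['libfftw3-3.dll', 'libfftw3-3.a']
```

This makes the fix in the Makevars.win above concrete: the downloaded file is libfftw3-3.dll, so only -lfftw3-3 (not -lfftw3) generates a candidate name that actually exists on disk.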
I needed to add one more argument (namely PACKAGE = "Rcartogram") to the .Call function, so for example tmp = .Call("R_makecartogram", popEls, x, y, dim, as.numeric(blur)) became tmp = .Call("R_makecartogram", popEls, x, y, dim, as.numeric(blur), PACKAGE = "Rcartogram") Likewise, further down cart.R the altered .Call became .Call("R_predict", object, as.numeric(x), as.numeric(y), ans, dim(object$x), PACKAGE = "Rcartogram") Second, again in R/cart.R, I had to change tmp = rep(as.numeric(NA), length(x)) ans = list(x = tmp, y = tmp) to # Avoid problems with the same vector (tmp) being sent to C twice due to R's # copy-on-modify rules tmp_x = rep(as.numeric(NA), length(x)) tmp_y = rep(as.numeric(NA), length(y)) ans = list(x = tmp_x, y = tmp_y) This one took me a lot of work to find, but without it, the demo for Rcartogram gave the wrong results (even though it ran OK). You should now be able to install Rcartogram. Either by opening a cmd window, changing directory ( cd to) the location where the unzipped and modified Rcartogram package source code lives, and typing R CMD INSTALL --preclean . or by starting an R session, setting the working directory to wherever the Rcartogram source is and typing install.packages(".", repos = NULL, type = 'source', INSTALL_opts = "--preclean") The . works because you have cded to directory where the Rcartogram source code lives. The --preclean flag tells R to tidy up any leftover intermediate files from earlier (failed) attempts to compile the C code in Rcartogram before it begins. If you get this far and are still having trouble, there is also a --debug flag that can be added as well. It gives more detail about why the install is failing. I am just getting started actually using Rcartogram myself (it took me a while to get this far!), but you may want to check out the getcartr --- R package. That package uses Rcartogram, and it seems pretty neat! 
And the installation instructions given on the GitHub website worked first time for me (I do already have devtools installed and working, though). Hope this helps (and congratulations to anyone who has read this far).
https://codedump.io/share/J0UGvZ8s6Gcs/1/installing-rcartogram-packages---error-message
How to set colorbar range in matplotlib using Python

I have the following code:

import matplotlib as mpl
import matplotlib.pyplot as plt

cdict = {
    'red':   ((0.0, 0.25, 0.25), (0.02, 0.59, 0.59), (1.0, 1.0, 1.0)),
    'green': ((0.0, 0.0, 0.0), (0.02, 0.45, 0.45), (1.0, 0.97, 0.97)),
    'blue':  ((0.0, 1.0, 1.0), (0.02, 0.75, 0.75), (1.0, 0.45, 0.45))
}

cm = mpl.colors.LinearSegmentedColormap('my_colormap', cdict, 1024)

plt.clf()
plt.pcolor(X, Y, v, cmap=cm)
plt.loglog()
plt.xlabel('X Axis')
plt.ylabel('Y Axis')
plt.colorbar()
plt.show()

This produces a graph of the values v on the axes X vs Y, using the specified colormap. The X and Y axes are perfect, but the colormap spreads between the min and max of v. I would like to force the colormap to range between 0 and 1.

You could scale your data to the range between 0 and 1 and then modify the colorbar:

import matplotlib as mpl
...
ax, _ = mpl.colorbar.make_axes(plt.gca(), shrink=0.5)
cbar = mpl.colorbar.ColorbarBase(ax, cmap=cm,
                                 norm=mpl.colors.Normalize(vmin=-0.5, vmax=1.5))
cbar.set_clim(-2.0, 2.0)

With the two different limits you can control the range and legend of the colorbar. In this example only the range between -0.5 and 1.5 is shown in the bar, while the colormap covers -2 to 2 (so this could be your data range, which you record before the scaling). So instead of scaling the colormap, you scale your data and fit the colorbar to that.
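A simpler route, at least on reasonably recent matplotlib versions (an assumption — this API has shifted across releases), is to pin the color scale with a Normalize object, or equivalently to pass vmin/vmax straight to pcolor. A minimal sketch:

```python
import matplotlib as mpl

# Pin the color scale to [0, 1] regardless of the data's actual min and max.
norm = mpl.colors.Normalize(vmin=0.0, vmax=1.0)

# Passing norm=norm (or simply vmin=0, vmax=1) to plt.pcolor makes both the
# plot colors and the plt.colorbar() legend use this fixed range.
# Normalize maps data values linearly onto [0, 1] for the colormap:
lo = float(norm(0.0))    # bottom of the scale
mid = float(norm(0.5))   # halfway up the colormap
hi = float(norm(1.0))    # top of the scale
```

With this, something like plt.pcolor(X, Y, v, cmap=cm, norm=norm) followed by plt.colorbar() should show a bar fixed to the 0-1 range, without having to rescale the data first.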
https://www.edureka.co/community/31163/how-to-set-colorbar-range-in-matplotlib-using-python?show=31164
#include <stddef.h>
#include "gromacs/utility/basedefinitions.h"
#include <type_traits>

C-style memory allocation routines for GROMACS. This header provides the macros snew(), srenew(), smalloc(), and sfree() for C-style memory management. Additionally, snew_aligned() and sfree_aligned() are provided for managing memory with a specified byte alignment.

If an allocation fails, the program is halted by calling gmx_fatal(), which outputs the source file and line number and the name of the variable involved. This frees calling code from the trouble of checking the result of the allocations everywhere. It also provides a location for centrally logging memory allocations for diagnosing memory usage (currently this can only be enabled by changing the source code). Additionally, sfree() also works with a NULL parameter, which standard free() does not.

The macros forward the calls to the functions save_malloc(), save_calloc(), save_realloc(), save_free(), save_calloc_aligned(), and save_free_aligned(). There are a few low-level locations in GROMACS that call these directly, but generally the macros should be used. save_malloc_aligned() exists for this purpose, although there is no macro to invoke it.

Frees memory referenced by ptr. ptr is allowed to be NULL, in which case nothing is done.

Frees aligned memory referenced by ptr. This must only be called with a pointer obtained through snew_aligned(). ptr is allowed to be NULL, in which case nothing is done.

Allocates memory for a given number of bytes. Allocates memory for size bytes and sets this to ptr. The allocated memory is initialized to zero.

Allocates memory for a given number of elements. Allocates memory for nelem elements of type *ptr and sets this to ptr. The allocated memory is initialized to zeros.

Allocates aligned memory for a given number of elements. Allocates memory for nelem elements of type *ptr and sets this to ptr. The returned pointer is aligned on an alignment-byte boundary. The allocated memory is initialized to zeros.
The returned pointer should only be freed with sfree_aligned().

Reallocates memory for a given number of elements. (Re)allocates memory for ptr such that it can hold nelem elements of type *ptr, and sets the new pointer to ptr. If ptr is NULL, memory is allocated as if it were new. If nelem is zero, ptr is freed (if not NULL). Note that the allocated memory is not initialized, unlike with snew().

Returns the new allocation count for domain decomposition allocations. Returns n when domain decomposition over-allocation is off. Returns OVER_ALLOC_FAC*n + 100 when over-allocation is on. This is to avoid frequent reallocation during domain decomposition in mdrun.

Over-allocation for small data types: int, real, etc.

GROMACS wrapper for allocating zero-initialized aligned memory. The returned pointer is aligned on an alignment-byte boundary. This should generally be called through snew_aligned(), not directly. The returned pointer should only be freed with a call to save_free_aligned().

GROMACS wrapper for freeing aligned memory. If ptr is NULL, does nothing. ptr should have been allocated with save_malloc_aligned() or save_calloc_aligned(). This should generally be called through sfree_aligned(), not directly. This never fails.

GROMACS wrapper for allocating aligned memory. The returned pointer is aligned on an alignment-byte boundary. There is no macro that invokes this function. The returned pointer should only be freed with a call to save_free_aligned().

Turns over-allocation for variable-size atoms/cg/top arrays on or off; the default is off.

Over-allocation factor for memory allocations. Memory (re)allocation can be VERY slow, especially with some MPI libraries that replace the standard malloc and realloc calls. To avoid slow memory allocation we use over_alloc to set the memory allocation size for large data blocks. Since this scales the size with a factor, we use log(n) realloc calls instead of n. This can reduce allocation times from minutes to seconds. This factor leads to 4 realloc calls to double the array size.
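The arithmetic behind that last remark is easy to check: growing capacity by a constant factor means the number of reallocations scales with log(n) rather than n, and "4 realloc calls to double the array size" implies a factor near 2^(1/4) ≈ 1.19. The following Python sketch is only an illustration of the formula quoted above (the factor 1.19 is an inference, not a value read from the GROMACS source):

```python
def over_alloc(n, fac=1.19):
    # Mimic the documented formula: OVER_ALLOC_FAC*n + 100.
    # fac=1.19 is assumed here, inferred from "4 reallocs to double".
    return int(fac * n) + 100

def count_reallocs(final_size, grow):
    """Count how many (re)allocations occur while an array grows to
    final_size, when each realloc requests grow(needed) capacity."""
    capacity, reallocs = 0, 0
    for needed in range(1, final_size + 1):
        if needed > capacity:
            capacity = grow(needed)
            reallocs += 1
    return reallocs

# Exact-fit growth reallocs on every single size increase...
exact = count_reallocs(100_000, lambda n: n)
# ...while geometric over-allocation needs only a few dozen reallocs.
geometric = count_reallocs(100_000, over_alloc)
```

With exact-fit growth the loop reallocates 100,000 times; with the over-allocation formula it reallocates a few dozen times, which is the minutes-to-seconds difference the documentation describes.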
https://manual.gromacs.org/current/doxygen/html-lib/smalloc_8h.xhtml
Performance Optimizations for High Speed JavaScript

Description

In this article, we analyze several important JavaScript optimizations: using local function variables, avoiding references to objects or object properties, avoiding adding short strings to long strings, and finally, using buffering to process data in optimal sizes. These general-purpose JavaScript optimization techniques are designed for JavaScript on all browsers. Detailed graphs of all the performance results are given after each optimization. You will be amazed at the incredible speed improvements in JavaScript!

Introduction

Optimization of JavaScript code deserves attention, since JavaScript has a large impact on Web page performance. In this article we develop two high performance JavaScript algorithms using several performance optimization techniques.

Important JavaScript optimizations

Optimizations in a broad sense will involve simplifying code, precomputing results which are repeatedly reused, and organizing code so more results can be reused. From the standpoint of computer programming purity, optimizations should increase the simplicity, clarity and generality of a computer program. (See The Practice of Programming by Brian Kernighan and Rob Pike.) Adding simplicity, clarity and generality is what these optimizations will do. In fact, one of the optimizations adds support for Unicode and multibyte characters, and still improves performance drastically compared to the slower unoptimized version!

We analyze four optimizations in this article. More complicated optimizations or ones with bigger payoffs are listed after easy-to-use optimizations.

- Use local function variables (82% improvement, 63 milliseconds versus 359).
- Avoid references to objects or object properties (41% improvement, 27.68 microseconds versus 47.297).
- Avoid adding short strings to long strings (93% improvement, 4.608 microseconds versus 62.54).
- Use buffering to process data in optimal sizes (96% improvement, 2.0 seconds versus 50.0).

Technique 1: Use local function variables

When I wrote code to implement the MD5 message digest algorithm, my code was first written in C, then transferred to JavaScript. In order to make my JavaScript MD5 code faster than everyone else's, I had to take advantage of local function variables. What makes my MD5 code faster are local function variables and optimized buffer sizes, techniques that we address in this article.

Using local function variables is simple. If you have code which uses variables repetitively, it's worthwhile to turn the code into a function to take advantage of the higher performance of local function variables.

Global variables have slow performance because they live in a highly-populated namespace. Not only are they stored along with many other user-defined quantities and JavaScript variables, the browser must also distinguish between global variables and properties of objects that are in the current context. Many objects in the current context can be referred to by a variable name rather than as an object property, such as alert() being synonymous with window.alert(). The downside is that this convenience slows down code that uses global variables.

Sometimes global variables also have higher performance, like local function variables, if you declare them explicitly with the var keyword. An example is var d = document, instead of d = document, but it's not reliable. Mysterious behavior that works sometimes but not always is a danger sign, and I feel more comfortable using local function variables. On the other hand, it makes sense that local function variables should have better performance. There are few local function variables in most functions, and references to local function variables can be converted to efficient executable instructions by the JavaScript compiler.
It's amazing how few people are aware of simple JavaScript optimizations like this, or who simply don't care. For example, no one else has taken the time to re-optimize their MD5 JavaScript code. When I wrote my md5.js script, I had no intention of competing for the top position, and yet the code has been unbeaten since 2003. Let's figure out why local function variables make a difference by counting to a million, first without local function variables, and then with local function variables.

Counting to one million without local function variables

Tip: a new Date() object returns the time difference in milliseconds when it's subtracted from another new Date() object, thus providing a great way to time your scripts.

Counting to one million with local function variables

Code that gives us the results of the timing

The result is 359 milliseconds (thousandths of a second) when not using local function variables, compared to 63 milliseconds when local function variables are used. This improvement is worth taking the extra time to convert code into a function.
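The effect described here is not unique to JavaScript: in Python, too, local names are resolved faster than globals, because locals compile down to indexed lookups while globals go through a namespace dictionary on every access. As a rough Python stand-in for the counting experiment (the original JavaScript snippets are not reproduced above), one can compare the two styles like this:

```python
import timeit

counter = 0  # a global name: looked up in the module namespace on every access

def count_global(n):
    global counter
    counter = 0
    for _ in range(n):
        counter += 1  # each increment pays the global-lookup cost
    return counter

def count_local(n):
    c = 0  # a local name: compiled to a fast indexed lookup
    for _ in range(n):
        c += 1
    return c

# Time both versions; on a typical CPython build the local version wins.
t_global = timeit.timeit(lambda: count_global(100_000), number=20)
t_local = timeit.timeit(lambda: count_local(100_000), number=20)
```

The two functions are equivalent in result, and the local-variable version is usually the clear winner, mirroring the 359 ms versus 63 ms JavaScript numbers in spirit if not in magnitude.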
http://www.webreference.com/programming/javascript/jkm3/
Automate Sysadmin Tasks with Python's os.walk Function

Using Python's os.walk function to walk through a tree of files and directories.

os.walk Basics

Linux users are used to the ls command to get a list of files in a directory. Python comes with two different functions that can return the list of files. One is os.listdir, which means the "listdir" function in the "os" module. If you want, you can pass the name of a directory to os.listdir. If you don't do that, you'll get the names of files in the current directory. So, you can say:

In [10]: import os

When I do that on my computer, in the current directory, I get the following:

In [11]: os.listdir('.')
Out[11]: ['.git', '.gitignore', '.ipynb_checkpoints', '.mypy_cache', 'Archive', 'Files']

As you can see, os.listdir returns a list of strings, with each string being a filename. Of course, in UNIX-type systems, directories are files too—so along with files, you'll also see subdirectories without any obvious indication of which is which.

I gave up on os.listdir long ago, in favor of glob.glob, which means the "glob" function in the "glob" module. Command-line users are used to using "globbing", although they often don't know its name. Globbing means using the * and ? characters, among others, for more flexible matching of filenames. Although os.listdir can return the list of files in a directory, it cannot filter them. You can, though, with glob.glob:

In [13]: import glob

In [14]: glob.glob('Files/*.zip')
Out[14]: ['Files/advanced-exercise-files.zip', 'Files/exercise-files.zip', 'Files/names.zip', 'Files/words.zip']

In either case, you get the names of the files (and subdirectories) as strings. You then can use a for loop or a list comprehension to iterate over them and perform an action. Also note that in contrast with os.listdir, which returns the list of filenames without any path, glob.glob returns each filename with the path prefix you supplied, something I've often found to be useful.
But what if you want to go through each file, including every file in every subdirectory? Then you have a bit more of a problem. Sure, you could use a for loop to iterate over each filename and then use os.path.isdir to figure out whether it's a subdirectory—and if so, then you could get the list of files in that subdirectory and add them to the list over which you're iterating. Or, you can use the os.walk function, which does all of this and more. Although os.walk looks and acts like a function, it's actually a "generator function"—a function that, when executed, returns a "generator" object that implements the iteration protocol. If you're not used to working with generators, running the function can be a bit surprising: In [15]: os.walk('.') Out[15]: <generator object walk at 0x1035be5e8> The idea is that you'll put the output from os.walk in a for loop. Let's do that: In [17]: for item in os.walk('.'): ...: print(item) The result, at least on my computer, is a huge amount of output, scrolling by so fast that I can't read it easily. Whether that happens to you depends on where you run this for loop on your system and how many files (and subdirectories) exist. In each iteration, os.walk returns a tuple containing three elements: - The current path (that is, directory name) as a string. - A list of subdirectory names (as strings). - A list of non-directory filenames (as strings). So, it's typical to invoke os.walk such that each of these three elements is assigned to a separate variable in the for loop: In [19]: for currentdir, dirnames, filenames in os.walk('.'): ...: print(currentdir) The iterations continue until each of the subdirectories under the argument to os.walk has been returned. This allows you to perform all sorts of reports and interesting tasks. For example, the above code will print all of the subdirectories under the current directory, ".". 
Counting Files

Let's say you want to count the number of files (not subdirectories) under the current directory. You can say:

In [19]: file_count = 0

In [20]: for currentdir, dirnames, filenames in os.walk('.'):
    ...:     file_count += len(filenames)
    ...:

In [21]: file_count
Out[21]: 3657

You also can do something a bit more sophisticated, counting how many files there are of each type, using the extension as a classifier. You can get the extension with os.path.splitext, which returns two items—the filename without the extension and the extension itself:

In [23]: os.path.splitext('abc/def/ghi.jkl')
Out[23]: ('abc/def/ghi', '.jkl')

You can count the items using one of my favorite Python data structures, Counter. For example:

In [24]: from collections import Counter

In [25]: counts = Counter()

In [26]: for currentdir, dirnames, filenames in os.walk('.'):
    ...:     for one_filename in filenames:
    ...:         first_part, ext = os.path.splitext(one_filename)
    ...:         counts[ext] += 1

This goes through each directory under ".", getting the filenames. It then iterates through the list of filenames, splitting the name so that you can get the extension. You then add 1 to the counter for that extension. Once this code has run, you can ask counts for a report. Because it's a dict, you can use the items method and print the keys and values (that is, extensions and counts). You can print them as follows:

In [30]: for extension, count in counts.items():
    ...:     print(f"{extension:8}{count}")

In the above code, an f-string displays the extension (in a field of eight characters) and the count. Wouldn't it be nice though to show only the ten most common extensions? Yes, but then you'd have to sort through the counts object.
It's much easier just to use the most_common method that the Counter object provides, which returns not only the keys and values, but also sorts them in descending order:

In [31]: for extension, count in counts.most_common(10):
    ...:     print(f"{extension:8}{count}")
    ...:
.py     1149
        867
.zip    466
.ipynb  410
.pyc    372
.txt    151
.json   76
.so     37
.conf   19
.py~    12

In other words—not surprisingly—this example shows that the most common file extension in the directory I use for teaching Python courses is .py. Files without any extension are next, followed by .zip, .ipynb (Jupyter notebooks) and .pyc (byte-compiled Python).

File Sizes

You can ask more interesting questions as well. For example, perhaps you want to know how much disk space is used by each of these file types. Now you don't add 1 for each time you encounter a file extension, but rather the size of the file. Fortunately, this turns out to be trivially easy, thanks to the os.path.getsize function (this returns the same value that you would get from os.stat):

for currentdir, dirnames, filenames in os.walk('.'):
    for one_filename in filenames:
        first_part, ext = os.path.splitext(one_filename)
        try:
            counts[ext] += os.path.getsize(os.path.join(currentdir, one_filename))
        except FileNotFoundError:
            pass

The above code includes three changes from the previous version:

- As indicated, this no longer adds 1 to the count for each extension, but rather the size of the file, which comes from os.path.getsize.
- os.path.join puts the path and filename together and (as a bonus) uses the current operating system's path separation character. What are the odds of a program being used on a Windows system and, thus, needing a backslash rather than a slash? Pretty slim, but it doesn't hurt to use this sort of built-in operation.
- os.walk doesn't normally look at symbolic links, which means you potentially can get yourself into some trouble trying to measure the sizes of files that don't exist.
For this reason, here the counting is wrapped in a try/except block. Once this is done, you can identify the file types consuming the greatest amount of space in the directory:

In [46]: for extension, count in counts.most_common(10):
    ...:     print(f"{extension:8}{count}")
    ...:
.pack   669153001
.zip    486110102
.ipynb  223155683
.sql    125443333
        46296632
.json   14224651
.txt    10921226
.pdf    7557943
.py     5253208
.pyc    4948851

Now things seem a bit different! In my case, it looks like I've got a lot of stuff in .pack files, indicating that my Git repository (where I store all of my old training examples, exercises and Jupyter notebooks) is quite large. I have a lot in zipfiles, in which I store my daily updates. And of course, lots in Jupyter notebooks, which are written in JSON format and can become quite large. The surprise to me is the .sql extension, which I honestly had forgotten that I had.

Files per Year

What if you want to know how many files of each type were modified in each year? This could be useful for removing logfiles or (if you're like me) identifying what large, unnecessary files are taking up space. In order to do that, you'll need to get the modification time (mtime, in UNIX parlance) for each file. You'll then need to convert that mtime from a UNIX time (that is, the number of seconds since January 1st, 1970) to something you can parse and use.

Instead of using a Counter object to keep track of things, you can just use a dictionary. However, this dict's values will be a Counter, with the years serving as keys and the counts as values. Since you know that all of the main dicts will be Counter objects, you can just use a defaultdict, which will require you to write less code.
Here's how you can do all of this:

from collections import defaultdict, Counter
from datetime import datetime

counts = defaultdict(Counter)

for currentdir, dirnames, filenames in os.walk('.'):
    for one_filename in filenames:
        first_part, ext = os.path.splitext(one_filename)
        try:
            full_filename = os.path.join(currentdir, one_filename)
            mtime = datetime.fromtimestamp(os.path.getmtime(full_filename))
            counts[ext][mtime.year] += 1
        except FileNotFoundError:
            pass

First, this creates counts as an instance of defaultdict with a Counter. This means if you ask for a key that doesn't yet exist, the key will be created, with its value being a new Counter that allows you to say something like this:

counts['.zip'][2018] += 1

without having to initialize either the zip key (for counts) or the 2018 key (for the Counter object). You can just add one to the count, and know that it's working.

Then, when you iterate over the filesystem, you grab the mtime from the filename (using os.path.getmtime). That is turned into a datetime object with datetime.fromtimestamp, a great function that lets you move from UNIX timestamps to human-style dates and times. Finally, you then add 1 to your counts.

Once again, you can display the results:

for extension, year_counts in counts.items():
    print(extension)
    for year, file_count in sorted(year_counts.items()):
        print(f"\t{year}\t{file_count}")

The counts variable is now a defaultdict, but that means it behaves just like a dictionary in most respects. So, you can iterate over its keys and values with items, which is shown here, getting each file extension and the Counter object for each. Next the extension is printed, and then it iterates over the years and their counts, sorting them by year and printing them indented somewhat with a tab (\t) character. In this way, you can see precisely how many files of each extension have been modified per year—and perhaps understand which files are truly important and which you easily can get rid of.
Conclusion

Python can't and shouldn't replace Bash for simple scripting, but in many cases, if you're working with large numbers of files and/or creating reports, Python's standard library can make it easy to do such tasks with a minimum of code.
https://www.linuxjournal.com/content/automate-sysadmin-tasks-pythons-oswalk-function
Welcome to part 6 of the intermediate Python programming tutorial series. In this part, we're going to talk about the timeit module. The idea of the timeit module is to be able to test snippets of code.

In our previous tutorial, we were talking about list comprehension and generators, and the difference between the two of them (speed vs memory) was explained. Using the timeit module, I will illustrate this. Many times on forums and such, people will ask questions about which method is faster in some scenario, and the answer is always the same: Test it.

In this case, let's test to see how quickly we can build a list of the numbers from range(100) that are divisible by five.

The actual code for a generator:

input_list = range(100)

def div_by_five(num):
    if num % 5 == 0:
        return True
    else:
        return False

# generator, converted to list.
xyz = list(i for i in input_list if div_by_five(i))

The code for list comprehension:

input_list = range(100)

def div_by_five(num):
    if num % 5 == 0:
        return True
    else:
        return False

# list comprehension:
xyz = [i for i in input_list if div_by_five(i)]

In these cases, the generator is actually only taking part in the calculation of whether or not a number is divisible by five, since, at the end, we are actually creating a list. We're only doing this to exemplify a generator vs list comprehension.

Now, to test this code, we can use timeit. A quick example:

import timeit
print(timeit.timeit('1+3', number=500000))

Output: 0.006161588492280894

This tells us how long it took to run 500,000 iterations of 1+3.
We can also use the timeit module against multiple lines of code. Our generator:

print(timeit.timeit('''
input_list = range(100)

def div_by_two(num):
    if (num/2).is_integer():
        return True
    else:
        return False

# generator, converted to list:
xyz = list(i for i in input_list if div_by_two(i))
''', number=50000))

List comprehension:

print(timeit.timeit('''
input_list = range(100)

def div_by_two(num):
    if (num/2).is_integer():
        return True
    else:
        return False

# list comprehension:
xyz = [i for i in input_list if div_by_two(i)]
''', number=50000))

The generator: 1.2544767654206526
List comprehension: 1.1799026390586294

Fairly close, but if we increase the stakes, and do maybe a range(500):

The generator: 6.2863801209022245
List comprehension: 5.917454497778153

Now, these appear to be pretty close, so, as you might guess, it would really require a huge range to make a generator preferable. That said, what if we can leave things in generator form? It's a common thought process as a scripter to think one line at a time, but what are your next steps going to be? Might it be possible to stay as a generator, and continue your operations as a generator? Do you ever need to access the list as a whole? If not, you might want to stay as a generator. Let's illustrate why!

Rather than converting the generator to a list at the end like this: xyz = list(i for i in input_list if div_by_two(i)), let's leave it as a generator (just delete the list call)! Run it again:

The generator: 0.0343689859103488
List comprehension: 5.898960074639096

Oh my! The generator blew the list comprehension out of the water. But didn't we need to convert the generator to a list to see the values? Before, it was just a generator object! Nope. Remember for i in range(50)? range() is lazy in the same way a generator is — it produces its values one at a time as we iterate through it, rather than building them all up front.
Thus, we can do:

input_list = range(500)

def div_by_two(num):
    if (num/2).is_integer():
        return True
    else:
        return False

# generator:
xyz = (i for i in input_list if div_by_two(i))

for i in xyz:
    print(i)

At no point did we need to load the entire "list" of the even numbers into memory; it's all generated, and remains a generator object until we do something with it. We can also do:

input_list = range(500)

def div_by_two(num):
    if (num/2).is_integer():
        return True
    else:
        return False

for i in (i for i in input_list if div_by_two(i)):
    print(i)

Boom. Thus, you really only need to be using lists IF you need to be able to access the entire list at once; otherwise you should *probably* be using a generator. Yes, list comprehension is in theory faster since the list is in memory, BUT this is only true if you're not building a new list. Building lists is expensive! Yay for generators.

Alright, let's move along and talk about enumerate, a built-in function that has existed for a very long time, but is often not used when it should be!
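The timings above show the speed half of the speed-versus-memory tradeoff; the memory half can be seen directly with sys.getsizeof. A quick sketch comparing the footprint of the materialized list against the generator object itself:

```python
import sys

input_list = range(500)

def div_by_two(num):
    if (num / 2).is_integer():
        return True
    else:
        return False

# The list holds all 250 matching numbers at once...
as_list = [i for i in input_list if div_by_two(i)]

# ...while the generator is a small object that produces them lazily.
as_gen = (i for i in input_list if div_by_two(i))

list_size = sys.getsizeof(as_list)  # grows with the number of elements
gen_size = sys.getsizeof(as_gen)    # stays small no matter the range
```

The generator object's size stays essentially constant whether the range is 500 or 500 million, while the list's size grows with its length — which is exactly why staying in generator form pays off when you never need the whole result at once.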
https://pythonprogramming.net/timeit-intermediate-python-tutorial/
What do you think this module/function should be called? Right now, I'm using chump but I'm also considering champ, chimp and chmop...

=head1 NAME

chump - like "chomp" but also checks your spelling

=head1 SYNOPSIS

  use 5.010;
  use chump lang => 'en';

  while (<>) {
      chump;
      say $_;
      last unless $_;
  }

=head1 DESCRIPTION

The chump package exports a function C<chump> which acts just like the Perl built-in C<chomp> (i.e. removes a trailing new line character) but also corrects any spelling mistakes in the string.

=head2 Functions

=over

=item * C<< chump($string) >>

Modifies C<$string> in-place, removing a trailing new line character if there is one, and correcting any spelling mistakes.

=back

=head2 Import Options

Any options passed on the "use" line are passed on to Text::Aspell. Options are lexically scoped, and scopes are not cumulative.

  {
      use chump lang => 'fr', mode => 'email';
      my $line = <>;
      chump $line;        # check spelling in French.
      {
          use chump lang => 'en';
          my $line2 = <>;
          chump $line2;   # check spelling in English
                          # but not in email mode.
      }
      my $line3 = <>;
      chump $line3;       # in French and email mode again.
  }

=head2 Unimport

Do not unimport the module.

=cut

package chump;

use JSON qw//;
use Text::Aspell;
use strict 'subs', 'vars';

sub import {
    my $class  = shift;
    my $caller = caller;
    *{"$caller\::chump"} = \&chump;
    if (@_) {
        $^H{+__PACKAGE__} = _serialize_options(@_);
    }
}

sub unimport {
    warn "You think you no chump?\n";
}

sub _serialize_options {
    JSON::to_json({ @_ });
}

sub _deserialize_options {
    return unless $_[0];
    my $r = JSON::from_json($_[0]);
    %{ $r || {} };
}

sub chump (_) {
    my $spell  = Text::Aspell->new;
    my @caller = caller(0);
    my %opts   = _deserialize_options($caller[10]{+__PACKAGE__});
    foreach my $key (keys %opts) {
        $spell->set_option($key, $opts{$key});
    }
    my @parts = split /([[:alpha:]]+)/, $_[0];
    my $count;
    for my $i (0 .. $#parts) {
        if ($parts[$i] =~ /^[[:alpha:]]+$/) {
            next if $spell->check($parts[$i]);
            my ($guess) = $spell->suggest($parts[$i]);
            $guess = '?' x (length $parts[$i]) unless defined $guess;
            $parts[$i] = $guess;
            $count++;
        }
    }
    $_[0] = join q{}, @parts;
    return $count + chomp $_[0];
}

__PACKAGE__

Is this meant as a joke? If yes, Acme::Chump. If not, I'd consider just doing the spelling correction, and naming it Text::Spelling::AutoCorrect.

Mostly a joke, but I can imagine it being useful in certain circumstances. Perhaps the Acme namespace makes sense for the module name, but what of the function name?

I wouldn't conflate the two purposes. The spell-corrector on its own might have some value, but adding chomp functionality to it diminishes its generality. Also, what is the use case for a return value consisting of the number of words corrected plus the number of characters that chomp removed? The spell-corrector itself is a big leap of faith. I find that spell-checkers frequently guess incorrectly, particularly in documents that contain a mixture of prose and code or other technical text. While it might be great for the simple case, it's hard to imagine what that simple case would be.

Dave

Now, if there was a call that combined chomping, spelling correction, reversing, opening a file, and sleeping into one, now that would be useful.

Y’know, I quite honestly think that you ought to reconsider this module, before releasing anything, because it appears to me to be a wedge of two rather unrelated concepts. (Do they, in fact, “fit” together? .. if so, how?) I just do not now have a warm, fuzzy feeling about this module in the sense that I would actually use it or want to use it. Since it goes without saying that you do want to produce a genuinely useful contribution to the Community, perhaps this bit of “advance market-research” might be useful. I simply think that perhaps you should “smoke it over a little bit more...
first.” I think it needs a little more time on the stove.

I upvoted your post because I'm glad you asked for comment before working on this module. Like others in this thread, I don't see the value of the combined effects. There are a number of spell-check-related modules; I'm not sure how this improves over the Text::Aspell it uses.

Sorry. My module is 30% more funny and 80% more useless than Text::Aspell.
http://www.perlmonks.org/?node_id=956440
Quite often, we receive log files for analysis which are simply HUGE!!! We try opening that in Notepad and it hangs. After n minutes, we kill the notepad, try MS Excel and that hangs as well. Sometimes, MS Excel shows “File not loaded completely”. Painful, isn’t it?? OKAY, what’s the point?? We are wasting time in trying to open the files in the first place!!! So, I have a Filemon Log of 500 MB and I want to search the file for the lines which contain “Access Denied”. Why am I trying to open that file to find out just a few lines containing the string I am interested in? Maybe because we are used to CTRL+F. Is there a way out??? Yes, there is… and in comes Log Parser. Download it from and run the setup. Now, we will see how to use Log Parser to parse the file without opening it. The filename is Filemon.log and it is located in C:\. All I am interested in is to find the lines which contain the string “Access Denied” WITHOUT opening the Filemon.log, because none of the software is responding in a timely manner (due to the size factor). You need to start the Log Parser and you will see a command line interface. Type the following and hit enter…

LOGPARSER "Select Text from C:\Filemon.log where Text like '%Access Denied%'" -i:TEXTLINE -q:Off

You will see an output in a similar format as follows…

Text
————————————————————–
7447 1:49:24 PM explorer.exe:1200 DIRECTORY C:\ Access Denied

Statistics:
———–
Elements processed: 640444
Elements output: 1
Execution time: 12.75 seconds

Not bad at all… By the way, there are tonnes of native log files like IIS Log files, CSV, TSV, URLSCAN, REG(istry), FS (Filesystem), XML, etc which the Log Parser can parse for you in a more robust fashion. Go through the documentation that comes along with the Log Parser. This is one of the tools which you will definitely like to master and keep it in your arsenal of tools for troubleshooting various kinds of issues. Cheers! -Rahul Soni

I am rather new to asp.net 2.0 and I have a problem here.
I give the configSections as below:

<configSections> <sectionGroup name="Path"> <section name="PathInfo" type=".."/> </sectionGroup> </configSections> <Path> <PathInfo> <add key="UploadPath" value="C:\Aspiren\Documents\Uploaded.txt"/> </PathInfo> </Path>

The problem is when I access this through the code-behind (as in 1.1), it asks for some alias.

NameValueCollection nvcGeneral = (NameValueCollection)ConfigurationSettings.GetConfig("Path/PathInfo");

Is it that I am missing any namespaces, or is there any other way to access it? Actually, System.Configuration.ConfigurationSettings.GetConfig(string) is obsolete now. Try using System.Configuration.ConfigurationManager.GetSection and you should be good to go! Hope that helps! Rahul

I want to parse a whole VB.Net file. Does anybody know how to parse it? Regards Allahbaksh

Hi Allahbaksh, You can try reading the .vb files line by line and do the needful. There are many similar links which might help you out. Thanks, Rahul

Hi Rahul, We have developed an application on our Desktops to create a file with data of our clients. The Desktop has Windows XP (latest SP) with VS 2003 Enterprise running Framework 1.1. The IIS version is 5.1. When we execute our web application locally it takes only 2 minutes to generate the file. When we take the application over to the server, an IBM x3850 with 16 GB of RAM and 4 processors (Windows 2003, IIS 6.0), the program takes over 10 hours to produce the file. The program makes one call to the database to retrieve all the data, then writes each client's info to a file. For both the server and desktop we are using the same database server. The problem takes place after the call to the database, so that should not be an issue. We tried this application on another server and the same result happened. Any suggestion? Is there any configuration on servers that limits the bandwidth of writing to the file, etc.? Thanks & regards, arshad

Hi Arshad, The difference is huge!! 10 hrs to 2 mins, no comparisons at all.
I suspect something fishy is going on. I would suggest taking a trace.axd on the server and see what is causing that much of delay. BTW, was it really 10 hrs or 10 mins?? Basically, I would try browsing on the server with trace enabled and start from there! HTH, Rahul
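Coming back to the log-searching theme of the post above: the reason Log Parser copes where Notepad and Excel choke is that it streams the file line by line instead of loading it whole. That property is easy to reproduce in any language; here is a minimal illustration in Python (the path and search string are just the examples from the post, not anything Log Parser itself uses):

```python
def find_lines(path, needle):
    """Yield (line_number, line) pairs for lines containing needle.

    The file is read one line at a time, so memory use stays flat no
    matter how large the log is -- the same property that lets Log
    Parser chew through a 500 MB Filemon log without "opening" it.
    """
    with open(path, "r", errors="replace") as handle:
        for number, line in enumerate(handle, start=1):
            if needle in line:
                yield number, line.rstrip("\n")

# Hypothetical usage, mirroring the query in the post:
# for number, line in find_lines(r"C:\Filemon.log", "Access Denied"):
#     print(number, line)
```

Unlike Log Parser this gives you no SQL-style projection or aggregation, only the core trick: never hold more than one line in memory.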
https://blogs.msdn.microsoft.com/rahulso/2006/05/02/how-to-find-out-details-using-log-parser-from-huge-text-files-without-opening-them-in-notepad-etc/
Cannot use QtAV in QML I have installed QtAV following the instructions at this link. I have added QT += av avwidgets. I am able to build the project, but I cannot use the module QtAV in QML. In my QML I wrote: import QtAV 1.6, but when I try to run the project I get the following error: module "QtAV" is not installed - SGaist Lifetime Qt Champion Hi, You should contact the project author for that question since you seem to be using his pre-built package. Thank you, I will try to contact the author. It'd be really useful if someone could try to install QtAV and let me know if it works for them; it should be a fairly short task (it took me about a minute)
https://forum.qt.io/topic/71340/cannot-use-qtav-in-qml
New release candidate for Scala IDE for Eclipse 2.0.1 A quick lick of paint for Scala’s Eclipse IDE Rolling on from their big Typesafe stack announcement recently, there’s been further progress made on the road towards Scala IDE for Eclipse 2.0.1, with the reveal of the first release candidate. Codenamed Helium, Typesafe’s Scala IDE for Eclipse aims to truly give the growing Scala community the first-class IDE it craves – featuring a Scala editor with syntax highlighting, code completion, inferred type hovers, hyperlinking to definitions, error markers and much more. There’s still that underlying recognition that Java and Scala need to support each other, and what better way of getting that than using the most popular Java IDE as a template. Although primarily a maintenance release for the Scala IDE released at the back end of last year, 2.0.1 RC1 improves the link between sbt and the builder further by ‘following sbt more closely when dealing with dependent projects’. So, when a project has build errors, dependent projects are not rebuilt and the Eclipse builder compiles exactly the same number of files as the command line. Other than that, it’s a bit of polishing up. Niggling editor issues have been fixed, such as double braces being inserted and deleted together, completions that need an additional import won’t mess up the file, and Open Declaration works when called from the contextual menu. You can expect a better compiler too, with small tweaks being made to make it able to deal with multiple compiler plugins alongside each other. Finally, this version works with both Eclipse 3.6 (Helios) and 3.7 (Indigo). The team have developed and tested it using Java 6, but say Java 7 can be used with some caveats. All good stuff, I’m sure you’ll agree. Although perhaps slightly behind, the Scala IDE roadmap reveals what else will be in Milestone 1.
And there’s a lot of work to be done, as you would expect with something this huge. It is recommended that users update as soon as possible to this version, which you can download here. For now anyway, the team tell us ‘to go forth and code’. We shall indeed.
http://jaxenter.com/new-release-candidate-for-scala-ide-for-eclipse-2-0-1-104301.html
In the previous post, I dealt with the standard .NET config file. It would be lovely if all we needed to deal with was appsettings and connection strings. Sadly, the world is more complex than that. Now, quite a few of the systems listed below (pretty much all of the XML-based ones) give you the option of including their config in the app.config or in a separate file. I can’t say as I see it makes a blind bit of difference. Each of them has their own XML schema and I doubt you wish to write a diff tool for each of them. Castle Windsor has a fluent configuration mechanism and an XML format. Annoyingly, neither of them support environmental diffs. I can’t quite believe that this isn’t baked in, but then the major competitors don’t seem to address this either. Furthermore, since there is no facility within Windsor for producing a container that specializes another container, you can’t use specializations to produce environmental deltas. In particular, you could have the principal configuration using the fluent interface, keeping with ThoughtWorks dictum of not putting anything into a config that can be hardwired. You could then specialize using the XML format. There is, however, one other alternative, Binsor. Now, Binsor is a true .NET language with a couple of specializations to support Windsor configuration (or a DSL, if you prefer). It supports environmental deltas through the Extend macro. The Extend macro isn’t really documented, so here’s a quick guide to how to use it:

MainConfiguration.boo:

    def DoConfigure():
        Component "service", IService, ConcreteService:
            standardParameter = "Value the same across all deployments"

Environment1.boo:

    import file from "MainConfiguration.boo"

    DoConfigure()

    Extend "service":
        environmentalParameter = "Parameter value is different in Environment2.boo"

This is incredibly powerful. In particular it lets you change your mind about which parts of the config are environmentally driven and which aren’t.
Let’s go through it (with some stuff I learnt the hard way) - Binsor allows you to include one file in another. You need to wrap the code in the main configuration in a function that you call from the environmental diff. This might seem backwards, but it’s pretty much the only way it can work. - The import file syntax supports relative paths, but make sure that you pass BooReader an absolute path to the environmental diff, or it may not work the way you’re hoping. - Extend is keyed by the name of the component that is declared in the MainConfiguration.boo file. Thus, in the example, the ConcreteService class has string parameters for standardParameter and environmentalParameter. - There is, of course, the outstanding question of how you identify which environmental delta to use. You’ve basically got two options here: get the install to rename the delta file to a standard name, or put the name into the appsettings. (I’m finding it remarkably hard to kill off appsettings completely, much as I try…) I like Binsor in that it’s a complete solution to the problem. It is, however, quite a heavy-weight solution, and you can’t mix and match it with the fluent configuration since it’s doing all the hard work itself. Since Castle Windsor’s ComponentModel class is immutable and doesn’t support specialization, it has to build its own component model in order to support this feature. That shouldn’t bother you until you try stepping through the code and discover that there’s several thousand more lines supporting this syntax than you were expecting. A more general difficulty is that it uses Boo’s most powerful and most dangerous feature: the ability to change the way the syntax tree is evaluated. It produces a relatively elegant syntax, but it’s not documented and there’s no editor support for it that I know of.
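Binsor specifics aside, the pattern Extend implements — a base configuration with a per-environment delta layered on top — is easy to express in any language, which helps clarify what the macro is buying you. A generic sketch in Python (the names and shape here are illustrative, not Windsor or Binsor API):

```python
def layer(base, override):
    """Return a new config dict: base with override applied on top.

    Nested dicts are merged recursively, so an environment file only
    needs to mention the keys it actually changes -- the same property
    the Extend macro gives Binsor configurations.
    """
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = layer(merged[key], value)
        else:
            merged[key] = value
    return merged

# Base configuration: everything that is the same in every deployment.
main_configuration = {
    "service": {
        "standardParameter": "Value the same across all deployments",
        "environmentalParameter": "placeholder",
    }
}

# Environmental delta: only the keys that differ in this environment.
environment1 = {
    "service": {"environmentalParameter": "Environment 1 value"}
}

effective = layer(main_configuration, environment1)
```

As with Extend, moving a parameter between the "standard" and "environmental" buckets is just a matter of which file mentions it; the base is never mutated.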
https://colourcoding.net/2009/03/30/automated-deployments-3-the-extend-macro-in-binsor/
In this tutorial we will go through the basics of setting up a minimalist VM (virtual machine) using VMware Workstation (there is a free version so don’t worry) and then create a custom file echoer (it will return the contents of a file when requested through a web browser). NOTE – The web server will be built in C and C++ and will not use Apache or other premade server software out there. That said we will not (at least at this point) be supporting server-side languages such as PHP. GOAL – Make a quick, light-weight server which is capable of receiving incoming HTTP requests and replying with a simple message. Starting off: Download VMware Player (or VMware Workstation if you have some money burning a hole in your pocket) from here: (you will probably have to create an account with their site) Install it and once it is finished start downloading Ubuntu 10.10 netbook from here: (while you aren’t required to use the netbook edition it is light weight and that is the point of this tutorial – I will also be using the netbook version so if you are starting off you may want to download it just so you can follow along exactly with what I do, well almost… There may be some differences when using VMware Player…). Setting up the VM: Open up VMware Player/Workstation and go to File -> New -> Virtual Machine A dialogue box will pop up and talk about options on how to create the VM, select standard (not advanced) then click next and use a file as the ISO (second option) then click browse and find the downloaded ISO (select it). VMware will recognize that it is Ubuntu 10.10 and will say it is going to use easy install, this is fine. Follow the steps provided and you can use the defaults (the only thing I changed was I told it to allocate 4 GB instead of the 8 it recommends). Once you have finished all the options it will start installing Ubuntu on a VM for you. Wait for this to finish (shouldn’t take too long).
Minimizing: Alright, now that we have Ubuntu all set up and running it is time to get rid of some of the additional crap that was installed with the OS. To accomplish this go to System -> Administration -> Synaptic Package Manager. Wait for it to finish creating its search database (it will be easier to remove the packages that way). Once it is completed go ahead and search for all the applications they have installed and mark them for complete removal. I left the following applications installed: Terminal and Gedit. I also marked Chromium for installation (we really just need a browser, so Firefox would have been fine, I just like Chromium better). Once you have everything marked appropriately apply the changes. NOTE – You could go ahead and use the terminal for all of this, I just find that people enjoy using GUIs better. Good, now the annoying setup is all completed. Time to install some additional software. Since this is a C++ tutorial I figure it is only proper that we install a compiler. Installing things: Now, without closing out of the package manager we want to check for a few packages. Search for g++ and mark it for installation (it will have additional packages). Once it is installed it is time to start doing some fun stuff (programming). You will also need to search for and install the pthread libc library. Our socket connection will be obtained through a POSIX-compliant library, as will the threads; this means that this application should be able to run on all Unix systems (though apparently our version of Ubuntu doesn’t have it installed either), Mac, and Windows (though you may need to install the POSIX library on Windows to get it working). Some code to get us off the ground: First things first, we need to set up a project folder. I placed this directly on my desktop and called it serverFromScratch (I will be calling it SFS from here on out). Go into your project folder and create the following files with the following code within each.
NOTE – I won’t be explaining this code as it would be a rather lengthy tutorial otherwise. Most of it is pretty self-explanatory. Socket.h: // James Blades (BetaWar) // dreamincode.net // Definition of the Socket class #ifndef Socket_class #define Socket_class #include <sys/types.h> #include <sys/socket.h> #include <netinet/in.h> #include <netdb.h> #include <unistd.h> #include <string> #include <arpa/inet.h> const int MAXHOSTNAME = 200; const int MAXCONNECTIONS = 5; const int MAXRECV = 500; //const int MSG_NOSIGNAL = 0; // defined by dgame class Socket{ private: int m_sock; sockaddr_in m_addr; public: Socket(); virtual ~Socket(); // Server initialization bool create(); bool bind(const int port); bool listen(void) const; bool listen(const int connections) const; bool accept(Socket&) const; // Client initialization bool connect(const std::string hostname, const int port); std::string getIP() const; unsigned short getPort() const; // Data Transmission bool send(const std::string) const; int recv(std::string&) const; void set_non_blocking(const bool); bool is_valid() const; }; #endif Socket.cpp: // James Blades (BetaWar) // dreamincode.net #include "Socket.h" #include <string> #include <iostream> #include <errno.h> #include <fcntl.h> #include <string.h> Socket::Socket() : m_sock(-1){ memset(&m_addr, 0, sizeof(m_addr)); } Socket::~Socket(){ if(is_valid()){ ::close(m_sock); } } bool Socket::create(){ m_sock = socket(AF_INET, SOCK_STREAM, 0); if(!is_valid()){ return false; } // TIME_WAIT - argh int on = 1; return setsockopt(m_sock, SOL_SOCKET, SO_REUSEADDR, (const char*)&on, sizeof(on)) != -1; } bool Socket::bind(const int port){ if(!is_valid()){ return false; } m_addr.sin_family = AF_INET; m_addr.sin_addr.s_addr = INADDR_ANY; m_addr.sin_port = htons(port); int bind_return = ::bind(m_sock, (struct sockaddr*)&m_addr, sizeof(m_addr)); return bind_return != -1; } bool Socket::listen(void) const{ return listen(MAXCONNECTIONS); } bool Socket::listen(const int connections)
const{ if(!is_valid()){ return false; } int listen_return = ::listen(m_sock, connections); return listen_return != -1; } bool Socket::accept(Socket& new_socket) const{ int addr_length = sizeof(m_addr); new_socket.m_sock = ::accept(m_sock, (sockaddr*)&m_addr, (socklen_t*)&addr_length); return !(new_socket.m_sock <= 0); } bool Socket::send(const std::string s) const{ int status = ::send(m_sock, s.c_str(), s.size(), MSG_NOSIGNAL); return !(status == -1); } int Socket::recv(std::string& s) const{ char buf[MAXRECV + 1]; s = ""; memset(buf, 0, MAXRECV + 1); int status = ::recv(m_sock, buf, MAXRECV, 0); if(status == -1){ std::cout << "ERROR - status == -1 errno == " << errno << " in Socket::recv\n"; return 0; } else if(status == 0){ return 0; } else{ s = buf; return status; } } bool Socket::connect(const std::string hostname, const int port){ if(!is_valid()){ return false; } hostent* record = gethostbyname(hostname.c_str()); in_addr* addr = (in_addr*)record->h_addr; std::string ip = inet_ntoa(*addr); m_addr.sin_family = AF_INET; m_addr.sin_port = htons(port); int status = inet_pton(AF_INET, ip.c_str(), &m_addr.sin_addr); if(errno == EAFNOSUPPORT){ return false; } status = ::connect(m_sock, (sockaddr*)&m_addr, sizeof(m_addr)); return status == 0; } std::string Socket::getIP() const{ struct sockaddr_in peer_addr; socklen_t len = sizeof peer_addr; getpeername(m_sock, (struct sockaddr*)&peer_addr, &len); return inet_ntoa(peer_addr.sin_addr); } unsigned short Socket::getPort() const{ struct sockaddr_in peer_addr; socklen_t len = sizeof peer_addr; getpeername(m_sock, (struct sockaddr*)&peer_addr, &len); return peer_addr.sin_port; } void Socket::set_non_blocking(const bool B){ int opts; opts = fcntl(m_sock, F_GETFL); if(opts < 0){ return; } if(B){ opts = (opts | O_NONBLOCK); } else{ opts = (opts & ~O_NONBLOCK); } fcntl(m_sock, F_SETFL,opts); } bool Socket::is_valid() const{ return m_sock != -1; } Thread.h: // James Blades (BetaWar) // dreamincode.net #ifndef THREAD_H #define THREAD_H #include "Runnable.h" #include <pthread.h> class Thread{ private: volatile bool m_stoprequested; volatile
bool m_running; pthread_t m_thread; Runnable* workBot; static void* start_thread(void* obj){ ((Thread*)obj)->run(); return NULL; } void run(); public: Thread(Runnable* bot); ~Thread(); void start(); void stop(bool kill); }; #endif /* THREAD_H */ Thread.cpp: // James Blades (BetaWar) // dreamincode.net #include "Thread.h" #include <iostream> void Thread::run(){ while(!m_stoprequested){ if(workBot->ceaseThread){ stop(workBot->killThread); return; } try{ workBot->tick(); } catch(...){ // Some error occurred, let the bot try to repair itself. std::cout << "An error occurred - run()\n"; workBot->tack(); } } } Thread::Thread(Runnable* bot) : m_stoprequested(false), m_running(false){ // pthread_mutex_init(&m_mutex, NULL); workBot = bot; workBot->ceaseThread = workBot->killThread = false; } Thread::~Thread(){ // pthread_mutex_destroy(&m_mutex); } void Thread::start(){ if(m_running){ return; } m_running = true; pthread_create(&m_thread, NULL, Thread::start_thread, this); } void Thread::stop(bool kill){ if(!m_running){ return; } m_running = false; m_stoprequested = true; pthread_cancel(m_thread); pthread_join(m_thread, NULL); if(kill){ delete this; } } Mutex.h: // James Blades (BetaWar) // dreamincode.net #ifndef Mutex_H #define Mutex_H #include <map> #include <pthread.h> class Mutex{ private: typedef std::map<void*, pthread_mutex_t> items_t; static items_t& get_items(){ static items_t* items = new items_t; return *items; } public: static void lock(void* addr){ items_t& items = get_items(); if(items.find(addr) == items.end()){ pthread_mutex_init(&items[addr], NULL); } pthread_mutex_lock(&items[addr]); } static void unlock(void* addr){ items_t& items = get_items(); pthread_mutex_unlock(&items[addr]); } }; #endif Mutex.cpp: // James Blades (BetaWar) // dreamincode.net #include "Mutex.h" // empty file Runnable.h: // James Blades (BetaWar) // dreamincode.net #ifndef RUNNABLE_H #define RUNNABLE_H // just an interface class Runnable{ protected: public: bool ceaseThread; bool killThread;
virtual void tick(void) = 0; virtual void tack(void) = 0; }; #endif /* RUNNABLE_H */ Runnable.cpp: // James Blades (BetaWar) // dreamincode.net #include "Runnable.h" // empty file Great… I know that is a TON of code, but we aren’t going to be going over it in this tutorial (if you want to have me explain it I will, just comment stating such). Creating the Make file: You may know how this is done and be able to skip this part, but I find normally that people don’t set up makefiles efficiently, or they add a whole bunch of additional crap which isn’t needed. Luckily it is easy to set one up so it only compiles things that are different. To accomplish this we will use the following code: Makefile: .cpp.o: g++ -c -O -Wall $< .c.o: g++ -c -O -Wall $< main: Socket.o Runnable.o Thread.o Mutex.o main.o g++ -Wall -o $@ $^ -lpthread clean: rm -f main *.o *~ Now, the first rule we set up is for any C++ file that we want to create an object file from, it uses Makefile variables to allow us to make a general rule instead of one which is made specifically for a given file. The second rule does the same thing, but is for C files. The third rule is the general one (what will happen if you just type “make” into the terminal), and will make an executable called “main” which depends on “Socket.o”, “Runnable.o”, “Thread.o”, “Mutex.o” and “main.o” (each of these files is compiled by the general rules above). This rule also links in the pthread library, which we need linked for our application to run. The fourth and final rule is the clean. This allows you to type “make clean” into the terminal and it will remove the main executable, all object files, and all backup files. It is a nice little way to clean up the directory. Programming: At this point we have fewer than 100 lines of code to write before this tutorial will be over (that is good because this is getting fairly long…).
Starting off we need a main file so go ahead and create the file “main.cpp” and open it up in your favorite text editor (I will be using Gedit). Includes: We will need to have quite a few included files for this server to work (though we aren’t actually using some of them at this point), we will also likely be adding additional ones in the future, but for the purposes of this tutorial we just need the following: iostream, list, string, Runnable.h, Socket.h, Thread.h and Mutex.h Include using the following code: #include <iostream> #include <list> #include <string> #include "Runnable.h" #include "Socket.h" #include "Thread.h" #include "Mutex.h" And we will want to be using the standard namespace in this file, so you might as well say you are using it: using namespace std; Listener: We now need to create something responsible for handling the incoming client socket connections. This will take the form of a Runnable class derivative (which is required if we are to use the Thread class I defined above). Here is the code: class Listener: public Runnable{ private: Socket* client; public: Listener(Socket* cli){ client = cli; } ~Listener(){ ceaseThread = true; delete client; killThread = true; } void tick(void){ string in; if(!client->recv(in)){ delete this; return; } if(!in.empty()){ cout << in << endl; client->send("Hello Server\r\n"); delete this; } } void tack(void){ // does nothing. } }; Now, the Runnable interface has 2 variables (ceaseThread and killThread) both of which are Booleans. If you set the ceaseThread to true it will stop the thread, if you set killThread to true it will deallocate the thread memory. Runnable also requires 2 functions (tick and tack), tick is run each time the thread is called, and tack is called if tick throws an error (to allow you to attempt to recover). In our tick function we see if anything has been sent from the client, if we error (the recv function returns false) we just delete the thread and client socket (this will change later).
Then, if the input string isn’t empty we output the request, send a small reply, and then destroy the thread – as we said in the beginning this is just a very simple server at this point. Connector: The next class we need to create is the server connector class, which will be responsible for listening to port 80 (standard HTTP request port). Here is the code: class Connector: public Runnable{ private: Socket* server; public: Connector(Socket* s){ server = s; } ~Connector(){ ceaseThread = true; delete server; killThread = true; } void tick(void){ Socket* client = new Socket(); server->accept(*client); Runnable* listen = new Listener(client); Thread* listener = new Thread(listen); listener->start(); // at this point the server orphans the thread so it has to be able to clean up after itself. } void tack(void){ // empty again } }; Since this class is very similar to the Listener class I am just going to explain the tick function. It creates a new client socket and then has the server accept it. Then it creates a listener for the client, and a thread for the listener. The last thing it does is start the listener thread. Main: That’s correct, it is finally time for the main function. After this we will have a nice, small, simple server which is capable of taking incoming requests and replying “Hello Server”. When you look at it like that it isn’t all that impressive, but the job as a whole was quite a bit of work.
The code: int main(void){ Socket* server = new Socket(); if(!server->create()){ cout << "Error - unable to create socket." << endl; } if(!server->bind(80)){ cout << "Error - unable to bind port 80, please make sure you are running the server as root." << endl; } if(!server->listen()){ cout << "Error - unable to listen." << endl; } Runnable* serverListener = new Connector(server); Thread listenerThread(serverListener); listenerThread.start(); while(1){ // keep the program up and running } return 0; } Save the file, open a terminal and go to the directory you are using to house this project (cd ~/Desktop/SFS/ on my setup) and make the project (make). Once it has completed you shouldn’t have any errors or warnings, and it is time to run your new server. Go ahead and type sudo ./main, the program will seem to just hang. That is fine (remember we don’t have it outputting anything other than errors and the HTTP requests it receives), open up your web browser and go to localhost, you should have a bit of output in the terminal now, and the web page should say “Hello Server”. Congrats, you have completed this portion of the tutorial! If you want to look at the request structure that is fine, we will have to respond in a similar fashion for the browser to know what we are talking about and how to deal with the content we give it. Hopefully you enjoyed this tutorial. Stay tuned for the next one in the series.
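As a side-by-side sanity check of the overall flow, the accept-a-client, spawn-a-thread, send-a-reply behaviour the tutorial builds by hand can be sketched in a few lines of Python's standard library. This is only an illustration of the same shape, not a substitute for the C++ exercise; note it binds port 0 (any free port) so it does not need root the way binding port 80 does:

```python
import socket
import threading

def handle(client):
    """Per-client thread: read one request, send a fixed reply, close.

    Plays the role of the Listener class's tick() above.
    """
    data = client.recv(500)  # 500 mirrors MAXRECV in the C++ version
    if data:
        client.sendall(b"Hello Server\r\n")
    client.close()

def serve(server):
    """Accept loop: one thread per client, like the Connector class."""
    while True:
        client, _addr = server.accept()
        threading.Thread(target=handle, args=(client,), daemon=True).start()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 0))   # port 0 = any free port; the tutorial uses 80
server.listen(5)                # 5 mirrors MAXCONNECTIONS
port = server.getsockname()[1]
threading.Thread(target=serve, args=(server,), daemon=True).start()
```

Pointing a browser at the bound port produces the same raw "Hello Server" reply the C++ version sends, headers and all still missing.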
http://www.dreamincode.net/forums/topic/222927-part-1-server-from-scratch/
Sending And Receiving UDP (User Datagram Protocol) Messages With ColdFusion A year ago, at an NYCFUG meeting, I heard John Scott mention something called UDP (User Datagram Protocol). He described it as being like HTTP, but faster and less reliable. It sounded interesting, especially for "send it and forget it" type requests. Now, a year later, I'm finally carving out some time to look into it. For my first exploration, I figured it would be fun to have ColdFusion communicate with Node.js using UDP messages. With UDP (User Datagram Protocol), communication can be bidirectional, but the messages themselves are unidirectional. By this, I mean that when you send a message, you get no confirmation as to whether or not the message was delivered successfully. The best you can do is send an outgoing message and then listen for subsequent incoming messages. Since this is my first look into UDP, I won't try to explain the protocol as I'll likely end up making mistakes. In fact, the Node.js side of this demo was, more or less, copied directly out of the Node.js documentation. In the following exploration, I'm going to send a UDP message from ColdFusion to the Node.js server (both living on the same machine). Then, Node.js is going to send a message back, which the ColdFusion "client" will be waiting [blocking] for. Client.cfm - Our ColdFusion UDP Client <cfscript> // Because our UDP client (this script) and our UDP server (the // Node.js script) are running on the same machine, we need to set // them up on different ports so that we don't accidentally create // a circular chain on the Node.js side. localPort = javaCast( "int", 9001 ); localInetAddress = createObject( "java", "java.net.InetAddress" ) .getByName( javaCast( "string", "localhost" ) ) ; // The Node.js server.
remotePort = javaCast( "int", 9002 ); remoteInetAddress = createObject( "java", "java.net.InetAddress" ) .getByName( javaCast( "string", "local.bennadel.com" ) ) ; // The DatagramSocket can [always] send and [sometimes] receive // UDP messages. In this case, we are going to configure the socket // to listen on the given port and address. socket = createObject( "java", "java.net.DatagramSocket" ).init( localPort, localInetAddress ); // Wrap all of this in a Try/Catch so that we can CLOSE the socket // no matter what happens. If we fail to close the socket, we will // receive an error the next time we try to open it. // -- // NOTE: You can apparently use the setReuseAddress() if you forgot // to close the socket. However, I was not able to get this to work. // I would always get the "Already in use" error, unless I tried to // bind on the same port, but without an address (I think). try { message = "Hello world!"; // Create the packet we want to send. Each packet is // individually coded to be delivered to a given host and // port number ("remote" port in this case). packet = createObject( "java", "java.net.DatagramPacket" ).init( charsetDecode( message, "utf-8" ), javaCast( "int", len( message ) ), remoteInetAddress, remotePort ); socket.send( packet ); // -------------------------------------------------- // // Now that we've sent data, we can also listen for data. The // data that comes back is not necessarily a RESPONSE to the // message we just sent. The message we just sent may not even // reach its destination. This is simply incoming data. // -- // NOTE: Waiting for data is a BLOCKING operation. The // request will wait until something comes in over the // socket connection. // -------------------------------------------------- // // Create a packet to contain the incoming message.
response = createObject( "java", "java.net.DatagramPacket" ).init( charsetDecode( repeatString( " ", 1024 ), "utf-8" ), javaCast( "int", 1024 ) ); // BLOCK until we receive a message. The "timeout" will raise // a "SocketTimeoutException" exception if no message is // received after 5,000 milliseconds. // -- // NOTE: If you want to "unblock" this request, you either have // to send a message, close the socket (raises an exception), // or kill the jrun process. socket.setSoTimeout( javaCast( "int", 5000 ) ); socket.receive( response ); // Output the "response" message. writeOutput( charsetEncode( response.getData(), "utf-8" ) & "<br /><br />" ); } catch ( any error ) { writeDump( error ); // No matter what happens, we want to be sure to close the socket // binding to the local port number. This way, it can be re-bound // at a later time. } finally { socket.close(); } // So we can see change (this page runs really fast). writeOutput( "Now: " & timeFormat( now(), "h:mm:ss.l" ) ); </cfscript> After we send out the message, the ColdFusion code will listen for a "response." When you listen for a response over a UDP socket, the code will block. As such, I made sure to set a socket operation timeout such that if no messages were received (ie, I forgot to start the Node.js server), the ColdFusion code would eventually timeout and close the socket, releasing it for future use. On the Node.js side, I'm simply binding to the Datagram Socket to listen for messages. Once received, I send back a new (and completely distinct) message. Server.js - Our Node.js UDP Server // Get our Datagram library and create our UDP socket; I // think you can think of this as being somewhat akin to Java's // java.net.DatagramSocket library. var socket = require( "dgram" ).createSocket( "udp4" ); // Listen for message events on the socket. socket.on( "message", function ( message, requestInfo ) { // Log the received message.
console.log( "Message: " + message + " from " + requestInfo.address + ":" + requestInfo.port ); var response = new Buffer( "Got it on " + new Date() ); // Send a response. Note that this is entirely optional. // The client (ColdFusion) is not waiting for a response // [necessarily]. This is an independent action and will // not hold up the client's message. socket.send( response, 0, // Buffer offset response.length, requestInfo.port, requestInfo.address, function( error, byteLength ) { console.log( "... Sent response to " + requestInfo.address + ":" + requestInfo.port ); } ); } ); // Listen for error events on the socket. When we get an error, we // want to be sure to CLOSE the socket; otherwise, it's possible that // we won't be able to get it back without restarting the process. socket.on( "error", function ( error ) { socket.close(); } ); // When the socket is configured and ready to receive data, simply // log a confirmation to the console. socket.on( "listening", function () { var address = socket.address(); console.log( "socket listening " + address.address + ":" + address.port ); } ); // Start listening on the given port. Since we are not binding to // an explicit address [just a port], Node.js will aattempt to listen // to all addresses on the machine. socket.bind( 9002 ); When I start up the Node.js server [node server.js] and run the above ColdFusion page, I get the following page output: Got it on Mon Jan 20 2014 09:08:49 GMT-0500 (EST) Now: 9:08:49.876 ... and, on the Node.js side, I get the following console output: Message: Hello world! from 127.0.0.1:9001 ... Sent response to 127.0.0.1:9001 Pretty cool stuff! Slowly, but surely, I'm starting to learn more about how things operate at lower levels. I can definitely see the appeal of something small and simple like UDP for sending non-critical bits of data. Definitely more to explore here. Want to use code from this post? Check out the license. 
Reader Comments

I heard this great joke about UDP and I would tell it to you but you might not get it (ba dum).

@Michael, Ha ha, well played :)

Nice post! A really useful application for UDP is in sending statistical data, especially when you're sending so much that it doesn't affect things if you drop a few packets here and there. We use StatsD for passing metrics to Graphite and the simple CF client I created for it uses UDP:

@Matt, Funny you mention that. The first time I heard about UDP was in a presentation that was all about "metrics driven development." Amongst the topics discussed were StatsD and Graphite. I've only briefly heard about StatsD, but I love the way that it revolves around simple number collection. That and Graphite are definitely on my list of things to look into. Thanks for the wrapper component!

UDP is much more similar to TCP than HTTP. A perfect example of when you want to use UDP is streaming audio or video. If you were transferring a song to keep (as in purchasing a song on iTunes), you'd want to use TCP. But if you want to stream it in real time, not to keep, you'd want to use UDP. In streaming, you're more concerned with keeping as close to real time as possible. If a glitch happens, you don't want to mess up the timing of subsequent packets with heroic measures to recover the content during the glitch. The streaming will be more coherent and listenable if you just say "screw it" to every glitch and move on to keep routing subsequent packets to the speakers. The glitch happens, but if you trudge on, the user ignores it.

@WebManWalking, I don't fully understand TCP or HTTP, but I understand what you're saying. I started looking into this UDP stuff for "stats" collection in which it wouldn't be mission critical to lose data. Plus, I like the fact that it's non-blocking. In ColdFusion, even if I use CFHTTP with a timeout="1", it still blocks for at least one second. I could probably drop down into Java to adjust the timeout...
but, with the UDP, it's already non-blocking. Sweet!

@Ben, In response to "I don't fully understand TCP or HTTP". At last! A chance for me to return the favor of all the good teaching/experimentation you provide!

First, the origins: The C term "socket" is logically just a file that's open for input and output at the same time. If you "open a socket in the file namespace", it's also PHYSICALLY a file open for input and output at the same time. You can write something and then read it back, without ever having to do a close and reopen. At the other end of the socket is a file. But if you "open a socket in the Internet namespace", at the other end of the socket is another process. What I write, the other process reads. What the other process writes, I read. In this way, 2 processes can "talk to each other", even if they're not on the same machine. The communication is full duplex. A C socket contains 2 file descriptors, one for reading and one for writing. That way, both processes can write at the same time. Incoming messages get queued until the receiving process is ready to read them.

TCP and UDP are 2 implementations of sockets in the Internet namespace, also known as transport protocols. TCP does an invisible back-and-forth of acknowledgements, negative acknowledgements, retransmissions, etc, to assure that no data will be lost. UDP doesn't. TCP and UDP share the exact same assigned port numbers. TCP port 80 is reliable HTTP. UDP port 80 is unreliable HTTP. But given how people like their web experience to be (complete pages without missing chunks), everyone uses TCP port 80. And given what people like their domain name lookups to be (as fast as possible), everyone uses UDP port 53, even though TCP 53 also exists.

So now you know what TCP is, the reliable transport protocol. Layered on top of it are numerous other protocols, such as HTTP. Why? Well imagine that process A writes a message to process B while process B writes a message to process A. No problem.
Full duplex, messages queued. But what if process A tries to READ a message from process B, but process B hasn't written anything yet? Process A goes into a wait state (blocks). Now suppose, due to some miscommunication, process B also tries to read something from process A!!! It'll never get written by A, because A is also in a wait state!!! Both processes wait forever!!! Oh noes!!!

That's what all the higher level protocols are about. They're sets of rules, the main purpose of which is to guarantee that both processes never simultaneously block due to waiting on the other to send something. For example, HTTP is a pretty darn simple protocol. Request, response, done. Client A sends request. Server B sends response. End of story. Web Services are pretty similar: Request, response, request, response, done. The first request/response is for the WSDL. The second request/response is the actual call to the Web Service.

I've coded both client sockets and server sockets in C. I wrote a TELNET client emulator once, in fact. I say emulator, not implementation, because its sole purpose was to fool a server into thinking it was talking to a human, even though there was no human. It wasn't done maliciously. It was to get 2 completely different machines to talk to each other when one of them had only a TELNET server port. Fun times.

@WebManWalking, I appreciate the explanation. But, there is one thing you said that I don't follow - the whole port 80 / port 53 thing. When you go to create a socket (at least in the Java docs), it seems you can pick an arbitrary port to connect to. But, should I be trying to connect to port 53 for some reason? On the flip-side to that, if I create a Node.js app (like in this example), I don't think that I can easily bind to ports lower than 1024 (if memory serves me correctly). I guess I'm just confused about the port stuff you mentioned.
Also, if you're in a teaching mood :) not sure if this is related, but on podcasts, I often hear people refer to something called "tmux" and "multiplex". It always seems to be in the context of streams that allow two-way communication, much like the duplexing you mentioned. Maybe totally unrelated, but I'm eager to learn stuff :D

@Ben, By role, there are 2 kinds of sockets, client sockets and server sockets. Using the telephone analogy, clients initiate the phone call and servers sit by a port and wait for someone to call them on that port.

On the C level, servers call "listen", specifying any old port they want to listen to. This puts them into a wait state (block) waiting for a message to arrive on that port. The conventional place for HTTP servers to wait is port 80 and DNS, port 53. You can certainly listen to another port if you like (examples, 8080 or 8001 for HTTP), but if you do that, your clients have to be told to use the non-standard port. It introduces human intervention, in other words. If you go with the conventional port, clients can connect automatically by going with the convention too.

Anyway, calling "listen" is what makes a socket a server socket. Then all it does is hang around the phone (port), blocked, just pining away and wishing some handsome young client socket would call them on that port. If that happens, they perk up instantly and become marvelous conversationalists. If no one calls, the server sockets feel sad and lonely. Not really, just idle. Anthropomorphism. But you get the idea.

Clients call "open" specifying an IP and port. They normally get the IP address from DNS, but what port to use is by convention again. Browsers, for example, default to 80, but they look for /:\d+$/ in the server name part of the URL. That's how they allow human intervention. I know you love regular expressions. Anyway, most folks don't do that most of the time. That's why convention is so important. And that's why I sent you the iana.org URL.
It's how the handsome client socket comes up with a phone number (IP and port) to find the server socket. IP gets you to the dormitory (machine), port to the dorm room (listening server socket). The low port numbers are reserved as shown in the iana.org page. Larger port numbers are a free-for-all, just pick one.

There's also something called "port reuse". In the call to "listen", server sockets can specify "allow port reuse". Extending the phone call analogy, if you open a server socket (with listen) specifying to allow port reuse, only the ringing of the phone is on the conventional port number. The OS's comm software (aka the "TCP/IP stack") detects the ring on the port's phone. In establishing the connection, it picks a big port number from the available big port numbers pool and actually establishes the connection using that big port number. This frees up the original port number, allowing it to ring again if someone else happens to call around the same time. That's important. If you don't allow port reuse, your server is single-threaded.

There are tons of options. It doesn't have to go down using listen and open. For example, there's a middleware utility called inetd that manages persistent session protocols, such as FTP and TELNET. The servers that run under inetd just receive messages on stdin and write responses to stdout. The inetd process manages all of the server socket management and servers that run under inetd "don't even know they're talking to the Internet", as the saying goes. Actually, they do, but they don't directly call any socket functions. Even so, at the foundation, everything happens as just described (with inetd doing comm calls).

See why the Event Gateway was such a big deal in ColdFusion? It allows you to listen for connections other than HTTP. You could listen to the SMS port and respond to real phone traffic (text messages), not just phone call analogies. You too can be a server socket.
P.S.: On a Unix system, such as your MacBook Pro, you need to be root to listen to a low-numbered reserved port number. That's how experienced users (sysadmins) prevent neophyte users from messing up existing services. That's why you have to turn on the MBP's Apache web server using System Preferences > Network > Sharing > turn on Web Sharing, not just double-clicking an application.

Hmmm... Looks like they're calling the File Namespace the "Local Namespace" nowadays:

@WebManWalking, When I was looking at the Java docs (and Googling for information) for the DatagramSocket stuff, the "port reuse" stuff was very confusing. I started looking into it because, in my first experiments, I only had the ColdFusion side of stuff running, not the Node.js side. As such, the ColdFusion code would connect to a port, and then hang while executing the .receive() command. The problem was, when I refreshed the page, it would yell at me that the port in question was already in use. And, I'd have to restart JRUN.

Then, I started trying to set the port-reuse flag before binding to the socket; but, it didn't seem to help. Upon refresh, I would get the same error. Eventually (not shown in this code sample), I just started adding a timeout to the socket so at least it would die after a few seconds.

Quick question: If I bind to a remote port (ie, the server port), but do NOT bind to a local port (ie, just let the client send over whichever port is available), do I have to "close" the socket connection anyway? I can't seem to find any info on that.
Also, I confess, I've never used UDP. That's what attracted me to this post on your blog, actually. I've always needed reliable connections for the tasks at hand. So the following is conjecture: Suppose you send Hello World to 9002, but for some reason, Node doesn't receive it. In ColdFusion, you don't know that. You blocked on socket.receive and you're waiting for a response, but Node DOESN'T KNOW YOU EXIST. It never got the Hello World, so it doesn't know to send back Got It. So you're waiting on a Got It that ain't comin'. In other words, you may have hit the same kind of impasse as I described earlier (blocking on a read for a message that never comes). The only difference is, it's one-sided. The Node side didn't block, because it didn't know about your attempts. That's my guess. The last time I used Java to open a client socket was around the year 2000. I had to implement a Web Services request by building the SOAP packet myself and sending it over HTTPS. This was before ColdFusion MX (6), so I had to use a Java CFX to get into Java. Also, HTTPS wasn't yet integrated into the java.net package, so I had to import an extension called JSSE (Java Secure Sockets Extension). I'm still using that code, by the way, because it hasn't needed to be modified since 2000. It's clearly due for a cfinvoke rewrite someday, but I don't task myself. It'll have to wait until my bosses give me the go-ahead to do cleanup. They only let me do that if they can't think of what else I should work on. :( Still, I'm very curious as to what you discover with UDP. I'll keep checking back here. At some point, when the kinks have been worked out, one or both of us should tell the folks on Twitter about this. P.S.: I forgot to respond to the "should I close anyway" part of your quick question. Yes. Your ColdFusion request can time out, but the Java objects you connect to don't know about cfsetting setrequesttimeout. 
I'm sure you've experienced overrunning your request timeout because you were in some cf tag that took way longer than the timeout, but CFML processing never got a chance to check whether you've timed out, because you stayed in that one tag. That can happen in Java objects too. So yes, your Java code should close all sockets you open (assuming you don't permanently hang on a read block). You don't want the Java object hanging around because it's still in use.

@WebManWalking, So far, what I've used UDP to do is to talk to HostedGraphite.com for metrics tracking. Here's my "UDP Transport" component:

Seems pretty cool so far. I love that it's non-blocking. Re: closing the socket, does that matter "always"? Or, only when I'm "listening" for an incoming message on my end. Also, I never know how "expensive" this stuff is. If I have to send a message every 10 seconds (as in the flush buffer for collecting metrics), is it expensive to 1) Create a socket, 2) Send message, 3) Close socket for every time I need to send the message? Or, should I really try to keep it open?

@Ben, Can't answer in depth right away, but servers are typically left on, listening to ports, 24x7. It's very, very typical to never take them down at all. Between messages, the server process that listens to a port is idle (because it blocked on the listen). The active process checking for incoming messages is the TCP/IP stack. So no, it's not expensive to leave a server socket listening to a port 24x7. Not expensive with CPU cycles or I/O or anything. The only significant expense occurs when a message arrives. The TCP/IP stack activates the server process by putting the message onto the read side of the server socket, unblocking the process. Then the expense depends on how efficiently you coded your server process. And again, YMMV with Node. Not an expert with Node.

@WebManWalking, Ok, cool, thanks! As always, very much appreciated.

@Ben, Okay, now I have a moment to talk about holding connections open.
You probably are familiar with "COMET", the tongue-in-cheek opposite of AJAX. In CF terminology, COMET involves doing a bunch of cfflush commands during a process that you know is going to take a long time. Usually, you end on a boundary the browser understands, such as /div or /table, so that the browser doesn't have to wait on more HTML to determine how to repaint the screen. The user keeps seeing stuff added to their screen, so they are more patient toward a long-running process. COMET is server push. AJAX is browser pull.

You could rewrite a persistent COMET connection into a pull from the browser via repeated AJAX calls. You'd probably use setInterval with $.ajax, or $.load perhaps. The problem is reestablishing context on each call. Do you have to re-login on each ajax call? Do you have to skip over previous data to get to the point where you left off? How much reestablishing of context overhead do you have to do, just to avoid a persistent connection?

On a Unix or Mac server, you're not likely to lock your computer out of doing anything else, just because you're holding open a COMET connection. My experience with Windows multitasking, however, is that it's not all that preemptive, meaning that the computer gets one-track-minded and all other processes suffer. Everything's a trade-off. There isn't a perfect answer. You have to ask yourself whether performance seems acceptable. If not, switch to a different way of doing things. My $0.02.

Pretty cool, but how do I connect to the server from browser or something client-side? Thanks,