Introduction: Intel Edison Based Heated 3D Printer Enclosure (Use an Arduino; the Edison Sucks, It Froze Every Time!) **WARNING** This Instructable involves working with 120V current. If you are not comfortable working with high voltage, a 12V hair dryer and relay MIGHT work (I don't have one to test; if someone does, please send me a link to a YouTube video of it working over 50C and I will update this). **// I have tested this to 50C and it is suitable, as in this picture, for printing PLA, but NOT ABS! I will be updating this as I add improvements to print ABS. I was worried that the cold zone on the hot end would require water cooling, but I contacted Brook and it shouldn't be a problem at 70C. So, I'm getting a better heater now. //** I suggest using an Arduino; the Edison is still too beta. It froze every single time! From testing, I have discovered that the extruder itself will need cooling. The insulation worked too well! The printer's heated bed is more than enough to heat a box this small to well over 60C. It will print small prints until the metal heats up to 50C (Brook mentioned something about this, and now I know what he meant). So, I recommend making the box bigger. Don't worry, version 2 is coming soon. Gathering parts now! 
Step 1: Gather Everything You Will Need Supplies: 1) 4 - 20'' Angle Aluminum 2) 4 - 18'' Angle Aluminum 3) 4 - 16'' Angle Aluminum 4) 2 - 18'' x 20'' Acrylic Sheet 5) 2 - 16'' x 18'' Acrylic Sheet 6) 2 - 16'' x 20'' Acrylic Sheet 7) 1/16'' Air Tube (stiff) Electronics: 1) Intel Edison 2) Grove Starter Kit 3) Hair Dryer 120V (12V might work, I don't have one to test) 4) Electrical Outlet 5) Electrical Cord Tools: 1) Drill 2) Dremel 3) Hacksaw 4) Files Step 2: Cut Aluminum to Length Cut the aluminum to length: 4 - 20'' 4 - 18'' 4 - 16'' Step 3: Drill Holes for Corner Rivets and Rivet the Top and Bottom Now we are drilling holes and riveting the top and bottom (18'' x 20''): 2 - 20'' + 2 - 18'' pieces of angle aluminum (top) 2 - 20'' + 2 - 18'' pieces of angle aluminum (bottom) Carefully drill the corners for the rivets and repeat on all the beams. Line up the holes of two of the beams (one 18'' and one 20'') to form a 90 degree angle (like in the photo above). Repeat, then connect the two halves to form a square. This will form the top. Now repeat the entire process for the bottom. They should look like the last photo when finished. Step 4: Add Sides 4 - 16'' Angle Aluminum Add the sides by drilling a hole and adding a rivet at the top and bottom of each side. Step 5: Measure and Cut the Acrylic I used the frame as a template to cut the side panels. I used a Dremel with a cutting wheel to cut the sides, and a flat file to trim a few burrs. You will need: 2 - 18'' x 20'' 2 - 18'' x 16'' 1 - 20'' x 16'' 1 - 20'' x 16'' (w/ 14'' x 10'' cut from one corner for the door) Step 6: Adding the Bottom I used an 18'' x 20'' piece of sheet metal (mainly for weight and stability). It isn't necessary; an acrylic sheet would work, but it's really light. Step 7: Add 3 Side Panels and Top The side panels are held on with rivets. There are some spacers added to close the gaps between the panels and the frame. It wouldn't hurt to buy some extra aluminum to put into the gaps. 
Let's add the 3 sides (2 - 16'' x 18'' & 1 - 16'' x 20''); all 3 are acrylic sheets. I drilled out relief holes for the rivets holding the frame together, and Dremeled the hole for the rivet that holds the acrylic on. The drill went through the acrylic and aluminum but cracked it one time (it never cracked with the Dremel). I added between 4 and 6 rivets to hold on each acrylic panel. Now, add a 20'' x 18'' acrylic panel for the top. Drill reliefs for the frame rivets and drill through the acrylic panel and aluminum frame for the rivets that hold on the acrylic top. Step 8: Add the Door (and Support Frame) Let's add the front panel, door, and support frame for the door. The front is a 16'' x 20'' acrylic sheet with a 13'' x 10'' corner cut out. Cut two more pieces of aluminum, 16'' and 13'', for the door frame, and a 10'' x 13'' piece of acrylic for the door. Attach the 16'' piece in the middle of the vertical cut for the door, and add the 13'' piece about 10'' up the 16'' piece (along the horizontal cut for the door). Everything was attached with rivets. The holes were made with a Dremel and drill. I positioned the hole for the door in the upper left corner of the front panel; about 5'' down from the top of the left corner, add the hinge. I used a small acrylic scrap for a spacer between the hinge and frame. Now attach the 10'' x 13'' door. I used a 2'' piece of flat aluminum for the hinge's rivets to brace against so I didn't crack the acrylic. I added a 3'' piece of angle aluminum at the top right corner for a spacer and added a hole for a pin to hold the door shut (I just used an unused rivet). Step 9: Mount the Electronics We will need: 1 - Intel Edison 1 - Grove Shield 1 - Grove Relay 1 - Grove Temperature Module 1 - Printrboard (removed from Printrbot) 1 - Electrical Outlet 1 - Electrical Cord 1 - Hair Dryer 3 - Zip Ties This part is really up to you. I used some scrap aluminum and acrylic to make a frame and mounted the boards to the acrylic pieces. 
Each board is mounted on its own small piece of acrylic a little bigger than the board. The hair dryer is mounted to another few scraps of aluminum and a few zip ties; the hole for it was made with a Dremel. Hook up a small piece of wire between the relay and one screw terminal on the outlet. Now connect the power wire from the electrical cord to the other screw terminal on the relay. Hook up the ground wire to the other screw terminal of the outlet. Hook up the Grove Temperature Module to A0 on the Grove Shield. Step 10: Filament Tube Just Dremel a small hole that is barely big enough to fit the 3'' length of hose. Seal it with some Sugru or hot glue if it is too big. Step 11: Intel Edison Arduino Code

#include <Wire.h>   // the original listing lost this header name; the Grove RGB LCD examples use Wire.h
#include "rgb_lcd.h"

// setting vars
const int pinTemp = A0;    // pin of temperature sensor
float temperature;
int B = 3975;              // B value of the thermistor
float resistance;
rgb_lcd lcd;
const int relaypin = 4;    // the relay is attached to D4

// the setup part
void setup()
{
  pinMode(relaypin, OUTPUT);  // sets relay pin to output
  lcd.begin(16, 2);           // lets the main loop know that the LCD is 16 by 2
}

// the main loop of code
void loop()
{
  int val = analogRead(pinTemp);                   // get analog value
  resistance = (float)(1023 - val) * 10000 / val;  // get resistance
  temperature = 1 / (log(resistance / 10000) / B + 1 / 298.15) - 273.15;  // calc temperature

  // if temperature is over 25 degrees then enable relay
  if (temperature > 25)
  {
    digitalWrite(relaypin, HIGH);  // turns on the relay if temp above 25
  }
  if (temperature > 50)  // if it is over 50 then keep the relay off
  {
    digitalWrite(relaypin, LOW);   // turn the relay off
  }
}
// END OF CODE

The code was entered with the Arduino IDE. The Grove RGB LCD library was needed. Upload and enjoy your new heated build chamber! Step 12: Adding Extras I added a camera mount I made for a recycled laptop camera that I converted to USB. It also happens to hold the cell phone I use for timelapse. I added a power switch on my desk (close to my seat). 
Step 13: Insulating Insulating helps hit 70C. Make sure you leave enough room for the full range of the printer's movements. Comments: What happens if the room temp is 20 degrees at startup? You have to adjust the one line of code; it really depends on what temp sensor you use and how cold it is in the box. I can't really guess what it will be for others, so I just uploaded my code as it was when it was working properly at room temperature.
http://www.instructables.com/id/Intel-edison-based-heated-3d-printer-enclosure/
Red Hat Bugzilla – Bug 37482 gcc -Wconversion gives weird conversion warning Last modified: 2007-04-18 12:32:50 EDT From Bugzilla Helper: User-Agent: Mozilla/4.77 [en] (X11; U; Linux 2.2.16-22smp i686) I think the following program should compile with no warnings: float foo(float x) { return (x); } int main(void) { float y = 0.0f; y = foo(y); return 0; } But: gcc -g -Wall -Wconversion foo.c foo.c: In function `main': foo.c:10: warning: passing arg 1 of `foo' as `float' rather than `double' due to prototype Reproducible: Always Steps to Reproduce: Just compile the above program with -Wconversion. Actual Results: gcc gives that weird warning. Expected Results: Since there is no double in the program, I would expect the program to compile without warnings. That warning is correct: `-Wconversion' Warn if a prototype causes a type conversion that is different from what would happen to the same argument in the absence of a prototype. ... If you had no prototype and foo was defined elsewhere, main would pass it a double not a float (that's what default argument promotion rules say), so you get the warning.
https://bugzilla.redhat.com/show_bug.cgi?id=37482
In This Chapter Reading with the extraction operators Dealing with the end of the file Reading various data types Reading data that is formatted with text Well, isn't this nice. You have a file that you wrote to, but you need to read from it! After all, what good is a file if it's just sitting on your hard drive collecting dust? In this chapter, we show you how you can read from a file. Reading a file is tricky because you can run into some formatting issues. For example, you may have a line of text in a file with a sequence of 50 digits. Do those 50 digits correspond to 50 one-digit numbers, or maybe 25 two-digit numbers, or some other combination? If you created the file, you probably know; but the fun part is getting your C++ program to properly read from it. The file might contain 25 two-digit numbers, in which case you make sure that the C++ code doesn't just try to read one enormous 50-digit number. In this chapter, we give you all the dirt on getting the dust off your hard drive and the file into memory. Have at it! When you read from a file, you can use the extraction operator, >>. This operator is very easy to use, provided you recognize that the phrase, "Look mom, no caveats!" just doesn't apply to the extraction operator. Suppose you have a file called Numbers.txt with the following text on one line: 100 50 30 25 You can easily read in these numbers with the following code. First, make sure you #include <fstream> ...
https://www.safaribooksonline.com/library/view/c-all-in-one-for/9780470317358/ch27.html
How to trigger an action when a TextField is edited. I don't believe so; I think you have to code in the delegates yourself. It's not too bad though. Something like this should work (NB: not tested). import ui class MyTextFieldDelegate(object): def textfield_did_change(self, textfield): print(textfield.text) textfield = ui.TextField() textfield.delegate = MyTextFieldDelegate() The textfield does have an action attribute, but it is not settable in the ui editor (probably an oversight). Often people set this in the custom class init code. But you can also create a custom attribute in the ui editor. If you do something like: {'action': myfun} then be sure to define myfun before you load the view: def myfun(sender): print(sender.text) v = ui.load_view(your_pyui_name) Then the textfield is created, and the action is set to myfun. You could also simply do something like: v = ui.load_view(your_pyui_name) v['textfield1'].action = myfun Often people create a custom class for the top-level view which handles the various delegate methods which are not easily settable via the ui editor. Thanks a lot both. All the different solutions are working.
https://forum.omz-software.com/topic/3175/how-to-trigger-an-action-when-a-textfield-is-edited/4
Overview A web application, whether small or large, will have a lot of app secrets such as SECRET_KEY, API keys for the different services used, database credentials, etc. The convenient way for anyone to use these is to hardcode them in the source code. It works, but it is also the most insecure way. We often push the code to repositories for version management, or we share it with others, resulting in exposing the app secrets. The easiest and most secure way is to use these secrets as environment variables and import them directly into the app. Python Decouple manages this for you. It also helps in managing application configurations and secrets based on the development environment (DEVELOPMENT/PRODUCTION/STAGING). Decouple was originally designed for Django, but currently exists as an independent tool for separating settings from code. Why use this? A web framework's settings store many different kinds of parameters: - Locale and i18n; - Middlewares and Installed Apps; - Resource handles to the database, Memcached, and other backing services; - Credentials to external services such as Amazon S3 or Twitter; - Per-deploy values such as the canonical hostname for the instance. Why not just use environment variables? Envvars work, but since the environment only hands you strings, typed settings are awkward. Decouple provides a solution that doesn't look like a workaround: config('DEBUG', cast=bool). Usage Install: pip install python-decouple Step 1: Create a .env file at the root level of your project project --- project ------ settings.py ------ urls.py ------ wsgi.py --- .env Now, set your own secret variables inside the .env file like this: DEBUG=True SECRET_KEY=ARANDOMSECRETKEY DB_HOST=HOSTNAME DB_PASSWORD=PASSWORD Now, in your settings.py, use the above variables like this: from decouple import config # SECURITY WARNING: keep the secret key used in production secret! SECRET_KEY = config('SECRET_KEY') # SECURITY WARNING: don't run with debug turned on in production! 
DEBUG = config('DEBUG', cast=bool)

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': config('DB_NAME'),
        'USER': config('DB_USER'),
        'PASSWORD': config('DB_PASSWORD'),
        'HOST': config('DB_HOST'),
        'PORT': ''
    }
}

That's it for setting up a basic configuration; if you want to know more, have a look at the official docs. NOTE: - It's a good idea to use version control like git, even for small projects. - Add the .env file to .gitignore; every user should have their own separate .env file, which should not be pushed along with your repository.
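If you want a feel for what config('DEBUG', cast=bool) is doing, here is a minimal stdlib-only sketch. The helper and its boolean-parsing rules are my own illustration, not decouple's actual implementation (which also reads .env files and settings.ini):

```python
import os

def config(key, default=None, cast=str):
    """Decouple-style lookup sketch: environment first, then default;
    the cast turns the raw string into a real Python type."""
    raw = os.environ.get(key)
    if raw is None:
        if default is None:
            raise KeyError(f"{key} is not set and no default was given")
        return default
    if cast is bool:
        # Strings like "True"/"false"/"1" need explicit parsing:
        # bool("False") would be True, which is never what you want.
        return raw.strip().lower() in ("1", "true", "yes", "on")
    return cast(raw)

os.environ["DEBUG"] = "True"
print(config("DEBUG", cast=bool))       # True
print(config("DB_PORT", default=5432))  # 5432
```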
https://raturi.in/blog/manage-django-environments-secrets-using-python-decouple/
Please add comments or ideas on what to put on the migration page here. In addition to code changes, info (or a link to info) on how to change compiler config/files should be provided. tango.core.Math has moved to tango.math.Math. Doesn't tell me anything I need to know to replicate std.stream functionality in Tango. If modules are simply renamed, could it at least provide the path to the renamed Tango file? Doesn't mention anything about Tango's signals. Apparently it has moved to tango.core.Memory.gc. Refer to, it's a good introduction. import std.stdio; writefln("Lucky number is %s.", 42); becomes: import tango.io.Stdout; Stdout.format("Lucky number is {0}.\n", 42) For low-level text output you can also use: import tango.io.Console; Cout("Lucky number is 42.") Both may be merged in the near future. Note that this is not likely. Tango ed.
http://www.dsource.org/projects/tango/wiki/MigrationComments
Delegates in C# with Examples In this article, I am going to discuss Delegates in C# with Examples. Please read our previous article where we discussed Exception Handling in detail. As part of this article, we are going to discuss the following important pointers in detail. - What are delegates in C#? - In how many ways can we call a method in C#? - How to invoke methods using delegates in C#? - Examples of using delegates. - Rules of using delegates in C#. - What are the types of delegates? What are delegates in C#? In simple words, we can say that a delegate in C# is a type-safe function pointer. It means it holds the reference of a method or function and then calls that method for execution. In how many ways can we call a method in C#? In C#, we can call a method that is defined in a class in two ways. They are as follows: - We can call the method using an object of the class if it is a non-static method, or through the class name if it is a static method. - We can also call a method in C# by using delegates. Calling a C# method through a delegate adds a small level of indirection compared to a direct call; its real value is that the method to execute can be chosen at runtime. Invoking a Method using an Object and the Class Name: In the below example, we are invoking the method by using the object of the class for the non-static method and using the class name for the static method. 
namespace DelegateDemo
{
    public class Program
    {
        // Non-static method
        public void Add(int x, int y)
        {
            Console.WriteLine(@"The Sum of {0} and {1} is {2}", x, y, (x + y));
        }

        // Static method
        public static string Greetings(string name)
        {
            return "Hello @" + name;
        }

        static void Main(string[] args)
        {
            Program obj = new Program();
            // Calling the non-static method through an object
            obj.Add(100, 100);
            // Calling the static method with the class name
            string GreetingsMessage = Program.Greetings("Pranaya");
            Console.WriteLine(GreetingsMessage);
            Console.ReadKey();
        }
    }
}

Output: In the above example, we are calling the methods using the object and the class name. Now let's see how to call a method using delegates in C#. How to invoke methods using Delegates in C#? If you want to invoke or call a method using delegates, then you need to follow three simple steps: - Defining a delegate - Instantiating a delegate - Invoking a delegate Step 1: Define a Delegate in C# The syntax to declare a delegate in C# is very similar to a function declaration, with the added keyword delegate. The syntax for defining a delegate: <Access Modifier> delegate <return type> <delegate name> (arguments list); Example: If you have a method like below:

public void Add(int x, int y)
{
    Console.WriteLine(@"The Sum of {0} and {1} is {2}", x, y, (x + y));
}

Then you have to define a delegate like below:

public delegate void AddDelegate(int a, int b);

Example: If you have a method like below:

public static string Greetings(string name)
{
    return "Hello @" + name;
}

Then you need to define a delegate like below:

public delegate string GreetingsDelegate(string name);

Note: The point that you need to remember while working with C# delegates is that the signature of the delegate and the method it points to should be the same. 
So, when you create a delegate, the access modifier, return type, number of arguments, and their data types must be the same as those of the function that the delegate wants to refer to. You can define delegates either within a class or, just like other types, directly under a namespace. Step 2: Instantiating the Delegate in C# Once we declare the delegate, we need to consume it. To consume the delegate, first, we need to create an object of the delegate, and while creating the object, the method you want to execute using the delegate should be passed as a parameter. The syntax to create an instance of a delegate is given below:

DelegateName ObjectName = new DelegateName(target function name);

Example:

AddDelegate ad = new AddDelegate(obj.Add);
GreetingsDelegate gd = new GreetingsDelegate(Program.Greetings);

While binding the method to a delegate, if the method is non-static, refer to it through an object of the class; if it is static, refer to it with the name of the class. Step 3: Invoking the Delegate in C# Once we create an instance of a delegate, we need to call the delegate by providing the required values for the parameters so that the method bound to the delegate gets executed internally. For example:

ad(100, 50);
ad.Invoke(200, 300);
string GreetingsMessage = gd("Priyanka");
string GreetingsMessage = gd.Invoke("Anurag");

At this point, the function that is referred to by the delegate will be called for execution. Complete Example for a Better Understanding: Please have a look at the below code. In the below example, first, we declare two delegates. Then within the Program class, we define two methods whose signatures are the same as the delegate signatures. Then we create instances of the delegates, and while creating each instance we provide the function name as a parameter to the delegate constructor. 
It is this function that will execute when we invoke the delegate. Once we create the instance, we call the delegate by providing the required parameters. We can also invoke a delegate by using the Invoke method. The following code is self-explained; please go through the comment lines.

namespace DelegateDemo
{
    // Defining delegates
    // Note: The access specifier, return type, and the number, order, and type of parameters
    // of a delegate should be the same as those of the function it refers to.
    public delegate void AddDelegate(int a, int b);
    public delegate string GreetingsDelegate(string name);

    public class Program
    {
        // Defining methods
        // Non-static method
        public void Add(int x, int y)
        {
            Console.WriteLine(@"The Sum of {0} and {1} is {2}", x, y, (x + y));
        }

        // Static method
        public static string Greetings(string name)
        {
            return "Hello @" + name;
        }

        static void Main(string[] args)
        {
            Program obj = new Program();
            // Instantiating the delegates by passing the target function name.
            // The Add method is non-static, so here we bind it through an object.
            AddDelegate ad = new AddDelegate(obj.Add);
            // The Greetings method is static, so here we bind it through the class name.
            GreetingsDelegate gd = new GreetingsDelegate(Program.Greetings);

            // Invoking the delegates
            ad(100, 50);
            string GreetingsMessage = gd("Pranaya");

            // We can also use the Invoke method to execute delegates:
            // ad.Invoke(100, 50);
            // string GreetingsMessage = gd.Invoke("Pranaya");

            Console.WriteLine(GreetingsMessage);
            Console.ReadKey();
        }
    }
}

When you run the application, it will give you the following output. Rules of using Delegates in C#: - A delegate in C# is a user-defined type, and hence before invoking a method using a delegate, we must define that delegate first. - The signature of the delegate must match the signature of the method the delegate points to, otherwise we will get a compiler error. This is the reason why delegates are called type-safe function pointers. - A delegate is similar to a class. 
This means we can create an instance of a delegate, and when we do so, we need to pass the method name as a parameter to the delegate constructor; it is this function the delegate will point to. - Tip to remember delegate syntax: a delegate declaration looks very much like a method signature with the delegate keyword. What are the Types of Delegates in C#? Delegates in C# are classified into two types: - Single cast delegate - Multicast delegate If a delegate is used for invoking a single method, then it is called a single cast delegate or unicast delegate. In other words, delegates that represent only a single function are known as single cast delegates. If a delegate is used for invoking multiple methods, then it is known as a multicast delegate; delegates that represent more than one function are called multicast delegates. The example that we discussed in this article is of the single cast delegate type because the delegate points to a single function. In the next article, I am going to discuss the Multicast Delegate in C# with examples. Here, in this article, I tried to explain Delegates in C# with examples. I hope you understood the need for and use of delegates in C#. 5 thoughts on "Delegates in C#" Thanks for the great explanations and examples. What about the "events" type? It's so clear. Thank you so much :). Thank you. Thank you, well explained. Wonderful tutorial.
https://dotnettutorials.net/lesson/delegates-csharp/
outstm #include <pstreams.h> class outstm: iobase { void putf(const char* fmt, ...); void put(char); void put(string); void puteol(); void putline(string); int write(const char* buf, int count); void flush(); bool get/set_flusheol(bool); } This class implements the basic functionality of output streams. Outstm is derived from iobase and inherits all its public methods and properties. End-of-line sequences are not translated when you send data through the output methods. To write an end-of-line sequence appropriate to the given operating environment use puteol() instead. void outstm::putf(const char* fmt, ...) is a printf-style output method. PTypes supports a subset of format specifiers common to all platforms: <blank>, '#', '+' and '-' formatting flags, 'L', 'h', 'l' and 'll' format modifiers, and the following standard format specifiers: cdiouxXeEfgGps. In addition, PTypes supports a format specifier a for IP addresses (ipaddress type) and also t and T for timestamps (datetime type). Note that some compilers require to explicitly cast ipaddress arguments to long type, and also string arguments to const char* (or pconst). void outstm::put(char c) writes the character c to the stream. void outstm::put(string str) writes the string str to the stream. void outstm::puteol() writes an end-of-line sequence to the stream. The actual sequence depends on the platform the library was compiled on. May flush data if the property flusheol is set to true. void outstm::putline(string str) writes the string str to the stream followed by an end-of-line sequence, as in call to puteol(). int outstm::write(const char* buf, int count) writes count bytes from the buffer buf to the stream. void outstm::flush() writes the remaining data in the buffer to the media, if any. This method is called automatically when the stream is being closed. 
bool outstm::get/set_flusheol(bool) -- set this property to true if you want each line to be written to the media or communication stream immediately. Default is false. See also: iobase, instm, Error handling
http://www.melikyan.com/ptypes/doc/streams.outstm.html
The strxfrm() function transforms a string such that comparing two transformed strings using the strcmp() function produces an identical result to comparing the original strings using the strcoll() function in the current C locale. For example, suppose x and y are two strings, and a and b are the strings formed by transforming x and y respectively using the strxfrm() function. Then a call to strcmp(a,b) is the same as calling strcoll(x,y). strxfrm() prototype size_t strxfrm(char* dest, const char* src, size_t count); The strxfrm() function converts the first count characters of the string pointed to by src to an implementation-defined form, and the result is stored in the memory location pointed to by dest. The behavior of this function is undefined if: - the size of dest is less than the required size. - dest and src overlap. It is defined in the <cstring> header file. strxfrm() Parameters - dest: pointer to the array where the transformed string is stored. - src: pointer to the null-terminated string to be transformed. - count: maximum number of characters to convert. strxfrm() Return value The strxfrm() function returns the number of characters transformed, excluding the terminating null character '\0'. Example: How strxfrm() function works #include <iostream> #include <cstring> #include <clocale> using namespace std; int main() { setlocale(LC_COLLATE, "cs_CZ.UTF-8"); const char* s1 = "hrnec"; const char* s2 = "chrt"; char t1[20], t2[20]; cout << "strcoll returned " << strcoll(s1,s2) << endl; cout << "Before transformation, " << "strcmp returned " << strcmp(s1,s2) << endl; strxfrm(t1,s1,10); strxfrm(t2,s2,10); cout << "After transformation, " << "strcmp returned " << strcmp(t1,t2) << endl; return 0; } When you run the program, the output will be: strcoll returned -1 Before transformation, strcmp returned 1 After transformation, strcmp returned -1
https://www.programiz.com/cpp-programming/library-function/cstring/strxfrm
SchedYield(), SchedYield_r() Yield to other threads Synopsis: #include <sys/neutrino.h> int SchedYield( void ); int SchedYield_r( void ); Since: BlackBerry 10.0.0 Library: libc Use the -l c option to qcc to link against this library. This library is usually included automatically. Description: Instead of using these kernel calls directly, consider calling sched_yield(). Blocking states: These calls don't block. However, if other threads are ready at the same priority, the calling thread is placed at the end of the ready queue for this priority. Returns: The only difference between these functions is the way they indicate errors. Last modified: 2014-06-24
http://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/s/schedyield.html
Google Maps is supported in an Angular project using the Angular Google Maps (AGM) module. Adding/embedding Google Maps in an Angular project becomes very easy by using this module. Here we will explain a step-by-step tutorial to make it even easier, with a new sample Angular project using the latest CLI, currently version 7.3.8. Let's get mapped 😛 Check: Angular Google Maps with Radius Circle and Markers Create a new Angular project. You can update the Angular CLI by running the following npm command. Install Angular Google Maps Run the following npm command to install AGM in the project. Next, open app.module.ts, then import AgmCoreModule and add it to the imports array. Note: Replace apiKey with your API key. Check here on how to get a Google API key. Add the AGM template in the App component to show a basic Google Map with the current location. To get the current coordinates (latitude & longitude), we will use the geolocation service on the navigator. Here we will call the setCurrentLocation() method to set AGM with latitude & longitude. The [zoom] property can be used to zoom to any level. Add the map height in app.component.css. Add a Location/Places Search Bar Next, we will install the Google Maps types library. Then open the tsconfig.app.json file already available at the root and add "googlemaps" to the types array as shown below. Make sure you have enabled three types of API services on your key: Maps JavaScript API (show the map), Places API (places search results), and Geocoding API (convert latitude & longitude to an address). In app.component.html we will now add a search bar with address and latitude & longitude information. The marker is draggable so that the user can drag it to the desired location; the [markerDraggable] option is set to true. The (dragEnd) event fires to get the address of the current drop position. In app.component.ts we will add the Google Maps Autocomplete method to return the address data set. 
The Geocoder service returns an address based on the provided latitude & longitude in the getAddress() method. To make it look beautiful, we have added bootstrap.css in index.html. That's it! Run the application with $ ng serve --open to see a Google Map with a search bar to search places. The marker can be dragged anywhere to get the location address with coordinates. Comments: my search bar is not functioning, help Hi Farhan, if you are getting an error in the response from the Google API, then make sure your API key is allowed the three types of authentication for Places API, Geocoder API, and Google JS Maps API in the Google Developer console, as mentioned in the article above. i solved it by adding (language: 'en', libraries: ['geometry', 'places']) in app.module: @NgModule({ declarations: [...], imports: [..., AgmCoreModule.forRoot({ clientId: '', //apiVersion: 'xxx', // optional //channel: 'yyy', // optional //apiKey: 'zzz', // optional language: 'en', libraries: ['geometry', 'places'] }) ], providers: [...], bootstrap: [...] }) i couldn't resolve it the way you did. Search also did not work for me. The search bar doesn't work and I get an error in ts: cannot find name google. Hi Mohamed, there is some change in the code; I have updated the code in file app.module.ts. Please check the updated post. Thanks for pointing it out 🙂 Hi Jolly, What a beautiful tutorial! However, I'm getting the same error "TypeError: Cannot read property 'maps' of undefined". Please tell me where I can find the updated code in the file app.module.ts. @Jolly.exe, you forgot to install "@types/googlemaps": "^3.30.4" in dev-dependencies; instead you added the types in the tsconfig file. How do I make the search bar give suggestions on places rather than addresses? Hey, I am getting the error Uncaught (in promise): TypeError: Cannot read property 'nativeElement' of undefined. 
Help please, sir: for the Uber animation example, show how to animate a marker on a Google Map with the current location in Angular Google Maps.

I have an error here: @ViewChild('search') public searchElementRef: ElementRef; and it tells me this when running ng serve --open: ERROR in src/app/home/map/map.component.ts(18,4): error TS2554: Expected 2 arguments, but got 1.

Try using @ViewChild('search', { static: false })

Um, no, many…

how to calculate distance between two latitude and longitude points in angular 6

Hi Vinoth, you need to use the Haversine formula, which takes two sets of latitude and longitude coordinates and gives the great-circle distance between them. Try this.

Will this work in iOS?

Hi Jolly! I'm using this in Ionic, and I get an error - cannot read property 'Autocomplete' of undefined.

Declare geoCoder in the @Component: geoCoder: any;

The google variable is not defined anywhere, that's the problem - why?

Thanks, working great with Ionic 4.

Hello, markerDraggable is not working. Please help. Code:

ERROR TypeError: Cannot read property 'nativeElement' of undefined at BrowseSuppliersComponent.map

Great tutorial Jolly! Currently the address only changes based on the marker position on the map. How do you make the address change based on what's been typed in the search box? Any help much appreciated.

.component.ts(32,27): error TS2304: Cannot find name 'google'.
.component.ts(34,30): error TS2304: Cannot find name 'google'.
.component.ts(40,22): error TS2503: Cannot find namespace 'google'.
Angular CLI: 7.3.9, Node: 8.12.0, OS: win32 x64, Angular: 7.0.3

npm install @types/google-maps --save

import { google } from "google-maps";
declare var google: google;

@Component({
  selector: 'app-store-add',
  templateUrl: './store-add.component.html'
})

Hi. Great tutorial. Interestingly enough, everything is working including dragging the marker, but my search does not provide a drop-down list of suggestions.

Actually I've noticed it's not working since I am placing it on a modal. How can one make it work in a modal?
So I discovered the problem. Google autocomplete has a class .pac-container that has a z-index of 1000, whereas the Bootstrap modal has a z-index of 1050. To sort this out, just set the following in your style.css:

.pac-container {
  z-index: 1052 !important; /* must be higher than the z-index of the Bootstrap modal */
}

or you could watch for the shown.bs.modal event and set the z-index of .pac-container as follows:

$('#your-modal').on('shown.bs.modal', () => {
  $('.pac-container').css('z-index', 1055);
});

ERROR Error: "Uncaught (in promise): TypeError: _this.searchElementRef is undefined

error TS2304: Cannot find name 'google'

Not showing all locations.

The search bar is also not working, can anyone help me?

Hi! Works pretty well. Is there any way we can force the suggestions to be relatively close to the location?

Why do I get this problem: ./node_modules/@agm/core/fesm5/agm-core.js 4538:120-128 "export 'ɵɵinject' was not found in '@angular/core'?

It gives an error at ViewChild saying two parameters are required but one was given.

Use { static: false } as the second parameter in @ViewChild.
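A commenter above asked how to calculate the distance between two latitude/longitude points, and the reply points at the Haversine formula. A plain TypeScript sketch of it (the function name haversineKm is my own; the formula itself is standard and has no AGM dependency):

```typescript
// Haversine great-circle distance between two lat/lng points, in km.
// Standalone sketch; haversineKm is a name chosen for this example.
function toRad(deg: number): number {
  return (deg * Math.PI) / 180;
}

function haversineKm(lat1: number, lng1: number, lat2: number, lng2: number): number {
  const R = 6371; // mean Earth radius in kilometres
  const dLat = toRad(lat2 - lat1);
  const dLng = toRad(lng2 - lng1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLng / 2) ** 2;
  return 2 * R * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
}

// One degree of longitude along the equator is roughly 111.2 km:
console.log(haversineKm(0, 0, 0, 1));
```

If you would rather not hand-roll it, the Maps JavaScript API geometry library (loaded via libraries: ['geometry']) provides google.maps.geometry.spherical.computeDistanceBetween.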
I have an entity bean (CMP) I need to design which must have a primary key field generated automatically, independent of the data stored in the entity. I (at least for now) will be using JBoss. Is there a standard way in Java to create a unique string, like the GUID stuff in Windows? Am I thinking about this wrong? Should I be doing something else instead? I am moving a PHP web application to J2EE, and will be losing the auto-incrementing column DB feature if I make it DB-independent. Thanks for any help, -Pete

Standard UID (GUID?) Generation for entity bean? (3 messages)
- Posted by: Peter Daly
- Posted on: March 04 2002 13:05 EST

Threaded Messages (3)
- Standard UID (GUID?) Generation for entity bean? by Neeraj Nargund on March 04 2002 14:25 EST
- Standard UID (GUID?) Generation for entity bean? by Peter Daly on March 04 2002 14:38 EST
- Standard UID (GUID?) Generation for entity bean? by Neeraj Nargund on March 05 2002 02:14 EST

Standard UID (GUID?) Generation for entity bean? [ Go to top ]

<I have an entity bean (CMP) I need to design which must have a primary key field generated automatically, which is independent of the data stored in the entity>

- Posted by: Neeraj Nargund
- Posted on: March 04 2002 14:25 EST
- in response to Peter Daly

Does this mean that at the time of deployment you do not know the primary key field? Or just the data (which is OK)? In that case you can implement your own PK generator which does this.

Standard UID (GUID?) Generation for entity bean? [ Go to top ]

I just need a generic PK field which does not relate to the data being stored. It is for a "Company" entity in our user management/tracking piece. It should be possible to have identical companies, identical in all respects except the PK, which is used to track, and relate to other entities. I just need a generic unique PK.
- Posted by: Peter Daly
- Posted on: March 04 2002 14:38 EST
- in response to Neeraj Nargund

It sounds like you are suggesting I write my own PK generator. What I was wondering was if something like that was already implemented as part of the Java libraries, or JBoss. Yes, I could write my own, but I'd rather use a "standard" one if such a thing exists.

-Pete

Standard UID (GUID?) Generation for entity bean? [ Go to top ]

In that case I shall refer you to:

- Posted by: Neeraj Nargund
- Posted on: March 05 2002 14:14 EST
- in response to Peter Daly

<Entity Bean Primary Key Generator, posted by Floyd Marinescu on July 13, 2000 on TSS>

/**
 * This Entity Bean generates unique id's for any client that needs one.
 *
 * @author Floyd Marinescu
 */
public class UniqueIDGeneratorBean implements EntityBean {

    protected EntityContext ctx;

    // Environment properties the bean was deployed with
    public Properties env;

    private String idName;
    private long nextID;

    /**
     * Create a PK generator.
     *
     * @param idName the name for this unique id tracker
     * @return The primary key (just a string) for this index
     */
    public UniqueIDGeneratorPK ejbCreate(UniqueIDGeneratorPK key)
            throws CreateException, RemoteException {
        this.idName = key.idName;
        this.nextID = 0;
        ...
        // Build SQL query
        pstmt = conn.prepareStatement(
            "insert into uniqueIDs (idName, nextID) values (?, ?)");
        pstmt.setString(1, this.idName);
        pstmt.setLong(2, this.nextID);
        // insert row in database
        pstmt.executeUpdate();
        return key;
        ...
    }

    public UniqueIDGeneratorPK ejbFindByPrimaryKey(UniqueIDGeneratorPK key)
            throws FinderException, ObjectNotFoundException, RemoteException {
        ...
        // Find the Entity in the DB
        pstmt = conn.prepareStatement(
            "select idName from uniqueIDs where idName = ?");
        pstmt.setString(1, key.idName);
        rs = pstmt.executeQuery();
        // iterate to the first row in the resultset
        if (!rs.next()) { // if generator does not exist in database
            throw new ObjectNotFoundException(
                "ID Generator " + key + " does not exist in database");
        }
        // No errors occurred, so return the Primary Key
        return key;
        ...
    }

    public void ejbLoad() throws RemoteException {
        // Query the Entity Context to get the current
        // Primary Key, so we know which instance to load.
        this.idName = ((UniqueIDGeneratorPK) ctx.getPrimaryKey()).idName;
        ...
        pstmt = conn.prepareStatement(
            "select nextID from uniqueIDs where idName = ?");
        pstmt.setString(1, this.idName);
        rs = pstmt.executeQuery();
        // iterate to the first row in the results
        rs.next();
        // populate with data from database
        this.nextID = rs.getLong("nextID");
        ...
    }

    public void ejbRemove() throws RemoteException {
        // Query the Entity Context to get the current
        // Primary Key, so we know which instance to remove.
        String pk = ((UniqueIDGeneratorPK) ctx.getPrimaryKey()).idName;
        ...
        pstmt = conn.prepareStatement(
            "delete from uniqueIDs where idName = ?");
        pstmt.setString(1, pk);
        // Throw a system-level exception if something bad happened.
        if (pstmt.executeUpdate() == 0) {
            // log errors here
            throw new RemoteException("UniqueIdGenerator " + pk
                + " failed to be removed from the database");
        }
        ...
    }

    public void ejbStore() throws RemoteException {
        ...
        // Store account in DB
        pstmt = conn.prepareStatement(
            "update uniqueIDs set nextID = ? where idName = ?");
        pstmt.setLong(1, this.nextID);
        pstmt.setString(2, this.idName);
        pstmt.executeUpdate();
        ...
    }

    public long generateUniqueId() throws EJBException {
        this.nextID++;
        return this.nextID;
    }
}

The bean is very versatile, and can be used in many different ways. The way I use it is that each entity bean "class" in my application uses a different instance of the UniqueIDGeneratorBean.
In the create method of an entity bean class (for example, MessageBean), I would execute:

UniqueIDGeneratorHome.findByPrimaryKey("MessageBean");

which will return an instance of the UniqueIDGeneratorBean that is maintaining the count just for my MessageBean class. If you want primary keys to be unique across all beans in your application, then simply use one instance of your UniqueIDGenerator across your entire application.
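For readers outside an EJB container, the counter idea behind generateUniqueId() can be sketched in plain Java. This in-memory version is only illustrative: the class and method names are mine, and unlike the entity bean above it does not persist nextID between JVM restarts. (Note also that since J2SE 5.0 - after this thread - java.util.UUID.randomUUID() answers the original GUID question directly.)

```java
// Container-free sketch of a named-counter id generator.
// Names are illustrative; nextID is not persisted across restarts.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

public class UniqueIdGenerator {

    // one counter per logical id name ("MessageBean", "CompanyBean", ...)
    private static final Map<String, AtomicLong> COUNTERS = new ConcurrentHashMap<>();

    /** Returns the next id for the given name, starting at 1. */
    public static long next(String idName) {
        return COUNTERS.computeIfAbsent(idName, k -> new AtomicLong(0)).incrementAndGet();
    }

    public static void main(String[] args) {
        System.out.println(next("MessageBean")); // 1
        System.out.println(next("MessageBean")); // 2
        System.out.println(next("CompanyBean")); // 1
    }
}
```

ConcurrentHashMap plus AtomicLong gives the same "one counter per bean class" behaviour the poster describes, without the database round-trips.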
[oneiric] ImportError: can't import mx.DateTime module

Bug Description

I'm running Oneiric with the latest updates as of today. This is a fresh install from the desktop image of today and I just installed python-psycopg2. Then, this is what I get when I try to import the module:

$ python
Python 2.7.2+ (default, Jul 10 2011, 09:48:58)
[GCC 4.6.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import psycopg2
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/
    from _psycopg import BINARY, NUMBER, STRING, DATETIME, ROWID
ImportError: can't import mx.DateTime module

For your information, python-

By the way, branching the downstream project from lp:ubuntu/psycopg2, building it with debuild and then installing the resulting deb file produces a working package, i.e. no ImportError exception is raised, but I am not sure whether that's because HAVE_MXDATETIME was not defined during compilation. I will try in an sbuild environment later.

So, I tried rebuilding psycopg2 in an sbuild environment, and installing the package has the same outcome as when I rebuilt with debuild, i.e. no ImportError exception is raised. Might it be possible that the Oneiric package-building environment has changed since the python-psycopg2 package was last built, so maybe just rebuilding it might solve the problem?

Hello, I am one of the current psycopg maintainers. This problem was addressed in release 2.4.2. In previous releases, if the mx include files were found (by setup.py) the mx support was compiled and mx.DateTime was unconditionally imported at psycopg2 import time. This creates the error reported here (and in http://

The issue was fixed in psycopg 2.4.2, where the import is no longer unconditional even if the mx support is built at compile time. I suggest upgrading the package to the most recent version.

2.4.2 is in Debian, but autosyncs have been disabled because we've got Ubuntu-specific differences.
They look superficial though:

psycopg2 (2.2.1-1ubuntu2) natty; urgency=low
  * Rebuild to add support for python 2.7.
 -- Matthias Klose <email address hidden>  Fri, 03 Dec 2010 00:07:07 +0000

psycopg2 (2.2.1-1ubuntu1) maverick; urgency=low
  * Merge from Debian (LP: #611040). Remaining changes:
    debian/control, debian/rules: Install a separate testsuite package.
 -- Mikhail Turov <email address hidden>  Wed, 28 Jul 2010 21:20:45 +0100

Certainly the first change is unimportant, and I'll take a closer look to see why we need to carry the second one. If we don't, then a sync to Debian's latest should do the trick. ...and we'd get a conversion to dh_python2 for free! :)

That's not the only thing going on here. I notice that if I just rebuild psycopg2, it works again. It looks like mxDateTime checks the API version in its import function (mxDateTime_

The sync will fix it. Thanks for looking at this in more detail Stefano. I've re-pinged the archive admins on the psycopg2 sync. I think we've waited long enough for a response on the Ubuntu delta. My opinion is that we don't need it, and we should just do the sync.

Prevents the package from being used, so should be High importance.

--- wiki.ubuntu.com/BugSquad
Ubuntu Bug Squad volunteer triager http://
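The fix the psycopg maintainer describes - no longer importing mx.DateTime unconditionally - follows a common optional-dependency pattern in Python. A minimal illustrative sketch (this is not the actual psycopg2 2.4.2 code; HAVE_MX and timestamp_factory are hypothetical names):

```python
# Sketch of the conditional-import pattern psycopg2 2.4.2 adopted, so that
# compiled-in mx support no longer breaks "import psycopg2" when the
# mx.DateTime Python module is absent. Illustrative only, not the real
# psycopg2 source; HAVE_MX and timestamp_factory are invented names.
try:
    import mx.DateTime  # optional dependency
    HAVE_MX = True
except ImportError:
    HAVE_MX = False


def timestamp_factory():
    """Return the best available timestamp constructor."""
    if HAVE_MX:
        return mx.DateTime.DateTime
    import datetime
    return datetime.datetime
```

With this shape, importing the module never fails; the mx code paths are simply skipped when the package is missing, which is exactly why the unconditional import in older psycopg2 builds was the bug.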
OSI Turns Down 4 Licenses; Approves Python Foundation's

Russ Nelson writes "The Open Source Initiative turned down four licenses this week. Not to name names, but one license had a restrictive patent grant that only applied to GPL'ed operating systems. Another was more of a rant than a license. Another was derived from the GPL in violation of the GPL's copyright. And the fourth had insufficient review on the license-discuss mailing list (archives). The one license that did pass was the Python Software Foundation License."

Hypocrisy (Score:1, Troll)

It's not like we're going to make RMS starve if we copy it. He doesn't make his living selling copies, does he?

Re:Hypocrisy (Score:4, Insightful)

The GPL is, in its essence, an ideological manifesto. Disallowing others from modifying your manifesto is not inconsistent with the GNU philosophy - the only thing they desire is that you allow others to modify your code, not your thoughts.

Re:Hypocrisy (Score:1)

From GNU's Free Documentation License: "The purpose of this License is to make a manual, textbook, or other written document 'free' in the sense of freedom: to assure everyone the effective freedom to copy and redistribute it, with or without modifying it, either commercially or noncommercially."

It certainly seems that they don't feel that their license needs to be "free" in the sense of the word they apply to other written documents. Does the FSF offer some explanation for why some written documents should be "free" and others not? Does the same line of argument apply to software?

Re:Hypocrisy (Score:1)

Re:Hypocrisy (Score:2)

I'm pretty sure you can add your own amendments to the GPL, but I think that's pretty much on a case-by-case basis, and that does not necessarily give the right to call it a GPL. But don't quote me on it.
Re:Hypocrisy (not) (Score:2)

So, with that goal in mind, how would you construct a license that is both modifiable under the terms of the GFDL (which you quote) and still accomplishes the stated goal?

The GPL can be used as a guide in creating your own license. This has certainly been done often enough. But to modify the license itself would hurt the aim of Free Software. I'm also not certain what the legal implications are if a license agreement affords me the right to modify it.

The GPL has teeth that come directly from copyright law. Under copyright law, you are not allowed to modify or distribute the code except in accordance with fair use doctrine. The GPL acknowledges this fact, and then offers you an "out" in the form of a license (this is in direct contrast to EULAs and other "shrink-wrap licenses", which require you to accept the license before USING the software) which you can take or leave as you see fit.

Now, if you were allowed to modify the license, your software would have to refer to some "license archetype", perhaps backing that usage up using trademark (e.g. you can modify the GPL, but only if you give it your own name). This is sticky, and keep in mind that the GPL was a daring bit of legal hackery that has still yet to be tested in court. To add yet another complication to the core oddity of defending right-to-modify with copyright law would risk the basic goal by making the GPL harder to defend than it already was.

All that aside, I think it's of questionable value to refer to the restrictions on the GPL as hypocritical. The GPL is a software license, not a work of art or engineering. I'm not quite certain why you feel that it would be hypocritical to say that software is an area of human endeavor where freedom to modify is important but contracts and licenses are not. You may disagree, and you are most welcome to.
But even if I accept that the two should be treated the same (and I do not, obviously), you make a challenge of hypocrisy here which I do not believe you have explained.

Re:Hypocrisy (not) (Score:1)

I didn't say there was anything hypocritical going on. I said I don't understand what is going on. I would have to understand before I could accuse them of hypocrisy.

Re:Hypocrisy (not) (Score:1)

This isn't necessary for, say, BSD-style licenses (BSD, ZLib, LibPNG) because they're simpler and shorter - it's reasonable to include the entire BSD license in each file of your source, so you don't say "this is BSD-licensed", you say "you may do this, this and this but not this". "BSD license" is just a convenient shorthand for describing things - but from a legal point of view, the license consists of a couple of paragraphs embedded in each source file.

However, it's obviously not reasonable to include the whole GPL in the same way. The GPL is long (20K?) because of copyleft - it's less permissive than the BSD license, so it can't just grant blanket permissions like the BSD license does (although an abbreviated GPL without the preamble/manifesto would be nice, since they're not really part of the license as such). If the GPL was free (in the FSF sense of the word) or open source, you'd get people redefining what it meant, and much confusion would ensue. ("Our software is licensed under the GPL." "No it isn't, ours is the real GPL!")

absolute freedom is not useful (Score:1)

Once you acknowledge that all who live are bound, discussing freedom becomes a matter of discussing how they should be bound, and to what they should be free. The FSF takes the position that people should have certain things as freedoms, and other things, such as the ability to deprive people of those freedoms, they should not have as freedoms. Neither the FSF nor RMS ever claimed to want anarchy or complete freedom (i.e. no rules at all). Where on earth did you ever think that they did?
Hell, the abolitionists in the US wanted all people to be free in the sense of not being slaves - they didn't want people to be free in the sense of free to own slaves. Were they hypocrites?

Re:Hypocrisy (Score:2)

Probably to avoid companies changing the license so it doesn't allow Free Software anymore. But this is a good idea - why don't we try to submit this modification for GPL v3 or even GPL v4?

Yep thats great! (Score:2, Insightful)

should read: OSI Releases information on licenses, slashdot poster excited, no one else cares. Open source needs fewer licences, not more..

Re:Yep thats great! (Score:1)

That's why this is good news. The licenses rejected were among the most bizarre, redundant or useless licenses yet submitted to OSI. Perhaps that's why they made the news.

IRC Clients (Score:2)

-russ

Re:IRC Clients (Score:1)

Great! (Score:1, Insightful)

THE license (Score:4, Funny)

From the end of the PSF license: (Score:4, Funny)

All that legalese will keep most mortals a hare's breadth away from comprehending. I wonder if "tortious" action is like a gui user dropping back into his/her "shell"? {SEG} sorry for the bad puns...I can hear most of you going "tcsh-tcsh"...

tortious (Score:3, Informative)

Legal language has lots of Latin in it, and the words have very precise meanings.

Re:tortious -WRONG (Score:1, Informative)

TORT -.

Re:tortious (Score:1)

But just the Latin words. English words like "free" or "all" and even "no" very often mean "limited", "some", and "a few" in legalese.... especially in constitutional law.

Ummmm...thanks for the update (Score:5, Informative)

Re:Ummmm...thanks for the update (Score:1)

It was eggs with cheese, sausage and banana bread. I've got to ask, this _is_ ((eggs+cheese)+sausage+banana bread), right? Not (eggs+cheese+sausage+banana bread), which is how I initially read it.
Re:Ummmm...thanks for the update (Score:1)

I think it was ((eggs+cheese)+((sausage+banana)+bread)), or in other words, your normal eggs and cheese as a side to a banana sausage sandwich.

Just as bad as calling a Kiwi an Aussie ! (Score:1)

Let the games begin !

Re:Just as bad as calling a Kiwi an Aussie ! (Score:2, Funny)

Seeing as this is about *nix licensing, shouldn't that be a tar.gz pit?

Re:Just as bad as calling a Kiwi an Aussie ! (Score:2)

only for the GPL crowd. For everyone else, it's tar.bz2 :)

hawk

WhooHoo! (Score:5, Insightful)

And what a bizarre license that was (not to name names). It was essentially the BSD license word for word, with the aforementioned patent grant. Yet you couldn't legally use the software on a BSD-licensed operating system.

"Another was more of a rant than a license."

A delicious rant to be sure. I quite enjoyed it, despite its wrongheadedness. It could not be approved of course, since it explicitly denied its own validity.

"The one license that did pass was the Python Software Foundation License."

Whoohoo! In this age of a million open source licenses, it's nice to see that a sensible license that fills a gap in open source gets approved while the frivolous crap gets flushed.

Re:WhooHoo! (Score:5, Interesting)

I'm not denying that it fills a gap, but a cursory reading of the license doesn't seem to indicate to me what gap it's filling. Why was it not possible/desirable to license Python under one of the existing Free Software licenses, and instead necessary to come up with another one?

Re:WhooHoo! (Score:3, Interesting)

Because the Python source code was, at various times, "owned" (copyright was in the name of) Stichting Mathematisch Centrum, the Corporation for National Research Initiatives, BeOpen, Digital Creations, and the Python Software Foundation. Guido couldn't release it under the GPL, because it wasn't entirely "his" software to license. (Google cache of the license [google.com])

Re:WhooHoo!
(Score:2)

More correctly, Guido didn't want to license Python under the GPL, but did want it to be able to be integrated with GPLed software, as well as software under virtually any sort of license. (25 pages of Python license history snipped... see the full scoop [python.org] for the current license.)

Re:WhooHoo! (Score:3, Informative)

Re:WhooHoo! (Score:2)

Until that happens, this whole story is as pointless as the whole "It" fiasco, which I note reared its ugly and decidedly non-pointed head on Wired again today.

Re:WhooHoo! (Score:2)

(Not only that, but the airline industry got skewered as well. Although not as much as John Travolta ;)

The Cretan License (Score:2)

Re:The Cretan License (Score:1)

Or here [chrisbrien.co.uk] for a plain text (slightly older) version. Yes, that is my website. Do I get karma for being the subject of a /. story?

Some software covered by the license can be found here [chrisbrien.co.uk], and here [chrisbrien.co.uk]. The latter uses DirectX, but works under Wine (that's vanilla Wine (yummy?) not WineX). The source for that is here [chrisbrien.co.uk].

Re:The Cretan License (Score:1)

"Do I get karma for being the subject of a /. story?"

You would have, except you blew it at the last minute (i.e. penultimate word) in your license by using "it's" instead of "its". A poet wouldn't have done that ;-)

Tim

"Poetic"? Not original (Score:2)

Re:"Poetic"? Not original (Score:1)

Restrictive Patent Grant License (Score:5, Informative)

Intel modified the BSD license in the following ways:

Exactly what OS is licensed under the GPL? (Score:2)

Licensing something for GPLed `OS's is nearly as bad as the FHS saying

Let's name some more names... (Score:5, Informative)

The Poetic License [chrisbrien.co.uk] states that: ." (har de har har)

The CMGPL [crynwr.com] The GPL without a bunch of sections? Which ones, you ask? Mostly the ones that don't count!

The Intel BSD+Patent License [crynwr.com] Like BSD, but grants a patent license.
Patent license is specifically not granted for use under non-GPL OS's, or with modified versions, although the copyright license is the same as BSD.

Re:Let's name some more names... (Score:2, Insightful)

Re:Let's name some more names... (Score:2)

So what are they? (Score:2)

Anyone care to enumerate the other three licenses?

GPL (Score:4, Funny)

Re:GPL (Score:3, Insightful)

The GPL is a tool which was created with one goal: to allow modification and distribution of software. The goal was not (even given the FSF's fondness for recursion) to allow modification of the GPL.

Re:GPL (Score:3, Insightful)

I personally don't have a problem with companies restricting redistribution of code (eg. forcing others to purchase it), so long as once you've purchased it, you get the source and can modify it (or distribute the patches to others who have purchased it). My *guess*, however, is that many companies are afraid they'll be forced to support software others modify if they give out the code.

Re:GPL (Score:1)

That already exists - since copyright only applies to distribution anyway, you can take any GPL'd code, hack it up whichever way, and use it internally to your organization. As long as you don't distribute any binaries built from it you are under no obligation to ever provide source code.

I don't see how you think this would help companies open up and write Open Source code, though - source that never leaves the company walls is effectively not open, and those changes will never get rolled back into the community. That doesn't seem like a very productive outcome.

Re:GPL (Score:1)

Re:GPL (Score:1)

Copyright applies to distribution, not to modification. Private modifications are derivative works, they just aren't distributed and so never fall under the control of copyright law.

Re:GPL (Score:1)

Re:GPL (Score:1)

Well shut my mouth - I guess I learned something today :) Although in practice 106(2) would be very difficult to enforce.
I could be mixing up derivative Britney Spears tracks in my basement (god forbid) for years and no one would be the wiser.

Re:GPL (Score:1)

"Although in practice 106(2) would be very difficult to enforce. I could be mixing up derivative Britney Spears tracks in my basement (god forbid) for years and no one would be the wiser."

Modification for private use almost certainly falls under fair use. But there are exceptions. See Lewis Galoob Toys, Inc. v. Nintendo (game genie is legal), or Micro Star v. FormGen Inc. (Nuke It is not). This is why I said that I think we mainly agree on the law (modifications for private home use are legal), just not on the specific semantics.

Re:GPL (Score:1)

KRL... (Score:2)

(It was apparently submitted for application but never approved). Somebody confirm please, me curious

Re:KRL... (Score:1)

Artistic license (Score:5, Funny)

Yeah, I know there's plenty of room for argument all around, but my sympathies are with small software vendors who need some way to get enough revenue from 100-5000 licenses to pay salaries. The Artistic License strikes me as compact and commonsensical, and a good model for many situations. And of course it has the coolest name.

Re:Artistic license (Score:2)

Re:Artistic license versus GPL (Score:1)

I think the heart of the matter is sociological rather than legalistic or economic. It's a question of how to create and sustain a user community. You might regard a piece of code as 'your baby,' and only be prepared to share it with people who promise not to make money off your baby. In that case, it's 'all about you.' But you might instead want to get as many people excited about your baby as possible -- in which case giving new users the ability to make some money off it is a positive inducement.

To use an extreme example: Suppose you designed a kewl language, wrote an efficient compiler, and got lots of praise from your initial users.
But you craft a license agreement that not only restricts sale of the compiler code, but further restricts users from selling any applications built with your language. You might get praise from RMS and others who feel that all software should in principle be free; but you won't win the hearts and minds of developers whose salaries depend on the ability to build software products. It's so hard to promulgate a new language anyway; and this extra restriction would cut out the very people most likely to have an open mind about new technology. Even end-user organizations would balk at building their custom apps with your language, if they must give up the option of ever reselling them, and if their software vendors aren't embracing it.

So I guess my point is that I see a need for several licensing regimes, appropriate for different kinds of software and software users. There are certain applications that are too specialized and expensive to get built through an open source community, and will thus require a commercial R&D team that can only be funded through proprietary licensing. There are widely-used and widely-needed applications that are best served by communities working under something like GPL or LGPL. And there are certain components that should be as widely-distributed as possible, where everybody benefits through standardization even outside the open source community; and if these components have very nonrestrictive licenses, it's easier to proselytize effectively and reduce the temptation or need for anybody to roll their own solution.

But of course, that's only my opinion.

When will they approve the MGPL? (Score:5, Funny)

Re:When will they approve the MGPL? (Score:2)

I'm gonna sue their asses off!
Just what we need, another type of "free" software (Score:2, Funny)

I wonder if I should submit my license (Score:2, Funny)

And then for some of my political software work, I used the Freeware for Feminists license - basically free, so long as the user was sympathetic with a feminist cause, and not granted for anti-feminist usage. Kind of viral, but I did make a splash screen and gave out source code with the compiled code. -

hmm (Score:1, Insightful)

IGNORED in the past 2 years? if you have big $, OSI will grant approval. if not, you will be ignored.

Licences and contracts are copyrightable? (Score:1)

Re:Licences and contracts are copyrightable? (Score:1)

A Good License (Score:3, Interesting)

(The following is my opinion, so please read it as such. When I refer to a "good" open source license, I am making a qualitative assessment, and not trying to set up criteria for any approval process but my own.)

The purpose of an open source license is to grant the user a broad set of permissions and rights over and above those granted by copyright law. Its purpose is not to bind the user to the will of the licensor. A good open source license must be based on copyright law, not contract law.

The first thing a good license should do is grant unconditional permission to use the software. This should be so basic as to not be worth mentioning, but you would be surprised at some of the licenses submitted. Additionally, the use of the software should not be a trigger for anything else. We don't want any EULAs here, thank you.

The second thing a good license should do is clearly inform the user of their permissions. These permissions must not be predicated upon acceptance of any agreement. A permission may have conditions attached to it. If there is anything you wish the user NOT to do, make it a condition.

Next, the license should have a warranty disclaimer, to assure the user that they will not be sued if they contribute stuff to the project.
You may (and should, if you're a commercial project) include a real warranty as a separate legal document.

Notice that I haven't included anything about what you require the user to do. No blanket obligations. That's on purpose. Open Source and Free Software are NOT about making people do things. It is okay to make an obligation be a condition to a permission. It is not okay to make an obligation be a condition to the entire license. Remember, this is about what the user can and cannot do.

Software licenses as contracts were an invention of the proprietary software industry. There was a time not that long ago when copyright law was very vague as to the status of software. So the industry decided to use contract law instead, and created licenses that had such bizarre phrases in them as "by reading this sentence you agree to the following obligations...". That's bullshit, and Open Source and Free Software should have nothing to do with such rubbish.

Re:A Good License (Score:2)

First off, licenses aren't written for end users. Yes, they're purportedly intended to inform a user of their privileges, but the true audience of a license is a judge and a pack of lawyers. The important thing for a license isn't that it's clear to joe blow, but that it's clear in court: a contract that's clear to joe blow is meaningless if a court can't make heads or tails of it.

Confusing terminology in contracts is the result of two problems. First, colloquial language is very subjective and very slippery, and so legal documents have to be written in a specialized dialect of English that has arisen over centuries of effort. It's the same problem with programming languages: we can't have a truly natural-language programming language because it's too imprecise. But just as engineers have an easy time reading and understanding source code, so do lawyers have an easy time reading legalese.
The second problem is that most lawyers have a very poor mastery of both English and legalese. The next thing is that all licenses are based in contract law. There is no room in copyright law for granting permissions beyond those explicitly enumerated (and irrevocable) in copyright law. If you want to grant extra permissions, or revoke certain permissions, then you have to use contract law. Next, you have to consider the purpose behind an OSS license. One person may have a different purpose from you. To me, the term OSS means "you can read my source code, and you can contribute changes back to me". That's it. It doesn't say anything about whether or not someone can use the source code. It doesn't say anything about whether or not someone can produce derivative works. Sure, GPL talks about those things, but that's because GPL goes farther than the simple concept of OSS. The same goes for other OSS licenses: they will almost always go beyond the simple concept of OSS, building on top of the concept in order to further the purposes that the author has in offering the software as OSS in the first place. As an example, someone might be making their software OSS for the purpose of crushing Windows. It shouldn't be too surprising to see that their license contains a clause prohibiting the porting of the software to any Microsoft operating system, either natively or under an emulator. Does OSS say anything about that? Nope. Does GPL say anything about that? Nope. But that author wants to crush Windows, so he's not very well going to allow his software to be used under Windows, now is he? He's got a purpose, and his license reflects that purpose. Then there's the last point. Software licenses exist only because of software copyrights. The only thing that software copyrights do is to allow the author of software to restrict his software's distribution. Once that's in place, the author can then impose a license (read: contract) on the user for the software.
Without software copyrights, the licenses would be meaningless, because a user could just say "naw, I'm not gonna agree to the license, so it doesn't bind me, but I'm gonna use the software anyway because you can't prevent me from getting a copy of it."

Re:A Good License (Score:1)

Firstly, I think that in any legal agreement that's put forward in good faith one of the most fundamental objectives must be that the parties to it understand it. A contract between Microsoft and IBM can be thoroughly incomprehensible without careful interpretation by a team of lawyers because that's the level of attention that those companies can give it. A contract between IBM and its most junior employees should be comprehensible by those employees with the level of resources they can reasonably be expected to have. Software licences, whether proprietary or free, should be designed to be comprehensible to the people who are being asked to agree to them. There's nothing wrong with a highly legalistic licence expected to be used amongst companies or individuals that will have lawyers examine them closely and that can be the objective in a free software licence but if you're intending it to be used more widely then you should make sure it makes sense to others. Would you want to be in the position of suing someone who had agreed to your licence in good faith believing they had more rights under it than they did simply because you'd made it incomprehensible? Or do you want to discourage anyone from actually contributing to your project because they don't understand what rights they have? That's the whole point of it. If I own copyright on a work then I can say "you may copy this"; no contract there, it's purely a permission (a licence) I can choose to give. Equally I can say "you can copy this once", "you can copy this but only once per year", "school teachers may copy this but nobody else" or "anyone whose foot fits this slipper may copy this". None of those are contractual.
They are all permissions (licences) that can be granted by a copyright holder.

Re:A Good License (Score:2)

You have to look at it the same way as a program and its users' guide. We programmers have little problem reading source code (even complex source code) and figuring out what's going on. But joe blow can't do that. Both of us benefit from the users' guide: joe blow so he can make sense of what he's seeing, and us so that we can figure out whether or not a particular behavior is a bug or if it's intentional in some strange way. But as far as the computer is concerned, the users' guide is meaningless. The code is the be all and end all of the system's function. That's certainly true. Unfortunately, there isn't a whole lot of precedent about the validity of choice of jurisdiction clauses. English common law countries tend to obey them (and US courts seem to always do so). But I can't speak about European common law countries, Confucian law countries, or others. But this whole issue about things happening internationally is really quite new. Before the computer age, international commercial agreements were nearly always exchange-of-goods, so there really wasn't any kind of licensing issue. There were certainly cases of a foreign manufacturer buying a production model of someone's invention, then copying it and producing it themselves in their own countries. And the few court proceedings in these matters were nearly always ineffective. But with the rate at which things are going, we can expect to start seeing a whole lot more cases discussing international licensing. You're thinking about contract law. Copyright law says "this thing is owned by the copyright holder, and he can prevent anyone he wants from getting access to it" (basically).
The only variant that exists in copyright law is the concept of "public domain": a person who would otherwise hold copyright over something can put that thing into the public domain, in which case no one has differential rights to that thing. Contract law says "different parties have different rights, resources, and privileges, and they can negotiate exchanges of those rights, resources, and privileges under these rules." Copyright grants a privilege to one party that is excluded from other parties. Contract law influences how the copyright holder can grant his privileges to other people.

Re:A Good License (Score:2)

Okay, it's some years since I got my law degree and it's in UK law whereas I would guess you're talking from an American perspective but imho you're just plain wrong. Granting someone permission to copy and distribute your copyrighted work does not in any way require a contract just as giving someone a gift doesn't require contract law and telling someone they can enter your house, use your computer, or borrow your car does not require contract law. You can use a contract in any of those situations but if all you're doing is giving permissions then contract simply doesn't come into it.

Re:A Good License (Score:1)

And, yes, I'm talking about US law.

Re:A Good License (Score:2)

Cool. Too bad it doesn't apply to most Open Source licenses. The operative phrase in your quote is "negotiate exchanges". Since that's how I always understood contract law, we must be in agreement on something! But I don't understand where the negotiation or the exchange comes in when I download the Linux kernel and start distributing it. I have negotiated nothing! I have given nothing back to Linus and Friends! If you look at the typical contract, you will see certain attributes. First, both parties are aware of each other. Second, negotiation of terms is possible even if no negotiation actually occurs. Third, both parties receive something of benefit.
Finally, there is an explicit agreement. None of these attributes are present when I download and start legally distributing the Linux kernel. Linus and Friends are not aware of me, or of the fact that I possess a legal copy of the kernel. And it is not possible to negotiate terms because a line of communication has not been established (although that communication could be initiated by me). I receive benefit from the kernel, but Linus and Friends receive nothing from me, not even the satisfaction of knowing that I am even using it. Finally, there is no explicit agreement. No signature, no handshake, no verbal "I agree", no clickthrough, no filling out of registration cards, nothing. A transaction of sorts has occurred, but there is no contract.

Re:A Good License (Score:2)

It depends on the license. For the MS EULA, you are certainly correct. But software written by developers that expect/invite/encourage participation in the development process should have a license written for developers. Of course lawyers and judges will be one audience for the license, but they will not be the primary audience. The next thing is that all licenses are based in contract law. There is no room in copyright law for granting permissions beyond those explicitly enumerated (and irrevocable) in copyright law. Here's a sample license: "You may freely copy, distribute, modify, translate, or otherwise transmit and transform this software without restriction." Just how does this qualify as a contract? Where is the agreement? Where is the consideration? Are you saying that the above license is not valid? What exchange? In the case of Linux (as an example) I have given nothing to Linus Torvalds. No money. No pledges of royalties. No promises that I will ever contribute anything back to the project. I'm not granting him any permissions or benefits. Heck, Linus and I have never even met, so how can we possibly exchange anything!
Once that's in place, the author can then impose a license (read: contract) on the user for the software. Contracts are never "imposed." They must be agreed to voluntarily by both parties. Re:A Good License (Score:2) Why would you ever write a license that you don't want to be enforceable? If you want it to be enforceable, then it has to be (principally) comprehensible in court. As to determining whether or not a contract is clear, the court will look at the language itself, not defer to the statements of the defendant or plaintiff. The only exception is when both defendant and plaintiff agree on the interpretation of a particular clause, in which case the court will take that interpretation rather than the interpretation that the court might find on its own. But in such a case, I think you'd agree that the license is sufficiently clear. Here's a sample license: "You may freely copy, distribute, modify, translate, or otherwise transmit and transform this software without restriction." Just how does this qualify as a contract? Where is the agreement? Where is the consideration? Are you saying that the above license is not valid? Agreements between parties fall exclusively under the jurisdiction of contract law. The parties may, in fact, act under an agreement that is an invalid contract, but it is still under the jurisdiction of contract law. More directly, let's look at your sample license. Presumably, the person offering the license possesses a copyright in the software. So the holder is granting certain distribution and modification privileges. That's the consideration that he's giving. The license doesn't explicitly enumerate consideration that the recipient is granting back to the holder, but (and this is an important principle in contract law) since the holder is the party that offered the contract, it is presumed that the holder is gaining an automatic intangible concession in return (such as the pride of knowing that other people want to use his software). 
The important thing is that the person who offers terms is presumed to agree to the terms; if he didn't agree to them, he wouldn't've offered them in the first place. Then, if the recipient of the software actually does exercise one of the privileges granted to him in the license, then he has also agreed to the license (contract). This is another important concept in contract law: implied consent. If one party exercises a privilege granted only under a contract, then that party has consented to that contract. This concept actually doesn't exist in the text of the legislation that forms contract law. This exists in a more important place: legal precedent. So the example license that you present has offer, has exchange of consideration, and has consent. All three of the keystones that are required for a contract to be valid. What exchange? In the case of Linux (as an example) I have given nothing to Linus Torvalds. No money. No pledges of royalties. No promises that I will ever contribute anything back to the project. I'm glad that you brought up Linux. Linux uses a fragmented intellectual property model, in which the entire body is covered by a single license, but the ownership of the individual pieces is retained by the original authors. So when you use Linux, you are entering into an agreement with each of the separable authors of the kernel. As one of the authors of the Linux kernel (interval timers, original ...), I can say with certainty that the same consideration applies to a large number of the people involved in the Linux kernel, but I will neither name names, nor will I attempt to enumerate all the considerations that are gained by all the individual contributors to Linux (primarily because I don't know them all). I'm not granting him any permissions or benefits. Heck, Linus and I have never even met, so how can we possibly exchange anything!
If physical meeting were a requirement for a binding contract, then the only commerce that would exist would be face-to-face barter. Mail order, internet sales, telephone solicitation, early book sales (even in person), credit cards, checks, ATM cards, and even paper currency all exist only because of nonlocal agreements to contracts. Contracts are never "imposed." They must be agreed to voluntarily by both parties. That's only sort of true. As the holder of a copyright, I can offer a contract without allowing negotiation. True, the contract doesn't bind unless the other party agrees, but in that case the other party doesn't get my software, either. So I have effectively imposed my contract on all people who want to use my software. This is the central argument behind the (many) suits over the years asserting that Microsoft has illegally leveraged its monopoly power to impose contracts.

Re:A Good License (Score:1)

As an aside... What happens when Microsoft becomes weakened? Will you subsequently abandon Linux development?

Re:A Good License (Score:1)

If microsoft gets so weak that there's no chance of it ever recovering, then I'll consider that goal to be achieved. If there's something new that I need out of Linux, I'll probably keep working on it (I like my mini beowulf cluster, after all). Otherwise I might dust off that old OS that I was writing and abandoned when I started up with Linux.

Re:A Good License (Score:1)

And as an aside, I did not necessarily consider the GPL to be a "good" license by my criteria. It is very borderline. The entirety of the license is based on copyright law, and clause 5 even says the same. Yet clause 5 is attempting to place it under contract law. I can only assume that this is for the purpose of satisfying the overly pedantic lawyers working with the FSF. There's no way you can read the list of four freedoms of the FSF and conclude that RMS thinks software should be distributed under contract.
Re:A Good License (Score:2)

Funny, I didn't see anything in the FSF's four freedoms about binding people to the wills of others. That's antithetical to freedom. (it's also antithetical to contract law, which is why unilateral contracts are Evil) Copyright law has already given the user the right to use the program. No ifs, ands or buts. Since so many commercial licenses say "by using this software you agree to...", I felt it necessary, if redundant, to explicitly assure the user that they can use the software no matter what else the license says. After all, I even know of one person who holds the belief that you may not use GPLd software unless you agree with the philosophical preamble in its entirety. Hmmm, you must be a lawyer, as no one else has so much trouble parsing standard English. Let me restate. You give the user a set of permissions. You then let the user know what these permissions are. You do not keep them secret for the user to guess. Let me give an example: "you may distribute this post to anyone." There. I granted a permission to all readers of this post, and also informed them of it. Yes, I did say that. But double check my post anyway. I had more than one criterion. The right to *use* the software unconditionally is the first of my criteria. Other rights, such as distribution, modification, etc., may be conditional.

My Standard Software Disclaimer (Score:4, Funny)

Any resemblance to real persons, living or dead, is purely coincidental. Contains a substantial amount of non-tobacco ingredients. Colors may, in time, fade. We have sent the forms which seem to be right for you. Slippery when wet. Keep your receipt. This supersedes all previous notices.

Re:My Standard Software Disclaimer (Score:2)

Re:My Standard Software Disclaimer (Score:1)

Re:My Standard Software Disclaimer (Score:1)

May contain traces of nuts. Falling Rocks Do Not Stop. No Seatbelt Fine Exceeds $100. and of course Offer void where prohibited by law.
Fruitbat: becoming your parents (Score:1)

Why do we have an organisation telling us what licenses we can and cannot use? I used a disapproved-of, but still open source (IMHO) license...what then? Will the OSI call up the FBI to bust down my door? Fuck the establishment, we don't need any more conformity factories.

My favorite "License" (Score:2)

This software comes with no warranty. If it breaks, you get to keep both pieces.

That Monty Python License in Full (Score:2)

1. No Poofters 2. This program may not be used in a vat of custard if there is anyone looking 3. Three shall be the number of the count and the number of the count shall be three, thou shalt not count to two unless thou also counteth to three, nor shall thou count to four, five is right out. 4. There is no 4 5. Is right out 6. SPAM SPAM SPAM SPAM! Wonder SPAM! Wonderful SPAM 7. The program to which this license is attached may be used for any purpose whatsoever without payment provided that (1) this license is included in its entirety intact and (2) the provisions of sections 2, 4, 5 and 8 are complied with on alternate Wednesdays and sections 8, 9 and 4 are complied with at all other times 8. All copies of this program must be distributed with the distributor's choice of (a) the program source or (b) a bottle of Wostershire Sauce made from genuine Wostershires. 9. EEEK! 10. Naaawwwwww...

Re:That Monty Python License in Full (Score:1)

What is the license of this License? Can I use it for any software I'd like to release in the wild?

Re:That Monty Python License in Full (Score:2)

Re:Wait a minute! (Score:1)

Re:Wait a minute! (Score:2)

Free Documentation License (FDL)

Re:Wait a minute! (Score:2)

Yeah, right. I log into a shell account and am held hostage to the wild fantasies of anyone who wrote some innocuous seeming library or kernel module. It's Python, for chrissakes. How can you not end up using it? This is getting crazy. A previous poster only wants feminists or fetishists to use his work. Sheesh.
When do we go back to being normal people? Private citizens are left alone to tinker and share, businesses pay some royalties. If things get muddled up, we have a few beers and then forget what we were fighting about. Ah, the old country.

Re:Wait a minute! (Score:1)

Re:Wait a minute! (Score:5, Informative)

You can however provide added or amended licensing conditions without modifying the actual text of the GPL; for example "this program may be distributed under the terms of the GNU GPL with the added requirement that [blah blah]."

Re:Wait a minute! (Score:2)

The reasoning for this was that if modification were allowed it would dilute the usefulness of the license, as "GPL-derived" licenses might not even be Free Software or Open Source. I disagree. The MPL is more or less GPL-derived. It's just that they got their lawyers together and made it look "different enough" so that nobody would accuse them of hacking the GPL, and that has not diluted the "usefulness" of the GPL. Also, there are several other licenses (e.g., Sleepycat) that are GPL-like, but not expressly derived from the GPL. The copyright restriction on the GPL can't prevent the proliferation of licenses. It just makes it harder for people who might want to use the GPL as a starting point. Their desire to prevent the "GPL brand" from being diluted is understandable. A fairer solution would be to allow unlimited modification of the GPL, as long as you didn't call your license the GPL.

Re:Wait a minute! (Score:2)

The Free Software Foundation isn't worried about the GPL brand. They simply aren't interested in making a GPL derivative an easy thing to do. The Free Software Foundation wants you to use the GPL (duh!) so they have copyrighted the GPL in order to prevent people from easily making clones. If you want your own GPL-like license then hire your own lawyers and hope that they are as well acquainted with software copyrights as the folks who have worked on the GPL (good luck).
This might seem like a contradiction, but the Free Software Foundation isn't the "Information Must Be Free As In Free Beer Foundation." They are specifically trying to make sure that software comes with source code (and documentation :). They are not trying to make it so that all information is free. So while you are certainly right that the GPL copyright can't make license proliferation impossible, it certainly does make it more difficult (and more expensive), and that's a net win for the FSF.

Re:Wait a minute! (Score:1)

Since the GPL hasn't been tested in court, we don't really know how well the authors understand software copyrights.

Re:Wait a minute! (Score:1)

If it were a bad license some dumbass lawyer would have eaten it for lunch already.

Re:Wait a minute! (Score:1)

The GPL is a copyrighted document that grants you explicit permission to redistribute it in unmodified form. Regardless of what the FSF wants to tell you, licenses which are GPL derivatives are not protected by copyright. The GPL is a functional document; changing any word or letter in it changes the function of the document. Therefore the protections against derivatives of the document fall under patent law, not copyright law, and the GPL has not been patented.

Re:Wait a minute! (Score:2)

Then what's the point of OSI?? (Score:1)

But isn't this what OSI is for? They approve the license as open source or not. If someone modifies the GPL but it still satisfies the OSI requirements then it shouldn't be an issue whether it was derived or not. The spirit of the license is the same as the GPL. In fact the derivative may be an attempt to strengthen that spirit in a court of law. If the derived work is suitable to the OSI then the FSF should allow it. Now I can see issues if the derived license wasn't OSI compliant but that doesn't seem to be the case here.
perltutorial by fokat

<a name="about"><h2>What this tutorial is all about...</h2></a> <p>Some of us, monks who love Perl, also have to deal with the complexities of IP addresses, subnets and such. A while ago, I wrote <a href="">NetAddr::IP</a> to help me work out tedious tasks such as finding out which addresses fell within a certain subnet or allocating IP space to network devices. <p>This tutorial discusses many common tasks along with solutions using <a href="">NetAddr::IP</a>. Since Perl lacks a native type to represent either an IP address or an IP subnet, I feel this module has been quite helpful for fellow monks who, like me, need to work in this area. <READMORE> <hr> <a name="new"><h2>Specifying an IP Address or a subnet</h2></a> <p>A <a href="">NetAddr::IP</a> object represents a subnet. This involves storing an IP address <b>within</b> the subnet along with the subnet's netmask. Of course, using a host netmask (<tt>/32</tt> or in decimal notation, <tt>255.255.255.255</tt>) allows for the specification of a single IP address. <p>You can create a <a href="">NetAddr::IP</a> object with an incantation like the following:
<CODE>
use NetAddr::IP;

my $ip = new NetAddr::IP '127.0.0.1';
</CODE>
<p>which will create an object representing the 'address' <tt>127.0.0.1</tt> or the 'subnet' <tt>127.0.0.1/32</tt>. <p>Creating a subnet is equally easy. Just specify the address and netmask in almost any common notation, as in the following examples:
<CODE>
my $net  = new NetAddr::IP '10.0.0.0/24';
my $same = new NetAddr::IP '10.0.0.0', '255.255.255.0';
my $also = new NetAddr::IP '10.0.0.0', 24;
</CODE>
<p>The following is a list of the acceptable arguments to <tt>->new()</tt> and their meanings: <ul> <li><tt>->new('broadcast')</tt> <p>Equivalent to the address <tt>255.255.255.255/32</tt> which is often used to denote a broadcast address. <li><tt>->new('default')</tt> <p>Synonym to the address <tt>0.0.0.0/0</tt> which is universally used to represent a default route. This subnet is guaranteed to <tt>->contains()</tt> any other subnet. More on that later.
<p>For the benefit of many Cisco users out there, <tt>any</tt> is considered a synonym of <tt>default</tt>. <li><tt>->new('loopback')</tt> <p>The same as the address <tt>127.0.0.1/8</tt> which is the standard <tt>loopback</tt> address. <li><tt>->new('10.10.10.10')</tt> or <tt>->new('foo.bar.com')</tt> <p>This represents a single host. When no netmask is supplied, a netmask of <tt>/32</tt> is assumed. When supplying a name, the host name will be looked up using <a href="">gethostbyname()</a>, which will in turn use whatever name resolution is configured in your system to obtain the IP address associated with the supplied name. <li><tt>->new('10.10.1')</tt> <p>An ancient notation that allows the <em>middle zeroes</em> to be skipped. The example is equivalent to <tt>->new('10.10.0.1')</tt>. <li><tt>->new('10.10.1.')</tt> <p>Note the trailing dot. This format allows the omission of the netmask for classful subnets. The example is equivalent to <tt>->new('10.10.1.0/24')</tt>. <li><tt>->new('10.10.10.0 - 10.10.10.255')</tt> <p>This is also known as <em>range notation</em>. Both ends of an address range are specified. Note that this notation is only supported if the specified subnet can be represented in valid CIDR notation. <li><tt>->new('10.10.10.0-255')</tt> <p>This notation is a shorthand for the <em>range notation</em> discussed above. It provides for the specification of an address range where both of its ends share the first octets. This notation is only supported when the specified range of hosts defines a proper CIDR subnet. <li><tt>->new(1024)</tt> <p>Whenever the address is specified as a numeric value greater than 255, it is assumed to contain an IP address encoded as an unsigned int. <li><tt>->new()</tt> with two arguments <p>Whenever two arguments are specified to <tt>->new()</tt>, the first is always going to be interpreted as the IP address and the second will always be the netmask, in any of the formats discussed so far.
<p>Netmasks can be specified in dotted-quad notation, as the number of one-bits or as the equivalent unsigned int. Also, special keywords such as <tt>broadcast</tt>, <tt>default</tt> or <tt>host</tt> can be used as netmasks. </ul> <p>The semantics and notations depicted above are supposed to comply strictly with the DWIM approach which is so popular with Perl. The general idea is that you should be able to stick almost anything resembling an IP address or a subnet specification into the <tt>->new()</tt> method to get an equivalent object. However, if you can find another notation that is not included in the above list, please by all means let me know. <hr> <a name="overloading"><h2>Simple operations with subnets</h2></a> <P>There are a number of operations that have been simplified across the different versions of the module. The current version, as of this writing, provides support for the following operations: <ul> <li>Scalarization <p>A <a href="">NetAddr::IP</a> object will become its CIDR representation whenever a scalar representation for it is required. For instance, you can very well do something like <tt>print "My object contains $ip\n";</tt>. <li>Numerical comparison <p>Two objects can be compared using any of the numerical comparison operators. Only the address part of the subnet is compared. The netmask is ignored in the comparison. <li>Increments and decrements <p>Adding or subtracting a scalar from an object will change the address in the subnet, but always keeping it within the subnet.
This is very useful to write loops, like the following:
<CODE>
use NetAddr::IP;

my $ip = new NetAddr::IP('10.0.0.0/30');

while ($ip < $ip->broadcast) {
    print "ip = $ip\n";
    $ip++;
}
</CODE>
<p>which will produce the following output:
<CODE>
ip = 10.0.0.0/30
ip = 10.0.0.1/30
ip = 10.0.0.2/30
</CODE>
<li>List expansion <p>When required, a <a href="">NetAddr::IP</a> will expand automatically to a list containing all the addresses within a subnet, conveniently leaving the subnet and the broadcast addresses out. The following code shows this:
<CODE>
use NetAddr::IP;

my $ip = new NetAddr::IP('10.0.0.0/30');

print join(' ', @$ip), "\n";
</CODE>
<p>And the output would be
<CODE>
10.0.0.1/32 10.0.0.2/32
</CODE>
</ul> <hr> <a name="common"><h2>Common (and not so common) tasks</h2></a> <p>Below I will try to provide an example for each major feature of <a href="">NetAddr::IP</a>, along with a description of what is being done, where appropriate. <a name="compact"><h3>Optimising the address space</h3></a> <p>This is one of the reasons for writing <a href="">NetAddr::IP</a> in the first place. Let's say you have a few chunks of IP space and you want to find the <em>optimum</em> CIDR representation for them. By optimum, I mean the smallest number of CIDR subnets that exactly represent the given IP address space. The code below is an example of this:
<CODE>
use NetAddr::IP;

push @addresses, NetAddr::IP->new($_) for <DATA>;

print join(", ", NetAddr::IP::compact(@addresses)), "\n";

__DATA__
10.0.0.0/18
10.0.64.0/18
10.0.192.0/18
10.0.160.0/19
</CODE>
<p>Which will, of course, output <tt>10.0.0.0/17, 10.0.160.0/19, 10.0.192.0/18</tt>. Let's see how this is done... <p>First, the line starting with <tt>push ...</tt> creates a list of objects representing all the subnets read in via the <tt><DATA></tt> filehandle. There should be no surprises here. <p>Then, we call <tt>NetAddr::IP::compact</tt> with the list of subnets built earlier.
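<p>As an aside, the merge rule that <tt>compact()</tt> applies can be sketched in plain Perl. The code below is only an illustration of the idea (it is not the module's implementation): two blocks of equal size merge into a block of twice the size whenever they are adjacent and the combined block is aligned on its own boundary.

```perl
# Pure-Perl illustration of the merge rule behind compact();
# NOT the module's actual code, just the idea.
sub ip2int { my @o = split /\./, shift; ($o[0] << 24) | ($o[1] << 16) | ($o[2] << 8) | $o[3] }
sub int2ip { my $n = shift; join '.', ($n >> 24) & 255, ($n >> 16) & 255, ($n >> 8) & 255, $n & 255 }

# The same four subnets as in the __DATA__ section above
my @nets = map { my ($a, $len) = split m{/}; [ ip2int($a), $len ] }
           qw(10.0.0.0/18 10.0.64.0/18 10.0.192.0/18 10.0.160.0/19);

# Repeatedly merge sibling blocks: two /n blocks become one /(n-1)
# when they are adjacent and the first is aligned on the /(n-1) boundary.
my $merged = 1;
while ($merged) {
    $merged = 0;
    @nets = sort { $a->[0] <=> $b->[0] } @nets;
    for my $i (0 .. $#nets - 1) {
        my ($a, $b) = ($nets[$i], $nets[$i + 1]);
        my $size = 2 ** (32 - $a->[1]);
        if ($a->[1] == $b->[1] && $b->[0] == $a->[0] + $size
            && $a->[0] % (2 * $size) == 0) {
            splice @nets, $i, 2, [ $a->[0], $a->[1] - 1 ];
            $merged = 1;
            last;
        }
    }
}

print join(', ', map { int2ip($_->[0]) . '/' . $_->[1] } @nets), "\n";
# prints: 10.0.0.0/17, 10.0.160.0/19, 10.0.192.0/18
```

<p>Running the sketch reproduces the same answer the module gives: the two adjacent /18s collapse into <tt>10.0.0.0/17</tt>, while <tt>10.0.160.0/19</tt> and <tt>10.0.192.0/18</tt> cannot merge further.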
This function accepts a list of subnets as its input (actually, an array of objects). It processes them internally and outputs another array of objects, as summarized as possible. <p>Using <tt>compact()</tt> as in the example is fine when you're dealing with a few subnets or are writing a throw-away one-liner. If you think your script will be handling more than a few tens of subnets, you might find <tt>compactref()</tt> useful. It works exactly as shown before, but takes (and returns) references to arrays. I've seen 10x speed improvements when working with huge lists of subnets. <p>Something that gets asked quite frequently is <em>"why not <tt>@EXPORT</tt> or at least, <tt>@EXPORT_OK</tt> methods such as <tt>compact()</tt>?"</em>. The answer is that I believe <tt>compact()</tt> to be a very generic name, for an operation that is not always used. I think fully qualifying it adds to the mnemonics of what's being done while not polluting the namespace unnecessarily. <hr> <a name="split"><h3>Assigning address space</h3></a> <p>This problem can be thought of as the complement to the prior one. Let's say a couple of network segments need to be connected to your network. You can carve slices out of your address space easily, such as in the following code:
<CODE>
use NetAddr::IP;

print "My address space contains the following /24s:\n",
      join("\n", NetAddr::IP->new('10.0.0.0/22')->split(24)), "\n";
</CODE>
<p>Which will divide your precious address space (the one specified in the <tt>NetAddr::IP->new()</tt>) in subnets with a netmask of 24 bits. This magic is accomplished by the <tt>->split()</tt> method, which takes the number of bits in the mask as its only parameter. It returns a list of subnets contained in the original object. <p>Again, in situations where the split might return a large number of subnets, you might prefer the use of <tt>->splitref()</tt>, which returns a reference to an array instead.
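<p>The arithmetic behind <tt>->split()</tt> is straightforward; here is a plain-Perl sketch of it (an illustration only, not the module's code):

```perl
# What ->split(24) does to 10.0.0.0/22, sketched in plain Perl
# (illustrative only; the module handles the general case).
sub int2ip { my $n = shift; join '.', ($n >> 24) & 255, ($n >> 16) & 255, ($n >> 8) & 255, $n & 255 }

my ($base, $oldlen, $newlen) = (10 << 24, 22, 24);   # 10.0.0.0/22 into /24s
my $step = 2 ** (32 - $newlen);                      # 256 addresses per /24
my @subnets = map { int2ip($base + $_ * $step) . "/$newlen" }
              0 .. 2 ** ($newlen - $oldlen) - 1;     # 2^(24-22) = 4 subnets

print "$_\n" for @subnets;
# prints 10.0.0.0/24, 10.0.1.0/24, 10.0.2.0/24 and 10.0.3.0/24
```

<p>A /22 holds 2^(24-22) = 4 subnets of /24 size, each 256 addresses apart, which is exactly what the sketch enumerates.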
<hr>
<a name="wildcard"><h3>Cisco's wildcard notation (and other dialects)</h3></a>
<p>Those of you who have had to write an ACL in a Cisco router know about the joys of this peculiar format, in which the netmask works the opposite of what custom says.
<p>An easy way to convert between traditional notation and Cisco's wildcard notation is to use the eloquently named <tt>->wildcard()</tt> method, as this example shows:
<CODE>
use NetAddr::IP;

print join(' ', NetAddr::IP->new('10.0.0.0/25')->wildcard());
</CODE>
<p>As you might have guessed, <tt>->wildcard()</tt> returns an array whose first element is the address and whose second element is the netmask, in wildcard notation. If scalar context is forced using <tt>scalar</tt>, only the netmask will be returned, as this is most likely what you want.
<p>In case you wonder, the example outputs <tt>10.0.0.0 0.0.0.127</tt>.
<p>Just for the record, below are a number of outputs from different methods for the above example:
<ul>
<li>Range (The <tt>->range()</tt> method)
<p>Outputs <tt>10.0.0.0 - 10.0.0.127</tt>. Note that this range goes from the <em>network address</em> to the <em>broadcast address</em>.
<li>CIDR notation (The <tt>->cidr()</tt> method)
<p>As expected, it outputs <tt>10.0.0.0/25</tt>.
<li>Prefix notation (The <tt>->prefix()</tt> method)
<p>Similar to <tt>->range()</tt>, this method produces <tt>10.0.0.1-127</tt>. However, note that the first address is <b>not</b> the network address but the first host address.
<li><em>n</em>-Prefix notation (The <tt>->nprefix()</tt> method)
<p>Produces <tt>10.0.0.1-126</tt>. Note how the broadcast address is not within the range.
<li>Numeric (The <tt>->numeric()</tt> method)
<p>In scalar context, produces an unsigned int that represents the address in the subnet. In array context, both the address and netmask are returned. For the example, the array output is <tt>(167772160, 4294967168)</tt>. This is very useful when serializing the object for storage.
You can pass those two numbers back to <tt>->new()</tt> and get your object back.
<li>Just the IP address (The <tt>->addr()</tt> method)
<li>Just the netmask as a dotted quad (The <tt>->mask()</tt> method)
<li>The length in bits of the netmask (The <tt>->masklen()</tt> method)
</ul>
<hr>
<a name="contains"><h3>Matching against your address space</h3></a>
<p>Let's say you have a log full of IP addresses and you want to know which ones belong to your IP space. A simple way to achieve this is shown below:
<CODE>
use NetAddr::IP;

my $space = new NetAddr::IP('10.0.0.0/16');

while (<>) {
    chomp;
    my $ip = new NetAddr::IP($_) or next;
    print "$ip\n" if $space->contains($ip);
}
</CODE>
<p>This code will output only the addresses belonging to your address space, represented by <tt>$space</tt>. The only interesting thing here is the use of the <tt>->contains()</tt> method. As used in our example, it returns a true value if <tt>$ip</tt> is completely contained within the <tt>$space</tt> subnet.
<p>Alternatively, the condition could have been written as <tt>$ip->within($space)</tt>. Remember that TIMTOWTDI.
<hr>
<a name="iteration"><h3>Walking through the network without leaving the office</h3></a>
<p>Some of the nicest features of <a href="">NetAddr::IP</a> can be better put to use when you want to perform actions with your address space. Some of them are discussed below.
<p>One of the most efficient ways to walk your address space is building a <tt>for</tt> loop, as this example shows:
<CODE>
use NetAddr::IP;

push @space, NetAddr::IP->new($_) for <DATA>;

for my $netblock (NetAddr::IP::compact @space) {
    for (my $ip = $netblock->first; $ip <= $netblock->last; $ip++) {
        # Do something with $ip
    }
}

__DATA__
10.0.0.0/16
172.16.0.0/24
</CODE>
<p>Everything up to the inner loop should be pretty clear by now, so we just ignore it. Since a couple of new friends were introduced in the inner loop of our example, an explanation is in order.
<p>This C-like <tt>for</tt> loop uses the <tt>->first()</tt> function to find the first subnet address.
The first subnet address is defined as that having all of its <em>host bits</em> but the rightmost set to zero, and the rightmost set to one.
<p>We then use the numerical comparison discussed earlier to see if the value of <tt>$ip</tt> is less than or equal to whatever <tt>->last()</tt> returns. <tt>->last()</tt> returns an address with all of its host bits set to one but the rightmost. If this condition holds, we execute the loop and post-increment <tt>$ip</tt> to get the next IP address in the subnet.
<p>There is another way to walk a subnet, invoked with the <tt>->hostenum()</tt> or the <tt>->hostenumref()</tt> methods. They return either an array or a reference to an array respectively, containing one object for each <em>host</em> address in the subnet. Note that only valid host addresses will be returned (as objects), since the network and broadcast addresses are seldom useful.
<p>With no further preamble, I introduce an example that kids shouldn't attempt at home, or at least in production code. (If you find this warning superfluous, try adding <tt>64.0.0.0/8</tt> to the <tt>__DATA__</tt> section and see if your machine chews through it all).
<CODE>
use NetAddr::IP;

push @space, NetAddr::IP->new($_) for <DATA>;

for my $ip (map { $_->hostenum } NetAddr::IP::compact @space) {
    # Do something with $ip
}

__DATA__
10.0.0.0/16
172.16.0.0/24
</CODE>
<p>If you really have enough memory, you'll see that each host address in your IP space is generated into a huge array. This is much more costly (read, slow) than the approach presented earlier, but provides for more compact one-liners or quickies.
<hr>
<a name="num"><h3>Finding out how big your network is</h3></a>
<p>Have you wondered just how many IP addresses you can use in your current subnet plan? If the answer to this (or to a similar question) is <em>yes</em>, then read on.
<p>There is a method called <tt>->num()</tt> that will tell you exactly how many addresses you can use in a given subnet.
For the quick observers out there, you can also use something like <tt>scalar $subnet->hostenum</tt>, but this is a really expensive way of doing it.
<p>A more conservative (in resources) approach is depicted below:
<CODE>
use NetAddr::IP;

my $hosts = 0;

push @space, NetAddr::IP->new($_) for <DATA>;
$hosts += $_->num for @space;

print "You have $hosts\n";

__DATA__
10.0.0.0/16
172.16.0.0/24
</CODE>
<p>Sometimes, you will be surprised at how many addresses are lost to subnetting, but we'll leave that discussion to other tutorials.
<hr>
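As a quick sanity check of the example above: a /16 leaves 2^(32-16) - 2 = 65534 usable hosts and a /24 leaves 254, for 65788 in total. The snippet below (Python, used here only to illustrate the arithmetic; it is not part of NetAddr::IP, whose ->num() semantics have varied slightly between module versions) verifies this:

```python
# Back-of-the-envelope check of the host counts in the example above.
# Assumes usable hosts exclude the network and broadcast addresses.

def usable_hosts(masklen, total_bits=32):
    """Hosts in a subnet, excluding the network and broadcast addresses."""
    return 2 ** (total_bits - masklen) - 2

total = usable_hosts(16) + usable_hosts(24)  # 10.0.0.0/16 + 172.16.0.0/24
print(total)  # 65534 + 254 = 65788
```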
http://www.perlmonks.org/index.pl?displaytype=xml;node_id=190497
-----------------------------------------------------------------------------
-- |
-- Module      :  Control.Parallel
--
-- Parallel Constructs.
-----------------------------------------------------------------------------

-- | Indicates that it may be beneficial to evaluate the first
-- argument in parallel with the second.  Returns the value of the
-- second argument.
--
-- @a `par` b@ is exactly equivalent semantically to @b@.
--
par :: a -> b -> b
#ifdef __GLASGOW_HASKELL__
par = GHC.Conc.par
#else
-- For now, Hugs does not support par properly.
par a b = b
#endif

-- | Semantically identical to 'seq', but with a subtle operational
-- difference: 'seq' is strict in both its arguments, so the compiler
-- may, for example, rearrange @a `seq` b@ into @b `seq` a `seq` b@.
-- This is normally no problem when using 'seq' to express strictness,
-- but it can be a problem when annotating code for parallelism, because
-- we need more control over the order of evaluation.  In contrast to
-- 'seq', 'pseq' is only strict in its first argument (as far as the
-- compiler is concerned), which restricts the transformations the
-- compiler can apply, and ensures that the user retains control of the
-- evaluation order.
--
pseq :: a -> b -> b
#ifdef __GLASGOW_HASKELL__
pseq = GHC.Conc.pseq
#else
pseq = seq
#endif
http://hackage.haskell.org/package/parallel-3.2.0.2/docs/src/Control-Parallel.html
django_backstage 0.0.82

Django project and site deployment using uWSGI, nginx, etc.

In the early days of Django, running apps were very much tied to specific websites. In order to make apps and pages reusable between websites, the Sites framework was integrated into Django. This historically makes sense in the context of an Apache/mod_wsgi environment, which hardwired individual running Django apps to individual web domain names.

Jump ahead to today, where the emerging recommended standard is uWSGI + nginx. This is an era where your Django apps may be providing RESTful services, WebSockets, and any of a variety of other services, in addition to 'traditional' html/css/js content. And thanks to uWSGI, you may have as many of these various services running on as many different ports (or sockets) as you wish, blissfully running along without the need for any additional webserver (goodbye, Apache) whatsoever.

It is not until later (if at all), when we bind one of our happy little uWSGI services to a webserver (hello nginx - but more about that later), that we are in the logical realm of talking about 'sites' in the context of web domain names and the Sites framework. Some services may just be building blocks for larger services and never exposed in a public namespace, so referring to them as Sites is clearly meaningless in this context. We need a new way to talk about and manage these services.

So, we introduce here the Django Backstage project and use (and probably over-abuse) a Jazz metaphor to represent its structural and functional components:

class Act:
    """An Act is essentially a runnable Django application. (Runnable, not
    necessarily Running.) The analogy is a musical Act (performer, ensemble,
    group, etc.). Miles Davis sometimes had the reputation of being reclusive.
    Davis may have gone periods between performances, but remained a musical
    Act all the while.
    It is their existence, not their performance, that defines musicians as
    Acts; likewise it is the existence of your app, not the fact that it is
    running, that defines it as an Act."""

- Downloads (All Versions):
- 532 downloads in the last day
- 1997 downloads in the last week
- 11028 downloads in the last month
- Author: MiddleFork
- Package Index Owner: MiddleFork
- DOAP record: django_backstage-0.0.82.xml
https://pypi.python.org/pypi/django_backstage/0.0.82
On Tue, Jun 05, 2012 at 03:42:30PM +0300, Avi Kivity wrote:
> On 06/05/2012 01:40 PM, Jan Kiszka wrote:
> > On 2012-06-05 12:25, Avi Kivity wrote:
> >> On 06/05/2012 04:00 AM, Michael Roth wrote:
> >>> Add our annotations according to QIDL documentation.
> >>>
> >>> +qc_declaration typedef struct RTCState {
> >>> +    ISADevice _immutable dev;
> >>> +    MemoryRegion _immutable io;
> >>>      uint8_t cmos_data[128];
> >>>      uint8_t cmos_index;
> >>>      struct tm current_tm;
> >>>      int32_t base_year;
> >>> -    qemu_irq irq;
> >>> -    qemu_irq sqw_irq;
> >>> -    int it_shift;
> >>> +    qemu_irq _immutable irq;
> >>> +    qemu_irq _immutable sqw_irq;
> >>
> >> How is qemu_irq immutable? We're raising and lowering it many times a
> >> second. It's _derived, perhaps, but not immutable.
> >
> > No, IRQState in its current form is immutable, doesn't contain any
> > volatile state.
>
> Good point. So it's just like any pointer: it depends on the pointed-to
> type. If it saves its state, then great, but if the pointed-to type
> doesn't, then it's broken.
>
> >>> +    int32_t _immutable it_shift;
> >>>      /* periodic timer */
> >>>      QEMUTimer *periodic_timer;
> >>>      int64_t next_periodic_time;
> >>>      /* second update */
> >>>      int64_t next_second_time;
> >>> -    uint16_t irq_reinject_on_ack_count;
> >>> +    uint16_t _derived irq_reinject_on_ack_count;
> >>
> >> It's not derived from anything. It's _host, maybe.
> >
> > I think it is _broken.

Agreed, using _derived was an error on my part.

> I think it's _complicated. Migration involves downtime and so lost
> ticks. In the case of RTC we probably need to trigger compensation code
> that will try to handle it according to policy.

We'd likely only be able to compensate based on calculated downtime or
something along those lines. I think we'd still want to send accumulated
ticks as well, even if it's of little importance, since it's still guest
state of a sort (in the sense that it is guest-perceivable) that we
should avoid discarding.
> > >>> +    LostTickPolicy _immutable lost_tick_policy;
> >>
> >> _host; nothing prevents us from changing it dynamically in theory.
> >
> > _host makes no sense to me. Either it is _immutable or _broken - or is
> > properly saved/restored. What should be the semantic of _host?
>
> An emulated device manages some state, and also talks to a host device,
> often through an interface like BlockDriverState or CharState. _host is
> anything related to the host device that we should be able to
> reconstruct on resume.
>
> Example of host state is a CharDriverState filename. Even if we allow
> it to change, there is no point in migrating it since it belongs to the
> source namespace, not destination.
>
> --
> error compiling committee.c: too many arguments to function
https://lists.gnu.org/archive/html/qemu-devel/2012-06/msg00684.html
Discussions
EJB programming & troubleshooting: EJB - Serializable Object Passing - Garbage Collection
EJB - Serializable Object Passing - Garbage Collection (3 messages)

Hi,
I want to know how the serializable objects returned by the session bean to the client are garbage collected on the server. Please find the details below.

AccountBean - Method defined in the AccountBean [Session Bean]

public Accounts getAccounts(String accountType)

Accounts

public class Accounts implements Serializable {
    public Accounts(String accountType, long timestamp, AdvAccount[] advAccountsArray) {
        this.timeStamp = timestamp;
        this.advAccountsArray = advAccountsArray;
        this.accountType = accountType;
    }
}

AdvAccount - Java Bean with getters/setters - Serializable

When I invoke the bean from a client, AccountBean talks to the DB and returns an Accounts object which in turn contains a number of AdvAccount objects. For example, one particular search results in 438 AdvAccount objects. This is returned to the client. Since Accounts is a serialized object on the server, when is this object garbage collected on the server? When I run JProbe I can see these 438 AdvAccount objects in memory. Even when I force garbage collection these objects are never collected and stay in memory.
Please guide me. I am using WSAD, WebSphere 5.0.2 and JProbe 6.0.2.
Thanks
Selva
- Posted by: Selvapandian Gurunathan
- Posted on: October 08 2006 03:20 EDT

Threaded Messages (3)
- Re: EJB - Serializable Object Passing - Garbage Collection by Krishna Pothula on October 08 2006 14:20 EDT
- Re: EJB - Serializable Object Passing - Garbage Collection by Selvapandian Gurunathan on October 09 2006 01:13 EDT
- Re: EJB - Serializable Object Passing - Garbage Collection by J Moyer on October 24 2006 20:20 EDT

Re: EJB - Serializable Object Passing - Garbage Collection[ Go to top ]
Selva,
In my opinion... once the object is serialized and de-serialized...
it means the object is transformed to an intermediate form and then back to an object. So it should be eligible for garbage collection after the serialization. However, now a new object with the same data is on the client side. Are you sure you are making this object eligible for garbage collection? Can you see in JProbe who is referring to this object in memory?
Krishna
- Posted by: Krishna Pothula
- Posted on: October 08 2006 14:20 EDT - in response to Selvapandian Gurunathan

Re: EJB - Serializable Object Passing - Garbage Collection[ Go to top ]
Krishna,
Thanks for your reply. On the client side I am setting the objects to NULL. Initially, after getting the EJB home object reference, we were not calling the remove method on the EJB object; now I am doing that too.
To test why the server-side serializable objects are kept in memory before being sent to the client, I tried setting the object array itself to null. In that case JProbe still showed the 438 objects being created, and when I invoked GC all the objects were garbage collected. (Since I returned null, I got a NullPointerException on the client side.) From this I conclude that once I send all the objects to the client, I don't know how to reclaim those objects on the server side.
How can I see what is referring to those objects in JProbe?
Thanks
Selva
- Posted by: Selvapandian Gurunathan
- Posted on: October 09 2006 01:13 EDT - in response to Krishna Pothula

Re: EJB - Serializable Object Passing - Garbage Collection[ Go to top ]
This might sound like a dumb question, but is AccountBean a stateless session bean, or a stateful session bean? Also, what's the implementation for getAccounts()? Is it possible something is caching the AdvAccounts?
- Posted by: J Moyer
- Posted on: October 24 2006 20:20 EDT - in response to Selvapandian Gurunathan
http://www.theserverside.com/discussions/thread.tss?thread_id=42531
Create Range of Hours in Python

So I'm looking to take input such as 7:00 AM as a start time, and say, 10:00 PM as an end time, and then compare a set of times (also in the form of 8:00 AM, 3:00 PM, etc.) to see if those times are or aren't in the range of time. Is there a good way to create a "day" that has a range of hours (the end times can be as late as 4:00 AM, so I was figuring I would go from 6:00 AM - 5:59 AM instead of midnight - 11:59 PM, if that's possible) that I can check times against?

For example: starttime = 8:00 AM, endtime = 3:00 AM, and checking time = 7:00 AM would return the time as being out of the range.

Thanks!

Answers

Use datetime parsing and comparison capabilities:

    from datetime import datetime, timedelta

    def parse_time(s):
        ''' Parse 12-hour format '''
        return datetime.strptime(s, '%I:%M %p')

    starttime = parse_time('8:00 AM')
    endtime = parse_time('3:00 AM')
    if endtime < starttime:
        # add 1 day to the end so that it's after start
        endtime += timedelta(days=1)

    checked_time = parse_time('7:00 AM')

    # Can compare:
    print starttime <= checked_time < endtime

I'd probably use the datetime library. Use strptime() on your input strings, then you can compare the datetime objects directly.

Convert all times into a 24-hour format, taking minutes and seconds as fractions of hours, then just compare them as numbers to see if a time lies in an interval.

Python's datetime.* classes already support comparison:

    >>> import datetime
    >>> datetime.datetime(2012,10,2,10) > datetime.datetime(2012,10,2,11)
    False
    >>> datetime.datetime(2012,10,2,10) > datetime.datetime(2012,10,2,9)
    True

So all you have to do is declare a class TimeRange, with two members startTime and endTime. Then an "inside" function can be as simple as:

    def inside(self, time):
        return self.startTime <= time and time <= self.endTime
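The first answer handles an end time past midnight, but a checked time that falls before the start (say 1:00 AM against an 8:00 AM - 3:00 AM range) also needs a day added before comparing. A sketch of that extra step (the function names here are mine, not from the answers above):

```python
from datetime import datetime, timedelta

def parse_time(s):
    """Parse a 12-hour formatted time such as '8:00 AM'."""
    return datetime.strptime(s, '%I:%M %p')

def in_range(start, end, checked):
    """True if checked falls inside [start, end), where the range may wrap past midnight."""
    start, end, checked = parse_time(start), parse_time(end), parse_time(checked)
    if end <= start:
        end += timedelta(days=1)      # range wraps past midnight
    if checked < start:
        checked += timedelta(days=1)  # checked time belongs to the "next day" part
    return start <= checked < end

print(in_range('8:00 AM', '3:00 AM', '7:00 AM'))  # False: outside the range
print(in_range('8:00 AM', '3:00 AM', '1:00 AM'))  # True: inside the wrapped part
```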
http://unixresources.net/faq/10111961.shtml
Python tool set for interacting with Google Sheets data.

# Project description

## sheetFeeder

(Formerly googlesheet_tools, GoogleSheetAPITools)

Basic Python functions for operations on a Google Sheet. See for more setup details. See API documentation:.

This module has been heavily used in Columbia University Libraries' archival data migrations and other activities; a case study involving its use can be found in.

## Requirements

- Python 3.4 or higher.
- A Google Apps account.
- Python packages: requests, google-api-python-client, oauth2client

## Setup

NEW: Now available as an installable package from pypi.org!

### Installation

There are several ways to use sheetFeeder, depending on how you want to manage dependencies like authentication credentials. Three options are described here: system installation, installation in a virtual environment, and stand-alone module use. For testing and portability, the virtual-environment option is most recommended.

#### A. System installation

To install into your default Python 3 environment, use the version of pip associated with that environment (usually `pip3`):

    pip3 install sheetFeeder

NOTE: You may need to prepend `sudo` to the above to install at the system level. If you do not have su permissions to install Python packages, you may do better to use a virtual environment (see below). You will need to note the location where the package is installed for step 2 below. It will be something like: `/usr/local/lib/python3.7/site-packages/sheetFeeder`

#### B. Virtual environment installation

The `venv` command is used to create a virtual Python environment. See. (Commands below are for a bash shell in Linux or Mac OS; your use of `venv` may vary; see the venv documentation linked above.)
Use `venv` to create a new virtual Python 3 environment in a convenient location, with an appropriate name such as "sfvenv":

    python3 -m venv sfvenv

Activate the virtual environment to which dependencies will be added:

    source sfvenv/bin/activate

(To deactivate the environment, use the command `deactivate`.)

Install sheetFeeder using pip:

    pip install sheetFeeder

This will install into the activated virtual environment and only be available while the environment is active. Note the location where the library was installed for step 2 below. It will be something like: `sfvenv/lib/python3.6/site-packages/sheetFeeder/`

#### C. Stand-alone installation

If you prefer not to install the module as a package but rather wish to use it as a standalone Python module, you will need to install a few dependencies yourself, either in a virtual environment or in your default Python 3 environment. In this case, download `sheetFeeder.py` to your working directory and import it from your scripts in the same directory. Dependencies to install into the environment:

    pip install requests
    pip install --upgrade google-api-python-client
    pip install oauth2client

In this scenario, you will place the `credentials.json` file from step 2 below in the same working directory as `sheetFeeder.py`.

### Obtain API credentials

To begin using the Google Sheets API you need to obtain credentials specific to your Google account and make them available to sheetFeeder.

- Go to. Make sure you are signed in as the Google identity you want to enable API access for.
- Click the "Enable the Google Sheets API" button. Download the API credentials as `credentials.json`.
- Place `credentials.json` in the sheetFeeder package location as identified in step 1 above (it will be different depending on which type of installation you opted for).

### Authenticate and authorize access to your Google account's API (Quickstart)

- Download and run `sample.py` in your working directory.
- The first time you use the API you will be asked to select the Google identity to use (if more than one are detected) and to verify access. Note that you may see a warning that the application is not verified by Google. You can go to the "advanced" option and proceed with the "Quickstart" authentication process from there.
- Click through to grant read/write permission to your Google Sheets account. If successful you will see a message saying "The authentication flow has completed."
- If successful, a `token.json` file should be created in the same folder as the `credentials.json` file (see step 1 above for location), and a brief readout of sample table data will appear.

Once the credentials and token are in place, you will be able to access sheets via the API without additional steps; you can verify this by running `sample.py` again: you should get the read-out without the authentication steps.

### Reusing and revoking API credentials

Note that your API credentials (`credentials.json` and `token.json`) can be reused in other environments where sheetFeeder is installed, without repeating steps 2–3 above. You may copy them to the appropriate location per step 1 above. To disallow API access and reset to the initial state, simply delete the files. You may also manage API access via the Google API console.

## Using sheetFeeder

### The dataSheet() class

The core class is `dataSheet(id, range)`. Define a dataSheet to operate on using the id string of a Google Sheet (the long string between "" and "/edit#gid=0" or the like), and a range including a tab name. Example:

    from sheetFeeder import dataSheet

    my_sheet = dataSheet('1YzM1diaFchenQnchemgogyU2menGxv5Gme', 'Sheet1!A:Z')

This enables several methods on the dataSheet class, as outlined below.

### Methods

clear()
- Empty the contents of range, as defined by dataSheet.
- Example: `my_sheet.clear()`

getData()
- Return the contents of dataSheet in a list of lists.
- Example: `my_sheet.getData()`
- Result: `[['head1', 'head2'], ['a', 'b'], ['one', 'two']]`

getDataColumns()
- Return the contents of dataSheet rotated as columns, in a list of lists.
- Example: `my_sheet.getDataColumns()`
- Result: `[['head1', 'a', 'one'], ['head2', 'b', 'two']]`

appendData(data)
- Append rows of data to the sheet. Note: the range is only used to identify a table; values will be appended as rows at the end of the table, not at the end of the range.
- Example: `my_sheet.appendData([[5, "e", 'xx'], [6, "f"], [7, "g"]])`
- Result: add some rows.

lookup(search_str, col_search, col_result)
- Provide a string to match, the column to match in, and col(s) to return. The col_result can be either an integer or a list of integers, e.g., col_search=0, col_result=[1, 2], which will return an array of results. Will return multiple matches in a list.
- Example: `my_sheet.lookup('Smith', 2, [3, 4])`
- Result: Return values of columns 3 and 4 for any row where column 2 equals "Smith".

matchingRows(queries, regex=True, operator='or')
- Return a list of rows for which at least one queried column matches a regex query. Assumes the first row contains heads. Queries are pairs of column heads and matching strings, e.g., `[['ID', '123'], ['Author', 'Yeats']]`. They are regex by default and can be joined by either 'and' or 'or' logic.
- Example: `my_sheet.matchingRows([['ID', '123'], ['Title', '.*Humph.*']])`
- Result: Return all rows where ID = 123 or Title matches the regex expression `.*Humph.*`.
- Example: `my_sheet.matchingRows([['ID', '123'], ['Title', '.*Humph.*']], operator='and')`
- Result: Return all rows where ID = 123 and Title matches the regex expression `.*Humph.*`.

importCSV(csv, delim=',', quote='NONE')
- Import a CSV file into a designated sheet range, overwriting what is there. Delimiter is comma by default, but can be any character, e.g., pipe ('|').
- Example: `my_sheet.importCSV(my_file, delim='|')`
- Result: Import contents of pipe-delimited text file into dataSheet.
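The 'and'/'or' semantics of matchingRows can be illustrated with a small stand-alone sketch; this mimics the documented behaviour on a local list of rows and is my own illustrative code, not sheetFeeder's actual implementation:

```python
import re

# Illustrative sketch of matchingRows-style filtering on local data;
# not sheetFeeder's internals.
def matching_rows(data, queries, operator='or'):
    """data[0] holds column heads; return rows where queried columns match."""
    heads, rows = data[0], data[1:]
    combine = any if operator == 'or' else all
    def hit(row, head, pattern):
        return re.fullmatch(pattern, row[heads.index(head)]) is not None
    return [r for r in rows if combine(hit(r, h, p) for h, p in queries)]

data = [['ID', 'Title'],
        ['123', 'On Bees'],
        ['456', 'Humphry Clinker'],
        ['123', 'Humphrey repaired']]

print(matching_rows(data, [['ID', '123'], ['Title', '.*Humph.*']]))                  # 'or': three rows
print(matching_rows(data, [['ID', '123'], ['Title', '.*Humph.*']], operator='and'))  # 'and': last row only
```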
### Additional subclasses

- `.id`: Returns the id part of dataSheet
- `.range`: Returns the range part of dataSheet
- `.initInfo`: Returns a dictionary of metadata about the sheet (all tabs, not just the one defined in the 2nd arg of dataSheet).
- `.initTabs`: Returns a list of names of tabs in the spreadsheet.
- `.url`: Returns the public url of the sheet, of the form {sheet_id}/edit#gid={tab_id}

### Notes

This is a work in progress. Comments/suggestions as well as forking very welcome.
https://pypi.org/project/sheetFeeder/
Recent posts by Palak Shah

Issue configuring global results

I am trying to configure global results in Struts, but I get an exception in the logs when starting Tomcat:

Caused by: org.xml.sax.SAXParseException: The content of element type "package" must match "(result-types?,interceptors?,default-interceptor-ref?,default-action-ref?,default-class-ref?,global-results?,global-exception-mappings?,action*)".

My struts.xml is as follows:

    <constant name="struts.action.extension" value="share" />
    <constant name="struts.custom.i18n.resources" value="com.messages.messages"/>
    <constant name="struts.ui.theme" value="simple"/>
    <package name="default" namespace="/" extends="struts-default">
        <result-types>
            <result-type
        </result-types>
        <global-results>
            <result name="login" type="tiles">login.page</result>
        </global-results>
        <default-action-ref
        <action name="index">
            <result type="redirectAction">
                <param name="actionName">login</param>
                <param name="namespace">/</param>
            </result>
        </action>
        <action name="login" class="com.login.action.LoginAction">
            <result type="tiles">login.page</result>
        </action>

Can someone please advise what is wrong with this code? Even <result name="login">login.jsp</result> does not work; I see the same exception.

9 years ago, in Struts

Connecting two actions to the same ActionForm

Yes, you can... just give name="sameForm" in both actions. Also mention validate="false" in the action where you don't want to validate but just want to initialize.

11 years ago, in Struts

Caching of Database Access Objects's possible?

I agree James...
I need to add the business layer. However, my doubt is about multiple threads accessing the same method of a DAO. Since I have only one instance of the DAO, would multiple simultaneous calls to the same method of a DAO (for different users) create a problem? Please see my code above and let me know if it'll work!

12 years ago, in JDBC and Relational Databases

Caching of Database Access Objects's possible?

I would like to cache so that it does not have to create a new DAO for each database call. Any idea how to solve the problem of concurrency? Or is it better to create a new DAO every time?

12 years ago, in JDBC and Relational Databases

Caching of Database Access Objects's possible?

I am working on a web application. I am using Struts for the presentation layer, and the DAO pattern for the database layer. My doubt is about creating instances of DAOs. I have created a DAO Factory which can create instances of DAOs. Can I cache the DAOs, or do I have to create a new DAO every time? Would caching done as shown in the code below create concurrency issues? If so, please suggest the right way of going about it.
My Struts Action code is as follows:

    public class WelcomeAction extends Action {

        static Category log = Category.getInstance(WelcomeAction.class.getName());

        public ActionForward execute(ActionMapping mapping, ActionForm form,
                HttpServletRequest request, HttpServletResponse response) throws Exception {
            try {
                UserDAO userDAO = (UserDAO) DAOFactory.getInstance("com.share.dao.UserDAO");
                ResultObj obj = (ResultObj) userDAO.getUserList();
                List userList = (ArrayList) obj.getResults();
                request.setAttribute("userList", userList);
            } catch (Exception e) {
                e.printStackTrace();
                return mapping.findForward("errorPage");
            }
            return mapping.findForward("success");
        }
    }

UserDAO code is as follows:

    public abstract class UserDAO extends BaseDAO {

        public Object getUserList() throws DBException {
            Connection conn = null;
            PreparedStatement stmt = null;
            Object obj = null;
            try {
                conn = DBConnectionManager.getConnection();
                stmt = conn.prepareStatement("Select * from USER_INFO where USER_ID like ?");
                stmt.setMaxRows(3);
                stmt.setString(1, "%");
                stmt.executeQuery();
                obj = processGetUserList(stmt);
            } catch (SQLException e) {
                throw new DBException(e.getMessage());
            } finally {
                DBConnectionManager.close(stmt);
                DBConnectionManager.closeConnection(conn);
            }
            return obj;
        }
    }

And the DAO Factory is as follows:

    package com.share.dao;

    import java.util.HashMap;

    public class DAOFactory {

        private static HashMap hash = new HashMap();

        public DAOFactory() {
        }

        public static BaseDAO getInstance(String daoName) {
            BaseDAO dao = null;
            try {
                dao = (BaseDAO) hash.get(daoName);
                if (dao == null)
                    dao = createNewInstance(daoName);
            } catch (Exception ex) {
                ex.printStackTrace();
            }
            return dao;
        }

        private static synchronized BaseDAO createNewInstance(String daoName) {
            BaseDAO dao = null;
            try {
                dao = (BaseDAO) hash.get(daoName);
                if (dao != null)
                    return dao;
                Class daoClass = Class.forName(daoName + "Impl");
                dao = (BaseDAO) daoClass.newInstance();
                hash.put(daoName, dao);
            } catch (Exception ex) {
                ex.printStackTrace();
            }
            return dao;
        }
    }

[ April 19, 2008: Message edited by: Palak Shah ]

12 years ago, in JDBC and Relational Databases

Images don't show

Hi, I have a JSP page that has some image buttons. I am using Struts to control the app. The problem is that sometimes these images don't show on the page when you go to some other page and come back. Does anybody know what the issue could be? Thanks.

14 years ago, in JSP

Cleared SCBCD exam with 95%

Hi Sanjay, I do not have any prior EJB experience. As far as preparations are concerned, if you can give 3 hrs a day then 2 months should be good enough.

15 years ago, in Certification Results

Cleared SCBCD exam with 95%

Hi All, I have cleared the SCBCD exam with 95%. I referred to the following material:
1. HFEJB (3 times)
2. MZ's notes (1 time)
3. Tests on ejbcertificate.com and jdiscuss.com (2 times): they are very good
Special thanks to:
1. Kathy and Bert
2. MZ
One special request to the HFEJB authors: you have given many exercises that do not have answers. I understand your motive behind the same, but it would be great if you could provide answers for them (maybe at the end of the book), since at the end of the day we would like to verify ourselves.

[ June 18, 2005: Message edited by: Palak Shah ]

15 years ago, in Certification Results

SCBCD Mock Tests

Visit scbcd_exams@yahoogroups.com for links to all SCBCD related mock exams (free/shareware) available online.

15 years ago, in EJB Certification (OCEEJBD)

Help me in SCBCD

Hi, the following are required to ensure above 85%:
1. Read HFEJB.
2. Go through Mikalai Zaikin's EJB Notes.
3. Give exams on ejbcertificate.com and jdiscuss.com.

15 years ago, in Certification Results

SFSB question

I think so! Other experts, please confirm.

15 years ago, in EJB Certification (OCEEJBD)

When is ejbCreate() called?

For stateful session beans, the container makes the EJB object and SessionContext. Then the container creates the bean instance.
Container calls setSessionContext() link the bean to its context. Also the container links the bean to its EJB object by calling ejbCreate(). Is this correct? if not when is ejbCreate() called? For stateless session beans, when exactly is ejbCreate() method called? I ask this question because, in HFEJB it is mentioned that bean can get a reference to its EJBObject from ejbCreate() method for a stateless bean. How is that possible? can anyone please clarify? Thanks in advance. show more 15 years ago EJB Certification (OCEEJBD) How to study for SCBCD? Help Required! Does this mean that reading specs is not essential?? If I skip reading specs in what range can i score?? above 85??? show more 15 years ago EJB Certification (OCEEJBD) How to study for SCBCD? Help Required! Hi ALL, I have completed HFEJB twice. Theer is too much of theory.. Huh.. This method in that interface...... Also I have heard people saying that reading Specs is also required. The Specs has only 500+ pages!!! Does anybody have wayout? means how to go about studying? What to study? How much to study? Please help as I am finding it very difficult to prepare for the Certification. show more 15 years ago EJB Certification (OCEEJBD) HF: AdviceBean tutorial does not run! Hello, I followed all the steps given in HFEJB: AdviceBean tutorial. But when I tried to run 'AdviceClient.class' from command line, I get an exception. Can anybody help? See the exception attached below: AdviceClient.go(AdviceClient.java:18) at AdviceClient.main(AdviceClient.java:11) show more 15 years ago EJB Certification (OCEEJBD)
https://coderanch.com/u/91121/Palak-Shah
Hi guys, I'm facing a new problem. Now I'm learning how to send signals to processes in order to make them communicate. I was trying to create a process and, when this process is created, send a signal to its parent... but I had no success. I get a bunch of errors and warnings. Here's the code:

#include <stdio.h>
#include <unistd.h>
#include <signal.h>
#include <stdlib.h>
#include <sys/wait.h>

#define max 10

void handler(int signo);

void (*hand_pt)(int);
hand_pt = &handler;

int ppid, pid[max], i, j, k, n;

// So I created a pointer to a function (hand_pt)
// and this pointer is going to point to the function (handler)

void handler(int signo)
{
    printf("It worked");
}

main(int argc, char **argv)
{
    signal(SIGUSR1, hand_pt); // this process will wait for the signal SIGUSR1,
                              // which will be handled by hand_pt -> handler

    pid[0] = fork();
    if (pid[0] == 0)
    {
        printf("Sending signal:\n\n");
        ppid = getppid();
        kill(ppid, SIGUSR1);
        exit(0);
    }
    return (0);
}

Maybe I didn't understand how the signal function works... But as far as I know, to use a handler function I have to create a pointer to that function (in this case void (*hand_pt)(int);) and assign to this pointer the address of the function that will handle the signal (hand_pt = &handler;). Is my line of thought correct? After that I'd be able to send a signal with the kill() function. What am I doing wrong? Thanks
https://www.daniweb.com/programming/software-development/threads/431571/can-t-use-signal-and-kil-funcions
- The System.Xml.XmlDocument class provides XML handling.
- There are two main .NET APIs for working with XML:
  - the XML document object model (DOM)
  - XmlReader and XmlWriter
- The DOM is tree-based XML handling:
  - XML data is loaded into memory.
  - You can search for any node.

What is the XML DOM?
- It provides a standard mechanism for programmatically working with data.

Read XML data
- XML data is loaded into an XmlDocument object.
- So first we have to create an XmlDocument object.

XML data looks like this...

<bookstore>
  <book category="CHILDREN">
    <title>Harry Potter</title>
    <author>J K. Rowling</author>
    <year>2005</year>
    <price>29.99</price>
  </book>
  <book category="WEB">
    <title>Learning XML</title>
    <author>Erik T. Ray</author>
    <year>2003</year>
    <price>39.95</price>
  </book>
</bookstore>

It contains a root node and multiple child nodes (the root node here is bookstore). Each node has a parent node (except the root node), and each node can have siblings (except the root node). If you want to learn XML in depth, just go to w3schools.com; there are lots of tutorials available.

OK, let's do this. Now we are going to make a new project in Microsoft Visual Studio 2008 to read a simple XML file. First launch Visual Studio 2008 and select New Project like this...

After that, select Visual C# Windows Forms Application, give the project a name, and save it wherever you want...

Right-click the project and select Add --> New Folder. Rename that folder to XML. Right-click the XML folder that you created and select Add -> New Item. Select XML File, name it BookStore.xml, and click Add. Copy and paste this XML code into the BookStore.xml file:

<?xml version="1.0" encoding="utf-8" ?>
<bookstore>
  <book publicationdate="2000" ISBN="212">
    <title>C#</title>
    <author>Author 1</author>
    <price>5000</price>
  </book>
  <book publicationdate="2009" ISBN="312">
    <title>Java for dummies</title>
    <author>Author 2</author>
    <price>12000</price>
  </book>
  <book publicationdate="2008" ISBN="412">
    <title>ASP.NET</title>
    <author>Author 3</author>
    <price>6000</price>
  </book>
</bookstore>

Now right-click Form1.cs and select View Design. Create a design like this and rename the button to btnLoad with Text set to Load Data.

Also drag and drop a textbox, click the small arrow in its upper-right corner, and tick Multiline. Change its name to txtResult. Double-click the button and it will automatically take you to the code. First you need two references and a variable...

using System.Xml;
using System.IO;

private string _xmlFile;

After that, go to the design view and double-click Form1.cs; it will automatically go to the code and create the Form1_Load event. In Form1_Load, paste this code:

if (!SetxmlFilePath())
{
    MessageBox.Show("Unable to load XML file.");
}

Now create the private bool SetxmlFilePath() method and paste this code inside it:

FileInfo fi = new FileInfo(Application.StartupPath + @"\..\..\XML\BookStore.xml");
if (fi.Exists)
{
    _xmlFile = fi.FullName;
}
else
{
    return false;
}
return true;

Now go to the design area and double-click the Load button; you will get btnLoad_Click. Inside it, paste this code:

DumpContents(_xmlFile);

Now create the private void DumpContents(string _xmlFile) method and paste this code inside it:

string publishDate = null;
string title = null;
string author = null;
string ISBN = null;
string price = null;
XmlElement element = null;

XmlDocument doc = new XmlDocument();
doc.Load(_xmlFile);

foreach (System.Xml.XmlNode node in doc.SelectNodes("//book"))
{
    title = GetNodeValue(node, "title");
    author = GetNodeValue(node, "author");
    price = GetNodeValue(node, "price");

    element = (XmlElement)node;
    publishDate = element.GetAttribute("publicationdate");
    ISBN = element.GetAttribute("ISBN");

    DisplayBook(title, author, publishDate, ISBN, price);
}

Create this method:

private void DisplayBook(string title, string author, string publishDate, string ISBN, string price)
{
    string results = string.Format("{0} ({1}) by {2} (ISBN: {3}), ${4}",
        title, publishDate, author, ISBN, price);
    AddText(results);
}

After that, create the AddText method like this:

private void AddText(string Text)
{
    txtResult.AppendText(Text + Environment.NewLine);
}

At last, create this method to get the node value:

private string GetNodeValue(XmlNode parentNode, string nodeName)
{
    string retval = null;
    XmlNode node = parentNode.SelectSingleNode(nodeName);
    if (node != null)
    {
        retval = node.InnerText;
    }
    return retval;
}

If you did everything correctly, right-click the project and select Build. If there are any compilation errors, you have to fix them to get the result. If there are no errors, click the Debug button; after the form shows, click the Load Data button and the result textbox shows the contents of the XML file. You can download the sample project.

Nice example to read xml files, thanks. So... I recommend mediafire.com to store your files, it's the best ;)

Thank you so much! Helped me a lot! =D

Thanks for sharing this. Your post is really very good and I appreciate it.
http://blog.rajithdelantha.com/2010/09/read-xml-file-from-c-using-microsoft.html
Consider the following code.

public class Car {
    private static final int SPEED_LIMIT = 120;
    private int speed;
    ...
    public boolean isLegal() {
        if (speed > SPEED_LIMIT) return false;
        return true;
    }
}

OK, in C our constants were #define macros, and we put them in uppercase, because macros are quite distinctive animals compared to domesticated code. In Java, by convention, we put them in uppercase as well. They are distinguished from other class attributes by being static and final. Since other attributes might also be static, and others might also be final, semantically this distinction is weak.

Let's say we decide to make the above class more customisable, where the SPEED_LIMIT is configurable, e.g. for different countries. So we make it not final.

public class Car {
    private static int SPEED_LIMIT = 120;
    private int speed;
    ...
    public static void setSpeedLimit(int newSpeedLimit) {
        SPEED_LIMIT = newSpeedLimit;
    }
}

Of course, as soon as we take away the final modifier, we should rename it speedLimit. The argument I'm trying to illustrate is that we should drop this legacy hangover of uppercase and underscores for "constants", because they aren't macros anymore. They are class attributes with a final modifier, and in my book that's not enough for any special treatment, such as breaking the clean Java camel-cap naming convention. To summarise, LOUD UPPERCASE WORDS AND _UNDERSCORES_ SHOULD BE BANISHED, as relics of a bygone era, don't you think!?

I disagree. It's one of those conventions that is applied universally and thus is very reliable as a guide when reading the code. Patrick
Posted by: pdoubleya on September 28, 2006 at 08:26 AM

I agree. It's a form of Hungarian notation. It's a convention, sure, but it can be violated easily by humans, so there's opportunity to deceive.
Posted by: demian0311 on September 28, 2006 at 08:40 AM

The new features of the current era are IDEs, text editors, and refactoring.
In NetBeans you can press ctrl-shift-R when on one of the declarations and it will rename it to whatever you want, wherever it is! But another vestige from a bygone era is ctrl-H, search and replace.
Posted by: shemnon on September 28, 2006 at 09:05 AM

Hi Evan, sorry man, but I think your case is too weak and would be easily beaten in any court. IANAJL (I'm not a Java lawyer :-), but I can quote a couple of arguments against it:
- 'backward' compatibility: as Patrick mentioned, it's already universally used, and makes code easier to read
- to make item 1 worse, you can (although you should not) access static variables through instance references. So, you could have

Car car;
int a = car.speed;
int b = car.speedLimit;

Which one is static and which is an attribute?
- to make the previous example worse, we could use static import:

import static Car.*;
int x = 3 * speedLimit;

- they are not only class attributes with a final modifier; they could be enums as well (I know the final result in the bytecode would be the same, but not for someone reading the code).
- references to constants can be inlined and then eliminated from the bytecode. That can cause confusion afterwards, as the identifier wouldn't be present (say, in a debug session or after decompiling the code).
- the semantics of these constants is that they can only be changed at compile time. Your example does not make sense in this respect: if the speed limit changes according to the country, it should not be a compile-time constant, but a proper attribute (or an object, or even a resource in a bundle).
-- Felipe
Posted by: felipeal on September 28, 2006 at 09:09 AM

I can't help it, I have never liked UPPER_CASE constants; I just don't like the way they SHOUT_AT_YOU when you're reading code. Although I have to admit that the convention argument, and the fact that it makes the code clearer, are good points. So all I can say is: I_USED_TO_BE_CONFUSED butNowImNotSoSure.
Posted by: panaseam on September 28, 2006 at 12:28 PM

Non-constant constants, I love it :) I hear you, buddy; in that special case one might use Car.getSpeedLimit();
Posted by: liquid on September 28, 2006 at 10:52 PM

I totally agree! I stopped using UPPERCASE_CONSTANTS some time ago, because they're unreadable, and also for the case you expose. I find more and more open-source code that agrees with us, so please, everyone, stop with the UPPERCASE_CONSTANTS; they don't make any sense! In my IDE (Eclipse) constants are shown in italics, so why put them in caps?? It makes no sense... Thanks for spreading the word,
Posted by: alois on September 28, 2006 at 11:09 PM

-1
Posted by: ge0ffrey on September 29, 2006 at 12:57 AM

Thanks for all your comments :) Another consideration is that all public class attributes are typically constants, so all attributes we expose in our API are uppercase, in which case having them in uppercase doesn't distinguish them from other (non-constant) public attributes, because there aren't any!
Posted by: evanx on September 29, 2006 at 02:18 AM

This should be handled in modern times with a proper IDE, for example with a different color for static final immutable members. Eclipse now does a great job of making members stand out from regular variables; there is no need for distracting prefixes like "this.", "m_" or "my". I bolded immutable because you didn't mention it, and it's also a requirement for uppercasing a name.
Posted by: firefight on September 29, 2006 at 07:45 AM

Since javac (in my opinion confusingly) optimizes many static finals by inlining them, they somewhat still are macros. And I agree that non-final statics are usually evil. Also, it's still an almost universal convention in Java. But the IDE point is a good one, too. The IDE knows it, so just believe it. Who knows? Maybe it will change with time.
Posted by: tompalmer on September 29, 2006 at 07:51 AM

GoodForTwoWordsButNearlyUnreadableThereafterCamelCase is more readable than UPPERCASE constants? Give me a break.
At least UPPERCASE shouts in your face to remind you of its CONSTANT nature. What does the hallowed CamelCase tell me? How is it better than the more readable convention of using _ to separate words, besides saving some keystrokes (debatable in these days of autocompletion)? It's more natural in (at least) the English language to join words using - (hyphen), as in oh-so-hackneyed-argument; I haven't seen anyone writing 'OhSoHackneyedArgument' without getting sued for unreadability. Underscores are the second-best substitute for hyphens (which are already taken as the minus operator). The fault is not with the constants but, unfortunately, with the language. Java has no clean 'n' simple way of defining a constant (it lacks a 'const' keyword). Requiring dual declarations, final static, is more of a hack than a feature, which ironically necessitates the UPPERCASE convention. Otherwise, how do you distinguish between final statics and non-final statics from the readability/human-understanding perspective? Ditto for the _instance_variable, another hack forced on me by a language which allows function arguments/local variables to shadow instance variables. I find better comfort in the less verbose _ convention, instead of the fascist this.some_var = some_var. I know the compiler will spit out an error, and Eclipse will show a red cross, show it in italics, etc. I am talking here about us humans understanding the code and its intent. 'My IDE doesn't need me to do that' isn't a good enough answer for a language that gets coded in multiple IDEs and many times in programmers' text editors like vim, emacs, jedit, etc. As a developer who has used and is using multiple IDEs and text editors, it's plain annoying to depend on IDEs to figure things out (italics, color-coding, etc.). These things are better handled at the language and convention level (you can switch IDEs with less pain).
Personally, I like the Ruby way: @ for instance variables, @@ for class variables and, yes, UPPERCASE for constants.
Posted by: vivekbp on September 29, 2006 at 09:21 AM

I don't see any reason for non-static fields to even exist, given that immutable objects lead to fewer bugs, i.e., you can throw away an object when you want to change it, and create a new one instead (removing problems with shared objects). So where do you put the data? Let Java create the fields implicitly: use anonymous classes. I also don't see any reason for non-final static fields to exist, because they are global variables. So all the fields left from this cutting analysis are static and final. Most finals in Java are not constants, nor are they anything like constants. Consider a simple Complex class ;) (and ignore my rules above):

class Complex {
    final double real;
    final double imag;

    Complex(double real, double imag) {
        this.real = real;
        this.imag = imag;
    }
}

Here, real and imag are not anything like constants, because they vary per instance. Making them uppercase would make them look like constants, and would be confusing. Similarly:

public static final Image image = new ImageIcon("images/bill.png");

image is not very much like a constant; why name it IMAGE? Its value is not determined at compile time. The only time I would use upper case is when I have a constant, e.g.,

public static final int MAX_VALUE = 100;

Note that

public static final int RANDOM_VALUE = (int)(Math.random() * 100);

is also misleading; its value is not determined at compile time. Cheers.
Posted by: ricky_clarkson on September 29, 2006 at 12:00 PM

Concerning vivekbp's comments, Ruby's consistency is nice, although it's not pervasive (methods and locals look the same, for instance). Also, only the first char matters for constants in Ruby, as in Uppercase. Which also means that Classes and Constants look the same. And that also means that Classes usually are Constants.
And they're values, too, so maybe it's okay that they aren't completely distinguished.
Posted by: tompalmer on September 29, 2006 at 03:18 PM

I think that getBlah() is the best way to go. I have seen lots of code where constants are used for text. Making these i18n-ready becomes very difficult, as you have to re-code. Encapsulate.
Posted by: rob005 on September 30, 2006 at 05:34 AM

vivekbp: I really take issue with the idea that because people might be using a different IDE, you should code to the lowest common denominator. This idea keeps dragging coding back into the 80's. On every development project I've been on, we've standardized on one IDE. Text editors are another matter, but you want something with at least IntelliSense when coding. Think about this: we are still living in an era where we have to manually maintain our comments at 80 chars/line even though MS Word had this figured out in 1983.
Posted by: firefight on September 30, 2006 at 07:50 AM

@tompalmer: You are right that only the first char of a const needs to be uppercase in Ruby. But using all uppercase lets you distinguish consts from class names. As regards distinguishing locals from method calls, can you show me any language that allows you to do that? Ruby may not do it 100%, but it's handled more nicely than in most other languages (that I know).

@firefight: There's a huge difference between using IDE features and depending on them. I didn't mean "use the lowest common denominator available". My objection was to depending on IDE features like italics for statics (for example). I use autocompletion and IntelliSense; they do assist me in coding faster. But at the same time, they've also reduced the number of people who read Javadocs before calling methods. I find it dangerous that programmers call methods offered by auto-completion without understanding their pre- and post-conditions. Even if the IDE lets you read Javadocs as a tooltip, it isn't the same as reading them on a web page with a white background.
Also, class-level Javadocs seldom get read (and they sometimes contain valuable info regarding the methods too). On auto-imports: well, Eclipse once imported a JSF class into my Swing code. I typed Radio and it got me RadioRenderer; I deleted the wrong class, but the import remained and was hidden by folding, until another developer got a compilation error in the Ant build. 98% of Eclipse warnings (which now stand in the thousands) are due to the friendly import-on-copy feature, which doesn't delete imports if the class references get deleted later. (Yes, this can be fixed by 'Organize Imports'. I know that. But it needs the developer to do this extra step, voluntarily, every time before check-in.) My point: friendly features sometimes tend to be painful too. Yes, in my projects too the IDE is standardized per project. But I am concurrently working on 3 different projects: one uses NetBeans, the second uses IRAD (Eclipse)/WebSphere 6.0, and the third is on WebLogic 8.0/WebLogic Workshop 8.0 (a custom BEA IDE). To make it more poignant, we have a home-grown library which we use in all three projects. I use text editors like JEdit, TextPad, etc. for quick fixes (instead of waiting 10 minutes for the IDE to launch). Again, I use different editors not because I like to use many, but because they differ in features; some do certain tasks better than others.
Posted by: vivekbp on September 30, 2006 at 10:43 AM

I agree with you in principle, but this convention is so firmly established now that I think we're stuck with it. It's better to stick with the anachronism of uppercase constants just for the sake of consistency.
Posted by: quelgar on October 05, 2006 at 09:49 PM

There will always be inconsistencies across APIs, like naming conventions and such. But the language, libraries, and conventions should evolve. Personally, I abandoned uppercase in my libraries.
Maybe Java needs a macro system like C had, in order to support both backwards compatibility and evolution, which includes fixing bugs, since that can break backwards compatibility too. Posted by: evanx on October 06, 2006 at 03:04 AM
http://weblogs.java.net/blog/evanx/archive/2006/09/a_case_against.html
tangent of spline point

On 02/05/2013 at 11:28, xxxxxxxx wrote:

User Information:
Cinema 4D Version: r14
Platform:
Language(s): C++

---------

hi, can someone tell me how I can get the tangent at point(x) of a spline, where x is an index within GetPointCount? Or how I can get the spline position that point x has, so I can use GetSplineTangent? anyone? cheers, ello

On 02/05/2013 at 12:38, xxxxxxxx wrote:

Read about the SplineObject class in the SDK; it has methods to deal with tangents. Or get your spline object's tags: it will return a hidden PointTag, like all PointObjects do, and a TangentTag. To understand point and tangent tags, read about VariableTag in the SDK.

On 02/05/2013 at 14:07, xxxxxxxx wrote:

but how do I access the spline tangent at a real point, not at a position (0...1) along the spline? that's my problem.

On 02/05/2013 at 15:05, xxxxxxxx wrote:

Your first posting implied that you want to get the tangents for the control points of your spline, as you spoke of PointObject.GetPointCount(). Both PointTags and TangentTags hold the data of control points/vertices. In the SplineObject there are the GetTangent methods to access the indexed tangents. I am not sure what you understand as a 'real' point, as you seem to refer to a vector with it, while the term real in that context is actually bound to the 0.0-1.0 offset; a real offset is a reference to the space in real-world units (200 cm instead of 0.5). If you just want to sample an arbitrary point between two vertices aka control points, I am not sure why you cannot use the general real-offset methods provided by Cinema 4D. There is also the SplineHelper class, providing some convenience methods for handling spline objects. You can also get the underlying LineObject of a spline to access the sampled sub-points rather than dealing with the mathematically perfect representation of the spline.
Happy rendering,
Ferdinand

edit: fixed some bs i wrote ;)

On 02/05/2013 at 18:52, xxxxxxxx wrote:

I'm having a hard time understanding exactly what you want too. Here's an example of creating a spline and setting all of the tangents to a specific Y value. If you only want one (or certain) tangents to be set instead of all of them, then use a conditional statement with an array index value inside the loop, instead of using "i" for setting all of them:

// This is an example of creating a spline and setting its points and tangent handles
SplineObject *sp = SplineObject::Alloc(3, SPLINETYPE_BEZIER); // Create a bezier spline with 3 points
if (!sp) return NULL;
doc->InsertObject(sp, NULL, NULL, 0);  // Add it to the OM

Vector *gp = sp->GetPointW();          // Assign the array of spline points to a variable
gp[0] = Vector(10, 0, -100);           // Place the first spline point here
gp[1] = Vector(0, 0, 0);               // Place the second spline point here
gp[2] = Vector(-10, 0, 100);           // Place the third spline point here

Tangent *tangents = sp->GetTangentW(); // Assign the array of tangents to a variable

// Loop through all of the tangents
for (LONG i = 0; i < sp->GetTangentCount(); i++)
{
    tangents[i].vl.y = 200;  // Set each spline point's left tangent to this position
    tangents[i].vr.y = -200; // Set each spline point's right tangent to this position
}

sp->Message(MSG_UPDATE); // Update the spline's changes

-ScottA

On 05/05/2013 at 10:55, xxxxxxxx wrote:

thank you. I didn't try GetTangentW, because the SDK says that it returns the first element. Now I get the tangents. However, I had hoped to use the tangents to create a correctly offset position for the points. But in linear splines there are no tangents. How can I calculate the direction for the offset so that positive values go in one direction and negative values go in the opposite direction?
(see image) I did it like this:

Vector winkel = VectorToHPB(p2 - p1);
Matrix richtung = HPBToMatrix(winkel, ROTATIONORDER_DEFAULT);

However, that resulted in point offsets in unwanted directions, as if you changed the green and red for only some of the points in the image. That's why I thought using the tangents could help... maybe there is another solution?

cheers, Ello

On 05/05/2013 at 11:57, xxxxxxxx wrote:

If I wanted to move the points of a spline, this is the way I'd do it:

#include "lib_splinehelp.h"

BaseDocument *doc = GetActiveDocument();
BaseObject *spline = doc->SearchObject("Spline");
PointObject *sp = ToPoint(spline);

AutoAlloc<SplineHelp> sh;
if (!sh) return FALSE;
sh->InitSpline(sp);
if (!sh->Exists()) return FALSE;

Vector *spPnts = sp->GetPointW(); // Get all the points in the spline

Real offsetX = 100.0;
Real offsetY = 100.0; // Some offset values that can be bound to GUI gizmos
Real offsetZ = 100.0;

LONG pntCnt = sh->GetPointCount();
for (LONG i = 0; i < pntCnt; i++)
{
    spPnts[i].x += offsetX;
    spPnts[i].y += offsetY; // Move all the points in the spline by this offset amount
    spPnts[i].z += offsetZ;
}

sp->Message(MSG_UPDATE); // Update the changes made to the spline

-ScottA

On 06/05/2013 at 00:15, xxxxxxxx wrote:

yes, I know how to move points, but how do I get the direction when there is no tangent, and how do I take care that the points all move relative to the right or left side of the spline? that's the main problem I have here

On 06/05/2013 at 03:07, xxxxxxxx wrote:

You can get the bisecting vector of a point by calculating the arithmetic middle of the adjacent points.

On 06/05/2013 at 04:26, xxxxxxxx wrote:

I guess you mean something like

tRot = Vector((getAngle(points[i-1], points[i]).x + getAngle(points[i+1], points[i]).x) / 2, 0, 0);
Matrix richtung = HPBToMatrix(tRot, ROTATIONORDER_DEFAULT);
tPos += Vector(0, 0, _ipOffset) * richtung;

This still makes some objects move in one direction and others in the opposite direction...
On 06/05/2013 at 14:13, xxxxxxxx wrote:

double post, sorry.

On 06/05/2013 at 14:15, xxxxxxxx wrote:

Not exactly. But I expressed myself not 100% correct. You can get a point that lies on the bisecting line this way:

p1 = points[i]
p2 = (points[i - 1] + points[i + 1]) * (1 / 2.0) // I think division is not supported by the vector class
bv = (p2 - p1).GetNormalized()

Then again you can cross it with one of the adjacent lines and cross it again and you have the tangent:

n = bv.Cross(points[i] - points[i - 1])
tangent = bv.Cross(n)

Best,
Niklas

On 06/05/2013 at 21:32, xxxxxxxx wrote:

thank you niklas. i'll try if this helps me solve my problem and report back...

On 07/05/2013 at 01:44, xxxxxxxx wrote:

hm, this still produces the wrong direction for some points. i have no idea how to get this working...

On 07/05/2013 at 03:18, xxxxxxxx wrote:

Hi ello, here, some example code quickly plucked together.

import c4d

class Tag(c4d.plugins.TagData):

    SIZE_SPHERE = c4d.Vector(10)
    COLOR_SPHERE = c4d.Vector(1, 0.66, 0.02)
    LENGTH_TANGENT = 20
    COLOR_TANGENT = c4d.Vector(0.1, 0.95, 0.5)

    def Draw(self, tag, op, bd, bh):
        if not isinstance(op, c4d.SplineObject):
            return True

        bd.SetMatrix_Matrix(op, op.GetMg())

        segments = [op.GetSegment(i) for i in xrange(op.GetSegmentCount())]
        if not segments:
            segments = [{'cnt': op.GetPointCount(), 'closed': op.IsClosed()}]

        pi = 0
        for segment in segments:
            cnt = segment['cnt']
            closed = segment['closed']

            start = pi
            end = pi + cnt

            i_start = start
            i_end = end
            if not closed:
                i_start += 1
                i_end -= 1
                bd.DrawSphere(op.GetPoint(pi), self.SIZE_SPHERE, self.COLOR_SPHERE, 0)
                bd.DrawSphere(op.GetPoint(pi + cnt - 1), self.SIZE_SPHERE, self.COLOR_SPHERE, 0)

            bd.SetPen(self.COLOR_TANGENT)
            if start < end:
                for i in xrange(i_start, i_end, 1):
                    left = i - 1
                    right = i + 1
                    if left < start:
                        left = end - 1
                    if right >= end:
                        right = start

                    print left, i, right
                    pleft = op.GetPoint(left)
                    pright = op.GetPoint(right)
                    point = op.GetPoint(i)

                    mid = (pleft + pright) * 0.5
                    bv = point - mid
                    n = bv.Cross(point - pleft)
                    tangent = n.Cross(bv).GetNormalized() * self.LENGTH_TANGENT

                    pmin = point - tangent
                    pmax = point + tangent
                    bd.DrawLine(pmin, pmax, 0)

            pi += cnt

        print "-------------------"
        return True

if __name__ == "__main__":
    c4d.plugins.RegisterTagPlugin(100003, "Spline Tangents", c4d.TAG_VISIBLE,
                                  Tag, "Tsplinetangents", None)

Best,
-Niklas

On 07/05/2013 at 03:20, xxxxxxxx wrote:

Originally posted by xxxxxxxx: this still makes some objects move to the one direction end others to the opposite direction...

Uhm, I just see: The result in the image is correct. What else did you expect? The only thing left would be to rotate them about 90° to get the actual tangent (see my example code).

On 07/05/2013 at 04:05, xxxxxxxx wrote:

thank you very much, but when i rotate the direction the problem still remains. it just doesnt move left/right but back and forward.. still some point to the opposite..
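Niklas's bisector construction can be sanity-checked outside Cinema 4D with plain tuples; the helper functions below are stand-ins for the c4d.Vector operations used in the thread (illustrative only, not the c4d API):

```python
# Stand-ins for the c4d.Vector operations used above (illustrative only).
def add(a, b):   return (a[0] + b[0], a[1] + b[1], a[2] + b[2])
def sub(a, b):   return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def scale(a, s): return (a[0] * s, a[1] * s, a[2] * s)
def dot(a, b):   return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def tangent_at(pleft, point, pright):
    """Tangent at `point`: perpendicular to the bisecting direction,
    inside the plane spanned by the two adjacent segments."""
    mid = scale(add(pleft, pright), 0.5)
    bv = sub(point, mid)                 # direction of the bisecting line
    n = cross(bv, sub(point, pleft))     # normal of the local plane
    return cross(n, bv)                  # bv rotated 90 degrees in that plane

# For a simple corner the tangent comes out parallel to the chord
# pleft -> pright and perpendicular to the bisector:
t = tangent_at((0.0, 0.0, 0.0), (1.0, 1.0, 0.0), (2.0, 0.0, 0.0))
assert dot(t, (0.0, 1.0, 0.0)) == 0.0    # perpendicular to the bisector
```

Note that, as the last posts discuss, the sign of the result still depends on the winding of the adjacent points, which is why some tangents can flip to the opposite side.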
https://plugincafe.maxon.net/topic/7140/8116_tangent-of-spline-point
2009/8/25 Martijn Faassen <faas...@startifact.com>:
> You have zope.browsermenu, zope.browserpage, zope.browserresource. I
> propose instead we name them zope.menu, zope.page and zope.resource.

-1

These things are really only for the browser, and the ZCML directives are in the "browser" namespace, while, for example, "zope.resource" is a quite abstract name that could be taken by a more appropriate package in the future.

> I think we can safely claim these names in the 'zope' namespace as
> these *are* the Zope Toolkit menu, page and resource
> implementations.

I'm not sure if they are "reusable without having to buy into the rest of the Zope Toolkit". Currently these packages have a note that they are not reusable, as recommended in the steering group decisions list, because they depend on the publishing system, which is a really large part.

--
WBR, Dan Korostelev
_______________________________________________
Zope-Dev maillist - Zope-Dev@zope.org
** No cross posts or HTML encoding! **
https://www.mail-archive.com/zope-dev@zope.org/msg30750.html
(Moving to Hurd-devel)

On Thu, May 09, 2002 at 03:24:37PM -0400, Roland McGrath wrote:
> If you are aware of any problems that are holding up the whole libio
> transition stuff, then feel free to remind me frequently.

We still have a problem with the spurious declarations of:

(stdio.h)

/* These variables normally should not be used directly.
   The `strerror' function provides all the needed functionality. */
#ifdef __USE_BSD
extern int sys_nerr;
extern __const char *__const sys_errlist[];
#endif

#ifdef __USE_GNU
extern int _sys_nerr;
extern __const char *__const _sys_errlist[];
#endif

This trips up libiberty. libiberty is peculiar in that it's not using AC_REPLACE to insert its objects. It always compiles everything and decides what to do within those source files based on `configure'-type defines. It should certainly be fixed, but we should also remove the declarations.

Drepper's response[0] suggests that he might not mind deprecating it (if it hasn't been already) all over.

[0]

Tks,
Jeff Bailey

--
One of the great things about books is sometimes there are some fantastic pictures.
 -- George W. Bush
https://lists.gnu.org/archive/html/hurd-devel/2002-05/msg00002.html
Now this question is being written in Scala, but Scala is Java compatible. I will translate the java code into Scala.

// Write a recursive function "upto"
// that takes two Int parameters "start" and "end" and produces a "List[Int]"
// that counts DOWN from "start" to "end" (inclusive at both ends) one at
// a time. If "start < end", the empty list must be returned.

// This is what I have come up with so far. We had 10 questions, I have 9 of them done,
// just trying to figure this one out. I just need guidance on how to use the recursive
// call correctly.

def otpu(start: Int, end: Int): List[Int] = {
  if (start < end) List()       // return empty list
  else (otpu((start - 1), 0))   // returns 20, 19, 18, 17, 16, 15 (if upto(20, 15) entered.
}

Just point me in the right direction. I always have trouble grasping / defining the recursive function.

Edited by transplantedNYr: n/a
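For reference, the recursion described in the comment block has this shape (sketched here in Python rather than Scala, purely to show the structure: recurse on start - 1 until start passes end, which is also where the posted attempt goes wrong by recursing toward 0):

```python
def upto(start, end):
    """Count DOWN from start to end, inclusive at both ends."""
    if start < end:                        # base case: empty list
        return []
    return [start] + upto(start - 1, end)  # prepend current value, recurse toward end
```

With this shape, upto(20, 15) yields [20, 19, 18, 17, 16, 15] and upto(3, 5) yields the empty list.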
https://www.daniweb.com/programming/software-development/threads/383194/recursive-method
and

constexpr int factorial(int n) noexcept {     // define constexpr function!
  return (n == 1) ? 1 : (n * factorial(n-1));
}
std::array<int, factorial(5)> a;              // use it!

for (auto i : { 1, 2, 3, 4, 5 })
  std::cout << i << " ";                      // range-based for!

I mean, really, who can't love that? More complicated stuff works, too, like defaulting and deleting and automatically generating move operations. And nullptr (already present in VC10) joins the party, too. Fun, fun, fun.

Because C++11 is no longer a draft standard (even if there are still some bureaucratic levers to be moved) and compiler support for C++11 is increasingly common, there's no need for me to keep updating the feature availability summary, so, modulo bugs in the existing data (I'll fix those as they're brought to my attention), I'm freezing it as is. That will give me more time to play around with the newly-minted and schnazzed up C++, and that's a lot more rewarding than putting little letters in boxes on a spreadsheet.

Have fun with C++11. How can you not?

Scott

7 comments:

auto, lambda and range-based for-loop is awesome! This is a big step in usability side. Time for Even More Effective C++? :D

Let's just say you're not the first to have suggested something like this :-)

Scott

Update of More Effective C++ - I'll toss my vote in for what it's worth.

Mr. Meyers, I read your books about C++ and they are the best books I've read about programming ever. I especially don't like it when I grab a 1,000 page book about C++ hoping it is useful to me and then notice that it's full of beginner-level blah blah "this is how you will output text to the console" crap. I've noticed that libraries mostly have these kinds of books which serve no real purpose for someone who has some level of understanding of the language already. Then the really advanced books are a bit too complicated, or dare I say, boring!
Or even containing a lot of information which is not actually useful when you want to create a program (so it's more like for compiler writers). Your books were not boring but they did clearly go beyond the usual beginner-level books. Thanks.

To someone who has not yet read "Effective C++", would you suggest to buy the 2005 edition, or wait a little for the next one?

@NoName: I have not yet started work on a new edition of Effective C++ (or in fact any of my books), so I suggest you buy the current edition. For my perspective on the content of that book in a C++11 world, I suggest you consult the special foreword I wrote for it. Thanks for your interest.

Scott
http://scottmeyers.blogspot.com/2011/08/c11-feature-availability-spreadsheet.html
Attachment 293 [details]: Demonstration project showing the error.

Every time you tap an UIButton, its Layer gets re-referenced, therefore any custom state saved on it is gone.

A good addition I've just found: when you bind the layerClass native method to your UIButton and assign it to a CALayer subclass, the UIButton's Layer property doesn't get recreated anymore.

demo:

////////

public class WeakButtonLayer : CALayer
{
    public WeakButtonLayer()
    {
        Console.WriteLine("new WeakButtonLayer()");
    }
}

public class WeakButton : UIButton
{
    public WeakReference<CALayer> WeakLayer;

    [Export("layerClass")]
    public static Class GetLayerClass()
    {
        return new Class(typeof(WeakButtonLayer));
    }

    public WeakButton(CGRect frame) : base(frame)
    {
        this.WeakLayer = new WeakReference<CALayer>(this.Layer);
    }
}

//////////

Hello

I could not reproduce this using an iPhone SE on 9.3.5 nor on an iPhone 6 Plus on 10.0. Can you share with us your version information? The easiest way to get exact version information is to use the "Xamarin Studio" menu, "About Xamarin Studio" item, "Show Details" button and copy/paste the version information (you can use the "Copy Information" button).

Hello,

Below is my current installation info:

///////

=== Xamarin Studio Community ===
Version 6.1 (build 5383)
Installation UUID: 4cdc71f8-5b1e-44d7-b378-b8c7087320cf
Runtime: Mono 4.6.0 (mono-4.6.0-branch/3ed2bba) (64-bit)
GTK+ 2.24.23 (Raleigh theme)
Package version: 406000182

=== NuGet ===
Version: 3.4.3.0

=== Xamarin.Profiler ===
Not Installed

=== Apple Developer Tools ===
Xcode 8.0 (11239.2)
Build 8S201h

=== Xamarin.iOS ===
Version: 9.99.5.54 (Xamarin Studio Community)
Hash: 974ea0b
Branch: cycle8
Build date: 2016-08-30 18:12:06-0400

=== Build Information ===
Release ID: 601005383
Git revision: 37159a7d0c1ed6c9f661210879e9e233ca92e65d
Build date: 2016-08-30 22:07:20-04
Xamarin addins: 00a35f60101b50c8fa28b2b8c0e5b8ade85f7083
Build lane: monodevelop-lion-cycle8

=== Operating System ===
Mac OS X 10.11.6
Darwin El-Capitan.local 15.6.0
Darwin Kernel Version 15.6.0 Thu Jun 23 18:25:34 PDT 2016 root:xnu-3248.60.10~1/RELEASE_X86_64 x86_64

=== Enabled user installed addins ===
NuGet Package Management Extensions 0.11.1

//////

When you run the attached project, press the "Click me!" button. The current (bugged) output for the attached project should be:

2016-09-02 17:36:38.188 ProofLeak[21859:1483917] WeakLayer.Target =
2016-09-02 17:36:38.378 ProofLeak[21859:1483917] WeakLayer.Target =
2016-09-02 17:36:38.690 ProofLeak[21859:1483917] WeakLayer.Target =
2016-09-02 17:36:38.866 ProofLeak[21859:1483917] WeakLayer.Target =

And the expected output for the attached project (upon fix) should be:

2016-09-02 17:37:55.494 ProofLeak[21871:1484949] WeakLayer.Target = <CALayer: 0x7b7f7700>
2016-09-02 17:37:55.643 ProofLeak[21871:1484949] WeakLayer.Target = <CALayer: 0x7b7f7700>
2016-09-02 17:37:55.796 ProofLeak[21871:1484949] WeakLayer.Target = <CALayer: 0x7b7f7700>
2016-09-02 17:37:55.939 ProofLeak[21871:1484949] WeakLayer.Target = <CALayer: 0x7b7f7700>
2016-09-02 17:37:56.091 ProofLeak[21871:1484949] WeakLayer.Target = <CALayer: 0x7b7f7700>

Best regards.

Hello Jairo

Yup, that is the output that I am getting. Could you tell me on which device you are testing this?

Alex

I've tested on every single device from the iOS Simulator. Also tested it on my physical device (iPhone 5) with no luck. I'm installing a fresh Xamarin copy on another Mac and going to test it. I'll keep you informed!

Hi Jairo, did you ever get this tested on a fresh mac?

@Jaro any comments or information regarding the issue?

@Jaro If you are still experiencing this issue please re-open the bug report. Thanks!
https://bugzilla.xamarin.com/43/43937/bug.html
GML_scanner (procedure)
GML_token (structure)
GML_value (enumeration, scanner related entries)
GML_tok_val (structure)
GML_parser (procedure)
GML_stat (structure)
GML_pair (structure)
GML_value (enumeration, parser related entries)
GML_pair_val (structure)
GML_free_list (procedure)
GML_print_list (procedure)
GML_init (procedure)
GML_error (structure)
GML_error (enumeration)
gml_demo (program)

GML, the Graph Modelling Language, is a portable file format for graphs. GML has been developed as part of the Graphlet system, and has been implemented in several other systems, including LEDA, GraVis and VGJ.

This document describes a sample scanner and parser for GML. Unlike other implementations, this one uses ANSI C and does not rely on external tools such as lex and yacc. This implementation is also designed to be highly portable and can be used as a library.

The procedure GML_scanner implements the scanner for GML files:

struct GML_token GML_scanner (FILE* file);

GML_scanner reads the next input token from file and returns it in a GML_token structure. file must be open for read access; the caller is responsible for opening and closing the file.

The type GML_token is defined as follows:

struct GML_token {
    GML_value kind;
    union tok_val value;
};

Where kind determines the type of the token and value is its value. kind is of type GML_value, which is listed in Table 1.

The value field in GML_token is of type GML_tok_val, which is defined as follows:

union GML_tok_val {
    long integer;          // used with GML_INT
    double floating;       // used with GML_DOUBLE
    char* string;          // used with GML_STRING, GML_KEY
    struct GML_error err;  // used with GML_ERROR
};

The procedure GML_parser implements the parser for GML files:

struct GML_pair* GML_parser (FILE* file, struct GML_stat* stat, int mode);

Input parameters for GML_parser are the file, a pointer to a GML_stat structure, and the operation mode. file must be open for reading; the caller is responsible for opening and closing the file.

stat must point to a structure of type GML_stat, which is defined as follows:

struct GML_stat {
    struct GML_error err;
    struct GML_list_elem* key_list;
};

The variable err is used to report errors (for information on GML_error see below). If an error occurs during parsing, stat->err.err_num is set to the corresponding error code, and additional information is written into the data structure pointed to by stat->err. If no error occurs, then stat->err.err_num has the value GML_OK.

The other field in GML_stat is key_list, a pointer to a singly linked list of the strings used for keys. You can access the first key-string with key_list->key and the next node with key_list->next.

The parameter mode needs further clarification. mode is almost always 0. GML_parser parses lists recursively. Therefore, a closing square bracket (]) means the end of a list in a recursive call and a syntax error (GML_TOO_MANY_BRACKETS) at the top level. mode is used to discriminate between the top level (mode == 0) and a recursion step (mode == 1).

GML_parser returns a structure of type GML_pair, which is defined as follows:

struct GML_pair {
    char* key;
    GML_value kind;
    union pair_val value;
    struct GML_pair* next;
};

Each object in a GML_pair structure corresponds to a key-value pair in the GML file, where key is a pointer to the key, and kind and value hold the value. For example, the sequence "id 42" translates into a GML_pair structure where key is "id", kind is GML_INT and value.integer is 42. The field next implements GML lists: it is a pointer to the next element in the list, or NULL if there are no more elements in the list. The field kind determines which of the fields in value is used. kind is of type GML_value, which is listed in Table 2.

The data structure GML_pair_val is defined as follows:

union GML_pair_val {
    long integer;           // kind is GML_INT
    double floating;        // kind is GML_DOUBLE
    char* string;           // kind is GML_STRING
    struct GML_pair* list;  // kind is GML_LIST
};

Note: string contains no characters with ASCII code greater than 127 because these are converted into the iso8859-1 format. See the GML Manual for details.

The following auxiliary procedures are defined for GML_pair:

void GML_free_list (struct GML_pair* list, struct GML_list_elem* key_list)

Frees recursively all storage allocated for list and for key_list (which is described above).

void GML_print_list (struct GML_pair* list, int level)

Writes list to stdout, using level for indentation. This is meant for debugging only.

The currently read line and column are stored in the global variables GML_line and GML_column. If you are interested in where an error occurred, you should call

void GML_init ()

before calling the parser or scanner the first time. It will set both variables to 1.

The procedures GML_scanner and GML_parser read until they find an error, or an end of file. If the parser encounters an error, it returns the GML structure parsed so far and provides error information in its error parameter. The structure GML_error reports scanner and parser errors:

struct GML_error {
    GML_error_value err_num;
    int line;
    int column;
};

line is the input line in which the error occurred, and column is the corresponding column. Both line and column start at 1. err_num is of type GML_error_value, which is listed in Table 3.

The following example reads a GML file (the filename is specified on the command line) and writes the parsed key-value pairs and the list of keys to standard output.

#include "gml_parser.h"
#include <stdio.h>
#include <stdlib.h>

void print_keys (struct GML_list_elem* list) {
    while (list) {
        printf ("%s\n", list->key);
        list = list->next;
    }
}

void main (int argc, char* argv[]) {
    struct GML_pair* list;
    struct GML_stat* stat = (struct GML_stat*) malloc (sizeof (struct GML_stat));
    stat->key_list = NULL;

    if (argc != 2) {
        printf ("Usage: gml_demo <gml_file> \n");
    } else {
        FILE* file = fopen (argv[1], "r");

        if (file == 0) {
            printf ("\n No such file: %s", argv[1]);
        } else {
            GML_init ();
            list = GML_parser (file, stat, 0);

            if (stat->err.err_num != GML_OK) {
                printf ("An error occurred while reading line %d column %d of %s:\n",
                        stat->err.line, stat->err.column, argv[1]);

                switch (stat->err.err_num) {
                case GML_UNEXPECTED:
                    printf ("UNEXPECTED CHARACTER");
                    break;
                case GML_SYNTAX:
                    printf ("SYNTAX ERROR");
                    break;
                case GML_PREMATURE_EOF:
                    printf ("PREMATURE EOF IN STRING");
                    break;
                case GML_TOO_MANY_DIGITS:
                    printf ("NUMBER WITH TOO MANY DIGITS");
                    break;
                case GML_OPEN_BRACKET:
                    printf ("OPEN BRACKETS LEFT AT EOF");
                    break;
                case GML_TOO_MANY_BRACKETS:
                    printf ("TOO MANY CLOSING BRACKETS");
                    break;
                default:
                    break;
                }
                printf ("\n");
            }

            GML_print_list (list, 0);
            printf ("Keys used in %s: \n", argv[1]);
            print_keys (stat->key_list);

            GML_free_list (list, stat->key_list);
        }
    }
}
http://www.cs.rpi.edu/~puninj/XGMML/GML_XGMML/gml_parser.html
Draft WireToBSpline

Description

The Draft WireToBSpline command converts Draft Wires to Draft BSplines and vice versa.

Converting a Draft Wire to a Draft BSpline, and a closed Draft BSpline to a closed Draft Wire

Usage

- Select a Draft Wire or a Draft BSpline.
- There are several ways to invoke the command:
  - Press the Draft WireToBSpline button.
  - Select the Modification → Wire to B-spline option from the menu.
- A new object is created.

Notes

- The command may result in a closed, self-intersecting Draft Wire or Draft BSpline with a face. Such an object will not display properly in the 3D view. Its Data Make Face property, or its Data Closed property, must be set to false.

Scripting

See also: Autogenerated API documentation and FreeCAD Scripting Basics.

To convert a wire to a bspline, or vice versa, pass the Points property of the source object to the make_bspline method, or respectively the make_wire method, of the Draft module. Example:

import FreeCAD as App
import Draft

doc = App.newDocument()

p1 = App.Vector(1000, 1000, 0)
p2 = App.Vector(2000, 1000, 0)
p3 = App.Vector(2500, -1000, 0)
p4 = App.Vector(3500, -500, 0)

base_wire = Draft.make_wire([p1, p2, p3, p4])
base_spline = Draft.make_bspline([-p1, -1.3*p2, -1.2*p3, -2.1*p4])

points1 = base_wire.Points
spline_from_wire = Draft.make_bspline(points1)

points2 = base_spline.Points
wire_from_spline = Draft.make_wire(points2)

doc.recompute()
https://wiki.freecadweb.org/Draft_WireToBSpline/tr
From: Ed Brey (brey_at_[hidden])
Date: 2001-04-24 09:18:03

From: "Paul A. Bristow" <pbristow_at_[hidden]>
> // math_constants.hpp <<< math constants header file - the interface.
> namespace boost
> {
>   namespace math_constants
>   {
>     extern const long double pi;
>   } // namespace math_constants
> } // namespace boost

Having only a long double defined makes for more typing by users who are using less precision. For example, someone working with floating point would have to write "a = pi * r * r" as "a = float(pi) * r * r". The solution should make it easy to get the precision desired. One approach is "float pi = float(math_constants::pi);", which is fine by itself, but doesn't scale well when working with many constants (see the later point on multiple constants).

How is constant folding accomplished, given that the definition appears to be out-of-line?

How would generic algorithms that do not know the desired type at coding time be written?

> // math_constants.h <<< the definition file
> // Contains macro definitions BOOST_PI, BOOST_E ... as long doubles
> #define BOOST_PI 3.14159265358979323846264338327950288L /* pi */

What is the purpose of the macro? How is it envisioned to be used?

> cout << "pi is " << boost::math_constants::pi << endl;
> using boost::math_constants::pi; // Needed for all constants used.
> // recommended as useful documentation!

whereas:

> // using namespace boost::math_constants; // exposes ALL names in math_constants.
> // that could cause some name collisions!

Pulling all math constants into the global namespace is indeed asking for trouble in general, although it would be nice to allow it as it can be practical within a function. However, it is also less than desirable to have to perform a using directive for every constant in use. It's a lot of code that isn't directly related to getting the job done, and it creates a maintenance problem because there is no easy way to garbage collect as constants go out of use.

Fortunately, namespace renaming solves this problem well (although it doesn't solve the problem with having the constants be the right types described above).

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2001/04/11304.php
About tgpolls

tgpolls is a pluggable application for TurboGears2 that allows you to create polls and lets registered and anonymous users vote on questions created by manager users. Polls can be single choice or multiple choice, and have a termination time!

Installing

tgpolls can be installed from pypi or from bitbucket:

easy_install tgpolls

should just work for most users.

Plugging tgpolls

In your application's config/app_cfg.py import plug:

from tgext.pluggable import plug

Then at the end of the file call plug with tgpolls:

plug(base_config, 'tgpolls')

You will be able to access the registration process at.

Available Hooks

tgpolls makes available some hooks which will be called during some actions to alter the default behavior of the application.

Exposed Partials

tgpolls exposes a bunch of partials which can be used to render pieces of the blogging system anywhere in your application.

Exposed Templates

The templates used by registration and that can be replaced with tgext.pluggable.replace_template are:
https://bitbucket.org/puria/tgapp-tgpolls/src
"Alex Martelli"
> I recommend you use "ANSI C" style as a habit:
>
> static PyObject*
> proto_reduce(HyProtoObject* self)

That's what I get for copy/pasting from the python source code.

> Rather, you probably want to expose a factory function to be
> passed as the first item in the tuple returned by __reduce__
> (you may then want to return a 2-items tuple rather than a
> 3-items one from __reduce__, perhaps). Just make sure said
> first item exposes an attribute named __safe_for_unpickling__
> with a true value, or register it as a "safe constructor" with
> module copy_reg.

If it's not one thing, it's another. I decided to try this route but have been barricaded again. I stuck a _build function in the module method table and used copy_reg in the init function to make it a "safe constructor."

module = PyImport_ImportModule("copy_reg");
if (module == NULL)
    return;

method = PyDict_GetItemString(PyModule_GetDict(module), "constructor");
if (method == NULL)
    return;

func = Py_FindMethod(mod_methodlist, NULL, "_build");
if (func == NULL)
    return;

args = PyObject_CallFunction(method, "(O)", func);
Py_DECREF(func);
if (args == NULL)
    return;
else
    Py_DECREF(args);

Now cPickle is complaining that _build is not found as __main__._build. I tried, in my python script, to directly stick it in the namespace, but it says it's not the same.

from proto import proto
import cPickle

Root = proto('Root')
Root.spam = 'spam'
d = cPickle.dumps(Root)

cPickle.PicklingError: Can't pickle <built-in function _build>: it's not found as __main__._build

and when I put

from proto import proto, _build

cPickle.PicklingError: Can't pickle <built-in function _build>: it's not the same object as __main__._build

I was easily able to do this in pure python using copy_reg, but the reason why I even started to port this type to C was because it was pickling/unpickling too slowly. I have a feeling this has something to do with wrapping the function object from C into python.
My next attempt will be to create a factory type, like _protobuilder, that will be the first element of the tuple returned by __reduce__. I have the pickling working, now I just have to figure out how to make _protobuilder return a proto object. Tom
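The factory approach being attempted corresponds, in pure Python terms, to returning a module-level callable from __reduce__; here is a minimal sketch using the modern pickle module (Proto and _build are illustrative stand-ins for the C extension type, not the original code):

```python
import pickle

def _build(name, attrs):
    """Module-level factory: pickle stores a reference to it by qualified
    name, so it must be importable from wherever the pickle is loaded.
    This is exactly why the C version failed when _build could not be
    found as <module>._build."""
    obj = Proto(name)
    obj.attrs = attrs
    return obj

class Proto:
    def __init__(self, name):
        self.name = name
        self.attrs = {}

    def __reduce__(self):
        # (callable, args): unpickling calls _build(*args)
        return (_build, (self.name, self.attrs))

root = Proto('Root')
root.attrs['spam'] = 'spam'
copy = pickle.loads(pickle.dumps(root))
assert copy.attrs['spam'] == 'spam'
```

The key requirement is unchanged from the copy_reg days: the first element of the tuple must be a callable that pickle can locate by module path at load time.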
https://mail.python.org/pipermail/python-list/2002-September/143277.html
SOAP::Lite + wsdl misunderstanding?

Discussion in 'Perl Misc' started by John Bokma.

- SOAP::Lite + wsdl - Andrew Tkachenko, Mar 5, 2005, in forum: Perl Misc - Replies: 0 - Views: 160
- SOAP::Lite and WSDL namespace - please help - spacegoat, Nov 4, 2005, in forum: Perl Misc - Replies: 2 - Views: 192
- SOAP::Lite with Microsoft Project Server PDS.WSDL - ashgromnies, Apr 24, 2006, in forum: Perl Misc - Replies: 0 - Views: 203
- SOAP::Lite - getting WSDL using basic authentication - J. Gleixner, Feb 12, 2008, in forum: Perl Misc - Replies: 0 - Views: 412
http://www.thecodingforums.com/threads/soap-lite-wsdl-misunderstanding.901566/
The stdlib C library function atexit registers the function pointed to by func to be called when the program terminates. Upon normal termination of the program, the function pointed to by func is automatically called without arguments.

You can register your termination function anywhere in the program. You can call atexit more than once; the registered functions are all executed in reverse order of registration (the last function to be registered will be the first function to be called).

Function prototype of atexit

int atexit(void (*func)(void));

- func : It is a pointer to a function to be called at the termination of the program. This function must not return any value and take no arguments.

Return value of atexit

This function returns zero if the function was successfully registered, otherwise a non-zero value.

C program using atexit function

The following program shows the use of the atexit function to call a function at the time of program termination.

#include <stdio.h>
#include <stdlib.h>

void myFunction() {
    printf("Program end, Bye Bye\n");
}

int main() {
    printf("Program start\n");
    atexit(myFunction);
    return 0;
}

Output

Program start
Program end, Bye Bye
https://www.techcrashcourse.com/2015/08/atexit-stdlib-c-library-function.html
tag:blogger.com,1999:blog-33080685004697700352017-09-23T14:19:08.783-05:00Linux, Computing, Contemplation, and MetaCrisisAll things computational...Karim Lalani Internship Experiment - Top 5 Challenging Scala and MongoDB concepts <h3>Overview</h3>In the recent <a href="" target="_blank">internship that I had offered</a>, I wanted the interns to experience new and emerging technologies that they were less likely encounter in their school's curriculum, and hence would add to the overall challenge. <a href="" target="_blank">Scala</a>, <a href="" target="_blank">Play Framework</a>, and <a href="" target="_blank">MongoDB</a>.<br /><br /><h3>5. MongoDB: Joins and Aggregations</h3>Schools.<br /><br /><h3>4. Play/Reactive MongoDB Driver: Writing code for asynchronous execution</h3>Coding.<br /><br /".<br /><br /><h3>3. Scala: Implicit Conversions</h3>Implicit.<br /><br /><h3>2. MongoDB: Accessing the _id for an inserted document</h3>Mong.<br /><br /.<br /><br /><h3>1. Scala: Type Inference</h3><div>Type inference is a very powerful feature of the Scala language and compiler. If a type can be inferred, chances are, you can get by without specifying the type of the object you are creating. This can, however, result in some unexpected behavior, and compile time errors, that could be confusing to interpret if not used carefully.<br /><br /.<br /><br /><h3>Conclusion</h3>The above challenges were in no way a reflection on the interns' aptitude or ability to solve problems. I remember encountering similar challenges when I was first introduced to Scala, Play, and MongoDB driver for Scala. My academic training had not prepared me sufficiently for these challenges either. That was the primary reason why I chose these technologies for the internship. 
The interns admitted their frustration on my decision, but in the end were grateful for the same, as it pushed them out of their comfort zones, and gave them a safe platform to explore these technologies and concepts.</div><img src="" height="1" width="1" alt=""/>Karim Lalani Internship Experiment<h2>The Internship</h2>Ever.<br /><br /><h2>Recruitment</h2><div><div.<br /><br /><h2>Areas of Focus</h2></div>The project was to create a web based anonymous survey creation and collection tool. The internship was designed to cover the following areas:<br /><div><ol><li>Software development life cycle and methodologies</li><li>Requirements gathering and documentation</li><li>Use case documentation</li><li>Requirements and Use Case traceability </li><li>Software Specifications documentation</li><li>Software Specification and Use Case traceability</li><li>Unit, Integration, and End-to-End testing</li><li>Work division and delegation</li><li>Source code version control </li><li>Timelines, Deadlines, and Accountability</li><li>Software development tools and technologies</li></ol><div>The software programmer interns were asked to use the following technologies for their internship project:</div></div><div><ol><li>Source Version Control using Git and Github</li><li>Programming using Scala and Play Framework</li><li>NoSql data persistence using MongoDB</li></ol><h2>The Experience</h2><div.<br /><br /><h2>Conclusion</h2>At the end of the Software Development internship, we had a working product that was about eighty percent ready, and while it had a few rough edges, it was able to demonstrate nearly all the functionality and use cases that were described in the Business Requirements Document. The post-internship feedback from the interns was overwhelmingly positive and I received some constructive feedback on how the experience could be improved and enhanced for future interns. 
The purpose of the internship was to provide interns with a safe collaborative learning platform, to experience and participate in a Software Development Project, and to help them better prepare for the opportunities ahead of them. It also provided me with valuable lessons in software project and team management. Overall, it has been a very fulfilling and satisfying experience.</div></div></div><img src="" height="1" width="1" alt=""/>Karim Lalani Tether - When the Linux Laptop won't connect to WiFi but the Phone does <h2>Linux users and the age old WiFi problem</h2>A Linux die-hard, who'd rather distro-hop, than run any non-Linux based OS. After spending hours trying to figure it out, managed to get the problematic WiFi module to work with the home WiFi router. The laptop connects reliably to the Internet. Life is bliss. Or it is, until the fateful visit with the laptop to a friends or relative's place, a hotel room, or a local coffee shop, that has graciously allowed its patrons to use the Internet connection over WiFi, and the Laptop refuses to connect to the WiFi.<br /><i><br /></i><i>For best impact read the above again, this time in Rod Sterling's voice, like the opening from "The Twilight Zone".</i><br /><div><br />Those familiar with this scenario can feel the frustration one goes through in this situation. I've gone through it many times over the years, and recently, my wife had to as well. The biggest cause for this is that, majority of WiFi modules that are packaged with laptops these days, come with little or no manufacturer support under Linux. That is the unfortunate state of WiFi networking under Linux.<br /><br />For situations like this, I started carrying with me a USB WiFi module known to work under Linux, as a fail-safe. I sacrifice bandwidth for reliability and it has proved useful over the years. 
The flip side of this was that the Android phones I've used over the years always connected successfully to almost any WiFi network, and I could reliably access the Internet over them.<br /><br />The irony of the situation was that the solution to my problem was in my pocket all this time, if only I had cared to check it there. </div><h2>Android Phone comes to the rescue</h2><div>Almost all modern Android phones natively provide multiple forms of tethering options: WiFi tethering, USB tethering, and Bluetooth tethering. The good news is, some of these options could come in handy when faced with WiFi connectivity problems.</div><h3>WiFi Tethering</h3>WiFi Tethering allows you to share your phone's data connection (3g, 4g, LTE, etc.) with computers that are within range, over WiFi. It transforms your phone into a WiFi Hotspot. The problem with this approach is, you are burning through your phone's data, which doesn't come cheap. Many carriers also impose additional charges to even allow data sharing over WiFi.<br /><h3>USB Tethering</h3>This one took me by surprise. I had used USB tethering before, to share my phone's data connection with my laptops. Setting it up used to be more involved, requiring running a few commands, but newer Android phones have made it seamless to the point where it only requires checking a checkbox on the phone to activate it. What I hadn't thought of, or tested, earlier was that I could use USB tethering to even share the network that the phone was accessing over its WiFi connection. You do need a data-capable USB cable for this to work. You'll know your USB cable is data-capable if your laptop identifies your phone, and the phone identifies that it is connected to a computer.<br /><br /><div>Using this option, you'll be able to access your phone's Internet connection over a USB cable connected to your laptop. The connection speed will vary depending on your laptop and the phone capabilities.
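Before measuring speeds, it's worth confirming that the tether actually showed up as a network interface on the Linux side. A minimal sketch (the interface name patterns such as usb0, bnep0, or enp0s20u1 are assumptions; they vary by kernel and driver, so adjust them for your system):

```shell
#!/bin/sh
# Hypothetical helper: list network interfaces that look like a phone tether
# (USB tether or Bluetooth PAN). The name patterns below are assumptions.
list_tether_ifaces() {
    # Expects "ip -o link"-style lines on stdin; prints matching interface names.
    awk -F': ' '{print $2}' | grep -E '^(usb|bnep|enp[0-9]+s[0-9]+u)'
}

# Live usage would be:  ip -o link | list_tether_ifaces
```

If a name shows up, letting Network Manager configure it (or running a DHCP client against it) is usually all that's left.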
I was able to get nearly 15Mbps downloads and around 5.5Mbps uploads using this approach with my Nexus 5 phone.<br /><br /></div><div><img alt="SpeedTest over USB" src="" /><br /><h3>Bluetooth Tethering</h3>Modern laptops are shipping with WiFi modules that also have a Bluetooth radio. This allows the users to share files with other paired Bluetooth devices, as well as do some other fancy Bluetooth things. For instance, I can pair my phone with my laptop over Bluetooth, and then all of my phone's sound notifications and other media playing can be heard on my laptop. Another thing you can do over Bluetooth is share the Internet connection.<br /><br /></div><div>Like the earlier two options, you can extend your phone's data connection to the paired device. You may incur additional charges, but it is good to know you have that as an option. You could also, like with the USB tethering option, share your phone's WiFi network connection over to the paired devices - in this case, your Bluetooth-enabled laptop.<br /><br /></div><div>Don't expect phenomenal speeds. This is Bluetooth we are talking about. My TWC Internet connection clocks nearly 20Mbps over WiFi. But when tethered over my phone's Bluetooth, it drops to a mere 0.6Mbps. It's still better than your laptop's WiFi if it doesn't connect at all.<br /><br /></div><div><img alt="SpeedTest over Bluetooth" src="" /><br /><h4>Initial issues with Bluetooth Networking</h4>One issue I ran into, while attempting to set up Bluetooth tethering, was that while the laptop identified the phone's Bluetooth network, and I could transfer files back and forth over Bluetooth, I wasn't able to connect to the Bluetooth network on my phone. Network Manager would show a "configuring network interface" message, but always failed to connect.
The paired connection on my phone, for the laptop, did not expose "Internet Connection Sharing" as an option. I only saw "Media audio" and "Contact sharing", and both were checked. When I unchecked "Media audio" and tried to connect to the Bluetooth network "Nexus 5 Network" from my laptop, I was able to connect to the network. Checking the paired connection's options again on my phone revealed a new checkbox option, "Internet Connection Sharing", which was already checked.<br /><h2>The Verdict</h2>If you are on the go, with a laptop containing an unreliable WiFi module, you do have options. As long as you have an Android phone (only tested with an Android phone) and a USB data cable or a Bluetooth module for your laptop, you'll be able to stay connected. Of the two, USB tethering offers superior connection speeds, but Bluetooth tethering offers yet another fail-safe in case you don't have a data-capable USB cable with you. While you may not be able to reliably stream videos from YouTube or Netflix over Bluetooth, as with everything else Linux, it is yet another option.</div>Karim Lalani 2-in-1 laptop that just works under Linux: Dell Inspiron 13 7000 Series i7347<div>For quite some time, I've felt the need for an x86 tablet computer that I could carry with me when I was away from my main workstation. This need became a necessity when I started travelling for business, and while my trusty Bonobo Extreme, from System76, was everything I ever needed for a computer to be, it was also a little too bulky and heavy, which made it almost impractical to operate in tight spaces like in a car - while not driving, of course - and in planes. I knew that I needed something that was modern with a tight form factor like that of a Tablet, yet full-featured, unlike many Tablets. </div><div><br /></div><div>The ideal tablet computer needed to be cost-effective, with known compatibility under Linux 32bit and 64bit.
I had considered buying the Surface or the Transformer earlier, but since this was going to be a companion computer and not the primary one, and I needed offline storage to install all my software for when I was on the go, the Surface didn't have enough storage or compatibility to justify the cost, and in addition to the lack of sufficient storage, the Transformer wasn't powerful enough for my needs.</div><div><br /></div><div>I discovered the Dell Inspiron 13 7000 Series i7347 while researching 2-in-1 laptops on Amazon. After spending time researching it, it seemed like it was created with my needs in mind. An additional $100 off Dell's original $599 sticker price made it an even more attractive purchase I could not afford to miss. It was a purchase I did not regret making.</div><div></div><div>The laptop is very slick and professional looking, which works great, since I intend to take it with me on my business and family trips instead of my more powerful, yet bulky, 17 inch Bonobo Extreme Laptop/Workstation. The hinges on the Inspiron are very sturdy and the laptop easily folds into a tablet. The touchscreen is very responsive. With a 500 GB hard disk at my disposal, I am able to use all my office productivity applications, as well as software development tools and related services, without any difficulty. Now granted that this model is not an i7 or even an i5, but neither is it an Atom or ARM-based. It provides a full computing experience with enough horsepower for smooth operation.</div><div><br /></div><div>Installing Linux on it was a breeze. Almost everything worked right out of the box, and the only hiccup under Linux was the not-so-well-supported Wireless/Bluetooth module with a Broadcom chipset. However, with a little research, I was able to find Intel's Dual Band AC 7260 module, with the proper form factor, that was compatible and well supported under Linux.
A version of that module, 7265, is also listed in Dell's specifications document for this laptop as an option. I suppose if I had bought this directly from Dell, I might have been able to choose the Intel over the Dell Wifi module.</div><div><br /></div><div>I also made the right call when I decided to upgrade the WLAN module myself. The laptop is very easy to upgrade manually. The HDD, the RAM, the battery, and the WLAN module, among other things, are easily accessible once you unscrew the base lid. On their website, Dell provides a Service Repair guide for this model, which provided all the necessary steps needed to help with the hardware upgrade, and that has only boosted my confidence in this machine and its quality.</div><div><br /></div><div>I am very happy with the purchase and would recommend it to anyone who intends to run Linux on it.</div><div><br /></div><div>The 2-in-1 can be purchased from Amazon using the link below.</div><div><br /></div><div>Note: there are two form factors the Wifi module is available under. You are looking for the smaller of the two.</div><div><br /></div><div>7260NGW Intel® Dual Band Wireless-AC 7260 802.11ac, Dual Band, 2x2 Wi-Fi + Bluetooth 4.0</div><div><br /></div><div>The Wifi Module can be purchased from Amazon using the link below.
</div><div><br /></div><div><br /></div><div>Thank you for reading and I hope you enjoyed the article.</div>Karim Lalani I learned to program in Scala: The Nexus 5 experiment<p dir="ltr">A few months back I broke up my long-term affair with Mono and I started looking into the Play Framework from TypeSafe as a replacement stack.</p><p dir="ltr">Through Play Framework, I discovered Scala and the wonders of functional programming. My interest grew even deeper when I installed Scala on my laptop and tried some example code in the REPL tool (think Scala interactive shell).</p><p dir="ltr">However, I found very soon that I was not able to dedicate enough time during the week to learning Scala due to my full-time day job doing .Net development.</p><p dir="ltr">Something had to be done.</p><p dir="ltr">I thought: I have this wonderfully powerful device on me all the time - my Nexus 5. Surely, I should be able to put it to use to help me spend more time with Scala. So began the experiment.</p><p dir="ltr">I rooted my Nexus 5, installed "Linux Deploy" from the Google Play Store and installed a chroot Arch Linux environment. I also installed the JuiceSSH client and a VNC viewer application. I already had "Terminal IDE" installed, which comes with a great Android keyboard geared towards software development on Android.</p><p dir="ltr">The last piece of the puzzle: I bought a $25 ANKER Bluetooth keyboard compatible with my Nexus 5.</p><p dir="ltr">I fired up the Arch chroot on my phone, downloaded tmux and vim using pacman, and Oracle Java 7 packages from AUR (some PKGBUILD tweaking was needed).</p><p dir="ltr">Then I installed TypeSafe Activator from AUR.
I also downloaded vim plugins like vim-scala and CtrlP which proved to be of great help.</p><p dir="ltr">It worked!!!</p><p dir="ltr">Using the setup described above, I was able not only to learn how to program in Scala, but also to write software using the Play Framework. I can test it in my phone's browser. The code is backed up on github and I can work on the same codebase from anywhere - as long as I have my Nexus 5 and a data connection.</p><p dir="ltr">And after the recent KitKat update with Cast Screen - I am able to cast my phone's screen to my big screen HD TV over ChromeCast. This solves the issue of developing on the relatively small Nexus 5 screen.</p><p dir="ltr">This experience has made me appreciate VIM, TMUX, and the terminal in general at a whole new level. Even when on my laptop, I now find myself doing more and more coding in VIM instead of IntelliJ.</p><p dir="ltr">Think about it: you can now get a decent Android box or even a phone, and with a setup similar to what I described, have an almost full-fledged development environment. Rich graphical applications can be written in Scala using Swing (sigh) and possibly other graphical toolkits and can be tested within the Arch (or other Linux distro) chroot using VNC.</p><p dir="ltr">Side note about Oracle Java instead of OpenJDK: I went with Oracle Java due to an "issue" with OpenJDK on ARM chips. It's something to do with the JVM running in "mixed mode".</p>Karim Lalani - New project - using the Play Framework<div>I recently came across the <a href="" target="_blank">Play Framework</a> and decided to take it for a spin. Needless to say I was amazed. I've decided to use it instead of .Net/Mono on a new project I've recently started work on.<br /></div><div>The project is called <a href="" target="_blank">CloudNotes</a>.
The goal of the project is to create a web-based collaborative note-taking and sharing application that is accessible from all platforms. As mentioned above, it'll be built using the Play Framework. I've chosen <a href="" target="_blank">MongoDB</a> as the database backend for this. <a href="" target="_blank">LessCss</a>, <a href="" target="_blank">CoffeeScript</a>, and <a href="" target="_blank">jQuery</a>, all of which come packaged as part of Play Framework, will be used extensively.</div><div><br />Currently, the application features:</div><div><ol><li>Create and share notes using a web interface</li><li>Note dimensions and positions on screen are stored along with the notes - think web-based collaborative brainstorming tool</li><li>GoogleDocs-like collaborative editing of notes, although not yet realtime.</li></ol><div>Features I hope to develop into the application in the coming weeks:</div></div><div><ol><li>User authentication</li><li>Access Control List on Notes and entries</li><li>Realtime interface with AJAX calls (currently causes full postbacks)</li><li>Mobile and desktop clients</li><li>Items reordering</li><li>Checklist module</li></ol><div>The source code is hosted on <a href="" target="_blank">github</a> and licensed under <a href="" target="_blank">GPL v2</a>.</div><div>Once complete, I'll probably host a demo on a <a href="" target="_blank">DigitalOcean</a> droplet.<br /><br /><a href="" target="_blank">DigitalOcean</a> offers cloud hosting for as low as $5/month or $0.007/hr for a dedicated server with:</div><div><ul><li>512MB Memory</li><li>1 Core Processor</li><li>20GB SSD Disk</li><li>1TB Transfer</li><li>Additional bandwidth transfer is only 2¢ per GB</li></ul></div></div>Karim Lalani 1 - Progress so far - IrrlichtDotNet - .Net library for irrlicht that works under Linux. For the original article, please go to <a href="" target="_blank">this article</a>.<br /><br />After I made the declaration of starting this
project on <a href="" target="_blank">irrlicht forums</a>, <a href="" target="_blank">roxaz</a> from the forums expressed interest in the project and immediately forked it on <a href="" target="_blank">github</a>. We bounced around ideas and while I won't yet merge all of his modifications into <a href="" target="_blank">my repo</a>, I did like many of his ideas and implemented them after tweaking them to fit in with my code. The ideas that I implemented from his repo are:<br /><ol><li>folder reorganization </li><li>a python script that refreshes the CS project file with all the SWIG-generated cs files</li><li>make.sh file to automate many initial steps</li></ol>I also spent some time getting myself more familiar with <a href="" target="_blank">SWIG typemaps</a> and was able to map <a href="" target="_blank">irr::core::stringc</a> to System.String for almost all instances. These originally got translated to a SWIG-generated generic wrapper which wasn't useful unless you created a companion helper class in C++ to handle the translation both ways between irr::core::stringc and char *, which gets mapped correctly to System.String. Once all occurrences of irr::core::stringc are mapped to System.String, the helper class won't be necessary anymore. I will also attempt to deploy more typemaps to handle other such mappings in the IrrlichtDotNet project.Karim Lalani compatible .Net bindings for irrLicht using SWIG<br />Ever since I tried <a href="" target="_blank">IrrLicht</a>, an Open Source 3d Graphics library, I fell in love with it. It is a C++ library and my C++ isn't as good as I'd like it to be and I am more comfortable with C# under .Net.
A few such bindings do exist, although they target the Windows platform only.<br />I also attempted to use <a href="" target="_blank">MonoGame</a>, which is known to work under Linux, but without any success.<br /><br />A few days ago, while researching an unrelated subject, I stumbled across <a href="" target="_blank">SWIG</a>. SWIG stands for Simplified Wrapper and Interface Generator. It is used when you want to write wrappers for your C/C++ code for other languages. It also happens to support C#. So I decided to give SWIG a swing and initial results were more than promising.<br /><br />I decided to test it by exposing the Main method of my C++ test code written to try IrrLicht, and calling it from a .Net application. It worked!<br />I decided to take a shot at generating a wrapper for IrrLicht itself. So far, using the bindings that I was able to create with SWIG, I have been able to create simple 3d scenes in C# code.<br /><br />I've created a project on GitHub to create a Linux-compatible .Net binding for IrrLicht using SWIG. The repo is at <a class="postlink" href="">.</a><br /><br />Initially, much of the focus will be on creating a Linux-centric binding. However, since SWIG is cross-platform, if the community is interested in the effort, it could become more cross-platform than that.Karim Lalani 2 - Porting nopCommerce to Linux - built-in data storage<div xmlns=""><div>For some odd reason, I was now able to debug from within MonoDevelop again and that made me happy. Now that the Installation page loaded correctly, I decided to test the SqlServerCe built-in data storage for nopCommerce. I provided the email and password, unchecked the "Create sample data" checkbox, selected the "Use built-in data storage (SQL Server Compact)" option and hit Install.
Failed.<br /></div><div>Setup failed: The requested feature is not implemented.</div><div><br />I set my breakpoint in InstallController.cs and began walking through the code line by line to see where it had failed. I landed on line 41 of the SqlCeConnectionFactory.cs file.</div><div><br />Contract.Requires(!string.IsNullOrWhiteSpace(providerInvariantName));</div><div><br />I tested the value of the providerInvariantName parameter to see if it was indeed null or whitespace, but it wasn't. For now I decided to comment this line out and see if I could proceed. Same error. Examining the stack trace, I found that Database.cs in the EntityFramework project had another Contract clause.</div><div><br />Contract.Requires(value != null);</div><div><br />And again, value wasn't null. It seemed like the Contract clause wasn't behaving as designed. I decided to comment this one out as well. I found another one in EntityFramework, and then another one. After commenting out quite a few of these over the next few trial runs, I decided to comment them all using Find and Replace. After fixing some compiler errors, EntityFramework compiled without errors. Time to commit changes.<br /><br /></div><div>In the next run, the application complained that it couldn't find the SqlServerCe library. I noticed that the Nop.Data project had an entry for System.Data.SqlServerCe.Entity.dll which had dependency issues under Linux. It appeared as though I couldn't get too far with SqlServerCe as the backend driver. I decided to give MS SQL Server a try instead.<br /><br />On the first try I got the familiar Feature Not Implemented error.</div><div><br />Setup failed: The requested feature is not implemented.</div><div><br />However, an empty database did get created. I did a search and commented out Contract.Requires from the EntityFramework project.
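The "comment them all using Find and Replace" pass can also be scripted from a shell. A rough sketch (the `src` directory name is a placeholder for the EntityFramework source tree, the sed pattern is an assumption, and it only handles single-line Contract.Requires(...) calls):

```shell
#!/bin/sh
# Comment out every single-line Contract.Requires(...) call in C# sources.
# EF_SRC defaults to "src" - a hypothetical path; point it at the real tree.
EF_SRC="${EF_SRC:-src}"
if [ -d "$EF_SRC" ]; then
    find "$EF_SRC" -name '*.cs' -exec \
        sed -i 's|^\(\s*\)Contract\.Requires|\1// Contract.Requires|' {} +
fi
```

Multi-line calls and any follow-on compiler errors still need manual cleanup, as the trial runs described above show.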
I also set a breakpoint on the FailFast method in the Environment.cs core file, which gets invoked when a Contract clause fails.</div><div><br />Setup failed: The provider did not return a ProviderManifest instance.</div><div><br />Examining the stack trace pointed to this location: EntityFramework/Core/Common/DbProviderServices.cs:208.<br /><br />There was also an inner exception being raised.</div><div><br />Invalid URI: The format of the URI could not be determined: System.Data.Resources.SqlClient.SqlProviderServices.ProviderManifest.xml</div><div><br />Further exploring down the stack trace took me to EntityFramework.SqlServer/SqlProviderManifest.cs:78, which passed a baseURI parameter. For some reason Mono didn't like the value that was being passed. I used an overloaded method that did not require baseURI to see if it'd let me proceed.</div><div><br />Setup failed: The requested feature is not implemented.</div><div><br />This was being raised from the EntityFramework.SqlServer project. I did another search-and-comment on Contract.Requires, this time on the EntityFramework.SqlServer project.<br /><br /></div><div>System.StackOverflowException: The requested operation caused a stack overflow.</div><div><br />After another round of placing breakpoints and tracing through the code, I found that the stack overflow was being caused by EntityFramework/Core/Common/Utils/Memorizer.cs:71 when attempting to evaluate result.GetValue(). I surrounded it with a try-catch and returned the default value of the generic TResult in case of an exception to see if it'd let me proceed.</div><div><br />Setup failed: An unexpected exception was thrown during validation of 'Name' when invoking System.ComponentModel.DataAnnotations.MaxLengthAttribute.IsValid. See the inner exception for details.</div><div><br />This was coming from EntityFramework/Internal/Validation/ValidationAttributeValidator.cs:75.
The inner exception hinted at an unimplemented function.</div><div><br />IsValid(object value) has not been implemented by this class. The preferred entry point is GetValidationResult() and classes should override IsValid(object value, ValidationContext context).</div><div><br />In order to proceed, I forced a ValidationResult of Success.</div><div><br />Setup failed: The handle is invalid.</div><div><br />Reviewing the stack trace took me to EntityFramework/Core/Metadata/Edm/LightweightCodeGenerator.cs:193. Examining the code showed that CreatePropertyGetter was creating a dynamic method to map to the getter method of a property on an Entity. After some more trial and error, it became apparent that while the Type handle was mapping correctly, the RuntimeMethodHandle held an invalid value. Fortunately, the getter method on a property has the naming convention of get_PropertyName. This could come in handy to bootstrap this code to allow proceeding further. Time to commit changes and review the work.<br /><br />I reviewed the database that was created by the application, and I saw tables being created. This means that EntityFramework is facilitating table creation, which is good news. This is a good stopping point for the day.<br /><br /></div><div><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="" width="247" /></a></div><br />Conclusion thus far: the SqlServerCe driver won't work under Linux; however, after some heavy tweaking and commenting of EntityFramework code, the SqlServer driver does work, at least as far as creating tables.</div></div>Karim Lalani 1 - Porting nopCommerce to Linux<div xmlns=""><div><a href="" target="_blank" title="nopCommerce">nopCommerce</a> is an open source, feature-rich, and production-ready shopping cart and eCommerce solution.
It is used by a number of businesses all over the world as a platform of choice for their <a href="" title="showcase">webstores</a>. Out of the box, nopCommerce supports MS SQL Server and MS SQL Server Compact Edition as the backend database engine. I was researching open source eCommerce solutions when I came across nopCommerce a few months ago. NopCommerce piqued my interest as it is written using the .Net Framework. While the Mono project makes applications written on the .Net framework cross-platform, it lags behind the .Net framework in development and lacks the latest features that the .Net framework offers. When I first looked into nopCommerce, Asp.Net MVC 3 was not available for Mono and neither was Entity Framework. nopCommerce utilizes both of these features heavily. This made nopCommerce, an open source solution, not cross-platform.<br /><br /></div><div>Recently, when the 3.0.3 development branch of Mono made both of these features available, I thought it'd be interesting to test the Mono framework with its newly added Entity Framework and attempt to run nopCommerce under Linux.<br /><br /></div><div>This article will record the changes that were made to get rid of initial compile-time errors for nopCommerce, run the application, and get to the Installation page.</div><div><br />Note: When modifying files in MonoDevelop that originate on the Windows platform, and when a source control like Git is involved, it is best to not have MonoDevelop change line endings from Windows to Unix format, to keep your commits clean.<br /><br />Note: The code changes made during this experiment are available on GitHub at <a href=""></a></div><div><br />The primary challenge I faced is that in order to utilize Entity Framework and ASP.NET MVC 3 for .Net development, I had to upgrade the stable Mono 2.10.x to the Mono 3.0.3 development release. I also had to upgrade MonoDevelop to the 3.1.0 development release.
This has broken the debugging capabilities from within MonoDevelop.</div><div><br />For porting nopCommerce, initially, I thought that simply switching the .Net Entity Framework dlls with Mono's dlls would be enough to run nopCommerce. I downloaded the source code from nopCommerce and loaded it in MonoDevelop. At first glance it didn't look as bad. Looking closely, it became clear that System.Data.Entity supplied by the project would not work with Mono under Linux and there were dependency issues with the EntityFramework and SqlServerCe dlls. I also downloaded the source for EntityFramework for Mono and planned to load EntityFramework as a project instead of simply adding the dll reference to the project. From earlier experience, I knew I'd be making some code changes to EntityFramework to make it work with nopCommerce.</div><div><br />I added a solution folder for EntityFramework to the nopCommerce solution and then added the EntityFramework, EntityFramework.SqlServer, and EntityFramework.SqlServerCompact projects to it. For a quick sanity check, I compiled these new projects. EntityFramework complained that the delay-sign key provided couldn't be used for assembly signing. That was easily fixed by providing the '-delaysign' compiler argument. Recompiling didn't cause the same error but instead caused other code-related errors, all of which were 'Warning as Error' and could be fixed by unchecking the "Treat warnings as errors" setting for the project. This fix would also have to be applied to any other projects that complain in a similar fashion. Also, the System.Data.SqlServerCe.dll supplied with EntityFramework.SqlServerCompact wasn't compatible with Mono on Linux. I was able to download the file from my Windows installation and that seemed to work ok for the purpose of achieving an error-free compilation.
The intention here was not to fix the EntityFramework project but to tighten or loosen just enough screws to run nopCommerce under Mono under Linux.</div><div><br />Build: 0 Errors, 54 Warnings.</div><div><br />That's a good place to be. Since the EntityFramework project compiled, I committed the changes to the git repository so it would serve as a checkpoint in case I needed to revert any changes I made to the framework.</div><div><br />I replaced the EntityFramework references from the nopCommerce project and used the Entity Framework project as a reference instead. Changes would be made only to the Nop.Data project and the other projects under the Libraries solution folder for now. I also upgraded the projects from Mono / .Net 4.0 to .Net Framework 4.5, as that's the version EntityFramework would compile under due to other dependencies that are only available under .Net Framework 4.5. Recompiling gave only one error, though enough to halt compilation of the project.</div><div><br />Libraries/Nop.Data/Extensions.cs(19,19): Error CS0234: The type or namespace name `Objects' does not exist in the namespace `System.Data'. Are you missing an assembly reference? (CS0234) (Nop.Data)</div><div><br />This was in reference to the System.Data.Objects namespace needed in that class. A quick examination revealed that ObjectContext also had an issue and the only way to fix it was to include System.Data.Entity.Core.Objects in the project. This was coming from the EntityFramework project I included in the solution. Since I was tracking my changes in Git, I renamed all System.Data.Entity.Core to System.Data to see what it'd do.</div><div><br />Build: 49 Errors, 34 Warnings.<br /><br />Most of these were instances of EntityState and DbFunction references complaining about a missing System.Data.Entity namespace directive. Many more errors were revealed once these errors were resolved, but fortunately those required the same fix.
Once all the errors were resolved, I tried replacing the references to EntityFramework.dll, System.Data.Entity.dll, and System.Web.Entity.dll with the EntityFramework project in their place. The changes made to the files under the EntityFramework solution folder would only be committed to Git if replacing the Entity Framework references did not generate any namespace-related errors.</div><div><br />As mentioned earlier, the projects under the Libraries solution folder would have to be upgraded to use .Net Framework 4.5 from the Mono / .Net 4.0 profile, since Entity Framework requires .Net Framework 4.5 features. Similarly, projects under the Presentation solution folder would need to be upgraded to use .Net Framework 4.5, and references to EntityFramework.dll and System.Data.Entity.dll would need to be replaced with the project reference to the EntityFramework project instead. Since all these projects compiled without any errors, I committed the changes made to the EntityFramework projects to Git.</div><div><br />System.InvalidOperationException</div><div>Storage scopes cannot be created when _AppStart is executing.</div><div><br />I had seen this before. A small change in the Global.asax.cs file was needed to resolve this. Adding the following lines at the beginning of the Application_BeginRequest function and including the necessary namespace references does the trick.</div><div><div><br />var ob = typeof(AspNetRequestScopeStorageProvider).Assembly.GetType("System.Web.WebPages.WebPageHttpModule").GetProperty("AppStartExecuteCompleted", BindingFlags.NonPublic | BindingFlags.Static);</div><div>ob.SetValue(null, true, null);<br /><br /></div></div><div>Giving it another run gave the following error.</div><div><br />System.IO.DirectoryNotFoundException</div><div>Directory '/home/karim/MonoDevelopProjects/nopCommerce_2.65_port/nopCommerce-Linux-Mysql/Presentation/Nop.Web/App_DataLocalizationInstallation' not found.</div><div><br />Now I was getting somewhere.
Searching for the path reference in the code took me to the InstallationLocalizationService.cs file, which passes the location string to the webHelper.MapPath method. This took me to the MapPath function in the WebHelper.cs file in the Nop.Core project. The else block was written for the Windows environment. This was done by the .Replace('/', '\\') method. Searching for that string pattern revealed two places total where this was used. I could check for OperatingSystem information before deciding to use the Replace function. Surrounding the code with "if(Environment.OSVersion.Platform != PlatformID.Unix)" should be sufficient. Recompiling gave no errors but running the project again did. It was a different error this time.<br /><br /></div><div>System.InvalidOperationException</div><div>The view 'Index' or its master was not found or no view engine supports the searched locations.</div><div>The following locations were searched:</div><div>~/Views/install/Index.aspx</div><div>~/Views/install/Index.ascx</div><div>~/Views/Shared/Index.aspx</div><div>~/Views/Shared/Index.ascx</div><div>~/Views/install/Index.cshtml</div><div>~/Views/install/Index.vbhtml</div><div>~/Views/Shared/Index.cshtml</div><div>~/Views/Shared/Index.vbhtml </div><div><br />At first glance it looked counterintuitive because ~/Views/Install/Index.cshtml did exist, but taking a closer look revealed that the system was looking for index with a lowercase "i" instead of uppercase. I changed the URL in the address bar and used Install with an uppercase "I" and that landed me on the installation page.</div><div>After inspecting the Global.asax.cs file at Application_BeginRequest, I found that one of the first things the application does (ignoring the _AppStart fix) is EnsureDatabaseIsInstalled. Examining it revealed where the controller name install with a lowercase "i" came from.
I fixed the case there, recompiled, and ran the application.</div><div><br />I now had the nopCommerce Installation Page.<br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="" /></a></div><br /></div></div>Karim Lalani weekend porting NopCommerce to Linux and for MySQL. Why so excited? Because it isn't meant to, at least not yet.<br /><br /><a href="">NopCommerce</a> relies on Entity Framework, which was recently open sourced by Microsoft. <a href="">Mono</a> has included it in their latest beta, but it doesn't include support for the MySQL database. The MySQL driver for .Net has its Entity Framework features turned off under Mono for the same reasons.<br /><br />The admin portion isn't functioning yet due to an issue with generated SQL somewhere.<br /><br />I'll set up a repository on GitHub soon to show the hacks that were put in place to make this possible.Karim Lalani graphics programming - adding some physics<div>Now that we have the basics of 3d game programming using Irrlicht out of the way, how about adding some physics? I'll demonstrate how physics could be added to a 3d application using the open source <a href="">Tokamak Physics</a> library. In order to better illustrate the source code, I've uploaded it to GitHub.
It is accessible via the <a href="">3dGraphicsExamples</a> repository.</div><div>Update the Qt project file to include the Tokamak library:</div><div><script src=""></script></div><div>In order to use Tokamak in C++ code, you'll need to include its header file:</div><div><script src=""></script></div><div>.</div><div><script src=""></script></div><div>:</div><div><script src=""></script></div><div>Create cards using the routine from the previous example; however, this time also set their physical attributes in Tokamak:</div><div><script src=""></script></div><div>:</div><div><script src=""></script></div><div>You'll also need to ensure that your simulation runs consistently regardless of the amount of processing that goes on in your render loop. The code below is an adaptation of code used by Adam Dawes on his <a href="">site</a>: </div><div><script src=""></script></div><div>In your render loop, you'll have to traverse your catalog of 3d objects in Tokamak and apply their positions and rotations to the corresponding Irrlicht scene nodes.</div><div><script src=""></script></div><div>There you go! You should now see your 3d objects, all initially facing in different directions, falling onto the floor, colliding with each other and bouncing. The complete source file can be found on github: <a href="">example2.h</a></div><a href="" imageanchor="1" style=""><img border="0" height="180" width="320" src="" /></a><a href="" imageanchor="1" style=""><img border="0" height="240" width="320" src="" /></a> Karim Lalani to 3d graphics programming using IrrLicht<div xmlns=""><div><a href="">IrrLicht</a> is a free and open source 3d graphics library written in C++. I came across IrrLicht when I was looking for a graphics library that was simple to understand and easy to follow.
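The fixed-timestep idea from the Tokamak example above can be sketched independently of both Irrlicht and Tokamak. This is a hedged illustration rather than the code from the linked repository; FixedStepper and stepPhysics are hypothetical names standing in for the timing logic around Tokamak's Advance call:

```cpp
#include <functional>

// Fixed-timestep accumulator: however long a frame takes, physics always
// advances in constant-sized steps, so the simulation stays deterministic.
class FixedStepper {
public:
    explicit FixedStepper(double stepSeconds) : step_(stepSeconds) {}

    // Feed in the elapsed wall-clock time for this frame; stepPhysics is
    // invoked once per full physics step accumulated so far (for instance,
    // a call to sim->Advance(step) in Tokamak). Returns the step count.
    int update(double frameSeconds,
               const std::function<void(double)> &stepPhysics) {
        accumulator_ += frameSeconds;
        int steps = 0;
        while (accumulator_ >= step_) {
            stepPhysics(step_);
            accumulator_ -= step_;
            ++steps;
        }
        return steps;
    }

private:
    double step_;
    double accumulator_ = 0.0;
};
```

Leftover time smaller than one step stays in the accumulator, so a slow frame simply triggers extra physics steps on the next update instead of slowing the simulation down.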
I had tried the DirectX SDK under Windows a few years ago and remember having to blindly copy-paste code samples to get something to work, without being able to understand how the code in front of me worked. I had since read about Ogre3d and a couple of other graphics engines.<br /><br /></div><div>When I found IrrLicht, I was surprised by how quickly I was able to get myself up and running. The few tutorials on their site covered enough ground to make me feel confident about spending more time experimenting with the library. In this post, I'll do a walkthrough of some code that I've written.</div><div>You'll need to download the IrrLicht library or compile it from source. IrrLicht is available for download from their website. It is also available from the official repositories of major Linux distributions. IrrLicht is cross-platform, so you'll be able to write 3d graphics applications for Linux, Mac, and Windows.</div><div><br />I use QtCreator to do C++ development, but you could use any text editor or IDE of your choice.
Just remember to include the path to the IrrLicht headers and link against the IrrLicht library.</div><div><br />Here is what my Qt project file looks like:</div><pre></pre><pre></pre><pre></pre><pre>TEMPLATE = app<br />CONFIG += console<br />CONFIG -= qt<br /><br />unix:!macx:!symbian: LIBS += -L/usr/lib/ -lIrrlicht<br /><br />INCLUDEPATH += /usr/include/irrlicht<br />DEPENDPATH += /usr/include/irrlicht<br /><br />SOURCES += \<br /> program.cpp<br /></pre><pre></pre><pre></pre><div><br />Add the following lines to your cpp file:<br /><br />#include <irrlicht.h><br />#include <iostream><br /><br />using namespace std;<br />using namespace irr;<br />using namespace core;<br />using namespace scene;<br />using namespace video;<br />using namespace io;<br /><br /> //Add a Cube to the Scene.<br /> ISceneNode * node = smgr->addCubeSceneNode();<br /><br /> //Needed to make the object's texture visible without a light source.<br /> node->setMaterialFlag(EMF_LIGHTING, false);<br /><br /> //Add texture to the cube.<br /> node->setMaterialTexture(0,driver->getTexture("/usr/share/irrlicht/media/wall.jpg"));<br /><br /> //Set cube 100 units further in forward direction (Z axis).<br /> node->setPosition(vector3df(0,0,100));<br /><br /> //Add FPS Camera to allow movement using Keyboard and Mouse.<br /> smgr->addCameraSceneNodeFPS(); /><br /></div><pre></pre><div>Compile and Run.</div><div>You should see a textured cube in the middle of the screen. You will also notice that using your mouse and keyboard, you can move around the rendered world.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="256" src="" width="320" /></a></div><br /><br />Let's extend this example to do some fancy stuff. How about loading pictures from your computer in a 3d world? Let us do just that.
Let's add a routine to which we'll pass the path to a directory as an argument.<br /><br />We'll traverse this directory for picture files and, for each picture file we find, we'll throw a cube on the screen and scale it to the right proportions.<br /><br />#include <irrlicht.h><br />#include <iostream><br /><br />using namespace std;<br />using namespace irr;<br />using namespace core;<br />using namespace scene;<br />using namespace video;<br />using namespace io;<br /><br />#define PICSCOUNT 800 //Maximum number of pictures to load.<br /><br />void processFolder(IrrlichtDevice * device, const path &newDirectory)<br />{<br /> //Get the File System from the device.<br /> IFileSystem * fs = device->getFileSystem();<br /><br /> //Get the Scene Manager from the device.<br /> ISceneManager * smgr = device->getSceneManager();<br /><br /> //Get the Video Driver from the device.<br /> IVideoDriver * driver = device->getVideoDriver();<br /><br /> //If maximum number of pictures already loaded, then don't load anymore.<br /> if(driver->getTextureCount() >= PICSCOUNT)<br /> {<br /> return;<br /> }<br /><br /> //Change working directory.<br /> fs->changeWorkingDirectoryTo(newDirectory);<br /><br /> //Get List of files and sub folders.<br /> IFileList * flist = fs->createFileList();<br /><br /> //Sort by file names and folder names.<br /> flist->sort();<br /><br /> //Loop through the contents of the working directory.<br /> for(u32 i = 0; i < flist->getFileCount(); i++)<br /> {<br /> //If current item is a directory<br /> if(flist->isDirectory(i))<br /> {<br /> //and it is not "Parent Directory .."<br /> if(strcmp(flist->getFileName(i).c_str(),"..") != 0)<br /> {<br /> //process contents of the current sub directory<br /> processFolder(device,flist->getFullFileName(i));<br /> }<br /> }<br /> else //If current item is a file<br /> {<br /> //Get file name<br /> path filename = flist->getFileName(i);<br /><br /> //Get extension from file name<br /> std::string extension =
filename.subString(filename.size() - 4,4).c_str();<br /><br /> //If file extension is .png<br /> if(strcasecmp(extension.data(),".png") == 0)<br /> {<br /> //Create a new cube node with unit dimension<br /> ISceneNode * node = smgr->addCubeSceneNode(1.0f);<br /><br /> //Scale the cube to the dimensions of our liking - 75x107x0.1<br /> node->setScale(vector3df(75,107,0.1f));<br /><br /> //Set random X coordinate between -500 and 500<br /> long x = random()% 1000 - 500;<br /><br /> //Set random Y coordinate between -500 and 500<br /> long y = random()% 1000 - 500;<br /><br /> //Set random Z coordinate between -500 and 500<br /> long z = random()% 1000 - 500;<br /><br /> //Create a position vector<br /> vector3df pos(x,y,z);<br /><br /> //Change coordinates such that direction is preserved and length is 800 units<br /> pos = pos.normalize() * 800.0f;<br /><br /> //Apply new position to cube<br /> node->setPosition(pos);<br /><br /> //Get active camera<br /> ICameraSceneNode * cam = smgr->getActiveCamera();<br /><br /> //Set camera to "look" at cube<br /> cam->setTarget(node->getPosition());<br /><br /> //Apply camera's new rotation (as a result of above) to the node<br /> node->setRotation(cam->getRotation());<br /><br /> //Make cube's texture visible without light<br /> node->setMaterialFlag(EMF_LIGHTING, false);<br /><br /> //Set the file's graphics as texture to the cube<br /> node->setMaterialTexture(0,driver->getTexture(flist->getFullFileName(i).c_str()));<br /><br /> //If maximum number of pictures already loaded, then don't load anymore.<br /> if(driver->getTextureCount() >= PICSCOUNT)<br /> {<br /> return;<br /> }<br /> }<br /> }<br /> }<br />}<br /><br /> //Add FPS Camera to allow movement using Keyboard and Mouse.<br /> smgr->addCameraSceneNodeFPS();<br /><br /> //Process contents of this folder.<br /> processFolder(device, "/home/karim/Images/Cards-Large/150/"); /><div><br /></div></div><div>The images will appear as though they are spread around and stuck on the inside/outside
of a transparent sphere.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="256" src="" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="256" src="" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div>Let's change it a little. Make the following changes:<br /><br />From:<br /><br />//Change coordinates such that direction is preserved and length is 800 units<br />pos = pos.normalize() * 800.0f;<br /><br /><br />To:<br /><br />//Set Y coordinate to 0<br />pos.Y = 0;<br /><br />The images will now appear as though they are spread around like Stonehenge. How about a little more fun? Make the following change:<br /><br /><br />From:<br /><br />//Set Y coordinate to 0<br />pos.Y = 0;<br /><br /><br />To:<br /><br />//Set Y coordinate to 0<br />//pos.Y = 0; //Comment out this line<br /><br />I hope this tutorial proves helpful in getting you started on your journey through the fascinating world of 3d graphics programming.</div></div>Karim Lalani world creation and simulation using Open Source tools and libraries<div xmlns=""><div>Over the past few weeks I've been looking into 3D world simulation. I checked out Ogre3d; however, I found it to be way too complex with a steep learning curve. I then came across <a href="">irrLicht</a> and was greatly impressed with its simplicity of use and how quickly it let me jump from following their tutorials to creating my own code. The tutorials on their site are well documented and very easy to follow. Among other things, the tutorials also address basic collision detection.
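The placement trick used in the Irrlicht tutorial above (pos.normalize() * 800.0f) is plain vector arithmetic and can be illustrated without the engine. In this sketch, Vec3 and toRadius are hypothetical stand-ins for Irrlicht's vector3df and its normalize method:

```cpp
#include <cmath>

// Minimal 3-component vector, standing in for Irrlicht's vector3df.
struct Vec3 {
    float x, y, z;
};

// Preserve the direction of v but force its length to `radius`.
// This is what pos.normalize() * 800.0f does in the tutorial: every
// random point gets projected onto a sphere of that radius, which is
// why the cubes appear stuck to an invisible sphere around the camera.
Vec3 toRadius(Vec3 v, float radius) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    if (len == 0.0f) {
        // The origin has no direction; pick an arbitrary one.
        return Vec3{radius, 0.0f, 0.0f};
    }
    float s = radius / len;
    return Vec3{v.x * s, v.y * s, v.z * s};
}
```

Zeroing the Y component before (or instead of) this projection is what flattens the layout into the Stonehenge-like ring described above.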
Simple simulations were very easy to make; however, I ran into a road block when attempting to implement slightly more complex physics. I soon realized that while it was possible, I'd have to hand code almost all of the physics, for instance, gravity, friction, momentum, etc.</div><div>After doing some research I came across the <a href="">Tokamak</a> physics engine. I found Tokamak to be surprisingly easy to learn; however, I only found very few examples. Right around the same time I also found <a href="">Bullet Physics</a>, which I found to be very well documented with plenty of tutorials and examples available online. Bullet is also a more comprehensive physics engine than Tokamak and comes with a slightly steeper learning curve. On the other hand, Tokamak can get you started and running in no time.</div><div>I am currently exploring IrrLicht along with both Bullet and Tokamak in a couple of hobby projects. If the projects gain critical mass, I'll share the experience and perhaps example code in future posts.</div></div>Karim Lalani: A surprise addition to our family computers<div xmlns=""><div>:</div><pre>$ sudo pacman -Syu</pre><div>I had forgotten about a recent advisory issued on the ArchLinux Wiki about including "--ignore glibc" for the upgrade and upgrading glibc only after the system upgrade is complete. Now I am not sure if that was the reason or if it was something else, but it rendered the laptop in a non-upgradable state. Passing "--ignore glibc" now was causing other dependency issues. I figured it was time to give another, more beginner-friendly Linux distribution a try for my dad's computer. I am very happy with ArchLinux on my computer; however, for my dad's computer, I wanted something that would be easier for my dad to manage. The first distribution that came to my mind was <a href="">Zorin OS</a>. Screenshots and features made it look very promising.
Zorin OS appears to be a very polished distribution, and I had also started the download, only to notice a 1.4 GB iso that would take a very long time to download because of the limited number of mirror sites published on their site. Also, I didn't have a blank DVD available, which would be needed in order to make the iso usable on my dad's computer.</div><br /><div>Spending some time on <a href="">Distro Watch</a> revealed some familiar distributions, but one stood out: <a href="">PCLinuxOS</a>. I recalled that <a href="">REGLUE (formerly HeliOS Project)</a> uses PCLinuxOS on the computers that they set up for disadvantaged kids in the Austin, TX area. I had also read a few articles in the past in the E-Magazine that PCLinuxOS publishes regularly and was generally impressed by the content they generated. So I decided to give PCLinuxOS/KDE a try. I read the details about the software included on the iso on their download page. Some of the software was slightly older than what I had gotten used to on my Arch. However, I chose Arch for my laptop for exactly that reason. Moreover, since my dad only needed a laptop to check his emails, create and review documents and spreadsheets, and use the Internet, I was not going to let a few slightly older versions of software prevent me from giving PCLinuxOS a try.</div><br /><div>I downloaded the iso, burnt it to a blank CD, slapped it in the laptop, and fired up the LiveCD. While I was not very impressed with what I was looking at, I was not disappointed either. I did notice that the built-in wireless adapter, which had not been functioning correctly under ArchLinux, was now working perfectly fine. This was excuse enough for me to do a full install, and I proceeded with that in mind.</div><br /><div>During the install, I was presented with the option to use existing partitions to set up PCLinuxOS. I decided to preserve the /home partition and reformat the / and /boot partitions, a feature I had come to depend on for quite some time now.
With the /home partition intact, my dad would continue to have access to all his documents, audio files, as well as application settings. After the first boot once the install was complete, I was asked to set the default timezone and create the first user account. I set up my user account as the first user, just like it was before. I was then presented with a login splash screen. It was the same login splash that I had set up for my account when the laptop still had ArchLinux on it. I then set up my dad's account so he could log in and begin using the laptop.</div><br /><div>A brief glance revealed the different configuration utilities available to me when I logged in: firewall setup, network setup, video setup, etc., all of which I found very easy to use. I also noticed that the root user was listed in the user selection at the login screen. I decided to log in using that account. I was presented with a popup greeting that reminded me of something that I had forgotten: PCLinuxOS is a rolling release distribution, just like ArchLinux. The popup also instructed me to run a system update using the Synaptic Package Manager. I followed the instructions, did a full system upgrade, and restarted the computer after the upgrade. Once the system was upgraded with the latest packages, I was pleasantly surprised to see a more visually pleasing desktop than what I saw prior to the restart. The system looks much more polished now, and setting up the applications that my dad had gotten familiar with over the months was a piece of cake.</div><br /><div>I can now say with much confidence that we'll be using PCLinuxOS on at least this computer as a permanent distribution.
It is now the number one Linux distribution that I will recommend to people as a first distribution to try.</div></div>Karim Lalani: ArchLinux + ATi Catalyst issue<div xmlns=""><div>After doing some more reading about the ATi Catalyst 12.6 driver release and upgrade information, I realized that my hardware, the Radeon HD 6250, was indeed supported. The "Unsupported Hardware" watermark was probably a bug that will hopefully be fixed in one of the future releases, whenever that might be. For the time being, I did find a resolution that removes the watermark on the ArchLinux Wiki page for ATi Catalyst.<br /><br />It involves running the following script:</div><pre><br /></pre><pre>#! /></pre><div>Once this script is run and the X server is restarted, the watermark disappears. All other functionality, especially OpenGL rendering, seems to be intact. Good enough for me.</div><div><br />However, this highlights the problem of having closed source, proprietary hardware drivers. Had the drivers been open, this issue would have been fixed not long after the drivers were released. Hopefully the open source "radeon" driver will catch up in performance and features with the proprietary drivers and we won't have to worry about these kinds of issues in the future.</div></div>Karim Lalani - Use at your own risk<div xmlns=""><div><strong><em>Update: <a href="">update-archlinux-ati-catalyst-issue.html</a></em></strong><br /><br />I have an HP Pavilion g series laptop running ArchLinux. With a 17" wide screen and quad core AMD A6 processor, I really have nothing to complain about. I love Arch. It allows me to be on the bleeding edge. Among all the other distros that I have tried in the past, Arch gave me the most control without as much hassle. Today, I use ArchLinux exclusively on my laptop.
And getting front row passes to try the latest and greatest features of Linux almost makes me giddy. Except, I am stuck with the AMD Radeon graphics built into the APU.</div><br /><div>AMD/ATi are not very keen on providing regular driver updates for the AMD Radeon chip that is in my computer. The open source driver lags far behind the proprietary driver in features. The proprietary driver relies on xorg-server version 1.11, while the latest xorg-server is version 1.12. Staying stuck with xorg-server 1.11 means that not only is my X server slightly behind on features and security updates, but so are all the components that depend on it, for instance, the XFree86 drivers for my input devices. During every system wide update, I have to be extra careful not to update any of the XFree86 drivers, as there is not a hard dependency on xorg-server established between the versions that I use. An incompatible input device driver renders my keyboard and touchpad unusable as soon as the X session takes over, regardless of whether I use the laptop keyboard and touchpad or an external keyboard and mouse.</div><br /><div>This was a little easier until a few days ago. The version of the Catalyst driver available was 12.4, which had a hard dependency on xorg-server 1.11. All I had to worry about were the XFree86 drivers.</div><br /><div>A few weeks ago, AMD released a new Catalyst driver for newer hardware, version 12.6. Unfortunately for me, my hardware is not capable of utilizing that driver, as it was excluded from it. When I installed it without reading the Arch Wiki page about the upgrade, I noticed that the bottom right corner of my screen displayed a watermark - Unsupported Hardware. The 12.6 version of Catalyst, however, is listed as an upgrade to the 12.4 version. So now it shows up in my list of upgrades.
Changing the pacman mirror for Catalyst resolved the issue, as the new mirror only had 12.4 available for download.</div><br /><div>A few nights ago, I was in bed at around 10:30 PM, hoping to sleep right away. I decided that since I hadn't run a system update in the past few days, I'd go ahead and run a quick update before sleeping. I fired up Apper, a universal KDE package manager that works with Pacman as well as Apt-Get for Debian-based systems, and saw some 30 applications awaiting updates. I scrolled down the list and unchecked the usual suspects: xorg-server 1.12, etc. Somehow, I had failed to notice Catalyst version 12.6 in the list of updatable software and did not exclude it. I realized my mistake one X server restart later, when the nasty watermark became visible. I figured it was not that big a deal and fired up Apper with the intention of downgrading the Catalyst driver to 12.4. Except, it wasn't there. I decided to switch to the open source driver for the time being while I investigated what might have happened. The open source driver was installed and I was at the login menu. All looked good, except my keyboard and touchpad wouldn't work. The open source driver has a dependency on the latest version of xorg-server. I realized later that when I installed the open source driver, which caused the xorg-server upgrade, I hadn't selected the XFree86 Synaptics input driver to be upgraded. The version of the input driver that was installed had a dependency on the older version of xorg-server, and the mismatch was now rendering my laptop useless as soon as the X session started.</div><br /><div>Fast forward a couple of very frustrating hours, and I was able to break the boot process before the X session loaded to get terminal access to a half-ready system. Then, after remounting the root partition as read/write, I uninstalled xorg-server and all the installed dependencies.
A reboot later, in a more usable and now network-aware system, I installed the open source ati driver from the terminal. Once I was able to log into the KDE session, I fired up Apper again, this time being more careful about it. I manually downloaded the correct version of the Catalyst driver from the mirror site, uninstalled the open source driver, and downgraded xorg-server and its hard dependencies as well as the XFree86 input driver. I then installed the 12.4 version of the Catalyst driver using pacman and restarted the X server. I was now able to log in and use the system, and begin my efforts at documenting my experience.</div></div>Karim Lalani on "The Arch Way"<div xmlns=""><div>I had once started down the Arch path before. Back then the intent was to revive old Fujitsu tablets. As soon as the Live image fired up, I abandoned it. It was then I realized that Live CD doesn't mean Live CD with X. Yesterday I found myself researching Arch in my quest to find a lean Linux distribution that was on a rolling release. Arch was the answer, and it had the latest versions of all the software that I needed current.<br /><br />Having used Linux for over six years, I was still intimidated by Arch because of what I had read about it. GUI installers are against "The Arch Way". There was this whole text-file-based configuration editing step that I had read about and was not too fond of. However, the part that scared me the most about the non-GUI installer was the disk partitioning. I have 100GB of data on my home partition that I wasn't going to sort through before installing Arch. Also, I was not too thrilled by the idea of running a backup of that data over USB either. I felt that "The Arch Way" was frankly getting in my way. But sticking with Kubuntu was no longer an option, and I couldn't find any other Linux distribution to be a viable alternative either.
I decided to read through the beginner's guide on the ArchWiki and fire up a virtual machine to get a feel for the installation process and to see what the final product might look like.<br /><br /></div><div>To my surprise, the installation was nothing like what I had expected. It was very straightforward and methodical. Having already read the beginner's guide also made it a lot easier to follow through. I made sure I familiarized myself with the partitioning utility during the installation. Besides the computer's host name, I did not have to change any of the defaults in the configuration editing step. The defaults were quite sufficient. I was able to follow the steps from the beginner's guide to set up X and KDE, and was able to boot into a KDE session on the new Arch VM. To be certain, I tried this routine one more time and made some mental notes.<br /><br />Since I was going to install Arch on my laptop, I decided not to hook up the ethernet cable for Internet and instead do the post-installation updates over WiFi. I already had my wpa_supplicant configuration file in my home directory. It would come in handy during post-installation. I made note of the UID of my user account under Ubuntu. This would surely come in handy once Arch was installed.</div><div><br />During the installation:</div><div>1. On the partitioning section, I made sure that I didn't reformat my home partition.</div><div>2. On the configuration editing step, I changed the HOSTNAME to what I had previously had under Ubuntu.</div><div>3. I skipped editing the network configuration, as that was needed only for ethernet-based Internet updates.</div><div>4. On the package selection step, I selected sudo and all the network and wireless related packages.</div><div>After the installation and a reboot, I checked to see if my home partition was still intact. And it was. I created a new user for my login and used the UID that I had when I was still running Ubuntu.
I fired up the WiFi with the following commands:</div><div><br />wpa_supplicant -Dwext -i wlan0 -cwpa.conf -B</div><div>dhcpcd</div><div><br />I was connected to the Internet through WiFi, which I validated by issuing a ping. I modified the /etc/pacman.d/mirrorlist file to uncomment the mirrors for my updates. After that I ran the pacman update command:</div><div><br />pacman -Syu</div><div><br />This updated the pacman database and started the initial update process. Once the update was done, I followed the instructions from the beginner's guide to install X and then KDE. I followed the KDE guide on the Wiki to set up kdm as the display manager. Once KDE was installed, I rebooted. After the reboot, I was presented with a graphical login screen. Once inside the KDE session, I had to restart wpa_supplicant for WiFi. I also realized that the network daemon was slowing the boot process, so I disabled it in the rc.conf file.<br /><br />The Wiki mentioned graphical tools for package management, and KPackageKit/Apper was one of them. Since I had used Apper with Kubuntu and was familiar with its functionality, I decided to install it. Installing Apper was probably the trickiest thing to figure out during this entire exercise, but I was able to install it from the AUR. Once Apper was installed, installing other software became a piece of cake. I installed all the plasma widgets and plasmoids. This enabled me to use the Network Management plasma widget that I was accustomed to under Kubuntu. Since it needed NetworkManager to function, I installed the NetworkManager daemon and enabled it in the rc.conf file. On the next reboot, I had networking and I could get on the WiFi using KDE's network management settings.</div><div><br />I did realize that while I had my home folder from the previous setup, my desktop and KDE settings had disappeared. Since I knew that Kubuntu stored user-level configuration under ~/.kde/share, I decided to take a look.
I did find that there was a ~/.kde4 folder under home along with ~/.kde. This was it. Arch was using the ~/.kde4 folder instead of ~/.kde, which was why my previous settings had not taken effect in the new setup. I copied the share folder from .kde to the .kde4 folder and logged out. After logging back in, I was in a familiar workspace.</div><div><br />With all my software installed under ArchLinux and with my original KDE settings restored, it feels like I never switched distributions.</div><div><br />One of the things that I had forgotten to do was to create a group by the same name as the user, as Kubuntu did. That caused a temporary permissions issue which I was able to resolve without much difficulty.<br /><br />I hope this information will be helpful to those who are considering a move to a different distribution but may be intimidated by what they might have read about it.</div></div>Karim Lalani bye Kubuntu<div xmlns=""><div>I never thought it'd come to this. I was content with what I had. I had been using Kubuntu full time since their 7.04 release. Kubuntu was the realization of my love for the KDE desktop environment, as it offered near-latest builds of the KDE desktop and all of the software that I needed for everyday computing. I was a happy camper.<br /><br /></div><div>While I was already familiar with Linux, the KDE desktop was what converted me into a full-time Linux user, and Kubuntu was the only distribution that I had found to implement KDE well. So the decision to switch away from Kubuntu to another KDE-based desktop was a difficult one. There were three factors that led me to consider an alternate KDE distribution.</div><div><br />Until a few months ago, I used my computer only for regular everyday tasks like messaging, surfing, e-banking, online shopping, etc. I did some programming, but that was mostly as a hobby.
A few months ago I started spending more time doing application development for more than just recreation. I mostly develop using the Mono framework on my Linux machine. Since Mono is integrated with Ubuntu, Canonical, to ensure stability, does not offer updates to Mono very often. Mono packages for Ubuntu lag behind the official Mono releases, and recently the gap has only widened. Since, and this is by design, Mono itself lags behind the latest .Net Framework, to be able to utilize the power of the latest release of Mono I have had to compile it from source on my laptop on a number of occasions. I've had to do this after every six-monthly Ubuntu upgrade, since the upgrade would cause some of the dependencies to be overwritten.</div><div><br />Canonical releases Ubuntu on a schedule, with one release in April and the other in October. This means that while the system receives regular updates, major features and enhancements are only released with those scheduled releases. These features and enhancements are not only those that Canonical might include in the new releases of Ubuntu; they might also include enhancements to the Desktop Environment and to the Linux Kernel. Sometimes, waiting for a full release in order to get some of these enhancements doesn't seem justifiable.</div><div><br />I would have still continued using Kubuntu if it weren't for the news that I came across a couple of days ago: Canonical will be discontinuing funding for the development of Kubuntu. For some perspective, Canonical had one paid full-time developer responsible for the KDE implementation in Kubuntu, whom they will no longer fund for the effort; Canonical will continue to provide only infrastructure support to Kubuntu. I am sure that this does not mean the end for Kubuntu. But it might mean that KDE-related updates to Kubuntu would become more infrequent over time.
Due to the lack of full-time development resources from Canonical, the effort might even be taken over by the community for KDE-related maintenance.</div><div><br />I did some research on different Linux distributions to replace Kubuntu on my laptop. The criteria were set: I needed a distribution that would allow me to easily install and upgrade to the latest versions of software, especially the Mono Framework and KDE, along with the latest Linux Kernel updates. After reviewing my choices against the criteria, there was only one clear winner - ArchLinux. I chose Arch over other Linux distributions because of its rolling release and the availability of the latest versions of the software that I use almost every day. I now have KDE 4.8 and Mono 2.10.8 installed under ArchLinux on my laptop. After installing Apper for package management, I've also found myself on familiar ground again. The transition was as smooth as one could hope for and I am already beginning to enjoy my new distribution.<br /><br />I will always have fond memories of Kubuntu though, as the distribution through which I first experienced Software Freedom. But it is time to move on.</div></div>Karim Lalani iPads in Education - Jeff Hoogland's take I came across this post yesterday from <a href="">Jeff Hoogland</a> on how he realized his Asus T101MT tablet does so much more than an iPad.
It is titled <a href="">Confused about iPads in Education</a>.Karim Lalani ASP.NET MVC like NameValueCollection translation to method parameters<div xmlns="">If you've programmed with ASP.NET MVC, you know about a neat feature where the form post data or the query string, which is string data, is automagically converted to strongly typed objects when it is passed to its Controller Action.<br /><br />For instance, consider the following code:<br /><div><br />public class HomeController : Controller</div><div>{</div><div> public ActionResult ProductQuery(string Name, int Code)</div><div> {</div><div> ...</div><div> }</div><div>}</div><div><br />The web request to http://<server>:<port>/<AppRoot>/Home/ProductQuery?Name=123&Code=456 will be properly handled: Name will be treated as a string while Code will be treated as an integer value.<br /><br /></div><div>Also consider the following code:</div><div><br />// UserRegFormData.cs</div><div>public class UserRegFormData</div><div>{</div><div> public string Name{get;set;}</div><div> public DateTime Dob{get;set;}</div><div>}</div><div><br />// HomeController.cs</div><div>public ActionResult Register(UserRegFormData urfd)</div><div>{</div><div>...</div><div>}</div><div><br />You could now set up an html form with action=/Home/Register, method=post, a text field with</a></div><div><br />Another search on how to assign property values using reflection yielded this post on DotNetSpider:</div><div><a href=""></a></div><div><br />I found how to create strongly typed arrays from this article on Byte.com:</div><div><a href=""></a></div><div><br />I had most of what I needed to get started.</div><div><br />I have uploaded the initial code to my github repository. The code is still crude and it has not yet undergone a refactoring exercise.
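The binding idea itself is small: reflect over the target method's parameter types, coerce each string value, then invoke. A toy sketch of that step in Python (the helper names here are mine, with annotations standing in for MVC's declared parameter types):

```python
import inspect

def bind_and_call(func, form):
    """Coerce string form values to func's annotated types, then call it."""
    kwargs = {}
    for name, param in inspect.signature(func).parameters.items():
        target = param.annotation
        if target is inspect.Parameter.empty:
            kwargs[name] = form[name]          # no annotation: pass through
        else:
            kwargs[name] = target(form[name])  # e.g. int("456") -> 456
    return func(**kwargs)

def product_query(Name: str, Code: int):
    return Name, Code

# Same shape as /Home/ProductQuery?Name=123&Code=456
print(bind_and_call(product_query, {"Name": "123", "Code": "456"}))  # → ('123', 456)
```

A full binder, like the one described above, also walks nested types (the UserRegFormData case) via reflection; this sketch only covers flat parameters.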
<a href=""></a>.</div></div>Karim Lalani RavenDB on Linux - Update<div xmlns=""><div>I spent some more time with RavenDB source code trying to figure out what might have been causing the runtime errors which had led me to comment out the "SatisfyImportsOnce" line and supply code to manually load MEF exports.<br /><br />It turns out that some of the Imports were not being satisfied. The one place in particular was in the OAuth code under the Raven.Database/Server/Security/OAuth/OAuthClientCredentialsTokenResponder.cs file. The member IAuthenticateClient AuthenticateClient was expecting an import of type IAuthenticateClient which was not being satisfied.<br /><br />I reverted my changes made in connection with disabling SatisfyImportsOnce and loading exports manually, rebuilt the project, and fired up the server application. I was presented with the same nasty stack trace.<br />I commented out the Import attribute from AuthenticateClient, rebuilt the project, and tried running the server a second time. It worked!<br /><br />There were other similar instances in the code where the imports were not being satisfied with corresponding exports. I learnt this from running the xUnit tests on the application. It wasn't making sense. RavenDB was supposed to be a complete solution.<br /><br />I did a filesystem search for AuthenticateClient under the solution's root folder and sure enough I found results in CSharp code files that were not part of the Raven.sln file. These files and many more were under the Bundles folder, under their own solution. I compiled these projects - Raven.Bundles.Tests did not build due to some issues - Mono or MonoDevelop specific, I assume.<br /><br />I copied the generated dll files into the Bundles/Plugins folder and set its path as the value of the "Raven/PluginsDirectory" key in App.Config for the Raven.Server project.<br />I uncommented the import attribute, rebuilt the solution and fired up the server a third time.
It worked this time as well.<br /><br />Next, I'll try to re-run some of those xUnit tests that had failed earlier to see how much ground could be covered out of those 1160 tests that came packaged with RavenDB.Karim Lalani It's 2012!!! Have a Happy, Prosperous, and an Open-Source New Year!!!<br />OK, that last bit doesn't make any sense, but you get it, don't you :)Karim Lalani RavenDB on Linux - Source Code<div xmlns="">I've created a new github repository to temporarily host the updated code for RavenDB to enable execution under Linux. I say temporary because a few things could happen.<br /><br /><div>The worst case scenario, and I don't believe this would happen, but there is a slight possibility, is that I might be asked to shut down the repository because I unknowingly violated some terms of use. For an open source project, this is a highly unlikely scenario.</div><div><br />The normal case scenario would be that I get no notice from the creators of RavenDB, in which case this code would continue to exist under its own repository.</div><div><br />The best case scenario, and I would really like this to be the case, would be that my changes in some shape be accommodated upstream in RavenDB, making RavenDB a cross-platform tool.</div><div><br />I did not investigate why the OutputStream.Flush() command was causing an exception. At the same time, this is really my first attempt at MEF and .Net 4.0 and I don't know why the exports were not automagically loaded, in resolution to which I had to manually load them using reflection. A better fix would be to identify and resolve these issues.<br /><br /></div><div>I am glad, however, that I was able to fulfill a personal quest of learning about RavenDB, and in the process, making it run under Linux.
This opens up the possibility of making RavenDB a serious contender against MongoDB on non-Windows platforms.</div><div><br />RavenDB along with my source code changes is available at <a href="" style="text-decoration: line-through;"></a> <a href="" target="_blank"></a>.<br /><br />Note: Source code URL updated. Open Source Shines - RavenDB on Linux<div xmlns="">In the previous article, I wrote about RavenDB - a .Net NoSQL Document Oriented Database that supports Transactions - and why I had decided to spend time on it to get it to run under Linux. Now, RavenDB was created for a Windows-only audience and the creator's website does not provide any guidelines about how to implement it under Linux.<br /><br />After numerous attempts and with some modifications to the source code, I was able to get it to run under Linux. Some features are not available under Linux as they are under Windows, such as the web-based UI - Raven Studio. Dynamic index functionality also did not seem to work under Linux for some reason.<br /><div><br />In my initial review I had failed to notice something. When the server application starts at the command line, you have a few options such as garbage collect, clear screen, and reset. When I had fired up the test application that I had created, I noticed, and then did not pay any attention to, the fact that the server was not sending responses to all of the calls that were being sent by the client. When this happened, the client froze until either I issued a reset command at the server command line, or the connection eventually timed out. Since reset seemed to get the ball rolling for the moment, I was not too concerned at that time.<br /><br /></div><div>Then, a couple of days ago, I decided to spend time to finish the task that I had started - to tweak RavenDB's code enough to get the database to fully work under Linux and then to use it in a project. My many efforts to identify the root cause of the issue were not at all fruitful.
My first instinct was to review the server log files. There was nothing useful to be found there.<br /><br />I thought that maybe the document Id generator was causing the hold up, so I provided manually generated Ids to my document objects. When that didn't solve the issue I started placing breakpoints all over the client library code and debugged the application one line at a time, stepping into every function call possible. I came across a piece of code that seemed like a promising lead.<br /><br /></div><div>It was in the Raven.Client.Lightweight project in the Connections/HttpJsonRequest.cs file at line 304.</div><div><br />var text = reader.ReadToEnd();</div><div><br />When control reached that statement, the execution froze until I issued the reset command at the server command line. It meant only one thing - there had to be a corresponding response writing activity on the server. I reviewed the code a couple of lines above to get some clues. I evaluated the response object on line 299 for its ResponseUri property.</div><div><br />ResponseHeaders = response.Headers;</div><div><br />The ResponseUri property was set to "".<br /><br />At the same time the server had the following output on the terminal:<br /><br /></div><div><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="" /></a></div><br />That was my clue. I needed to locate a responder that handled the "bulk_docs" url.</div><div>The Raven.Database project has a responder that handles "bulk_docs" under Server/Responders/DocumentBatch.cs.
Setting a breakpoint in the file on line 38 and following the control line by line led me to Server/HttpServer.cs on line 318, which always ended in an exception with a not-so-meaningful error message, and once this method was executed the control stopped.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="" /></a></div><br />I debugged further into FinalizeRequestProcessing, which led me to line 351.<br /><br /></div><div>ctx.FinalizeResonse();</div><div><br />This took me to Server/Abstractions/HttpListenerContextAdapter.cs line 82. The control disappeared once I tried to step further than line 86.<br /><br /></div><div>ResponseInternal.OutputStream.Flush();</div><div><br />I put a breakpoint in the catch block and sure enough there was an exception - an I/O exception of some sort - which was not handled. A couple of lines below was the smoking gun. Line 88 never executed in the event of an exception, and that was what was causing the server to hang on to the request.<br /><br /></div><div>ctx.Response.Close();</div><div><br />I made a slight tweak to this file and moved the two lines into the finally block.</div><div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="" /></a></div><br />I recompiled the server and there it was. RavenDB was no longer freezing on individual requests and dynamic indices were working as designed. RavenDB on Linux is now ready to be used in application development. Since RavenDB was created as an Open Source application, it was possible to review the code and troubleshoot this issue.
The issue seems not to affect Windows, as RavenDB is being used by <a href="" target="_blank">real world businesses</a>.<br /><br />The next thing to do would be to create a non-WPF replacement for Raven Studio for use on Linux.
http://feeds.feedburner.com/LinuxComputingContemplationAndMetacrisis
Code and ideas about COVID epidemics Project description Tools and experiments around COVID epidemics. The module must be compiled to be used in place: python setup.py build_ext --inplace Generate the setup in subfolder dist: python setup.py sdist Generate the documentation in folder dist/html: python -m sphinx -T -b html doc dist/html Run the unit tests: python -m unittest discover tests Or: python -m pytest To check style: python -m flake8 aftercovid tests examples The function check, or the command line python -m aftercovid check, checks that the module is properly installed and returns the processing time for a couple of functions; or simply: import aftercovid aftercovid.check()
https://pypi.org/project/aftercovid/
Created on 2014-06-30 17:08 by n8henrie, last changed 2014-07-23 16:04 by serhiy.storchaka. This issue is now closed. When using the new plistlib.load and the FMT_BINARY option, line 997: p = _FORMATS[fmt]['parser'](use_builtin_types=use_builtin_types) doesn't send the dict_type to _BinaryPlistParser.__init__ (line 601), which has dict_type as a required positional parameter, causing an error def __init__(self, use_builtin_types, dict_type): My first bugs.python.org report, hope I'm doing it right... Thanks for the report. Can you supply a test case and/or a fix patch? Ideally, the test case would be a patch to Lib/test/test_plistlib.py. If you're interested, there's more info here: Here is a patch. New changeset 09746dc1a3b4 by Serhiy Storchaka in branch '3.4': Issue #21888: plistlib's load() and loads() now work if the fmt parameter is New changeset 275d02865d11 by Serhiy Storchaka in branch 'default': Issue #21888: plistlib's load() and loads() now work if the fmt parameter is
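The fix is easy to confirm from user code: with a patched plistlib, passing fmt=FMT_BINARY explicitly must round-trip, where before the missing dict_type argument caused the reported error. A minimal check (Python 3.4+):

```python
import plistlib

payload = {"name": "example", "count": 3}

# Serialize to the binary plist format, then parse it back while
# forcing the format instead of letting loads() autodetect it --
# exactly the code path that used to fail.
blob = plistlib.dumps(payload, fmt=plistlib.FMT_BINARY)
result = plistlib.loads(blob, fmt=plistlib.FMT_BINARY)
print(result == payload)  # → True
```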
http://bugs.python.org/issue21888
6 years, 8 months ago. MSCUsbHost USB flash drive recording using Ticker interface problem I encountered weird problems with the MSCUsbHost and Ticker interface. It does not continue after the printf("file opening\n"); line. Your help will be appreciated. Thank you! #include "mbed.h" #include "MSCFileSystem.h" MSCFileSystem msc("msc"); DigitalOut led (LED1); Ticker usb; void usbflip() { printf("file opening\n"); FILE *fp = fopen( "/msc/test.txt", "a"); printf("file Checking error\n"); if ( fp == NULL ) { error("Could not open file for write\n"); } printf("file recording\n"); fprintf(fp, "recorded\n"); printf("file closing\n"); fclose(fp); } int main() { FILE *fp = fopen( "/msc/test.txt", "a"); if ( fp == NULL ) { error("Could not open file for write\n"); } fprintf(fp, "Recording\n"); fclose(fp); usb.attach(usbflip, 3); while(1) { led = !led; wait(1.001); } } 1 Answer 6 years, 8 months ago. The usbflip() callback is operating in handler (interrupt, or IRQ) context. Long processes like printf, and USB handling, should not be undertaken in handler context, because they block all subsequent interrupts - including the next Ticker interrupt, which will often be along before the last callback is processed. And USB needs interrupts, too. The best thing is to use the callback to just set a flag. Then, the main while(1) loop can check for the flag being set, and call a normal (non-IRQ) context function to write the file. I'm still an amateur in Mbed, mind typing out example code for reference? Thanks. posted 22 Jan 2013 You can follow Andy Kirkham's splendid tutorial on the proper use of interrupts, callbacks, and flags (trips). One of his examples shows exactly what to do in your case. posted 23 Jan 2013
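The accepted advice is a pattern rather than mbed-specific code: the interrupt callback only sets a flag, and the slow work (fopen, fprintf, printf) runs in the main loop in normal context. Purely as an illustration of that shape, here it is in Python with a POSIX signal standing in for the Ticker (the names are mine, not mbed's):

```python
import signal

due = False  # set in handler context, consumed by the main loop

def on_tick(signum, frame):
    # Handler: do the absolute minimum, then return.
    global due
    due = True

signal.signal(signal.SIGUSR1, on_tick)
signal.raise_signal(signal.SIGUSR1)  # stands in for the Ticker firing

# One pass of the "while(1)" loop: the heavy file work happens
# here, outside handler context, only when the flag is seen.
if due:
    due = False
    print("file recording")  # safe here; not inside the handler
```

In the mbed version the flag would be a `volatile` variable set in `usbflip()` and polled in `main()`'s loop before calling the file-writing function.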
https://os.mbed.com/questions/289/MSCUsbHost-USB-flask-drive-recording-usi/
Using an I2S Microphone - SPH0645LM4H I2S is a digital standard for transferring mono or stereo audio data. The SPH0645LM4H is an I2S MEMS microphone. It is available on a breakout board from Adafruit. One nice advantage is that unlike earlier analog MEMS microphones, no preamp and A/D input is needed. With digital signals from the microphone chip, any noise issues should also be minimized. I2S is not supported currently by mbed APIs, but an I2S library is available for the mbed LPC1768. I2S requires a special hardware interface on the processor and it uses two clock pins (i.e., a bit clock and a left/right clock) and one data pin. Any mbed platform other than the LPC1768 will require rewriting the I2S library functions for the other processor's I/O registers and I2S setup. There is a select pin that makes it easy to use two devices for a stereo microphone. I2S Microphone Demo Here is a quick I2S microphone demo using interrupts and the LEDs for an audio level meter. Wiring The pins are labeled on the bottom of the microphone breakout board. #include "mbed.h" //I2S Demo to display values from I2S microphone // //Only works on mbed LPC1768 - I2S is not an mbed API!
//Microphone used is SPH0645LM4H //see #include "I2S.h" //uses patched mbed I2S library from // //Needed a typo patch - clock was swapped p30/p15 in pin function setup code //"if (_clk == p15)" changed to "if (_clk != p15)" in I2S.cpp line 494 BusOut mylevel(LED4,LED3,LED2,LED1); //4 built in mbed LEDs display audio level #define sample_freq 32000.0 //Uses I2S hardware for input I2S i2srx(I2S_RECEIVE, p17, p16, p15); //p17(PWM value sent as serial data bit) <-> Dout //p16(WS) <-> LRC left/right channel clock //p15(bit clock) <-> BCLK volatile int i=0; void i2s_isr() //interrupt routine to read microphone { i = i2srx.read();//read value using I2S input mylevel = (i>>7) & 0xF; //Display Audio Level on 4 LEDs } int main() { i2srx.stereomono(I2S_MONO);//mono not stereo mode i2srx.masterslave(I2S_MASTER);//master drives clocks i2srx.frequency(sample_freq);//set sample freq i2srx.set_interrupt_fifo_level(1);//interrupt on one sample i2srx.attach(&i2s_isr);//set I2S ISR i2srx.start();//start I2S and interrupts while(1) { printf("%X\n\r",(i>>16)&(0x0000FFFF)); wait(0.5); } } Import program: i2s_microphone - I2S microphone demo for LPC1768 The value from the microphone is used to set up an audio level meter using the LPC1768's four built-in LEDs. The audio level is a bit low on the webcam recording, but the microphone is picking it up well.
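A note on the bit arithmetic in the listing above: the ISR's `(i>>7) & 0xF` keeps four bits of the sample to drive the four-LED bus, and the main loop's `(i>>16) & 0xFFFF` prints the top half-word. The same expressions, checked standalone in Python:

```python
def led_level(sample):
    # Mirrors the ISR: keep four bits of the sample as a coarse level
    # for the LED bus (0x0 = all off, 0xF = all four lit).
    return (sample >> 7) & 0xF

def printed_value(sample):
    # Mirrors the main loop's printf("%X", (i >> 16) & 0xFFFF).
    return (sample >> 16) & 0xFFFF

print(hex(led_level(0x780)))           # → 0xf
print(hex(printed_value(0x12340000)))  # → 0x1234
```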
https://os.mbed.com/users/4180_1/notebook/using-an-i2s-microphone---sph0645lm4h/
I'm trying to read the "from" and "subject" fields from the messages in a POP3 mailbox (I don't care about the content of the messages), and have cobbled together the following code from online examples: use Net::POP3; # $MAILSERVER,$MAILUSER,$MAILPASS defined here! # Constructors $pop = Net::POP3->new($MAILSERVER); if ($pop->login($MAILUSER, $MAILPASS) > 0) { my $msgnums = $pop->list; # hashref of msgnum => size foreach my $msgnum (keys %$msgnums) { my $head = $pop->top($msgnum,0); my ($subject, $from) = analyze_header($head); print "From: $from ; Subject: $subject \n"; } } $pop->quit; print "done.\n"; sub analyze_header { my $header_array_ref = shift; my $header = join "", @$header_array_ref; my ($subject) = $header =~ /Subject: (.*)/m; my ($from ) = $header =~ /From: (.*)/m; return ($subject, $from); } It works, insofar as it successfully logs in and reads the (two identical) messages, but this is the output I get: starting... From: "=?utf-8?B?Q2hyaXMgSHVudA==?=" <chris@example.com> ; Subject: =?utf-8?B?VGVzdA==?= From: "=?utf-8?B?Q2hyaXMgSHVudA==?=" <chris@example.com> ; Subject: =?utf-8?B?VGVzdA==?= done. The contents of the header fields appear (I assume) to be encoded somehow into UTF-8, but how do I decode them into something I can make sense of? I assume there must be some standard method of doing this, but none of the documentation I could find gives me any help.
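Those =?utf-8?B?...?= strings are RFC 2047 "encoded words": a charset, then B for base64 (or Q for quoted-printable), then the payload. In Perl the standard answer is Encode's MIME-Header encoding, e.g. decode('MIME-Header', $subject). The mechanics are language-independent; for instance, decoding the exact values from the output above with Python's standard library:

```python
from email.header import decode_header

def decode_mime(value):
    # decode_header splits the value into (bytes_or_str, charset) chunks;
    # join them, decoding byte chunks with their declared charset.
    return "".join(
        chunk.decode(charset or "ascii") if isinstance(chunk, bytes) else chunk
        for chunk, charset in decode_header(value)
    )

print(decode_mime("=?utf-8?B?VGVzdA==?="))          # → Test
print(decode_mime("=?utf-8?B?Q2hyaXMgSHVudA==?="))  # → Chris Hunt
```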
http://www.perlmonks.org/?node_id=1004630
Flash Player 9 and later, Adobe AIR 1.0 and later When you instantiate a display object, it will not appear on-screen (on the Stage) until you add the display object instance to a display object container that is on the display list. For example, in the following code, the myText TextField object would not be visible if you omitted the last line of code. In the last line of code, the this keyword must refer to a display object container that is already added to the display list. import flash.display.*; import flash.text.TextField; var myText:TextField = new TextField(); myText.text = "Buenos dias."; this.addChild(myText); When you add any visual element to the Stage, that element becomes a child of the Stage object. The first SWF file loaded in an application (for example, the one that you embed in an HTML page) is automatically added as a child of the Stage. It can be an object of any type that extends the Sprite class. Any display objects that you create without using ActionScript—for example, by adding an MXML tag in a Flex MXML file or by placing an item on the Stage in Flash Professional—are added to the display list. Although you do not add these display objects through ActionScript, you can access them through ActionScript. For example, the following code adjusts the width of an object named button1 that was added in the authoring tool (not through ActionScript): button1.width = 200;
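The visibility rule reduces to tree membership: an object renders only if walking its parent chain reaches the Stage. A language-neutral sketch of that check (Python, with hypothetical class names, purely to illustrate the display-list idea):

```python
class DisplayObject:
    """Anything that can be placed on the display list."""
    def __init__(self, name):
        self.name = name
        self.parent = None

class Container(DisplayObject):
    """A display object container, like Sprite or the Stage."""
    def __init__(self, name):
        super().__init__(name)
        self.children = []

    def add_child(self, child):
        child.parent = self
        self.children.append(child)

def on_display_list(obj, stage):
    # Visible iff the parent chain reaches the Stage.
    while obj is not None:
        if obj is stage:
            return True
        obj = obj.parent
    return False

stage = Container("Stage")
my_text = DisplayObject("myText")
print(on_display_list(my_text, stage))  # → False: created, never added
stage.add_child(my_text)
print(on_display_list(my_text, stage))  # → True
```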
http://help.adobe.com/en_US/as3/dev/WS5b3ccc516d4fbf351e63e3d118a9b90204-7dff.html
VS 2005 - C# - open cascade and windows forms app example Submitted by Kamil on 22 June, 2009 - 21:32 Forums: Hello! I'm a student and I'm trying to create a Windows Forms application in C# which uses the Open CASCADE lib, but I can't manage it. Does anybody have an example of a similar application? Please help me! Hi Kamil, there is a C# example shipped with the OCC distribution. Have you checked that? Pawel Yes I did, but there is a lot of source code and it doesn't work correctly. It compiles but returns some exceptions. I'm using VS 2005 and 2008 I've managed to run the C# sample, though it was not easy. First I fused the Shell project into the OCC project, then copied all OCC DLLs to the debug directory, and only then it worked... and don't forget to set the path to tkopengl.dll Hi, I want to use the OCC and Shell projects that are available in the C# sample... how can I do that? Basically I want to use the OCC and Shell projects in my current C# project. I tried a lot but had no luck getting it to work... please help me... Thanks Hi Kamil, I would suggest searching the forum. There have already been some posts dealing with running the sample. Pawel Kamil, you can use narocadwrapper. When you are creating a project in VS you just need to add the DLL to the references of your project. Then you have most of the functions of OCC Yes, I did it and everything works fine :) Hi! I'm a student and I'm trying to develop a C# application. Can someone write a very little tutorial on how to embed and use the narocad wrapper? [in Visual Studio or in SharpDevelop] If you installed OCC you have the folder \OpenCascade\win32\samples\standard\c#\IE. In the folder you have a sample project with a 3D view written in C# and C++. It contains its own wrapper, but narocadwrapper is better. You create a new project of a DLL library with a class Viewer that inherits from the UserControl class; this is your 3D viewer control.
After you finish, you add your control to the Tool Box of VS2005 ("Tools >> Choose Toolbox Items"; in .NET Framework components you click Browse, then choose the location of your DLL with the control inside). The last thing is to drag your control onto a window. Thanks!! Hi, If you download the NaroCad project there are two simple projects in trunk\wrappers\tests\. One of them is the MakeBottle sample ported to these wrappers. Another complete and fully working visual project that you might use for a start is located in branches\newtestproject\OpenCascadeResearch\. Mihai OK, I have embedded the view user control! The MakeBottle example is very complex and I only need to draw lines, circles and write some text. How can I do it? I only have to add simple entities to my drawing. Could anyone help me? For example: using System; using System.Windows.Forms; using OCNaroWrappers; namespace test { public partial class MainForm : Form { private ViewUserControl control; public MainForm() { InitializeComponent(); control = new ViewUserControl(); control.Dock = DockStyle.Fill; this.Controls.Add(control); double x1 = 0; double x2 = 10; double y1 = 0; double y2 = 10; OCgp_Pnt aPnt1 = new OCgp_Pnt(x1,y1,0); OCgp_Pnt aPnt2 = new OCgp_Pnt(x2,y2,0); OCGC_MakeSegment makeSegment1 = new OCGC_MakeSegment(aPnt1, aPnt2); // What do I have to write to add this segment? control.ZoomAll(); } } } Hi, I think you have two problems here: - create a window that embeds the OCC view - 2nd: add your shape to the context About the first one, the code that NaroCAD uses for this may be found here: About adding your shape to the OCC context, the code may be the following: OCTopoDS_Shape shape = makeSegment1.Shape(); OCAIS_Shape interactive = new OCAIS_Shape(shape); context.Display(interactive, true); So all you need is to obtain the TopoDS_Shape and to add it to the OCC context. Bests, Ciprian Thanks! I have done it!
But this command: OCTopoDS_Shape shape = makeSegment1.Shape(); gives me this error: 'OCNaroWrappers.OCGC_MakeSegment' does not contain a definition for 'Shape' etc. etc... What's happening? OCGC_MakeSegment::Value() returns an OCGeom_TrimmedCurve object type. OCGeom_TrimmedCurve trimCurve = makeSegment1.Value(); OCBRepBuilderAPI_MakeEdge edgeMaker = new OCBRepBuilderAPI_MakeEdge(trimCurve); OCTopoDS_Shape shape = edgeMaker.Shape(); OCAIS_Shape interactive = new OCAIS_Shape(shape); context.Display(interactive, true); Thanks! It's perfect! But where can I find more information about these procedures? I wouldn't want to ask about every single entity that I have to draw! Thanks a lot and sorry for my English! The procedures are documented based on the OpenCascade documentation and headers. The NaroCAD project only provides (fairly complete but still partial) wrappers. So if you have OpenCascade code, you can convert the code mostly by writing: using OCNaroWrappers; and putting OC before every OpenCascade class. So if your code is like this: gp_Pnt a (10, 20, 10); the C# code will be: OCgp_Pnt a = new OCgp_Pnt(10, 20, 10); For the rest, the code follows the OpenCascade "look and feel". Thanks! I added new shapes and I changed the colors! I still need help with two things: adding TEXT, and adding a control for user interaction such as Zoom and Pan (at least the Window Zoom). If anyone can help me, I would be very grateful.
Hi, for Pan there is no problem. Add to your project the events OnMouseDown and OnMouseMove. For the OnMouseDown event, store e.X and e.Y in two different variables (Lx and Ly): { Lx = e.X; Ly = e.Y; } Now for the OnMouseMove event use this: { view.Zoom(Lx, Ly, e.X, e.Y); Lx = e.X; Ly = e.Y; } It works perfectly. I also have the problem of displaying text, so if you find a solution please contact me Futate2001@yahoo.fr Sorry, I was talking about Zoom. For Pan use the following in OnMouseMove: { View.Pan(e.X - Lx, Ly - e.Y, 1); Lx = e.X; Ly = e.Y; } Sorry for the mistake. So the problem persists for displaying text. Good luck
https://www.opencascade.com/content/vs-2005-c-open-cascade-and-windows-forms-app-example
I have a few questions relating to setjmp/longjmp usage - The C99 spec gives: If the return is from a direct invocation, the setjmp macro returns the value zero. If the return is from a call to the longjmp function, the setjmp macro returns a nonzero value. So the answer to 1 is that a zero indicates you have called setjmp the first time, and non-zero indicates it is returning from a longjmp. It pushes the current program state. After a longjmp, the state is restored, control returns to the point it was called, and the return value is non-zero. There are no exceptions in C. It's sort-of similar to fork returning different values depending whether you're in the original process, or a second process which has inherited the environment, if you're familiar with that. try/ catch in C++ will call destructors on all automatic objects between the throw and the catch. setjmp/ longjmp will not call destructors, as they don't exist in C. So you are on your own as far as calling free on anything you've malloced in the mean time. With that proviso, this: #include <stdio.h> #include <setjmp.h> #include <string.h> #include <stdlib.h> void foo ( char** data ) ; void handle ( char* data ) ; jmp_buf env; int main () { char* data = 0; int res = setjmp ( env ); // stored for demo purposes. // in portable code do not store // the result, but test it directly. 
printf ( "setjmp returned %d\n", res ); if ( res == 0 ) foo ( &data ); else handle ( data ); return 0; } void foo ( char** data ) { *data = malloc ( 32 ); printf ( "in foo\n" ); strcpy ( *data, "Hello World" ); printf ( "data = %s\n", *data ); longjmp ( env, 42 ); } void handle ( char* data ) { printf ( "in handler\n" ); if ( data ) { free ( data ); printf ( "data freed\n" ); } } is roughly equivalent to #include <iostream> void foo ( ) ; void handle ( ) ; int main () { try { foo (); } catch (int x) { std::cout << "caught " << x << "\n"; handle (); } return 0; } void foo ( ) { printf ( "in foo\n" ); std::string data = "Hello World"; std::cout << "data = " << data << "\n"; throw 42; } void handle ( ) { std::cout << "in handler\n"; } In the C case, you have to do explicit memory management (though normally you'd free it in the function which malloc'd it before calling longjmp as it makes life simpler)
https://codedump.io/share/okles2KeNEDw/1/exception-handling-in-c---what-is-the-use-of-setjmp-returning-0
Opened 9 years ago Closed 8 years ago #1866 closed Bug (Wont Fix) GUI $WS_EX_MDICHILD Top Alignment bug Description #include <WindowsConstants.au3> $parent = GUICreate("Parent", 50, 50, 0, -3, $WS_POPUP, $WS_EX_LAYERED) $child = GUICreate("Child", 50, 50, 0, 5, $WS_POPUP, $WS_EX_MDICHILD, $parent) GUISetBkColor("0x000000") GUISetState() While 1 WEnd Results in a window in the middle of the screen. The same happens for main-window/child-window top pairs 2,0; 0,2; -1,3; -2,4; etc. If the parent top plus the child top equals 2, the window is drawn in the middle of the screen. Attachments (0) Change History (1) comment:1 Changed 8 years ago by trancexx - if anyone sees any sense in this then please make another report.
https://www.autoitscript.com/trac/autoit/ticket/1866
Hi, I'm trying to extract the colors of the pixels from a texture, but I'm getting an error. My code is: Color[] data = new Color[texture.Width * texture.Height]; texture.GetData(data); And the error is: HRESULT: [0x887A0005], Module: [SharpDX.DXGI], ApiCode: [DXGI_ERROR_DEVICE_REMOVED/DeviceRemoved], Message: Unknown If anyone has any idea what may be happening. Thanks in advance. Give more details of your test hardware... 8 GB of RAM, 4th-gen i5, GTX 650 1 GB But I also get the error on my two laptops, on my other desktop PC and on the office PC, so I'm not sure it's a hardware problem. I'm not sure this is the way to use GetData. At least I've never seen it used this way with an array of Color. Have you tried with an array of bytes? sizeX * sizeY * bytes per channel I think I cannot understand what you are saying... Can you clarify? Try with an int array: int[] data = new int[texture.Width * texture.Height]; texture.GetData(data); I just tried this in a test project I have on my computer and didn't get any runtime errors. My code is no different than yours... _texture = this.Content.Load<Texture2D>("sprite"); Color[] data = new Color[_texture.Width * _texture.Height]; _texture.GetData<Color>(data); Where are you calling this code? The error says device removed; are you doing it either before your device is created, or after it's been disposed? It is not a problem of how or where I call the code. I ended up trying other PCs (apart from the ones I had already tried), and on some it gives the error and on others it does not.
Can you boil this down to a small program that you can pass around, then paste the code for that? Also, if you've got hardware that you can 100% reproduce this on, it might be a good idea to grab the code for MonoGame, build it, then attach a debugger to your program and see if you can get any insights. Sorry you're having such a strange issue... things like this are extra frustrating.

Device removed is a strange error for this. Did you try adding the Color typecast? Where is this being called from, and are you doing anything unorthodox with the graphics device object? Try a barebones test case: a new project, just with those two extra lines and the initialization of the texture.

It is the same error on some PCs and not on others. I leave the code...

    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;
    using Microsoft.Xna.Framework.Input;

    namespace Game4
    {
        /// <summary>
        /// This is the main type for your game.
        /// </summary>
        public class Game1 : Game
        {
            GraphicsDeviceManager graphics;
            SpriteBatch spriteBatch;
            Texture2D texture;

            // ... (surrounding template methods were lost in the archived post)
            texture = Content.Load<Texture2D>("sign");
            Color[] data = new Color[texture.Width * texture.Height];
            texture.GetData(data);
            // ...
        }
    }

You really need to give details of which systems it does work on and which it does not. Have they got integrated graphics, dedicated graphics, or a mixture of both? I am suspecting the ones the error occurs on are laptops which have dual graphics systems... EDIT: Simply saying it works on one and not the other, I would pick a needle in a haystack over that description any day!

Hmmm. Here is a link that explains the error code: I had such an error, if I remember correctly, when I was toying around with texture formats and the resulting GetData call wanted to access texture coordinates that were out of bounds. I don't see that's the case with your code, and I didn't encounter that error a second time... Color should be the default for a loaded texture, shouldn't it?
What's the format of the texture in the pipeline tool? Is that format available on every GPU driver? (I know that Color is...)

Try moving the function out of Load and calling it with a keypress. Try saving the texture as a PNG and see if you get an error. You could also try to wrap the call with a try/catch to attempt to get extra info. Try using a small texture. ...You usually get a System array access violation for out of bounds. I thought invalid format too at first, but Color should work, as MonoGame converts everything to Color, I think. I'm pretty sure I got this error too, but I can't remember why either. Device errors usually mean it's the card complaining. Sorry, I can't test it as my computer is down at the moment.

The problem is that I remember that I got some errors like that from guys having broken GPU installations. Not many though... 3 or 4 at most (out of 780), and they 'fixed' them by re-installing the drivers or throwing away the old cards. I was not happy with that, since I wanted to fix bugs and not advise my customers to change gear. And the problem I was referring to is that I don't remember if that error was among those I just mentioned or not. Sorry.

In the original project I call the function in Update, when an event is triggered, and I always use PNG. I wanted to use GetData for pixel-to-pixel collisions; in the end I implemented them with a bitmap, but I am forced to keep textures outside the Content pipeline to build the collision map for the textures that use that type of collision. In short, another mystery.

That error is an odd one that I have only come across once before. I never found the exact cause, but I believe it was because the GPU was crashing due to a texture format that we were using. The GPU crashed, therefore the OS thought the graphics card had been removed. This was several years ago, but I seem to recall that we were using a 16-bit surface format for render targets on a Surface RT tablet.
Changing it to Color seemed to make the error go away, but it was a shame to lose all that memory just to avoid a GPU crash.

Also, turn on the DirectX Debug Layer to see if it shows any extra information. Open your Start menu (tap the Windows key) and type 'dxcpl'. Press Enter. Click on "Edit List..." and add your executable. Enable native code debugging in your project properties. Debug the game in VS now and you will get a bunch of extra output in the Output window. This may help diagnose the cause. Remember to remove the application from the debug list in the DirectX Control Panel afterwards.
http://community.monogame.net/t/getdata/10385
As an experiment, I decided to try this out. I was thinking the other day it would be nice if I could upload a CoffeeScript file to our CMS at work and have it generate a corresponding JS file that I can refer to in the views. This is actually pretty straightforward if you have NodeJS installed. The NodeJS installer also comes with the NPM package manager, which you can use to install CoffeeScript:

    npm install -g coffee-script

The most important thing here is to take note of where the binary for CoffeeScript sits. I used the where command to figure this out:

    where coffee

Then all you need is to create a command-line C# project in Visual Studio and use the Process class:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.Diagnostics;

    namespace CoffeeConsoleTest
    {
        class Program
        {
            static void Main(string[] args)
            {
                var p = new Process();
                p.StartInfo.UseShellExecute = false;
                // change your path to whatever resulted from the where command usage
                p.StartInfo.FileName = @"C:\Users\aaron\AppData\Roaming\npm\coffee.cmd";
                p.StartInfo.Arguments = "-cp test.coffee";
                p.Start();
                Console.ReadLine();
            }
        }
    }

That will output the compiled CoffeeScript using standard out. You can of course store it into a string and use a StreamWriter to store it into a file, like I want to do. To grab the output, you will need to use a StringBuilder and a loop like this to grab all the content:

    StringBuilder builder = new StringBuilder();
    while (!p.HasExited)
    {
        builder.Append(p.StandardOutput.ReadToEnd());
    }
    string output = builder.ToString();

    using (StreamWriter javaScriptFile = new StreamWriter("test.js"))
    {
        javaScriptFile.Write(output);
    }

Be sure to set the following flag as well:

    p.StartInfo.RedirectStandardOutput = true;

And that's it. Here's my code in full to check out, in case you have any issues:
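For comparison, the same launch-and-capture pattern in Python, as a hedged sketch (not from the original post): `sys.executable` stands in for the `coffee.cmd` path so the snippet is self-contained.

```python
import subprocess
import sys

# Stand-in for launching coffee.cmd with "-cp test.coffee": any process
# that writes the "compiled" result to standard out works for the demo.
result = subprocess.run(
    [sys.executable, "-c", "print('var x = 42;')"],
    capture_output=True,  # the equivalent of RedirectStandardOutput = true
    text=True,
)

compiled = result.stdout  # what p.StandardOutput.ReadToEnd() returns in C#
with open("test.js", "w") as f:
    f.write(compiled)
```

The key point carries over directly: redirect standard output before starting the process, then read it to the end once the process has exited.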
http://agmprojects.com/blog/2012/3/5/compiling-coffeescript-from-c.html
For instance, an increase in size results in more network bandwidth required to send/retrieve the equivalent XML content, in addition to the increase in memory space required to store the XML locally, and the increase in time required for the XML parser to process the stream. XML optimization provides a report showing relevant figures to play with (see the screen capture above). With this report in hand, XML producers may choose to use dedicated XML automation tools to transform XML streams according to defined rules. XML producers may find it even more appropriate to redesign the whole XML metadata. The figures have been calculated and are displayed in the report because they are meaningful for almost any kind of XML stream, i.e. each could mean a substantial change in size or design. I tested over 50 XML files before coming up with these figures. XML optimization is new stuff. Before writing this article, I browsed through public internet sites, newsgroups and even quite a bunch of research papers, and I haven't found a single topic addressing it. Amazingly enough, I believe this is not only of interest in the real world (when you know that every company out there in the high-tech industry now uses XML somehow), it is as crucial as database tuning tools or network tuners. Why isn't this part of leading XML tools (XmlSpy, XMetaL, MSXML, the .NET API)? I don't know; maybe developers are content enough with their use of XML without really seeing the impact of using XML instead of binary file formats and standard databases. The remainder of this article can be broken down into the following sections (reflecting the sections in the HTML report):

Though the meaning of nb lines, nb elements and nb comments is obvious, it is of interest to know what the effects are on an XML stream with a high nb comments ratio. XML producers usually add comments above, in, or below the actual XML elements to explain the hierarchy and underlying design.
But what they don't know is that in a lot of content management server (CMS) software, the XML is left as is and sent to clients without removing these unnecessary comments, resulting in the transported data often being 10% larger compared to the size without comments. In this case, XML producers are more than encouraged to slim down their XML code. NB: CDATA sections and nb processing instructions play a similar role to nb comments. Nb namespaces used is interesting as it reflects whether elements, attributes, and even the data itself use a lot of prefixes, which in turn may significantly increase the size of the XML stream. For the report to be really useful, figures are often displayed both as absolute values and as percentages.

This reverse-engineers the XML stream hierarchy by just processing the stream (it never reads the DTD, if any), giving both parent/children relationships and datatypes when they are recognized (including floats, integers, currencies, dates, URLs, and emails). What for? Reverse-engineering the structure pattern is not only a unique feature, it reveals a lot about whether the XML is designed "vertically" (lots of elements), "horizontally" (lots of attributes), or somewhat diagonally. The structure pattern is a preliminary block that must be displayed before proceeding to the next topics because it simplifies figuring out the design. Distinct patterns tells whether there is more than one main pattern in the XML stream. Pattern occurrences, pattern height (amount of lines) and pattern size (in bytes) show the key characteristics of the main structure pattern. These figures are worth mentioning by themselves, but are also preliminary to the next figure. Now what is flattening patterns? That's what is obtained by replacing child elements with attributes, where possible.
Following is a sample before and after flattening:

    <person>
      <firstname>John</firstname>
      <lastname>Lepers</lastname>
    </person>

    <person firstname="John" lastname="Lepers"/>

Flattening the patterns makes use of what is known in the W3C XML norm as empty-element tags, i.e. tags with no separate closing-tag counterparts, thus reducing the size by significant amounts. Flattening patterns has a lot of interesting effects:
1. Because the hierarchy is flat, the parsing will be faster.
2. It is much easier to do a diff on XML streams with flattened patterns.

The depth we are talking about is the element depth in the hierarchy, i.e. "1" for the root element, "2" for the direct children, and so on. A measure usually comes with figures such as: the minimum value over the whole XML stream, the maximum value, the average value, and the standard deviation. A large standard deviation value means that the XML stream intensively uses indentation, <, > and end tags, which in turn increase the size. To better reveal the depth, we also list the amount of elements at any given depth. The depth measure is visually displayed using a bar chart (numerical figures in a list often hide the trend).
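To make the flattening transformation concrete, here is a small Python sketch using only the standard library (an illustration of the idea, not the article's actual Expat-based tool): text-only, attribute-less children are promoted to attributes of their parent.

```python
import xml.etree.ElementTree as ET

def flatten(elem):
    """Promote text-only, attribute-less child elements to attributes."""
    for child in list(elem):
        if len(child) == 0 and not child.attrib and child.text:
            elem.set(child.tag, child.text.strip())
            elem.remove(child)
        else:
            flatten(child)  # recurse into structured children

before = "<person><firstname>John</firstname><lastname>Lepers</lastname></person>"
root = ET.fromstring(before)
flatten(root)
after = ET.tostring(root, encoding="unicode")
print(after)  # e.g. <person firstname="John" lastname="Lepers" />
print(len(before) - len(after), "bytes saved")
```

A real tool would also have to decide what to do with repeated child names and mixed content, which this sketch ignores.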
For those interested in how the chart is built using JavaScript code, read what follows:

    // usage
    var tabheight = 120;
    var tabdata = new Array(1,15,83,159);          // y-axis
    var tabtips = new Array("01","02","03","04");  // x-axis
    showChart_Max("<p class='m1'>Depth histogram chart</p>", tabheight,
                  "#4488DD", tabdata, tabtips);

    // chart library 1.0 - Stephane Rodriguez - free software
    function showChart_Max(title, height, color, data, datatips)
    {
        // don't go too far if no data were passed
        if (data.length==0 || data.length!=datatips.length)
            return;

        // calculate min, max and average
        var max = data[0];
        var min = data[0];
        for (i=0; i<data.length; i++)
        {
            c = data[i];
            if ( max<c ) max = c;
            if ( min>c ) min = c;
        }
        var average = (min+max)/2;
        average = Math.floor(100*average)/100;

        // output table header
        document.writeln ("<table height='"+height+"' cellpadding='0' " +
            "cellspacing='0' border='0'>");
        document.writeln ("<tr><td valign='center'><font size='-1'>max=" +
            max+"</font></td>");

        // output data according to max
        for (i=0; i<data.length; i++)
        {
            dataportion = height * data[i] / max;  // height of bar
            voidportion = height - dataportion;    // void between top of the bar
                                                   // and top of the table

            document.writeln ("<td height='129' width='15' rowspan='5'> </td>");
            document.writeln ("<td width='15' rowspan='5'>");
            document.writeln (" <table width='100%' cellpadding='0' " +
                "cellspacing='0' border='0'>");
            document.writeln (" <tr><td height='"+voidportion+"'></td></tr>");
            document.writeln (" <tr><td height='"+dataportion+"' " +
                "bgcolor='"+color+"'></td></tr></table>");
            document.writeln ("</td>");
        }
        document.writeln ("</tr>");

        // output min, max and average in first column (rowspan)
        document.writeln ("<tr><td> </td></tr>");
        document.writeln ("<tr><td><font size='-1'>avg="+average+
            "</font></td></tr>");
        document.writeln ("<tr><td> </td></tr>");
        document.writeln ("<tr><td><font size='-1'>min="+min+"</font></td></tr>");
        document.writeln ("<tr><td valign='center'></td>");

        // output data according to max
        for (i=0; i<data.length; i++)
        {
            j=i+1;
            document.writeln ("<td width='15'> </td>");
            if (datatips.length==0)
                document.writeln ("<td width='15'><font size='-1'>"+j +
                    "</font></td>");
            else
                document.writeln ("<td width='15'><font size='-1'>" +
                    datatips[i]+"</font></td>");
        }
        document.writeln ("</tr>");

        if (title!="")
            document.writeln ("<caption valign='bottom'>"+title+"</caption>");

        document.writeln ("</table><br><br>");
    }

Element and attribute names are usually chosen so they are self-descriptive. While this looks like an advantage, it has an overhead on size just because, even in English, keywords enclosing content statistically take significant space, resulting in a great contribution to the overall stream size. This can be avoided by enforcing a new naming strategy, described below. An element or attribute name is any combination of letters and digits. With that in hand, why not make these names as short as possible? Let us take an example:

    <?xml version="1.0" encoding="ISO-8859-1"?>
    <!DOCTYPE Bookstore SYSTEM "bookshop.dtd">
    <Bookstore>
      <!-- J&R Booksellers Database -->
      <Book Genre="Thriller" In_Stock="Yes">
        <Title>The Round Door</Title>
      </Book>
    </Bookstore>

Let's build a map of name pairs:

    Bookstore becomes A
    Book becomes B
    Genre becomes C
    In_Stock becomes D
    Title becomes E

So we get the following equivalent XML document:

    <?xml version="1.0" encoding="ISO-8859-1"?>
    <!DOCTYPE Bookstore SYSTEM "bookshop_A.dtd">
    <A>
      <!-- J&R Booksellers Database -->
      <B C="Thriller" D="Yes">
        <E>The Round Door</E>
      </B>
    </A>

Similarly to depth, the node naming strategy is also visually reflected using a bar chart, so we see the trend. The gain resulting from applying the smart node naming strategy to the XML stream is calculated. That's often 30% or more, which is very significant. The Structure attributes indicator reveals how uniformly attributes are dispatched within elements.
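The renaming strategy is easy to prototype. Below is a hedged Python sketch (standard library only, my own illustration rather than the article's tool) that walks the bookstore document in order, assigns A, B, C, ... to element and attribute names, and measures the gain:

```python
import itertools
import string
import xml.etree.ElementTree as ET

def short_names():
    # Yields A, B, ..., Z, AA, AB, ... so any number of names can be mapped.
    for n in itertools.count(1):
        for combo in itertools.product(string.ascii_uppercase, repeat=n):
            yield "".join(combo)

def rename(root):
    """Map every element and attribute name to a short name; return the map."""
    gen = short_names()
    mapping = {}
    def get(name):
        if name not in mapping:
            mapping[name] = next(gen)
        return mapping[name]
    for elem in root.iter():                # document order
        elem.tag = get(elem.tag)
        elem.attrib = {get(k): v for k, v in elem.attrib.items()}
    return mapping

doc = ('<Bookstore><Book Genre="Thriller" In_Stock="Yes">'
       '<Title>The Round Door</Title></Book></Bookstore>')
root = ET.fromstring(doc)
mapping = rename(root)
renamed = ET.tostring(root, encoding="unicode")
saving = len(doc) - len(renamed)
print(mapping)  # {'Bookstore': 'A', 'Book': 'B', 'Genre': 'C', ...}
print(saving, "bytes saved")
```

Run against the bookstore sample above, it produces exactly the A-E mapping shown in the article, and the byte saving is the kind of figure the report sums over the whole stream.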
Besides the standard amount of attributes per element (with min, max, mean and standard deviation) is the disorder ratio. The disorder ratio attempts to show whether attributes are listed in the same order or not with respect to element occurrences. That's of course an average, because each element may have any number of associated attributes. According to the W3C XML norm, there is no special ordering between attributes; it is simply a good habit to have attributes always follow the same order.

XML namespaces are declared by using a special attribute of the form xmlns:supplier="" and refer to a set of element and attribute names with a dedicated semantic meaning. Elements and attributes with namespaces are prefixed by the namespace, for instance supplier:orderID. Namespaces are not required in XML streams, but they carry special meanings and may simplify data binding, as long as the namespaces' real meanings are made public and available to everyone. Any number of namespaces can be used, not only one. A namespace must always be declared before it is used. The URL used for the declaration is a fake URL here, just for global-uniqueness purposes. Below is a sample for the supplier namespace:

    <?xml version="1.0" encoding="ISO-8859-1"?>
    <Orders xmlns:
      <Order date="AA/45/10" supplier:
        <Id>NBZYSJSGSIAUSYGHBXNBJDUIUYE</Id>
      </Order>
    </Orders>

When namespaces are used, the report shows the ratio of namespace use and the list of namespaces. Not only does using namespaces (or not) strongly change the underlying XML design, it also has an effect on the node naming strategy, and in turn on the overall size of the XML stream.

Content size in element or attribute values exhibits a trend which can be described using the minimum size, maximum, average, and standard deviation. In addition, the ratio of elements and attributes with no values is shown. If the ratio is high, it is easy to question whether the design of the metadata is good. A somewhat odd indicator is the ratio of multiple-part values.
Below are two samples of multiple-part values for the <book> element:

    <book>
    The name of this book is so inadequate for a general audience
    that it has been decided not to print it.
    </book>
    ...
    <book>The Round Door
      <year>1999</year>
      <price>20$</price>Part II
    </book>

Content correlation is an in-depth examination of lists of values that reveals valuable things. The first indicator is related to duplication, or how often the same values appear again and again; it includes max, average and standard deviation. The second indicator is a ranking; it shows the most seen value across all lists of values.

Indentation is often used in XML streams, as they are often designed and read by humans. But indentation produces a significant increase in size. The report shows the new size of the XML stream without any indentation at all. That's often 30%.

Out of the many figures from the HTML report, several deserve some introductory explanations:

Syntax:

    single file:
        betterxml <your file>
        betterxml bookshop.xml
        betterxml c:\mydir\bookshop.xml

    whole directory:
        betterxml -d <your directory>
        betterxml -d c:\tmp\repository

Technically, the tool is based on James Clark's Expat (royalty-free SAX parser). The executable, which is a report generator on top of a static library, can be divided into three.
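The indentation figure is simple to approximate yourself. Here is a rough, stdlib-only Python sketch (my own illustration, with the simplifying assumption that whitespace between tags is pure indentation, which would mangle mixed content):

```python
import re

def size_without_indentation(xml_text):
    # Collapse whitespace runs that sit between a '>' and the next '<',
    # i.e. inter-tag newlines and indentation.
    return len(re.sub(r">\s+<", "><", xml_text))

doc = """<person>
    <firstname>John</firstname>
    <lastname>Lepers</lastname>
</person>"""

stripped_size = size_without_indentation(doc)
print(len(doc), "->", stripped_size)
print("gain: {:.0%}".format(1 - stripped_size / len(doc)))
```

On deeply nested, heavily indented streams this kind of measure approaches the 30% figure quoted above.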
https://www.codeproject.com/Articles/2882/XML-optimization?fid=4734&df=90&mpp=10&sort=Position&spc=None&tid=588052
*usr_02.txt*	Nvim

On Unix you can type this at any command prompt. If you are running Microsoft Windows, open a Command Prompt. |:Tutor|

"...really want to do this." If you want to continue editing with Vim: the ":e!" command reloads the original version of the file.

==============================================================================
*02.8*	Finding help

Everything you always wanted to know can be found in the Vim help files. Don't be afraid to ask! If you know what you are looking for, it is usually easier to search for it using the help system instead of using Google, because the subjects follow a certain style guide. The help also has the advantage of belonging to your particular Vim version: you won't see help for commands added later, which would not work for you.

Below: |help-summary|

1) Use CTRL-D after typing a topic and let Vim show all available topics. Or press Tab to complete:
	:help some<Tab>
   More information on how to use the help:
	:help helphelp

2) Follow the links in bars to related help. You can go from the detailed help to the user documentation, which describes certain commands more from a user perspective and in less detail. E.g. after:
	:help pattern.txt
   you can see the user guide topics |03.9| and |usr_27.txt| in the introduction.

3) Options are enclosed in single apostrophes. To go to the help topic for the 'list' option:
	:help 'list'
   If you only know you are looking for a certain option, you can also do:
	:help options.txt
   to open the help page which describes all option handling, and then search using regular expressions, e.g. textwidth. Certain options have their own namespace, e.g.:
	:help cpo-<letter>
   For the corresponding flag of the 'cpoptions' setting, substitute <letter> by a specific flag, e.g.:
	:help cpo-;
   And for the 'guioptions' flags:
	:help go-<letter>

4) Normal mode commands do not have a prefix. To go to the help page for the "gt" command:
	:help gt

5) Insert mode commands start with i_.
   Help for deleting a word:
	:help i_CTRL-W

6) Visual mode commands start with v_. Help for jumping to the other side of the Visual area:
	:help v_o

7) Command line editing and arguments start with c_. Help for using the command argument %:
	:help c_%

8) Ex-commands always start with ":", so to go to the ":s" command help:
	:help :s

9) Commands specifically for debugging start with ">". To go to the help for the "cont" debug command:
	:help >cont

10) Key combinations. They usually start with a single letter indicating the mode for which they can be used. E.g.:
	:help i_CTRL-X
    takes you to the family of CTRL-X commands for insert mode, which can be used to auto-complete different things. Note that certain keys will always be written the same, e.g. Control will always be CTRL. For normal mode commands there is no prefix and the topic is available at :h CTRL-<Letter>. E.g.:
	:help CTRL-W
    In contrast,
	:help c_CTRL-R
    will describe what CTRL-R does when entering commands in the Command line, and
	:help v_CTRL-A
    talks about incrementing numbers in Visual mode, and
	:help g_CTRL-A
    talks about the "g<C-A>" command (i.e. you have to press "g" then <CTRL-A>). Here the "g" stands for the normal command "g", which always expects a second key before doing something, similar to the commands starting with "z".

11) Regexp items always start with /. So to get help for the "\+" quantifier in Vim regexes:
	:help /\+
    If you need to know everything about regular expressions, start reading at:
	:help pattern.txt

12) Registers always start with "quote". To find out about the special ":" register:
	:help quote:

13) Vim Script is available at
	:help eval.txt
    Certain aspects of the language are available at :h expr-X, where "X" is a single letter. E.g.:
	:help expr-!
    will take you to the topic describing the "!" (Not) operator for Vim Script. Also important is
	:help function-list
    to find a short description of all functions available.
    Help topics for Vim script functions always include the "()", so:
	:help append()
    talks about the append() Vim script function rather than how to append text in the current buffer.

14) Mappings are talked about in the help page |map.txt|. Use
	:help mapmode-i
    to find out about the |:imap| command. Also use :help map-topic to find out about certain subtopics particular to mappings, e.g.:
	:help :map-local
    for buffer-local mappings, or
	:help map-bar
    for how the '|' is handled in mappings.

15) Command definitions are talked about at :h command-topic, so use
	:help command-bar
    to find out about the '!' argument for custom commands.

16) Window management commands always start with CTRL-W, so you find the corresponding help at :h CTRL-W_letter. E.g.:
	:help CTRL-W_p
    for moving to the previously accessed window. You can also access
	:help windows.txt
    and read your way through if you are looking for window handling commands.

17) Use |:helpgrep| to search in all help pages (and also those of any installed plugins). See |:helpgrep| for how to use it. To search for a topic:
	:helpgrep topic
    This takes you to the first match. To go to the next one:
	:cnext
    All matches are available in the quickfix window, which can be opened with:
	:copen
    Move around to the match you like and press Enter to jump to that help.

18) The user manual. This describes help topics for beginners in a rather friendly way. Start at |usr_toc.txt| to find the table of contents (as you might have guessed):
	:help usr_toc.txt
    Skim over the contents to find interesting topics. The "Digraphs" and "Entering special characters" items are in chapter 24, so to go to that particular help page:
	:help usr_24.txt
    Also, if you want to access a certain chapter in the help, the chapter number can be accessed directly like this:
	:help 10.1
    which goes to chapter 10.1 in |usr_10.txt| and talks about recording macros.

19) Highlighting groups always start with hl-groupname. E.g.:
	:help hl-WarningMsg
    talks about the WarningMsg highlighting group.
20) Syntax highlighting is namespaced to :syn-topic. E.g.:
	:help :syn-conceal
    talks about the conceal argument for the ":syn" command.

21) Quickfix commands usually start with :c, while location list commands usually start with :l.

22) Autocommand events can be found by their name:
	:help BufWinLeave
    To see all possible events:
	:help events

23) Command-line switches always start with "-". So for the help of the -f command switch of Vim use:
	:help -f

24) Optional features always start with "+". To find out about the conceal feature use:
	:help +conceal

25) Documentation for included filetype-specific functionality is usually available in the form ft-<filetype>-<functionality>. So
	:help ft-c-syntax
    talks about the C syntax file and the options it provides. Sometimes, additional sections for omni completion
	:help ft-php-omni
    or filetype plugins
	:help ft-tex-plugin
    are available.

26) Error and warning codes can be looked up directly in the help. So
	:help E297
    takes you exactly to the description of the swap error message, and
	:help W10
    talks about the warning "Changing a readonly file". Sometimes, however, those error codes are not described, but rather are listed at the Vim command that usually causes them. So:
	:help E128
    takes you to the |:function| command.

==============================================================================
Next chapter: |usr_03.txt|  Moving around

Copyright: see |manual-copyright|	vim:tw=78:ts=8:noet:ft=help:norl:
https://neovim.io/doc/user/usr_02.html
Reading Excel table data using the pandas library

I am new to the IT industry and would like to learn with you and make progress together. Please point out any mistakes!

Still struggling with data reading? Please see the brief introduction below. The data source is downloaded from the website of the National Bureau of Statistics.

The code:

    import pandas as pd

    # The Excel file sits in the same folder as the Python script
    df = pd.read_excel('quanguojingji10nian.xls')

    x = df['指标']  # read the first column ('指标' means 'indicator')
    print(x)        # replace '指标' with another column header to read other columns

Result: the values of column x are printed. You can then use the matplotlib.pyplot library to draw line and pie charts.
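Here is a cleaned-up, self-contained variant of the same idea (hypothetical column names; an in-memory DataFrame stands in for the .xls file, since read_excel additionally needs an Excel engine such as xlrd or openpyxl installed):

```python
import pandas as pd

# Stand-in for: df = pd.read_excel('quanguojingji10nian.xls')
df = pd.DataFrame({
    "indicator": ["GDP", "CPI", "PPI"],  # hypothetical first column
    "2019": [99.1, 102.9, 99.7],         # hypothetical data column
})

x = df["indicator"]   # select a column by its header name
print(x.tolist())     # ['GDP', 'CPI', 'PPI']
```

Swapping "indicator" for any other header reads that column instead, exactly as the post describes.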
http://www.itworkman.com/76525.html
Hi, we would like to install XI with 3 systems (DEV, QA, PROD) and transport the Java components (e.g. namespaces) using the file system or using the Change Management System (CMS). What are the differences between these possibilities? Thanks, Lutz

One is manually moving the data across (FTP); the CMS has a full audit trail and is relatively automatic!

Hi. Transportation is of two types: a) file-system transport or b) CMS. If you want to use CMS, then the Basis team should have installed CMS. Check these documents for transports using CMS and file transport:

/people/sravya.talanki2/blog/2005/11/02/overview-of-transition-from-dev-to-qa-in-xi
/people/sap.india5/blog/2005/11/03/xi-software-logistics-1-sld-preparation
/people/sap.india5/blog/2005/11/09/xi-software-logistics-ii-overview

You can use CMS or file-system transport, depending on how your system is configured. If your IDoc scenario is from DEV1 to XID to DEV2 in the dev landscape, and the QA landscape is QAS1 to XIQ to QAS2, then the transport target for DEV1 is QAS1 and the transport target for DEV2 is QAS2. That means the transport target is the corresponding business system under a different landscape (dev, QA or prod). CMS is the NetWeaver tool for transporting developments.

These will help you:
How to Use CMS in XI 3.0.pdf
CMS based transport in XI
https://answers.sap.com/questions/3308149/difference-between-transport-system-filecms.html
Before I introduce *Grails* to you, I want to write a little bit about Groovy, because it is the main language used in Grails.

What is Groovy? Groovy is a dynamic language built on top of the Java Virtual Machine whose features are inspired by the success of languages like Python, Ruby and Smalltalk. Let's go through it step by step.

Groovy is a language for the Java Virtual Machine

What does that mean? It means that Groovy can be compiled into bytecode to run on the JVM. No Groovy interpreter/compiler is required on the target machine when you are planning to run your compiled Groovy code. All you need is just the JVM.

You can use Java libs in Groovy easily. It means that Groovy, from the start, is able to reuse all existing Java libraries. It also means that you do not need to throw away your old Java code! Calling Groovy code from Java is also possible!

Groovy is a dynamic language

It supports dynamic typing. So you can write code like this:

    def myVar = "Text";
    println "Value of var: $myVar";
    myVar = 5 * 20;
    println "Value of var: $myVar";

Running this code will produce the following output:

    Value of var: Text
    Value of var: 100

Groovy supports closures. What is a closure? If you are familiar with C/C++ you can think about a closure as a pointer to a function.

    def sayHi = { name ->
        println "Hi, $name!"
    }

    // Call closure
    sayHi("groovy learner!")

I guess you can tell that the output will be "Hi, groovy learner!". Closures can be used as function arguments, of course :)

    def sayHi = { name ->
        println "Hi, $name!"
    }

    def callCode(def code, def param) {
        code(param)
    }

    callCode(sayHi, "closure as argument!")

Try to guess what the output will be!

Groovy is really good for scripting

Have you noticed that there are no classes in the previous examples? Groovy does not require your compilation file to have a class. It allows you to write a script in Groovy (with no public class with a main method), compile it into a Java class and execute it. You will only need the Groovy script runtime jar on your class path.
As a part of being good for scripting, Groovy provides easy dependency management using Grapes. You can define in your script which additional libraries you need, and Groovy will download them when the script is executed. No need to package all those .jar's with your script!

And one more good thing about it is that the Codehaus guys decided not to reinvent the wheel. They did not create their own "package" format. Instead, Grapes makes use of Maven. Yes, you can use any jar from a Maven repository in your Groovy script. And you can achieve that with a single line of code. It also means that all libraries required by your dependency will be downloaded as well! Here follows an example of a script which uses the Google Guava library:

    @Grab(group='com.google.guava', module='guava', version='11.0.2')
    // Look Ma! I am using an import from a library that
    // will be downloaded only at runtime!
    import com.google.common.math.LongMath

    println "2 to the power of 10: " + LongMath.pow(2, 10)

Groovy is really easy to learn

If you already know Java. Or Ruby. Or Python. Or any other high-level programming language. As for Java, consider the following code:

    package com.binarybuffer.groovy;

    public class EntryPoint {
        public static void main(String[] args) {
            System.out.println("Hello world!");
        }
    }

Does it look like a regular Java class? Well, it is also a valid Groovy class! So you already can write code in Groovy. What are you waiting for?! Go on, download it and start having fun.

Additional resources:
- Official Groovy site
- Groovy Almanac: a site with small code samples for solving different tasks using Groovy
A Python utility / library to sort Python imports.

Project description

isort runs on Python 2.7 and 3.4+ without any dependencies. Install isort with requirements.txt support:

    pip install isort[requirements]

Installing isort for your preferred text editor

Several plugins have been written that enable you to use isort from within a variety of text editors. You can find a full list of them on the isort wiki. Additionally, I will enthusiastically accept pull requests that include plugins for other text editors, and add documentation for them as I am notified.

Multi line output modes

You will notice above the "multi_line_output" setting. This setting defines how from imports wrap when they extend past the line_length limit, and it has several modes, including:

6 - Hanging Grid Grouped, No Trailing Comma

In mode 5 isort leaves a single extra space to maintain consistency of output when a comma is added at the end. Mode 6 is the same, except that no extra space is maintained, leading to the possibility of lines one character longer. You can enforce a trailing comma by using this in conjunction with -tc or trailing_comma: True.

    from third_party import (
        lib1, lib2, lib3, lib4, lib5
    )

The no_lines_before option will prevent the listed sections from being split from the previous section by an empty line. Example:

    sections=FUTURE,STDLIB,THIRDPARTY,FIRSTPARTY,LOCALFOLDER
    no_lines_before=LOCALFOLDER

would produce a section with both FIRSTPARTY and LOCALFOLDER modules combined.

It is also possible to opt in to sorting imports by length for only specific sections by using length_sort_ followed by the section name as a configuration item, e.g.:

    length_sort_stdlib=1

Skip processing of imports (outside of configuration)

To make isort ignore a single import, simply add a comment at the end of the import line containing the text isort:skip.

Git hook

isort provides a hook function that can be called from a Git pre-commit script, e.g. git_hook(strict=True, modify=True). If you just want to display warnings, but allow the commit to happen anyway, call git_hook without the strict parameter. If you want to display warnings, but not also fix the code, call git_hook without the modify parameter.
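The grouping-then-sorting behavior described above can be sketched in a few lines of plain Python. This is a toy illustration only, not isort's actual implementation, and the tiny STDLIB set is an assumption standing in for the real standard-library module list:

```python
# Toy sketch of isort-style grouping: stdlib imports first, third-party
# after, each section sorted alphabetically and separated by a blank line.
STDLIB = {"os", "sys", "json"}  # assumption: stand-in for the real stdlib list

def sort_imports(lines):
    stdlib, third_party = [], []
    for line in lines:
        module = line.split()[1].split(".")[0]
        (stdlib if module in STDLIB else third_party).append(line)
    sections = [sorted(stdlib), sorted(third_party)]
    return "\n\n".join("\n".join(s) for s in sections if s)

print(sort_imports(["import requests", "import sys", "import os"]))
```

The real tool also handles `from ... import` wrapping, the multi_line_output modes and the section options discussed above, but the core idea is the same: classify each import, then sort within its section.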
https://pypi.org/project/isort/
The pages endpoints are used to create static pages, such as an about page or any other page that doesn't need to be updated frequently and only shows specific content. This article will illustrate how pages can be added or removed from the /admin/content/pages route using the pages API in Open Event Frontend.

The primary endpoint of the Open Event API with which we are concerned for pages is:

    GET /v1/pages

First, we need to create a model for the pages, which will have the fields corresponding to the API, so we proceed with the Ember CLI command:

    ember g model page

Next, we need to define the model according to the requirements. The model needs to extend the base model class. The code for the page model looks like this:

    import attr from 'ember-data/attr';
    import ModelBase from 'open-event-frontend/models/base';

    export default ModelBase.extend({
      name        : attr('string'),
      title       : attr('string'),
      url         : attr('string'),
      description : attr('string'),
      language    : attr('string'),
      index       : attr('number', { defaultValue: 0 }),
      place       : attr('string')
    });

The page will have a name; a title; a url, which is the URL of the page; the language; the description; the index; and the place of the page, which can be either a footer or an event. The complete code for the model can be seen here.

Now, after creating the model, we need to make an API call to get and post the pages created. This can be done using the following:

    return this.get('store').findAll('page');

The above line will check the store and find all the pages which have been cached; if no record is found, it will make an API call and cache the records in the store so that they can be returned immediately the next time they are requested.

Since in the case of pages we have multiple options, like creating a new page, updating an existing page, deleting an existing page and so on, for creating and updating the page we have a form which has the fields required by the API to create the page.

The UI of the form looks like this.

Fig. 1: The user interface of the form used to create the page.

Fig. 2: The user interface of the form used to update and delete an already existing page.

The code for the above form can be seen here.

Now, if we click the items present in the sidebar on the left, we can edit and update a page: its stored information is displayed in the form, and the details can then be updated on the server by clicking the Update button. If we want to delete the page, we can do so using the delete button, which first shows a pop-up to confirm whether we actually want to delete it or not. The code for displaying the delete confirmation pop-up looks like this:

    <button class="ui red button"
      {{action (confirm (t 'Are you sure you would like to delete this page?') (action 'deletePage' data))}}>
      {{t 'Delete'}}
    </button>

The code to delete the page looks like this:

    deletePage(data) {
      if (!this.get('isCreate')) {
        data.destroyRecord();
        this.set('isFormOpen', false);
      }
    }

In the above piece of code, we're checking whether the form is in create mode or update mode, and if it's not in create mode then we destroy the record and close the form.

The UI for the pop-up looks like this.

Fig. 3: The user interface for the delete confirmation pop-up.

The code for the entire process of page creation to deletion can be checked here.

To conclude, this is how we efficiently handle page creation, updating and deletion using the Open-Event-Orga pages API, ensuring that there are no unnecessary API calls to fetch the data and no code duplication.

Resources:
- Open Event API Docs
- Official Ember Data documentation
- An article on how to create GET requests in Ember on the blog by Adam Sommer
https://blog.fossasia.org/implementing-pages-api-in-open-event-frontend/
Subject: Re: [boost] Tests are a mess
From: Robert Ramey (ramey_at_[hidden])
Date: 2008-09-11 15:34:10

David Abrahams wrote:
> on Thu Sep 11 2008, "Robert Ramey" <ramey-AT-rrsd.com> wrote:
>
> Okay, Robert. Now your concerns have been taken seriously, and, I
> think, addressed. Is that correct, and if so, can we move on? If
> not, what is left to deal with?

nothing - I moved on some time ago.

>> This question has been raised why I put it into
>> boost::serialization::throw_exception instead of just using
>> boost::throw_exception for the user override. This is the
>> decision which I believe is causing your grief. First of all, it's
>> not clear to me anymore what boost::throw_exception should do - it's
>> not obvious that it's equivalent to the old boost::throw_exception.
>
> You won't take the word of Emil and Peter that it is?

No - I asked for a pledge that if this happened in the future it would
be considered a bug. Since I didn't get one, there's no reason to
believe it won't happen in the future. Rather than belabor the point, I
just decided not to use the library until I have time to look into it.

> But throw_exception is not a similar case in any way. The things you
> were asked to move were *definitions* that were placed into namespace
> boost rather than into the serialization library. You didn't have a
> definition of throw_exception to move.

OK - I can easily implement Vincent's suggestion. That should do it.

Robert Ramey

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2008/09/142218.php
Node and JavaScript API Shims

Common node and js utils to help in porting node.js and javascript libs to dart. Behavior of the libs should match the js implementation as closely as dart allows.

Installing via Pub

Add this to your package's pubspec.yaml file:

    dependencies:
      node_shims: 0.1.0

Public API

Path

Usage:

    import "package:node_shims/path.dart" as path;

Normalize a string path, taking care of '..' and '.' parts.

    String normalize(String path)

Join all arguments together and normalize the resulting path.

    String join(List paths)

Resolves `to` to an absolute path.

    String resolve(List paths)

Solve the relative path from `from` to `to`.

    String relative(String from, String to)

Return the directory name of a path. Similar to the Unix `dirname` command.

    String dirname(String path)

Return the last portion of a path. Similar to the Unix `basename` command.

    String basename(String path, String ext)

Return the extension of the path, from the last '.' to end of string in the last portion of the path.

    String extname(String path)

Check if it's an absolute path.

    bool isAbsolute(String path)

Check if a file exists.

    void exists(String path, void callback(bool))
    bool existsSync(String path)

The platform-specific file separator, '\\' or '/'.

    String sep;

The platform-specific path delimiter, ';' or ':'.

    String delimiter;

JS

Usage:

    import "package:node_shims/js.dart";

Core JS functions

If value is truthy return value, otherwise return defaultValue. If defaultValue is a function, its result is returned.

    or(value, defaultValue)

    //js
    value || defaultValue

    //Usage
    or(null, 1)
    or(null, () => 1)

Return true if value is "falsey":

    bool falsey(value) => value == null || value == false || value == '' ||
        value == 0 || value == double.NAN;

    //Usage
    if (falsey(''))

Return true if value is "truthy":

    bool truthy(value) => !falsey(value);

    //Usage
    if (truthy(1))

Array functions

Changes the content of an array, adding new elements while removing old elements. docs.
    List splice(List list, int index, [num howMany = 0, dynamic elements])

Returns a new array comprised of this array joined with other array(s) and/or value(s). docs.

    List concat(List lists)

Removes the last element from an array and returns that element. docs

    dynamic pop(List list)

Mutates an array by appending the given elements and returning the new length of the array. docs

    int push(List list, item)

Reverses an array in place. The first array element becomes the last and the last becomes the first. docs

    List reverse(List list)

Removes the first element from an array and returns that element. This method changes the length of the array. docs

    dynamic shift(List list)

Adds one or more elements to the beginning of an array and returns the new length of the array. docs

    int unshift(List list, item)

Returns a shallow copy of a portion of an array. docs

    List slice(List list, int begin, [int end])

Tests whether all elements in the array pass (truthy) the test implemented by the provided function. docs

    bool every(List list, fn(e)) => list.every((x) => truthy(fn(x)));

Tests whether some element in the array passes (truthy) the test implemented by the provided function. docs

    bool some(List list, fn(e)) => list.any((x) => truthy(fn(x)));

Creates a new array with all elements that pass the test implemented by the provided function. docs

    List filter(List list, fn(e)) => list.where((x) => truthy(fn(x))).toList();

Apply a function against an accumulator and each value of the array (from left-to-right) so as to reduce it to a single value. docs

    dynamic reduce(List list, fn(prev, curr, int index, List list), [initialValue])

Apply a function simultaneously against two values of the array (from right-to-left) so as to reduce it to a single value. docs

    dynamic reduceRight(List list, fn(prev, curr, int index, List list), [initialValue])

Strings

Returns the character at the specified index.
docs

    String charAt(String str, int atPos) => str[atPos];

Returns a number indicating the Unicode value of the character at the given index. docs

    int charCodeAt(String str, int atPos) => str.codeUnitAt(atPos);

Wraps the string in double quotes ("""). docs

    String quote(String str) => '"$str"';

Used to find a match between a regular expression and a string, and to replace the matched substring with a new substring. docs

    String replace(String str, pattern) => str.replaceAll(str, pattern);

Executes the search for a match between a regular expression and a specified string. docs

    int search(String str, RegExp pattern) => str.indexOf(pattern);

Returns the characters in a string beginning at the specified location through the specified number of characters. docs

    String substr(String str, int start, [int length = null])

Trims whitespace from the left side of the string. docs

    String trimLeft(String str) => str.replaceAll(new RegExp(r'^\s+'), '');

Trims whitespace from the right side of the string. docs

    String trimRight(String str) => str.replaceAll(new RegExp(r'\s+$'), '');

HTML

Encode an html string.

    String escapeHtml(String html)

Process

Usage:

    import "package:node_shims/process.dart" as process;

Returns the current working directory of the process.

    String cwd()

An object containing the user environment.

    Map<String,String> env

A Writable Stream to stdout.

    IOSink get stdout

A writable stream to stderr.

    IOSink get stderr

A Readable Stream for stdin.

    Stream get stdin

An array containing the command line arguments.

    List<String> get argv

This is the absolute pathname of the executable that started the process.

    String get execPath

Changes the current working directory of the process or throws an exception if that fails.

    void chdir(String directory)

Exit the Dart VM process immediately with the given code.

    void exit([int code = 0])

Utils

Useful helper utils extracted from the 101 LINQ Samples.
Usage:

    import "package:node_shims/utils.dart";

Order a sequence by comparators or expressions. docs

    order(List seq, {Comparator by, List<Comparator> byAll, on(x), List<Function> onAll})

A case-insensitive comparer that can be used in ordering and grouping functions.

    caseInsensitiveComparer(a, b) => a.toUpperCase().compareTo(b.toUpperCase());

Group a sequence by comparators or expressions. docs

    List<Group> group(Iterable seq, {by(x): null, Comparator matchWith: null, valuesAs(x): null})

Capture an expression and invoke it in the supplied function.

    wrap(value, fn(x)) => fn(value);

Trim the start of a string if it matches the specified string.

    String trimStart(String str, String start)

Trim the end of a string if it matches the specified string.

    String trimEnd(String str, String end)

Pull requests for missing js or node.js utils are welcome.
https://www.dartdocs.org/documentation/node_shims/0.1.0/index.html
#include <StateNode.h>

List of all members.

Override setup() to build your own Transition and StateNode network if you want this node to contain a state machine. Override DoStart() / DoStop() as you would a normal BehaviorBase subclass to have this node add some functionality of its own. You can override setup to create a sub-network, as well as overriding DoStart and DoStop, in the same class. See also the tutorial page on State Machines. There are two StateNode templates in project/templates/.

Definition at line 28 of file StateNode.h.

[inline] constructor, pass a name to use. Definition at line 31 of file StateNode.h.

[virtual] destructor, removes references to its outgoing transitions (be careful of incoming ones - they're still around!), and calls RemoveReference() on subnodes. Definition at line 5 of file StateNode.cc.

[inline, protected] constructor, pass the class name and instance name to use. Definition at line 78 of file StateNode.h.

[private] don't call this.

Adds the specified StateTransition to the transition table. Definition at line 20 of file StateNode.cc.

Returns the std::vector of transitions so you can modify them yourself if need be. Definition at line 40 of file StateNode.h.

Returns the const std::vector of transitions so you can read through them if need be. Definition at line 43 of file StateNode.h.

Adds a StateNode to nodes so it can be automatically dereferenced later, returns what it's passed (for convenience), and calls AddReference() on node. Also sets the node's parent to this if it is null. Definition at line 27 of file StateNode.cc.

Returns the std::vector of sub-nodes so you can modify them yourself if need be. Definition at line 49 of file StateNode.h. Referenced by EventLogger::spider().

Returns the const std::vector of sub-nodes so you can read through them if need be. Definition at line 52 of file StateNode.h.
Sets the retain flag - if not retained, will RemoveReference() subnodes upon DoStop() and recreate them on DoStart() (by calling setup()) - may be too expensive to be worth saving memory... Definition at line 55 of file StateNode.h.

Transitions should call this when you are entering the state, so it can enable its transitions. Reimplemented from BehaviorBase. Reimplemented in GroupNode, MCNodeBase, MotionSequenceNode< SIZE >, OutputNode, PostureNode, SoundNode, WalkEngineNode< W, mcName, mcDesc >, WalkToTargetNode, and WaypointEngineNode< W, mcName, mcDesc >. Definition at line 35 of file StateNode.cc. Referenced by WalkToTargetNode::DoStart(), SoundNode::DoStart(), OutputNode::DoStart(), MotionSequenceNode< SIZE >::DoStart(), MCNodeBase::DoStart(), and GroupNode::DoStart().

[inline, virtual] This is called by DoStart() when you should set up the network of subnodes (if any). Definition at line 61 of file StateNode.h. Referenced by DoStart().

Transitions should call this when you are leaving the state, so it can disable its transitions. Reimplemented in MCNodeBase, MotionSequenceNode< SIZE >, SoundNode, and WalkToTargetNode. Definition at line 52 of file StateNode.cc. Referenced by OutputNode::DoStart(), WalkToTargetNode::DoStop(), SoundNode::DoStop(), MotionSequenceNode< SIZE >::DoStop(), and MCNodeBase::DoStop().

This is called by DoStop() when you should destruct subnodes. The default implementation will take care of the subnodes and their transitions; you only need to worry about any *other* memory which may have been allocated. If none, you may not need to implement this function at all. Reimplemented in MotionSequenceNode< SIZE >. Definition at line 73 of file StateNode.cc. Referenced by DoStop(), MotionSequenceNode< SIZE >::teardown(), and ~StateNode().

returns parent. Definition at line 74 of file StateNode.h. Referenced by EventLogger::isListening().

[protected, virtual] will throw an activation event through stateMachineEGID, used when DoStart() is called. Definition at line 81 of file StateNode.cc.

will throw a status event through stateMachineEGID to signal "completion" of the node. "Completion" is defined by your subclass - it will mean different things to different nodes depending on the actions they are performing. So call this yourself if there is a natural ending point for your state. Definition at line 85 of file StateNode.cc. Referenced by WaypointEngineNode< W, mcName, mcDesc >::processEvent(), SoundNode::processEvent(), MotionSequenceNode< SIZE >::processEvent(), and MCNodeBase::processEvent().

will throw a deactivation event through stateMachineEGID, used when DoStop() is called. Definition at line 89 of file StateNode.cc. Referenced by DoStop().

[protected] pointer to the machine that contains this node. Definition at line 96 of file StateNode.h. Referenced by addNode(), DoStart(), and getParent().

a vector of outgoing transitions. Definition at line 98 of file StateNode.h. Referenced by addTransition(), DoStart(), DoStop(), getTransitions(), and ~StateNode().

this is set to true if the network has been set up but not destroyed (i.e. there is no need to call setupSubNodes again). Definition at line 102 of file StateNode.h. Referenced by DoStart(), DoStop(), setup(), teardown(), MotionSequenceNode< SIZE >::~MotionSequenceNode(), and ~StateNode().

this is set to true if the network should be retained between activations. Otherwise it's dereferenced upon DoStop() (or at least RemoveReference() is called on subnodes). Definition at line 104 of file StateNode.h. Referenced by DoStop(), and setRetain().

the timestamp of the last call to DoStart(). Definition at line 106 of file StateNode.h. Referenced by postCompletionEvent(), and postStopEvent().

vector of StateNodes, just so they can be dereferenced again on DoStop() (unless retained) or ~StateNode(). Definition at line 108 of file StateNode.h. Referenced by addNode(), GroupNode::DoStart(), DoStop(), getNodes(), teardown(), and ~StateNode().
http://www.tekkotsu.org/dox/classStateNode.html
Subject: [hwloc-devel] -lpicl on Solaris
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2012-06-12 10:19:04

I recently upgraded OMPI's SVN trunk to hwloc 1.4.2, and immediately broke builds on Solaris. After some hunting around, here's what our friends at Oracle have found:

- Building hwloc 1.4.2 standalone on Solaris works fine.
- Building OMPI SVN trunk (with hwloc 1.4.2 embedded) on Solaris fails due to a missing -lpicl.
- The issue seems to be in hwloc's src/Makefile.am:

    if HWLOC_HAVE_SOLARIS
    ldflags += -lpicl
    endif HWLOC_HAVE_SOLARIS

Specifically, -lpicl gets added to standalone builds but not embedded builds. Shouldn't the check for -lpicl be in hwloc.m4 so that it gets added to HWLOC_EMBEDDED_LIBS? See the attached patch.

Or is there a deeper reason we didn't use AC_CHECK_LIB and used HWLOC_HAVE_SOLARIS instead? (e.g., is -lpicl Bad on other platforms?)

--
Jeff Squyres
jsquyres_at_[hidden]
For corporate legal information go to:
http://www.open-mpi.org/community/lists/hwloc-devel/2012/06/3118.php
Tooling Tuesday - Python3 GeoPy

So you want to find something on a map, query a location and use the data in Python 3? I got your back!

So what is it?

GeoPy "makes it easy for Python developers to locate the coordinates of addresses, cities, countries, and landmarks across the globe using third-party geocoders and other data sources."

So how do I install it?

Using pip3 in a terminal / command prompt:

    sudo pip3 install geopy

Show me how to use it!

For a quick example project, let's use GeoPy with the webbrowser library, which can create new web browser instances and new tabs in any browser (we covered this excellent library back in Feb 2018).

Using GeoPy and webbrowser we shall create a simple tool that will ask for an address, then supply the longitude and latitude of that address, which are then used to open a Google map straight at that position. In nine lines of Python 3 code! (It was eleven; I refactored the code this morning.)

So first we import the geopy and webbrowser libraries.

    from geopy.geocoders import Nominatim
    import webbrowser

Next we use the Nominatim OpenStreetMap search tool to search via address / post / zip code.

    geolocator = Nominatim()

Next we create a variable called address and use that to store the keyboard input (the address details) that the user types.

    address = input("Give me an address :")

Now we create an object called location that stores the location data for our chosen address.

    location = geolocator.geocode(address)

Now let's extract the latitude and longitude data, which is output as floats (numbers with decimal places), convert the data to strings and store them in variables.

    latitude = str(location.latitude)
    longitude = str(location.longitude)

The URL for our web browser is made up of the Google maps URL, with the latitude and longitude joined (concatenated) to create a full URL that will resolve correctly. The reason that we converted the latitude and longitude data from floats to strings is that you can only concatenate like for like. So string to string, float to float, integer to integer.

    url = "https://www.google.com/maps/@" + latitude + "," + longitude + "," + "18z"

Try changing the "18z" value and see what happens!

Lastly we tell the code to open the browser with our URL.

    webbrowser.open_new_tab(url)

Complete Code Listing

    from geopy.geocoders import Nominatim
    import webbrowser

    geolocator = Nominatim()
    address = input("Give me an address :")
    location = geolocator.geocode(address)
    latitude = str(location.latitude)
    longitude = str(location.longitude)
    url = "https://www.google.com/maps/@" + latitude + "," + longitude + "," + "18z"
    webbrowser.open_new_tab(url)

Test The Code!

Save and run the code, et voila! You can now see a map for the address you provided.

Bonus Content

I used GeoPy in last week's Friday Fun blog post to create a way to get pollen count data for your location. Head over and have a read.
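Since building the URL is the only fiddly bit of the script, it can be pulled into a tiny helper function and tested on its own. This is just a sketch: the function name is mine, and the base URL is an assumption following the Google Maps @latitude,longitude,zoom pattern:

```python
def maps_url(latitude, longitude, zoom=18):
    # Floats have to become strings before they can be concatenated,
    # which is why the script converts them with str()
    return ("https://www.google.com/maps/@"
            + str(latitude) + "," + str(longitude) + "," + str(zoom) + "z")

print(maps_url(53.4808, -2.2426))  # roughly Manchester
```

Keeping the conversion inside a helper like this means the rest of the script can pass the floats straight from `location.latitude` and `location.longitude` without worrying about types.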
https://bigl.es/tooling-tuesday-python3-geopy/
mitmproxy possible? (Pythonista 1.6 beta)

Is there any reason why mitmproxy can't work with Pythonista? I was able to download/"install" it using githubget.py (got the GitHub downloader somewhere in the forums)... it creates the mitmproxy-master folder in the root directory of Pythonista. Should the whole folder be moved into site-packages? Also, if anybody has an alternative recommendation for intercepting an SSL-encrypted value in Python, I'd love to know about it. (1.6 beta) Thanks, Tony

I don't think it's possible to get this to work in Pythonista at all. From looking at the source code briefly, it seems to require running pfctl with root privileges, which just isn't possible on iOS.

What an incredible coincidence! I was just playing with various Python-based http proxies the day before (which also relates to a recent question on this forum about running server apps in the background). I've found several http (tcp/80) proxies that work OK with Pythonista, but haven't been able to get any SSL-based ones working either. The one I'm currently looking at is proxpy, but I suspect it has a dependency on OpenSSL (either the lib or a binary) which clearly is not available in Pythonista. If you make any progress with mitmproxy (or any other) please update this thread -- I'm very interested in this as well.

Sidenote: One interesting discovery made while playing with the proxy servers -- iOS (8.3 anyway) uses this URL to check if any new OS updates are available. It's also interesting/depressing to see just how many apps on your device call home using various analytic frameworks...

Edit: Fixed bad markdown for URLs.

Well I'm glad I didn't spend any significant amount of time trying to get it to work then. Thanks OMZ! I'm also curious whether that's an iOS limitation or a Python-from-within-iOS limitation?

Paco - just saw your post. Thanks for your input, might have a go with your suggestions if it turns out SSL can be used, and I'll be sure to report back if I find any solutions.

OpenSSL is in Pythonista...

    import ssl
    print(ssl.OPENSSL_VERSION)
    # OpenSSL 1.0.1g 7 Apr 2014 in the current Beta

@ccc -- you're right, of course. I was thinking of the OpenSSL wrapper module that some of these proxy servers depend on. (I believe it wraps the external openssl binary, which isn't an option for Pythonista for the obvious reasons.)

@Tizzy -- I still haven't gotten proxpy working, but I realized I'd forgotten to install the local proxpy root CA on the iOS device I was testing with. (It installs under Settings app -> Profiles, like any other mobileconfig.) Will continue messing with it as time allows...

@pacco Checking in, any word on proxies in Pythonista since we brought this up quite a while ago? Any luck?
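For anyone curious what those plain-HTTP proxies do under the hood: a browser configured to use a proxy sends the full absolute URL in the request line, and the proxy's first job is to pick it apart. A minimal sketch of that parsing step in pure Python (illustrative only; not taken from proxpy or any of the projects mentioned above):

```python
def parse_proxy_request(request_line):
    """Split an absolute-form request line, e.g.
    'GET http://example.com:8080/a/b HTTP/1.1', into (host, port, path)."""
    method, url, version = request_line.split()
    if not url.startswith("http://"):
        raise ValueError("only plain http is handled in this sketch")
    hostport, _, path = url[len("http://"):].partition("/")
    host, _, port = hostport.partition(":")
    return host, int(port) if port else 80, "/" + path

print(parse_proxy_request("GET http://example.com:8080/a/b HTTP/1.1"))
```

After this step a proxy opens a plain socket to (host, port), forwards the rewritten request, and pipes the response back; it's the extra TLS handshake (and the trusted root CA) that makes the SSL case so much harder, as the thread discusses.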
https://forum.omz-software.com/topic/1711/mitmproxy-possible-pythonista-1-6-beta
Here's how to build a simple continuous client application that spans multiple devices, using the cloud (Windows Azure) to handle communication between the devices.

I feel lucky to live in the days of continuously connected devices. I love that I'm able to reply to e-mail using my phone while riding the bus home. It's amazing to be able to Skype with my family on the other side of the world and team up with like-minded gamers across the country on my Xbox. However, in this world of permanent Internet connectivity, there is, as Joshua Topolsky puts it, "a missing link in our computing experience" (engt.co/9GVeKl). This missing link refers to the lack of what Topolsky calls a continuous client; that is, a solution to the broken workflow that occurs today when you move from one device to another. As I switch among my PC, tablet and phone in a typical day, my current browsing session, documents, windows and application state should naturally flow to all of them. That way, I'd spend less time on context switching and more time on actual work and play.

In this article, I'll show you how to build a simple continuous client application that spans multiple devices and platforms. I'll make use of the new Portable Class Libraries (PCLs) to ease the development of a cross-platform application, and the cloud—in particular Windows Azure AppFabric Service Bus—to handle the communication between the devices.

On Your Way Home

It's late afternoon and I'm at work trying to fix that last bug quickly so I can avoid peak-hour traffic. The inevitable phone call comes: "Honey, on your way home can you pick up some milk, bread and chickpeas?" I hang up, get to the store and realize I've forgotten what to buy. In the end, I head home with items we already have in the pantry. It's frustrating, and today's solution tends to involve a lot of back-and-forth phone calling: "Did you say frozen peas or chickpeas?" "Chickpeas. And while you're there, can you buy toilet paper?"
To help alleviate our marriage tensions around this particular issue (the others will have to wait for another day), I'll write a simple app called "On Your Way Home" that runs on our Windows Phone-based devices and Windows 8 beta tablets and allows my wife and me to easily track our shopping list. It will keep us both informed, in real time, of any changes to the shopping list so that at any time we know exactly what we need to buy.

Given that a smartphone running Windows Phone and a Windows 8-based tablet are different devices, with differing flavors of the Microsoft .NET Framework and Windows, I'll use PCLs to abstract away platform differences and enable me to share as much application logic as possible, including all of the communication with the Windows Azure AppFabric Service Bus. I'll also use the Model-View-ViewModel (MVVM) pattern (bit.ly/GW7l) to facilitate the use of the same Models and ViewModels from our device-specific Views.

Portable Class Libraries

In the past, cross-platform development in the .NET Framework hasn't been easy. While the .NET Framework had grand dreams as a cross-platform runtime, Microsoft hasn't yet fully delivered on the promise. If you've ever attempted to deliver a .NET Framework-based application or framework that spanned multiple devices, you'll have noticed that a few things got in the way.

On the Runtime Side

The assembly factoring, versioning and assembly names are different among the .NET platforms. For example, System.Net.dll on the .NET Framework, which contains peer-to-peer networking APIs, means something entirely different on Silverlight, where it contains the core networking stack. To find those APIs on the .NET Framework, you'll need to reference System.dll. The assembly versions are also not the same; Silverlight adopts 2.0.5.0 for versions 2.0 to 4, whereas 2.0.0.0 and 4.0.0.0 were adopted for .NET Framework versions 2.0 to 4.
These differences have, in the past, prevented an assembly compiled for one platform from running on another.

On the Visual Studio Side

Right from the beginning you need to decide which platform to target—the .NET Framework, Silverlight or Windows Phone. Once that decision is made, it's extremely hard to move to or support a new platform. For example, if you're already targeting the .NET Framework, targeting the .NET Framework and Silverlight means creating a new project and either copying or linking the existing files into that project. If you're lucky, you might have factored your application in such a way that platform-specific pieces are easily replaced. If not (and this is probably more likely), you'll need to #if PLATFORM your way around each build error until you have a clean build.

This is where the new PCLs can help. PCLs, available as a free add-on to Visual Studio 2010 (bit.ly/ekNnsN) and built into Visual Studio 11 beta, provide an easy way to target multiple platforms using a single project. You can create a new PCL, choose the frameworks you'd like to target (see Figure 1) and start writing code. Under the covers, the PCL tools handle the API differences and filter IntelliSense so you see only classes and members that are available and work across all the frameworks you've selected. The resulting assembly can then be referenced and run, without any changes, on all indicated frameworks.

Solution Layout

A typical way to organize a cross-platform app using a PCL is to have one or more portable projects containing the shared components, and have platform-specific projects for each platform that references these projects. For this application, I'll need two Visual Studio solutions—one created in Visual Studio 2010 (OnYourWayHome.VS2010) containing my Windows Phone app and one created in Visual Studio 11 (OnYourWayHome.VS11) containing my Windows Metro-style app.
I need multiple solutions because, at the time of writing, the Windows Phone SDK 7.1 works only on top of Visual Studio 2010, whereas the new Windows 8 tools are available only as part of Visual Studio 11. There isn't (currently) a single version that supports both. Don't despair, though; a new feature available in Visual Studio 11 helps me out here. I'm able to open most projects created in the earlier version without having to convert them to the new format. This allows me to have a single PCL project and reference it from both solutions. Figures 2 and 3 show the project layout for my application. OnYourWayHome.Core, a PCL project, contains the models, view models, common services and platform abstractions. OnYourWayHome.ServiceBus, also a PCL project, contains portable versions of the APIs that will talk to Windows Azure. Both projects are shared between the Visual Studio 2010 solution and Visual Studio 11. OnYourWayHome.Phone and OnYourWayHome.Metro are platform-specific projects targeting Windows Phone 7.5 and .NET for Metro-style apps, respectively. These contain the device-specific views (such as the pages in the application) and implementations of the abstractions found in OnYourWayHome.Core and OnYourWayHome.ServiceBus.

Converting Existing Libraries to PCLs

To communicate with Windows Azure, I downloaded the Silverlight-based REST sample from servicebus.codeplex.com and converted it to a PCL project. Some libraries are easier to convert than others, but you'll inevitably run into situations where a given type or method isn't available. Here are some typical reasons a given API might not be supported in PCLs:

The API Isn't Implemented by All Platforms

Traditional .NET Framework file I/O APIs, such as System.IO.File and System.IO.Directory, fall into this bucket. Silverlight and Windows Phone use the System.IO.IsolatedStorage APIs (though different from the .NET Framework version), whereas Windows 8 Metro-style apps use the Windows Runtime (WinRT) storage APIs.
There are a couple of different ways of handling an API that falls into one of these scenarios. Sometimes there's a simple replacement. For example, Close methods (Stream.Close, TextWriter.Close and so forth) have been deprecated in PCL and replaced with Dispose. In such cases, it's just a matter of replacing a call to the former with the latter. But sometimes it's a little harder and takes more work. One situation I encountered while converting the Service Bus APIs involved the HMAC SHA256 hash code provider. It isn't available in a PCL because of the cryptography differences between Windows Phone and Metro-style apps. Windows Phone apps use .NET-based APIs to encrypt, decrypt and hash data, while Metro-style apps use the new native Windows Runtime (WinRT) APIs. The code in particular that failed to build after the conversion was the following:

using (HMACSHA256 sha256 = new HMACSHA256(issuerSecretBytes))
{
    byte[] signatureBytes = sha256.ComputeHash(Encoding.UTF8.GetBytes(token));
    signature = Convert.ToBase64String(signatureBytes);
}

To help bridge the gaps between the Phone crypto APIs and the WinRT crypto APIs, I introduced a platform abstraction representing the Service Bus requirement. In this case, the Service Bus needed a way to calculate an HMAC SHA256 hash:

public abstract class ServiceBusAdapter
{
    public static ServiceBusAdapter Current { get; set; }

    public abstract byte[] ComputeHmacSha256(byte[] secretKey, byte[] data);
}

I added ServiceBusAdapter to the portable project, as well as a static property for setting the current abstraction, which will become important later. Next, I created Windows Phone- and Windows 8-specific HMAC SHA256 implementations of this abstraction, and put these in their respective projects, as shown in Listing 1.
At startup in the Windows Phone project, I then "bootstrapped" the Service Bus by setting the Phone-specific adapter as the current adapter:

ServiceBusAdapter.Current = new PhoneServiceBusAdapter();

I did the same for the Windows 8 project:

ServiceBusAdapter.Current = new MetroServiceBusAdapter();

With everything in place, I then changed the original non-compiling code to call through the adapter:

var adapter = ServiceBusAdapter.Current;
byte[] signatureBytes = adapter.ComputeHmacSha256(issuerSecretBytes, Encoding.UTF8.GetBytes(token));

So although there are two different ways of computing the hash depending on the platform, the portable project talks to both using a single interface. This can take a little bit of work up front, but I can easily reuse the infrastructure as I run into more APIs that need bridging between the platforms. As a side note, I used a static property to access and register the adapter, which makes it easier to move existing APIs over to using the adapter. If you're using a dependency injection framework such as the Managed Extensibility Framework (MEF), Unity or Autofac, you'll find that it's natural to register the platform-specific adapter into the container and have the container "inject" the adapter into portable components that need it.

Application Layout

My shopping list application, On Your Way Home, has two simple views: ShoppingListView, which displays the current items on the shopping list; and AddGroceryItemView, which allows a user to add more items to the list. Figures 4 and 5 show the Windows Phone versions of these views. ShoppingListView shows all the items that are yet to be purchased, with the idea that as you walk around the store, you check off each item as you add it to the cart. After purchasing the items, clicking check out causes the checked items to be taken off the list, indicating they no longer need to be purchased.
Devices sharing the same shopping list instantly (well, as instantly as the network behind it allows) see changes made by another person. The Views, which live in the platform-specific projects, consist mainly of XAML and have very little code-behind, which limits the amount of code you need to duplicate between the two platforms. Using XAML data binding, the Views bind themselves to portable ViewModels that provide the commands and data that run the Views. Because there's no common UI framework that ships across all platforms, PCL projects can't reference UI-specific APIs. However, when targeting frameworks that support them, they can take advantage of APIs that are typically used by ViewModels. This includes the core types that make XAML data binding work, such as INotifyPropertyChanged, ICommand and INotifyCollectionChanged. Also, although the WinRT XAML framework doesn't support them, System.ComponentModel.DataAnnotations and INotifyDataErrorInfo have been added for completeness, and this enables custom XAML validation frameworks to support portable ViewModels/Models. Listings 2 and 3 show examples of View/ViewModel interactions. Listing 2 shows the controls on the Windows Phone version of AddGroceryItemView and their bindings. These controls are bound against the properties on the AddGroceryItemViewModel, which is shared with both the Windows Phone and Windows 8 projects, as shown in Listing 3.

Event Sourcing

On Your Way Home is based heavily around the concept of event sourcing (bit.ly/3SpC9h). This is the idea that all state changes to an application are published and stored as a sequence of events. In this context, event doesn't refer to the thing defined by the C# event keyword (although the idea is the same), but rather to concrete classes that represent a single change to the system. These are published through what's called an event aggregator, which then notifies one or more handlers that do work in response to the event.
(For more about event aggregation, see Shawn Wildermuth's article, "Composite Web Apps with Prism," at msdn.microsoft.com/magazine/dd943055.) For example, the event that represents a grocery item being added to the shopping list looks something like what's shown in Listing 4. The ItemAddedEvent class contains information about the event: in this case, the name of the grocery item that was added and an ID that's used to uniquely represent the grocery item within a shopping list. Events are also marked with [DataContract], which makes it easier for them to be serialized to disk or sent over the wire. This event is created and published when the user clicks the add button on the AddGroceryItemView, as shown in Listing 5. Note that this method doesn't directly make any change to the shopping list; it simply publishes the ItemAddedEvent to the event aggregator. It's the responsibility of one of the event handlers listening to this event to do something with it. In this case, a class called ShoppingList subscribes to and handles the event, as shown in Listing 6. Every time ItemAddedEvent is published, ShoppingList creates a new GroceryItem using the data from the event and adds it to the shopping list. The ShoppingListView, which is indirectly bound to the same list via its ShoppingListViewModel, is also updated. This means that when the user navigates back to the shopping list page, the items he just added to the list are shown as expected. The processes of removing an item from the shopping list, adding an item to a cart and checking out the cart are all handled using the same event publish/subscribe pattern. It may at first seem like a lot of indirection for something as simple as adding items to a shopping list: the AddGroceryItemViewModel.Add method publishes an event to the IEventAggregator, which passes it on to the ShoppingList, which adds it to the grocery list.
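The aggregator itself is a tiny pattern. As a language-neutral sketch of the publish/subscribe flow described above (in Python rather than the article's C#, with illustrative class names):

```python
class EventAggregator:
    """Minimal publish/subscribe aggregator: handlers register per event type."""

    def __init__(self):
        self._handlers = {}  # maps event type -> list of handler callables

    def subscribe(self, event_type, handler):
        self._handlers.setdefault(event_type, []).append(handler)

    def publish(self, event):
        # Notify every handler registered for this event's type; the
        # publisher never knows who (if anyone) is listening.
        for handler in self._handlers.get(type(event), []):
            handler(event)


class ItemAddedEvent:
    def __init__(self, item_id, name):
        self.item_id = item_id
        self.name = name


shopping_list = []
aggregator = EventAggregator()
# The subscriber (the shopping list) reacts to events; the publisher
# never touches the list directly.
aggregator.subscribe(ItemAddedEvent, lambda e: shopping_list.append(e.name))
aggregator.publish(ItemAddedEvent(1, "milk"))
print(shopping_list)  # ['milk']
```

Because publisher and subscriber only share the event type, a new handler (such as one that forwards every event to the cloud) can be added without changing either side.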
Why doesn't the AddGroceryItemViewModel.Add method simply bypass the IEventAggregator and add the new GroceryItem directly to the ShoppingList? I'm glad you asked. The advantage of treating all state changes to the system as events is that it encourages all the individual parts of the application to be very loosely coupled. Because the publisher and subscribers don't know about each other, inserting a new feature in the pipeline, such as syncing data to and from the cloud, is a lot simpler.

Syncing Data to the Cloud

I've covered the basic functionality of the application running on a single device, but there's still the problem of getting the changes a user makes to the shopping list to other devices, and vice versa. This is where the Windows Azure AppFabric Service Bus comes in. Windows Azure AppFabric Service Bus is a feature that enables applications and services to easily talk with each other over the Internet, avoiding the complexities of navigating communication obstacles such as firewalls and Network Address Translation (NAT) devices. It provides both REST and Windows Communication Foundation (WCF) HTTP endpoints hosted by Windows Azure and sits in between the publisher and the subscriber. There are three main ways to communicate using the Windows Azure AppFabric Service Bus; for the purposes of my application, however, I'll just cover Topics. For a full overview, check out "An Introduction to the Windows Azure AppFabric Service Bus" at bit.ly/uNVaXG. For publishers, a Service Bus Topic is akin to a big queue in the cloud (see Figure 6). Completely unaware of who's listening, publishers push messages to the Topic, where they're held ad infinitum until requested by a subscriber. To get messages from the queue, subscribers pull from a Subscription, which filters messages published to the Topic. Subscriptions act like a particular queue, and messages removed from one Subscription will still be seen by other Subscriptions if their own filters include them.
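The Topic/Subscription semantics just described can be modeled in a few lines. This is a conceptual sketch only, not the Service Bus API; the per-device filter anticipates the echo problem discussed later, and all names are illustrative:

```python
class Subscription:
    """A filtered view of a Topic; each subscription has its own queue."""

    def __init__(self, filter_fn):
        self.filter_fn = filter_fn
        self.queue = []

    def receive(self):
        # Pull the oldest matching message, or None if the queue is empty.
        return self.queue.pop(0) if self.queue else None


class Topic:
    def __init__(self):
        self.subscriptions = []

    def create_subscription(self, filter_fn=lambda msg: True):
        sub = Subscription(filter_fn)
        self.subscriptions.append(sub)
        return sub

    def publish(self, message):
        # Every subscription whose filter accepts the message gets its own
        # copy, so receiving on one subscription never consumes another's.
        for sub in self.subscriptions:
            if sub.filter_fn(message):
                sub.queue.append(message)


topic = Topic()
# Each device's filter excludes messages that device published itself.
device_a = topic.create_subscription(lambda m: m["device"] != "A")
device_b = topic.create_subscription(lambda m: m["device"] != "B")

topic.publish({"device": "A", "event": "ItemAdded"})
msg_a = device_a.receive()  # None: A's own message was filtered out
msg_b = device_b.receive()  # B sees A's change
print(msg_a, msg_b)
```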
In On Your Way Home, the AzureServiceBusEventHandler class is the bridge between the application and the Service Bus. Similar to ShoppingList, it also implements IEventHandler<T>, but instead of specific events, AzureServiceBusEventHandler can handle them all, as shown in Figure 7:

public class AzureServiceBusEventHandler : DisposableObject, IEventHandler<IEvent>, IStartupService
{
    private readonly IEventAggregator _eventAggregator;
    private readonly IAzureServiceBus _serviceBus;
    private readonly IAzureEventSerializer _eventSerializer;

    public AzureServiceBusEventHandler(IEventAggregator eventAggregator,
        IAzureServiceBus serviceBus,
        IAzureEventSerializer eventSerializer)
    {
        _eventAggregator = eventAggregator;
        _eventAggregator.SubscribeAll(this);

        _serviceBus = serviceBus;
        _serviceBus.MessageReceived += OnMessageReceived;

        _eventSerializer = eventSerializer;
    }

    [...]

    public void Handle(IEvent e)
    {
        BrokeredMessage message = _eventSerializer.Serialize(e);
        _serviceBus.Send(message);
    }
}

Every change a user makes to the state of the shopping list is handled by AzureServiceBusEventHandler and pushed directly to the cloud. Neither AddGroceryItemViewModel, which publishes the event, nor ShoppingList, which handles it on the local device, is aware that this happens. The trip back from the cloud is where an event-based architecture really pays off. When the AzureServiceBusEventHandler detects that a new message has been received on the Service Bus (via the IAzureServiceBus.MessageReceived C# event), it does the reverse of what it did earlier and deserializes the received message back into an event. From here, it gets published back via the event aggregator, which causes it to be treated as though the event came from within the application, as shown in Listing 7. The ShoppingList isn't aware (nor does it care) about the source of the event and handles those coming from the Service Bus/cloud as though they came directly from a user's input.
It updates its list of groceries, which in turn causes any of the views bound to that list to be updated as well. If you pay special attention, you might notice one little problem with the workflow: Events that get sent to the cloud from the local device come back to that same device and cause duplication of the data. Worse, changes to other, unrelated shopping lists will also come to that device. I don't know about you, but I'm pretty sure I don't want to see other people's food choices appearing on my shopping list. To prevent this, a Service Bus Topic is created per list, and a Subscription per device, which listens to the Topic. When the messages are published to the Topic from the device, a property containing the device ID is sent along with the messages, which the Subscription filter uses to exclude messages that came from its own device. Figure 8 shows this workflow.

Wrapping Up

I covered a lot in this article: Portable Class Libraries simplified my solution and significantly reduced the amount of code I needed to write to target the two platforms. Also, changing application state via events made it very easy to sync that state with the cloud. There's still a lot I've left unsaid, however, that you'll want to factor in when developing a continuous client. I didn't talk about offline event caching and fault tolerance (what if the network isn't available when I publish an event?), merge conflicts (what if another user makes a change that conflicts with mine?), playback (if I attach a new device to the shopping list, how does it get updated?), access control (how do I prevent unauthorized users accessing data they shouldn't?) and finally, persistence. In the sample code for the article, the application doesn't save the shopping list between launches. I'll leave this as an exercise for you; it might be an interesting challenge if you want to play around with the code.
A naïve (or rather the traditional) way of approaching persistence might be to put a hook directly into the ShoppingList class, mark the GroceryItem objects as serializable and save them off to a file. Before going down this route, though, stop and think about it: Given that the ShoppingList already handles events natively and already doesn't care where they come from, syncing data to and from the cloud looks surprisingly like saving and restoring data from disk, doesn't it?
https://visualstudiomagazine.com/articles/2012/03/09/create-a-continuous-client-using-portable-class-libraries.aspx
ASP.NET AJAX 1.0 Release Candidate Now Available

This release candidate build includes several new features, and one important change, from the Beta2 release. You can read a document that lists all changes from the CTP->Beta1->Beta2->RC here. At a high-level, the changes from Beta2 to RC include: - Inclusion of a built-in VS 2005 Web Application Project template to create new ASP.NET AJAX applications. This now allows you to pick File->New Project (in addition to the existing template in File->New Web Site) to create new ASP.NET AJAX enabled web applications. - Additional globalization support for AJAX applications, as well as additional script resource handler features to improve substitution logic, compression and caching. Dynamic invocation of web service proxies from JavaScript is also now supported. - The assembly name of ASP.NET AJAX changed from Microsoft.Web.Extensions.dll to System.Web.Extensions.dll, and the namespace of the server-side features of ASP.NET AJAX changed from Microsoft.Web to System.Web. Note that the client JavaScript namespaces did not change (which avoids breaking existing client JavaScript code). The team made this last server namespace and assembly change for two reasons: 1) Because ASP.NET AJAX will be a fully-supported part of the core .NET Framework going forward, and so for consistency it makes sense for the final release to live under the "System" namespace - which is where the other core parts of the .NET Framework and ASP.NET live. 2) Because it will help make upgrading to the "Orcas" release of ASP.NET and Visual Studio much easier. ASP.NET AJAX will be built-in with "Orcas" (so you don't have to download and install it separately), and by making the namespace change it means that your code will not need to change.
You'll be able to optionally keep your applications running using the ASP.NET AJAX 1.0 release just fine if you want (it will run and be supported on top of Orcas) - or you'll be able to change the version string in your web.config file and automatically upgrade to the newer version of ASP.NET AJAX that will be included built-in to ASP.NET "Orcas". This whitepaper provides step-by-step instructions on how to upgrade existing ASP.NET AJAX Beta2 applications to the ASP.NET AJAX RC build. Important Intellisense Tip: One additional step you'll want to make after you follow the steps in the whitepaper above is to delete the cached schema files that Visual Studio uses for HTML markup Intellisense. Once you delete these schema files and restart VS, it will re-calculate the HTML markup Intellisense for all controls and pick up the changes from the assembly name change. Hope this helps, Scott
http://weblogs.asp.net/scottgu/asp-net-ajax-1-0-release-candidate-now-available
this is called “placement new”, which is used to construct an object in a pre-allocated buffer. Example:

char *buffer = new char[50];                       // pre-allocated buffer
string *ptr = new (buffer) string("hello world!"); // construct string in buffer

To overload it, you need to redefine the following interface:

void *operator new(size_t, void *where)
{
    return where;
}

Hope that helps.

I have a custom memory manager that overrides the new and delete operators in a pretty basic way. Now this works fine for almost everything and anything except in one small area. I have started to work with AngelScript, and to use custom classes with the script (including strings) I need to use new in a way I have never seen before. Now I need to alter my memory manager so that it can handle this syntax, but I have no idea how to do this, as I have _no_ idea what it does. According to the author of AngelScript, the _thisPointer has already been allocated, so I am assuming it just calls the constructor and sets up the object. Is this correct? If it is, does anyone know how to implement this in my memory manager?

Thanks in advance

Spree
http://devmaster.net/posts/6618/overridding-new-operator
Squish/Load Testing

Latest revision as of 08:41, 25 November 2017

Load testing with Squish

Let's assume you have an application that communicates with a separate server, let it be a web server, a Jabber server or a custom developed server only for your application. Using Squish, you can load test this server with your application as the entry point. This is done by running Squish multiple times in parallel on multiple instances of your application working against the same server. To accomplish the load testing, you need two different scripts: the actual test script that will run on each application instance and a launch script that will start off the requested number of Squish instances.

1) The test script

The content of the test script is of course very dependent on the application you are testing, but one general concept is that you put the interactions in a loop doing a pre-defined number of iterations. For the sake of this example, we will use the fortune client and threaded fortune server available in your Qt installation. The test will simply be repetitive clicking of the "Get Fortune" button, verifying that the fortune text is updated within a specific period of time, in the below case 20 milliseconds. The script assumes the fortune server is already running.

def main():
    startApplication("fortuneclient")
    type(waitForObject(":Server port:_QLineEdit"), "41868")
    labelName = "{type='QLabel' unnamed='1' visible='1' window=':Fortune Client_Client' occurrence='3'}"
    waitFor("object.exists(labelName)", 20000)
    label = findObject(labelName)
    prevText = label.text
    for i in range(100):
        clickButton(waitForObject(":Fortune Client.Get Fortune_QPushButton"))
        time = QTime()
        time.start()
        waitFor("label.text != prevText")
        elapsed = time.elapsed()
        test.verify(elapsed < 20)
        prevText = label.text
    clickButton(waitForObject(":Fortune Client.Quit_QPushButton"))

2) The launch script

This script can be written in any scripting language of your choice.
The main purpose of it though is to launch ONE Squish server and an arbitrary number of Squish runners all running the script above. The script below does that, in Python, launching 50 Squish runners.

#!/usr/bin/env python
import subprocess

SQUISHDIR = "/path/to/squish-4.1.0-qt-src"
TESTSUITEDIR = "/path/to/suite_loadTesting"
TESTCASE = "tst_case1"

subprocess.Popen(["%s/bin/squishserver" % SQUISHDIR, "--daemon"])

for i in range(50):
    subprocess.Popen(["%s/bin/squishrunner" % SQUISHDIR,
                      "--testsuite", TESTSUITEDIR,
                      "--testcase", TESTCASE])

Discussion

One problem with the above approach is that the test script will launch immediately when the Squish runner is launched, which is normally what you want. However, this means some of the application instances are finished by the time others start. This can be worked around by having the test script wait for a specific file to exist before entering the loop, and having the launch script create this file after all Squish runners have started. That way, the actual load testing won't start until all the clients are ready. You may also want to introduce random snoozing between each iteration in the test script, to even out the load on the client machine and also make the load on the server machine more "human". If you want to run more application instances than what is feasible on a single host, using Squish's remote testing you can execute multiple application instances on multiple different hosts. This of course still means running the Squish runners on one machine, but at least you reduce the load introduced by the application you are testing. An alternative of course is to execute the Squish runners remotely over e.g. SSH on multiple machines to spread the load even more. Technically, you could be using multiple Squish servers (e.g. one per application instance), but as a single server handles multiple Squish runners quite well, there is really no need for it, and it will actually increase system load on the client machine.
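The "wait for a specific file" hand-shake suggested in the discussion can be sketched as a small helper; the file name and timeouts here are arbitrary:

```python
import os
import time

def wait_for_file(path, timeout=60.0, poll_interval=0.1):
    """Block until 'path' exists or 'timeout' seconds elapse.

    Each test script calls this before entering its measurement loop; the
    launch script creates the file once every squishrunner has been started,
    so all application instances begin loading the server at the same time.
    Returns True if the file appeared, False on timeout.
    """
    deadline = time.time() + timeout
    while not os.path.exists(path):
        if time.time() >= deadline:
            return False
        time.sleep(poll_interval)
    return True
```

The launch script would then simply do `open("/tmp/load_test_go", "w").close()` after its Popen loop, and each test script would call `wait_for_file("/tmp/load_test_go")` before its first iteration.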
https://wiki.qt.io/index.php?title=Squish/Load_Testing&diff=32274&oldid=593
Template matching, false negative on close images

I'm having this issue that for some reason, opencv template matching doesn't match the template into an image that is closely the same as the template (around 90%). The reason I'm searching for more than one match is because the images below are cut from the original (which has many matches). Here's my code:

def remove_match(args):
    original, match, _ = args
    # Load original image, convert to grayscale
    original_image = cv2.imread(original)
    final = original_image.copy()
    found = []
    # Load template, convert to grayscale, perform canny edge detection
    template = cv2.imread(match)
    template = cv2.cvtColor(template, cv2.COLOR_BGR2GRAY)
    template = cv2.Canny(template, 50, 200)
    (tH, tW) = template.shape[:2]
    # Dynamically rescale image for better template matching
    for scale in np.linspace(0.5, 1.0, 10)[::-1]:
        gray = cv2.cvtColor(final, cv2.COLOR_BGR2GRAY)
        # Resize image to scale and keep track of ratio
        resized = maintain_aspect_ratio_resize(gray, width=int(gray.shape[1] * scale))
        r = gray.shape[1] / float(resized.shape[1])
        # Stop if template image size is larger than resized image
        if resized.shape[0] < tH or resized.shape[1] < tW:
            break
        # Detect edges in resized image and apply template matching
        canny = cv2.Canny(resized, 50, 200)
        detected = cv2.matchTemplate(canny, template, cv2.TM_CCOEFF_NORMED)
        (_, max_val, _, max_loc) = cv2.minMaxLoc(detected)
        threshold = 0.5
        loc = np.where(detected >= threshold)
        for pt in zip(*loc[::-1]):
            found.append([0, pt, r])
        # Erase unwanted ROI (Fill ROI with white)
        (start_x, start_y) = (int(max_loc[0] * r), int(max_loc[1] * r))
        (end_x, end_y) = (int((max_loc[0] + tW) * r), int((max_loc[1] + tH) * r))
        cv2.rectangle(
            final, (start_x, start_y), (end_x, end_y), (255, 255, 255), -1
        )
    cv2.imwrite(original.replace("source", "output"), final)

Template images:

use colormatchtemplate and sample...

@LBerger Isn't it possible to match by edges? I mean the script wouldn't work if the color changes.
And colormatchtemplate doesn't seem to have a doc for python :(

there is a python binding in the doc

Yes, my bad
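For anyone tuning the threshold in the code above, TM_CCOEFF_NORMED is a mean-shifted normalized cross-correlation. A plain-NumPy sketch of the score for a single patch position (didactic only; cv2.matchTemplate computes this for every position, far faster):

```python
import numpy as np

def ccoeff_normed(patch, template):
    """Score one image patch against a template the way TM_CCOEFF_NORMED does:
    subtract each array's mean, then take the normalized dot product.
    The result lies in [-1, 1]; 1.0 is a perfect match."""
    p = patch.astype(np.float64) - patch.mean()
    t = template.astype(np.float64) - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    if denom == 0:
        return 0.0  # flat patch or template: correlation is undefined
    return float((p * t).sum() / denom)

template = np.array([[0, 255], [255, 0]], dtype=np.uint8)
print(ccoeff_normed(template, template))        # 1.0 (identical patch)
print(ccoeff_normed(255 - template, template))  # -1.0 (inverted patch)
```

A patch that is "around 90% the same" can still score well below 1.0 after Canny edge extraction, which is one reason a fixed 0.5 threshold can produce false negatives.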
https://answers.opencv.org/question/225854/template-matching-false-negative-on-close-images/
Not sure if this is the right forum for it, but I'm writing a plugin in Java that uses the scala plugin to query PSI and so on (cucumber for scala). I have two PSI related issues that I'm unable to resolve:
- I have a QueryExecutor to find step def usages in feature files. In the Java plugin, this gets a PsiMethod and all is well. In Scala, it gets a ScReferencePattern. How do I go from this to either ScMethod, ScMethodCall, or any other navigation I need to do to ultimately get the method call parameters?
- Similarly, elsewhere I have a ScMethodCall, and I'm trying to find out where this call is actually defined, and want to check the package/class of the method call. For example:

class Foo {
  def bar(value: String) = { ... }
}

class Wibble extends Foo {
  bar("xyz")
}

If the bar call resolves to a ScMethodCall, I'd like to be able to determine that it's a method call to Foo.

Hi! First of all, there exists a quite useful guide for IDEA plugin development ("References and Resolve" in Developing Custom Language Plugins for IntelliJ IDEA is especially relevant). Another useful thing is PSI viewer. Scala Language Specification may be also helpful. Basically, all you need to do, is to: Keep in mind, that the steps above are for the simple case, and Scala syntax can be rather manifold (check the Scala plugin code for examples). You may provide specific code snippets and we will take a good look at them.
Great, thanks! This solves the second question I have. I'm still unable to figure out how to go from PsiReferencePattern to get the arguments for the call. An example to help illustrate. Putting the following in the PSI Viewer:

package scaladefs

import cucumber.api.scala.{EN, ScalaDsl}
import cucumber.api.Scenario

class CalculatorStepDefs extends ScalaDsl with EN {
  Then("^the result is (\\d+)$") {
  }
}

What I'd like is to get the highlighted part (the parameter to Then). In the PSI Viewer, everything looks fine. I have MethodCall which has ReferenceExpression and ArgumentList as children, so I can get the arg just fine. However, when I look at the actual PSI (not in the PSI Viewer), I don't have a MethodCall; instead I have a ScReferencePattern, and I can't figure out the right sequence of calls to go from ScReferencePattern to the string parameter I'm trying to get a hold of.

Runtime PSI structure should completely match the structure in the PSI viewer. Here's how you can verify that:

Here's an excerpt of the resulting structure:

So, everything is basically the same as I outlined in my previous message. Probably you were inspecting PSI for a different piece of code (or modified PSI).
What I'd like is to be able to click on the When and have find usages find the string param in .feature files.
https://intellij-support.jetbrains.com/hc/en-us/community/posts/205997959-Plugin-dev?sort_by=created_at
ARITHMETIC OPERATIONS IN NUMPY

In this tutorial, we are going to learn about the arithmetic operations in numpy and focus on the examples at hand by going through some exercises.

What are Arithmetic Operations?

After working with string operations, it's time to work with arithmetic operations in numpy. Arithmetic operations perform tasks like adding, subtracting, dividing, etc. The arrays in an operation must be of the same shape or at least satisfy the broadcasting rules; otherwise you will face an error.

Basic Arithmetic Operations

Let's perform some basic arithmetic operations:

import numpy as np

arr1 = np.arange(10).reshape(2,5)
arr2 = np.array([6,7,8,9,10])

print("Array 1:")
print(arr1)
print("\n")

print("Array 2:")
print(arr2)
print("\n")

#Adding two arrays
print("Adding two arrays:")
print(np.add(arr1,arr2))
print("\n")

#Subtracting two arrays
print("Subtracting two arrays:")
print(np.subtract(arr1,arr2))
print("\n")

#Multiply two arrays
print("Multiply two arrays:")
print(np.multiply(arr1,arr2))
print("\n")

#Dividing two arrays
print("Dividing two arrays:")
print(np.divide(arr1,arr2))

Output:

Array 1:
[[0 1 2 3 4]
 [5 6 7 8 9]]

Array 2:
[ 6 7 8 9 10]

Adding two arrays:
[[ 6 8 10 12 14]
 [11 13 15 17 19]]

Subtracting two arrays:
[[-6 -6 -6 -6 -6]
 [-1 -1 -1 -1 -1]]

Multiply two arrays:
[[ 0 7 16 27 40]
 [30 42 56 72 90]]

Dividing two arrays:
[[0. 0.14285714 0.25 0.33333333 0.4 ]
 [0.83333333 0.85714286 0.875 0.88888889 0.9 ]]

Let's look at some other arithmetic functions that are available in Numpy:

numpy.reciprocal()

The reciprocal function returns the reciprocal of the values provided, element-wise. Note that for an integer array the result is also computed in integer arithmetic, which is why the reciprocal of any integer greater than 1 truncates to 0.
Let's look at some examples:

import numpy as np

arr = np.array([10, 20, 33, 40])
print('Array 1 is:')
print(arr)
print("\n")
print("Reciprocal function applied: ")
print(np.reciprocal(arr))
print("\n")

arr2 = np.array([3.5, 6.3, 5.6, 2.3])
print('Array 2 is:')
print(arr2)
print("\n")
print("Floating values reciprocal is: ")
print(np.reciprocal(arr2))

Output:

Array 1 is:
[10 20 33 40]

Reciprocal function applied: 
[0 0 0 0]

Array 2 is:
[3.5 6.3 5.6 2.3]

Floating values reciprocal is: 
[0.28571429 0.15873016 0.17857143 0.43478261]

Note that the reciprocal of the integer array comes out as all zeros: for integer inputs the result is computed in integer arithmetic, so every element with magnitude greater than 1 truncates to 0. Use a floating-point array when you need fractional reciprocals.

numpy.mod()

The mod function returns the element-wise remainder when one array is divided by another. You can also use the remainder function to produce the same result. For example:

import numpy as np

arr = np.array([10, 20, 33, 40])
print('Array 1 is:')
print(arr)
print("\n")

arr2 = np.array([3.5, 6.3, 5.6, 2.3])
print('Array 2 is:')
print(arr2)
print("\n")

print("Mod function applied: ")
print(np.mod(arr, arr2))
print("\n")
print("Remainder function applied: ")
print(np.remainder(arr, arr2))

Output:

Array 1 is:
[10 20 33 40]

Array 2 is:
[3.5 6.3 5.6 2.3]

Mod function applied: 
[3.  1.1 5.  0.9]

Remainder function applied: 
[3.  1.1 5.  0.9]

There are several other arithmetic operations in NumPy that we can use for different purposes.
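The named ufuncs above (np.add, np.subtract, np.multiply, np.divide) also have operator shorthands (+, -, *, /), and the integer-reciprocal pitfall can be avoided by casting to float first. A quick sketch (it assumes NumPy is installed; the variable names are ours, not from the tutorial):

```python
import numpy as np

a = np.arange(10).reshape(2, 5)
b = np.array([6, 7, 8, 9, 10])  # broadcasts across both rows of `a`

# the named functions and the operators produce identical results
assert np.array_equal(np.add(a, b), a + b)
assert np.array_equal(np.divide(a, b), a / b)

# reciprocal on an integer array is computed in integer arithmetic,
# so every element with magnitude > 1 truncates to 0
ints = np.array([10, 20, 33, 40])
print(np.reciprocal(ints))                # all zeros
print(np.reciprocal(ints.astype(float)))  # the fractional reciprocals
```

Casting with astype(float) (or writing the literals as floats in the first place) is the usual way to sidestep the truncation shown in the output above.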
https://python-tricks.com/arithmetic-operations-in-numpy/
CC-MAIN-2021-39
refinedweb
535
57.47
Ruby Basic Exercises: Check whether one of the first 5 elements in a given array of integers is a 7

Ruby Basic: Exercise-42 with Solution

Write a Ruby program to check whether one of the first 5 elements in a given array of integers is a 7. The array length may be less than 5.

Ruby Code:

def array_count(array)
  ctr = 0
  array.each { |item| ctr += 1 unless item != 7 }
  return ctr
end

print array_count([1, 2, 6, 4, 9]), "\n"
print array_count([1, 2, 5, 7, 9]), "\n"
print array_count([0, 2, 5, 7])

Output:

0
1
1

Note that this sample solution returns a count of 7s rather than a true/false value, and it scans the entire array; a count of zero means no 7 was found. For arrays longer than five elements, restrict the scan to the first five (for example with array.first(5)) to match the exercise statement.
https://www.w3resource.com/ruby-exercises/basic/ruby-basic-exercise-42.php
CC-MAIN-2021-21
refinedweb
144
60.55
Why cast after an instanceof?

In the example below (from my coursepack), we want to give to the Square instance c1 the reference of some other object p1, but only if those two are of compatible types:

if (p1 instanceof Square) { c1 = (Square) p1; }

What I don't understand here is that we first check that p1 is indeed a Square, and then we still cast it. If it's a Square, why cast? I suspect the answer lies in the distinction between apparent and actual types, but I'm confused nonetheless...

Edit: How would the compiler deal with:

if (p1 instanceof Square) { c1 = p1; }

Edit 2: Is the issue that instanceof checks for the actual type rather than the apparent type? And then that the cast changes the apparent type?

Thanks, JDelage

Keep in mind, you could always assign an instance of Square to a type higher up the inheritance chain. You may then want to cast the less specific type to the more specific type, in which case you need to be sure that your cast is valid:

Object p1 = new Square();
Square c1;
if (p1 instanceof Square)
    c1 = (Square) p1;

Note that if obj is null, it fails the instanceof test but could still be cast, because null can be a reference of any type. The compiler does not infer that, since you are inside the block, you have done a successful check for the type of the object. An explicit cast is still required to tell the compiler that you wish to reference the object as a different type.
if (p1 instanceof Square) {
    // if we are in here, we (the programmer) know it's an instance of Square
    // Here, we explicitly tell the compiler that p1 is a Square
    c1 = (Square) p1;
}

In C# you can do the check and the cast in one call:

c1 = p1 as Square;

This will cast p1 to a Square, and if the cast fails, c1 will be set to null.

There's a difference between measuring whether some object will fit in a box, and actually putting it in the box. instanceof is the former, and casting is the latter.

As to why the language works this way: this particular syntactic sugar has not yet been added to the language. I think it was proposed for Java 7, but it doesn't seem to have entered Project Coin.
- Is there an instanceof analog for checking at compile-time? In my case, I'm trying to throw a ClassCastException as part of my definition of Queue's add method. My first thought was instanceof but I got error: illegal generic type for instanceof during compilation. – Ungeheuer Sep 21 '17 at 3:47
- Unfortunately, there are no great options here. Remember, the goal of all of this is to preserve type safety. "Java Generics" offers a solution for dealing with non-genericized legacy libraries, and there is one in particular called the "empty loop technique" in section 8.2. Basically, make the unsafe cast, suppress the warning, then loop …

Old code won't work correctly. The implied-cast feature is justified, after all, but we would have trouble implementing this feature request in Java because of backward compatibility. See this:

public class A {
    public static void draw(Square s) {...}  // with implied cast
    public static void draw(Object o) {...}  // without implied cast

    public static void main(String[] args) {
        final Object foo = new Square();
        if (foo instanceof Square) {
            draw(foo);
        }
    }
}

The current JDK compiles the usage of the second declared method. If we implemented this feature request in Java, it would compile to use the first method!

JEP 305: Pattern Matching for instanceof (Preview) addresses exactly this: "It is tedious; doing both the type test and cast should be unnecessary (what else would you do after an instanceof test?). This boilerplate -- in" …
As an alternative to the operators, the Class methods cast() and isInstance() can be used instead of the cast and instanceof operators, respectively; this is common with generic types, where instanceof requires a reifiable type (e.g. item instanceof Date) and the cast syntax a concrete one (e.g. (Date) item).

Comments:

- That's why he's asking a question, delnan...
- Regarding the question in your edit, why not just try to compile it yourself? You don't need the SO community to act as a compiler for you.
- @Mark Peters - point well taken; my interest is not really what would happen, but more how differently the compiler would parse that.
- That was what I didn't understand, thanks. I did a 2nd edit on my question to focus on that.
- Well, I think the compiler can know that it is a Square.
- @Bozho, what do you mean? Not the current compiler. But I suppose it is possible.
- Well, from your answer it appeared that the compiler can't possibly know this. But it can; it's just not implemented yet (hence my answer).
- It could be possible theoretically, but it would not respect Java syntax and thus is not permitted by the compiler. Anyway, since generics are in the language, I'm not sure it is really a good thing to use instanceof. It is better to manipulate only well-defined interfaces.
- Or in C#: if (p1 is Square s) { /* s is a non-null instance of Square, within this scope */ }
- Assuming it's a method-local variable that no other thread can access, anything that passes the instanceof test will fit in the box.
- I hope it doesn't get added. instanceof-plus-cast is most often a design smell (you should be using polymorphism). Making it more elegant will just aggravate the problem.
- @Mark Peters - instanceof is a pretty valid use case sometimes, although it is often misused.
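For contrast with the Java and C# idioms discussed above, a dynamically typed language folds the test and the "cast" into a single step, because there is no apparent (compile-time) type to convince. A minimal Python sketch — the Shape/Square classes here are hypothetical stand-ins for the p1/c1 example, not from the original thread:

```python
class Shape:
    pass

class Square(Shape):
    def side_count(self):
        return 4

p1 = Square()  # only the runtime (actual) type matters here

# isinstance checks the actual type, like Java's instanceof;
# no cast follows, because Python has no apparent type to change
if isinstance(p1, Square):
    print(p1.side_count())  # prints 4
```

This is essentially what JEP 305's pattern matching brings to Java: one construct that both tests and makes the narrowed type usable.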
https://thetopsites.net/article/52417189.shtml
CC-MAIN-2021-25
refinedweb
1,423
61.06
WebSvcObjectLinkProvider namespace The ObjectLinkProvider class is the primary class in the WebSvcObjectLinkProvider namespace. The ObjectLinkProvider class includes methods that manage web objects and links for documents and list items for on-premises project sites in Microsoft SharePoint Server 2013. In the ASMX web service, ObjectLinkProvider is a class. In the WCF service, ObjectLinkProvider is an interface that is implemented in the ObjectLinkProviderClient class. For information about using the ObjectLinkProviderClient class in a WCF-based application, see the ObjectLinkProvider class constructor. The WebSvcObjectLinkProvider namespace is an arbitrary name for a reference to the ObjectLinkProvider.asmx web service (or the ObjectLinkProvider.svc service) of the Project Server Interface (PSI).
https://msdn.microsoft.com/en-us/library/office/websvcobjectlinkprovider_di_pj14mref
CC-MAIN-2017-30
refinedweb
107
58.38
Board of Governors of the Federal Reserve System International Finance Discussion Papers Number 1057, longstanding puzzle is that the United States is a net borrower from the rest of the world, yet continues to receive income on its external position. A large difference between the yields on direct investment at home and abroad is responsible and this paper examines potential explanations for this differential. We find that most of the differential disappears after one adjusts for the U.S. taxes owed by the parent on foreign earnings, the sovereign risk and sunk costs associated with investing abroad, and the age of foreign direct investment in the U.S.. Taken together, our results suggest most of the difference in yields should remain as long as there is a difference in tax rates between the United States and the countries in which U.S. firms invest, and U.S. investments are perceived as relatively safe. This has implications for the long-run sustainability of the U.S. current account deficit which will depend, in part, on the long-run behavior of this income. Keywords: Foreign direct investment, returns differentials, U.S. current account JEL classification: F21, F23, F3 A longstanding puzzle is that the United States is a net borrower from the rest of the world, yet somehow manages to, on net, receive income on its external position. Net investment income receipts reported in the U.S. balance of payments (BOP), the top line in Figure 1, have continued to grow even while the net liabilities position, the bottom line, has also grown. This situation has mystified economists for almost a quarter-century: "Clearly, if our investments abroad are yielding a positive return, their capital value must be positive not negative. Is this a defect of the figures on current flows, or is it a defect of the balance-sheet figures?..." (Milton Friedman, 1987)1 The income received on the U.S. 
external position plays an important role in one of the biggest issues confronting international macroeconomists--the sustainability (or lack thereof) of the U.S. current account deficit. Net income receipts, which equaled 33% of the goods and services balance in 2010, provide a significant stabilizing force for the current account. Future sustainability will depend, in part, on the persistence of these net income receipts. So an understanding of what is generating this income will help economists assess how the U.S. imbalance might evolve. A single asset class is responsible for the puzzle. Net income receipts in the BOP owe entirely to a difference between the yields (income divided by the position) on direct investment claims and liabilities (Hung and Mascaro 2004, Bosworth et al. 2008, Bridgeman 2008, Curcuru, Dvorak and Warnock 2008). The aggregate yield on U.S. cross-border claims averaged 140 basis points per year higher than that paid on U.S. cross-border liabilities from 1990-2010, shown in the first columns of Figure 2. The next columns show that the main driver of this difference was foreign direct investment (FDI); the average yield received on U.S. FDI claims was an impressive 620 basis points per year higher than that paid on liabilities. In contrast, for portfolio equity and debt the average yields on claims and liabilities were nearly identical. The overall yield advantage was enough to move the income balance in favor of U.S. claims despite the large net liability position.2 Why is there such a large difference between the yield received on U.S. direct investment abroad (USDIA) and that paid on foreign direct investment in the United States (FDIUS)? Several studies suggest that the large difference between these yields is the result of USDIA earnings that are unusually high, FDIUS earnings that are unusually low, or a combination of the two. These conclusions are drawn from comparisons between U.S. 
FDI yields and yields which, at least on the surface, appear to be similar. However a closer look at the comparator yields used in these studies reveals some important differences. Some studies compare pre-tax with post-tax yields. Other studies use comparator yields that are only valid in certain situations; for example, when the affiliate borrows only from the parent firm. Our approach in this paper is to first closely examine DI earnings and position data to find the most comparable measures before constructing yields. We then identify any remaining differences between the investments and quantify how these differences might affect yields. We identify several reasons for the large differential between USDIA and FDIUS yields. In foreign countries, U.S. multinational enterprises (MNEs) earn about the same on their USDIA as do investors from other countries, but the yield on USDIA is above that of firms operating in the US. For USDIA we focus on the return from the parent firm's perspective, and calculate the return net of all tax liabilities and estimate the amount of compensation for the risks specific to investing abroad. We find that taxes and risk account for all but about 50 basis points of the average difference between USDIA yields and those earned by U.S. firms on their domestic operations (USIUS) since 2004, and all but about 100 basis points over the entire sample. Compensation for the sunk costs of investing abroad can account for the rest. Years in which FDIUS significantly underperformed domestic investments followed significant increases in U.S. investments by foreign parents--in other words, FDIUS performed relatively poorly when it was relatively young. In recent years, however, FDIUS has performed about as well as other investments in the United States. Taken together, compensation for taxes, risk, sunk costs, and age account for virtually all of the difference between USDIA and FDIUS yields. 
Favorable transfer prices associated with trade between related firms further narrows the gap. Therefore we agree with Bosworth et al (2008) that the difference between USDIA and FDIUS yields is not "an illusion of bad data" as suggested in the quotation in the opening paragraph; rather, data quirks and investment differences create a divergence between these returns, the effect of which has decreased in recent years. Looking ahead, we expect this differential will narrow further if the FDIUS capital stock continues to age or the relative perceived risk of investing abroad decreases. This paper contributes to the literature on sustainability, returns differentials, and FDI in several ways. Work by Cavallo and Tille (2006) and Kitchen (2007) shows that the positive income yield differential limits pressure on the exchange rate in the event of a trade balance adjustment. Our results, which suggest the yield differential is likely to persist, tend to lower the probability of a rapid decline of the U.S. exchange rate predicted by these models. Several papers have noted the large yield and capital gains differential between U.S. claims and liabilities (Lane and Milesi - Ferretti 2005; Obstfeld and Rogoff 2005; Meissner and Taylor 2006; Gourinchas and Rey 2007; Forbes 2010; Habib 2010; Gourinchas et al. 2010), although some of the difference in capital gains may be overstated because of inconsistent data (Curcuru, Dvorak and Warnock 2008; Curcuru, Thomas and Warnock 2009; Lane and Milesi - Ferretti 2009). This paper is also the first paper to fully account for all the components of the DI differential. Throughout this paper we discuss implications for the yield differentials of the extensive work done by Desai, Foley and Hines on the factors influencing FDI decisions. 
The paper proceeds as follows: Section 2 summarizes existing literature, Section 3 compares USDIA yields with those on direct investment liabilities reported by other countries; Section 4 compares USDIA and FDIUS yields with yields on domestic operation of U.S. firms; Section 5 summarizes what the results suggest for future differences between USDIA and FDIUS yields; and Section 6 concludes. Existing literature suggests that USDIA yields are abnormally-high, FDIUS yields are abnormally-low, or a combination of the two. The focus of most studies has been the role of firm characteristics (firm age, industry, intangibles, productivity), transfer costs, and taxes. Several papers link low FDIUS yields to the relative youth of FDIUS affiliates (Lupo et al. 1978, Landefeld et al. 1992, Grubert et al. 1993, Laster and McCauley 1994, Grubert 1997, Mataloni 2000, McGrattan and Prescott 2010). Many new firms have relatively high expenses associated with depreciation of newly-purchased assets or interest on debt used to finance acquisitions. Inexperience can also lead to relatively poor performance for younger firms. The industry mix of FDIUS is dramatically different than USDIA and U.S. investment more generally, with a large share of USDIA classified as holding companies and a large share of FDIUS classified as manufacturing firms. However, Mataloni (2000), the only study examining the role of composition, finds that the return on FDIUS assets was below that of U.S. operations for most industries. Other work suggests that differing amounts of investment in intangible capital (defined in Bridgeman (2008) as patents, trademarks, trade secrets, and organizational knowledge) is responsible for the large difference between FDIUS and USDIA yields. 
The value of intangible capital is excluded from the valuation method for DI that BEA features, the current-cost method, because of measurement difficulties.3 Bridgeman (2008) estimates the stocks of intangible assets and finds that including them in the USDIA and FDIUS positions reduces the gap between USDIA and FDIUS yields by three-fourths. McGrattan and Prescott (2010) finds the FDIUS yield is held down by the large amount of research and development investment these firms engage in, which is accounted for as an expense. However, they find that the USDIA yield is higher than can be explained by intangible capital and other factors in their model.4 Studies in the trade literature find that relatively more productive U.S. firms are more likely to engage in FDI, which leads to higher USDIA yields relative to domestic-only firms (Helpman et al. 2004, Fillat and Garetto 2010). These models suggest the high return of USDIA relative to USIUS is compensation for the higher sunk costs and risks associated with FDI. Early studies find little evidence that the low FDIUS yield arises from favorable intrafirm transfer pricing. Lester and McCauley (1994) and Mataloni (2000) find no difference in the earnings of firms with a significant share of imports from the foreign parent and those with a smaller share. Similarly, Grubert (1997) finds no difference in the earnings of FDIUS affiliates which are wholly owned by the parent and those with a smaller share of foreign ownership. In more recent work Bernard et al (2006) examines detailed price and transaction data on U.S. exports and imports and finds that the prices of exports to related firms are systematically lower than exports to unrelated firms, while the prices of imports from related firms are systematically higher. These pricing anomalies should have some effect on USDIA or FDIUS yields. 
Although reliable estimates of the size of the effects cannot be constructed, because firm nationality is not tracked in the trade data, we provide some sense of their magnitude in Section 5.. A series of papers by Desai, Foley and Hines (hence DFH) shows that affiliate funding, dividend repatriations, and the location of MNE subsidiaries are heavily influenced by tax considerations. Because U.S. tax laws generally allow U.S. MNEs to defer U.S. taxes on foreign income until that income is repatriated, foreign operations in low-tax jurisdictions are disproportionately funded using reinvested earnings rather than new equity capital. In contrast, affiliates in relatively high-tax jurisdictions are funded using debt finance (Feldstein 1994 and DFH 2001, 2003, 2004)). DFH (2001) finds that USDIA affiliates in countries with 1% lower tax rates on foreign income have 1% lower dividend payout rates. Looking across affiliate countries, DFH (2004) finds that USDIA affiliates located in countries with relatively high tax rates had a higher debt-to-asset ratio in order to take advantage of the tax deductibility of interest payments, and that internal borrowing was particularly sensitive to tax rates. Complementary work by Grubert (1998) finds that interest payments to USDIA parents are higher for affiliates in countries with higher statutory tax rates. DFH (2006) finds that large U.S. MNEs with heavy research and development spending and relatively large amounts of intra-firm trade are most likely to have affiliates located in tax havens. Bosworth et al. (2008) estimates that the diversion of income to low-tax jurisdictions accounts for one-third of the difference in USDIA and USIUS yields. Other explanations for the low FDIUS yield include a relatively low cost of capital in the home country (Grubert et al. 1993 ), price concessions to gain access to the U.S. market or scarce raw materials (Landefeld et al. 1992), and several high-profile U.S. 
investments by foreigners in the 1980's which had particularly poor results (Laster and McCauley 1994, Jorion 1996). Other explanations for the large gap between USDIA and FDIUS yields include compensation for the additional risk of investing in countries with low sovereign credit ratings (Hung and Mascaro 2004), the venture capitalist nature of the U.S external position which issues safe assets while investing in risky assets (Gourinchas and Rey 2007), and the "erroneous" inclusion of reinvested earnings in income which artificially boosts USDIA earnings (Gros 2006). USDIA yields are double those earned by other cross-border claims and liabilities (Figure 2), which has led some to conclude that the data are misreported (Gros 2006, Hausmann and Sturzenegger 2006). In our first analysis we take a different approach than earlier papers which compared USDIA yields to those earned on other assets or in different locations. We focus our comparison on similar investments; at the country level we compare USDIA yields in a given country with the yield on all direct investment in that country (ACDIA). To the extent that USDIA investment in each country is similar to that undertaken by non-U.S. investors, the yields should be similar. A finding of similar yields would suggest that the seemingly-high USDIA yields are not unusual or temporary. A close look at global direct investment earnings and positions data needed for a cross-country comparison of DI yields reveals that neither is reported on a consistent basis across countries. USDIA earnings are measured using the Current Operating Performance Concept (COPC) recommended by the IMF, which includes reinvested earnings and intercompany debt payments in income and excludes capital gains and losses. 
In a survey conducted by the IMF only 19 out of 61 countries (8 OECD countries) fully applied the COPC to inward DI earnings, and only 16 out of 61 (7 OECD) to outward earnings.5 These deviations from the COPC standard can have a large impact on reported DI earnings. For example, France excludes the reinvested earnings of indirectly held subsidiaries from income; a similar omission from USDIA earnings would lower yields by one-third or over 300 basis points per year.6 In addition, it is difficult to estimate the market values of private companies, particularly in countries without liquid stock markets, so the DI positions published by most countries value firms using some combination of historical cost and market values. Because of these data variations we focus on the 8 countries that fully apply the COPC method, and provide results for an expanded selection of countries in the appendix. The ACDIA yield for each country is the ratio of net income payments associated with DI liabilities to the amount of DI liabilities, from the Balance of Payments statistics published by the IMF.7 In addition to different measures of earnings, accounting methods also vary. BEA reports country-level earnings on a financial accounting (historical cost) basis, and computes current-cost adjustments needed to transform earnings to an economic accounting basis only at the aggregate level. We use historical cost earnings to compute yields because this is how earnings are reported in the U.K. and many other countries. However, including current-cost adjustments in earnings does not change our conclusions.8 Similarly, country-level positions are reported at historical cost value and the adjustments needed to transform the position to a current-cost or market-value basis are released by BEA only at the aggregate level. 
We adjust the country-level positions from a historical-cost to current-cost basis using the ratio of the aggregates when we compute USDIA country-level yields.9 We find that USDIA yields in most countries are similar to or below those earned by other foreign investors in those countries. For 5 out of 8 countries in Table 1 the USDIA yield is below the ACDIA yield, significantly so for 3 countries. In the U.K., where 13% of USDIA is located, U.S. investors earn 6.7% on their USDIA, while all foreign investors in the U.K. earn significantly more--8.5%, on average. In Canada, home to almost 8% of USDIA, the average yields of U.S. and foreign investors on their DI are nearly identical. The yield on USDIA investments in Ireland is surprisingly high--almost 18% per year--but not as high as that earned on all DI in Ireland, which earns almost 22% per year.10 The last line of Table 1 presents average USDIA and ACDIA yields, where the average is weighted by the USDIA position share in the sample each year. The average yield is lower for USDIA--7.5% for USDIA vs. 8.5% for ACDIA--and the difference is statistically significant at the 10% level. Figure 3 shows these yields track each other very closely over the sample period. The weighted average USDIA yield for this sample is noticeably lower than the aggregate USDIA yield because the sample excludes many tax havens which do not report the data needed to calculate the ACDIA yield. For an expanded selection which includes countries that do not fully apply the COPC method, Appendix Table A1, the weighted USDIA yield averages 30 basis points per year higher than ACDIA, and the difference between the two weighted yields is not significant. At least by this measure, there is no evidence that USDIA earnings are unusual, or any indication that they should not persist. Next we examine how USDIA and FDIUS yields compare with yields on other U.S. investments.
Several studies find that USDIA yields are significantly higher than those of U.S. domestic operations (USIUS), while FDIUS yields are significantly lower (Bosworth et al 2008, MacGrattan and Prescott 2010). We begin this section with a discussion of alternative measures of USIUS yields, and then move to comparisons of USIUS yields with USDIA and FDIUS yields. Many studies use the yield on tangible assets (YTA) for all U.S. firms as a benchmark for evaluating USDIA and FDIUS yields (Howenstine and Lawson 1991, Bosworth et al. 2008, among others). This measure excludes financial assets and liabilities and their associated interest expenses from the position and income. Compared with YTA, USDIA yields appear unusually high, while FDIUS yields appear unusually low. Despite its frequent use, YTA is a weak benchmark for U.S. DI yields because YTA cannot be constructed from the available DI data. DI income reported in the BOP includes earnings on all assets, including net interest income associated with financial assets, and includes interest payments on intercompany debt paid to the U.S. (for USDIA) or foreign (for FDIUS) parent. BEA does not separately report net financial assets and interest expenses of the affiliates--it only reports those associated with intercompany debt--so YTA cannot be constructed for USDIA and FDIUS affiliates. YTA may differ markedly from a yield measure that includes net financial assets if affiliates have significant borrowing from entities other than the parent firm, which U.S. FDI surveys suggest is indeed the case.11 Given this weakness of YTA as a DI yield benchmark, we instead construct a yield which includes net interest payments in earnings and financial assets in the position, and is much closer in spirit to the yield that can be constructed for USDIA and FDIUS affiliates from BEA data. We label this net yield measure USIUS_min. (To maintain comparability with earlier literature we also show YTA, which we label USIUS_max.) 
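The USIUS_max / USIUS_min distinction comes down to whether net financial assets, and the interest they generate, enter the yield's numerator and denominator. A toy calculation makes the difference concrete (all figures below are invented for illustration; they are not BEA or NIPA data):

```python
# Hypothetical aggregates, in $ billions -- illustrative only
operating_income = 80.0       # earnings on tangible assets
tangible_assets = 1000.0
net_interest_income = 5.0     # earnings on net financial assets
net_financial_assets = 400.0

# USIUS_max (yield on tangible assets, YTA): financial items excluded
usius_max = operating_income / tangible_assets

# USIUS_min: net financial assets and their interest included in both terms
usius_min = (operating_income + net_interest_income) / (
    tangible_assets + net_financial_assets)

print(f"USIUS_max (YTA): {usius_max:.2%}")  # 8.00%
print(f"USIUS_min:       {usius_min:.2%}")  # 6.07%
```

Because affiliates' financial positions and interest flows are baked into the DI income reported in the BOP, the net measure (USIUS_min) is the more comparable benchmark for USDIA and FDIUS yields.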
USDIA, FDIUS, and USIUS yields are shown in Figure 4, and details on the data series used to construct these yields are given in Appendix Table A2. Consistent with earlier literature, USDIA yields are significantly higher than both FDIUS and USIUS yields, and for much of the sample FDIUS is below USIUS. We reconcile the differences between these yields in the next sections. As we did with ACDIA, our first step is to make sure we are making an apples-to-apples comparison between USDIA and USIUS yields. We then compute the USDIA return from the parent firms' perspective, and estimate the magnitude of other systematic factors that might account for differences between the two yields, including tax accounting, compensation for risk, and the sunk costs of investing abroad. USDIA earnings reported in the BOP and USIUS earnings reported in the National Income and Product Accounts (NIPA) have different tax treatments. USDIA earnings in the BOP are net of foreign taxes, but the U.S. taxes paid by U.S. parents on those earnings are not deducted. This is because U.S. taxes due on USDIA earnings are paid by the U.S. parent firm, so they are not cross-border transactions. While U.S. parents receive a credit for foreign income taxes paid against their U.S. tax liability, because the U.S. tax rate is generally higher most U.S. parents still owe some U.S. tax on repatriated earnings even after this credit (Hines 1996). So, as implied in Bridgeman (2008), the USDIA yield computed using unadjusted BOP data generally overstates the after-tax earnings of the U.S. parent firm. In contrast, USIUS and FDIUS earnings are already net of all taxes.12 We estimate the U.S. taxes owed on USDIA earnings in two steps. First, we construct an estimate of the USDIA yield net of U.S. taxes associated with earnings repatriated to the U.S. parent firm. We estimate the yearly tax liability on repatriated income using the U.S. tax rates from KPMG (2010), less a credit for foreign taxes paid if the U.S.
tax rate is higher than the foreign tax rate.13 If the foreign tax rate is higher than the U.S. tax rate there is no additional U.S. tax liability. Deducting estimated U.S. tax payments from affiliate earnings reduces the USDIA yield by about 80 basis points, in Table 2, from an average of 9.1% to 8.3% per year. We view this as a lower bound for the compensation required by U.S. parent firms for the U.S. tax liability associated with USDIA earnings. In the second step, we adjust the yield for all taxes that will eventually be paid, including taxes on reinvested earnings which are not immediately due. U.S. parents pay U.S. taxes on foreign affiliate earnings only when those earnings are repatriated, which allows firms to defer a portion of their U.S. tax liability by reinvesting earnings in a foreign affiliate. U.S. MNEs use intricate corporate structures to aggressively funnel earnings to low-income-tax jurisdictions and defer U.S. taxes on those earnings by reinvesting them abroad. Although U.S. taxes on reinvested earnings are not paid immediately, the potential U.S. tax liability associated with those earnings is likely an important factor when firms decide whether the earnings potential of a DI investment offers a high enough return. This is because the firm might not be certain, ex ante, of how much it will need to repatriate to support domestic operations. While U.S. firms might prefer never to repatriate affiliate earnings in order to forever delay the additional U.S. tax liability, there is evidence that many firms choose repatriation strategies that are not optimal from a tax perspective.14 So as an upper bound for the tax-related compensation required by U.S. parent firms we calculate and subtract from earnings the U.S. taxes that would be due had the affiliate repatriated all of its earnings.15 This reduces the USDIA yield by an additional 100 basis points per year to 7.3 percent (Table 2), bringing the average adjustment for U.S.
taxes to 180 basis points per year. The tax-adjusted yields, plotted in Figure 5, are much closer to the USIUS yields, particularly during the last decade. The remaining difference between USDIA and USIUS yields--150-260 basis points depending on the USIUS measure--is greater than can be explained solely by earnings volatility. Table 2 also reports that the Sharpe (1966) ratio of the after-tax USDIA yield is significantly higher than that of even our upper-bound estimate for USIUS.16 Some of this remaining difference could be compensation for other risks associated with investing abroad, discussed next. Some of the risks faced by MNEs beyond those faced by domestic-only firms include foreign regulations, foreign tax policy, fluctuations in foreign demand, U.S. tax policy for foreign investments, and dependence on the foreign labor and goods markets. So the relatively high yields earned by MNEs likely represent compensation for these additional risks relative to domestic-only firms. Otherwise, as pointed out in Fillat and Garetto (2010), investors would not bother holding the equities of domestic-only firms in equilibrium. To estimate how much might be required to compensate investors for the additional risks associated with investing abroad we use credit default swap (CDS) spreads on sovereign debt when they are available, and corporate debt spreads in earlier years. CDS are a form of insurance that compensates the holder when the issuer of the underlying bond defaults (i.e., fails to make an interest or principal payment), and are commonly used as a proxy for the amount of compensation required for investors to invest in a country. We calculate the average difference between foreign country and U.S. CDS spreads on sovereign debt, weighted by the share of the USDIA position in each country each year.
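The two tax-adjustment bounds described above can be sketched as follows. The earnings split and position are hypothetical; the 14% aggregate foreign rate echoes footnote 13, and the 35% U.S. rate is an assumed illustrative value, not the KPMG (2010) series:

```python
# Sketch of the two-step U.S. tax adjustment described above.
# Rates and dollar amounts are hypothetical illustrations.

def us_tax_due(taxable_earnings, us_rate, foreign_rate):
    # After the foreign tax credit, U.S. tax is owed only when the
    # U.S. rate exceeds the foreign rate.
    return taxable_earnings * max(0.0, us_rate - foreign_rate)

def after_tax_yield(earnings, taxable_earnings, position, us_rate, foreign_rate):
    return (earnings - us_tax_due(taxable_earnings, us_rate, foreign_rate)) / position

earnings, repatriated, position = 91.0, 45.0, 1000.0
us_rate, foreign_rate = 0.35, 0.14   # illustrative rates

# Step 1 (lower-bound adjustment): tax only repatriated earnings
step1 = after_tax_yield(earnings, repatriated, position, us_rate, foreign_rate)
# Step 2 (upper-bound adjustment): tax as if all earnings were repatriated
step2 = after_tax_yield(earnings, earnings, position, us_rate, foreign_rate)
```

Step 2 always produces the lower yield, since the tax base can only grow when reinvested earnings are treated as repatriated.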
Because of the extensive use of intermediate firms in low-income-tax, low-sovereign-risk jurisdictions--about 36% of USDIA in 2010--recent USDIA positions have been shown to be a poor representation of where the activity of foreign affiliates actually occurs (Borga and Mataloni 2001). So we construct weights based on the positions in 1999, when the use of intermediate holding companies was more limited (about 7% of USDIA). The average difference between U.S. and foreign sovereign CDS spreads, our proxy for compensation for sovereign risk, averaged 70.4 basis points per year between 2004 and 2010 (Table 3).17 For earlier years, when U.S. and other CDS spreads are unavailable, we follow Hung and Mascaro (2004) and use the spread between the yields on Aaa- and Baa-rated corporate debt published by Moody's as a proxy for risk compensation.18 For these earlier years the weighted risk adjustment averages 98 basis points. Putting the two risk adjustments together, the estimated compensation for risk over the entire sample averages 91 basis points per year. After adjustments for taxes and risk, the estimated yield on USDIA falls to 6.4% per year (Table 2). The total compensation for taxes and risk averages 270 basis points per year, which is the bulk of the 330-440 basis point difference per year between unadjusted USDIA and USIUS yields. The remaining difference might represent compensation for the sunk costs of investing abroad, discussed next. The remaining difference between USDIA (after-tax) and USIUS yields averages between 60 and 170 basis points per year over the entire sample (Table 2), and all but 50 basis points of the difference since 2004. Other literature suggests that foreign investments should also include compensation for sunk costs specific to investing in a foreign country. For example, in the models of Helpman et al. (2004) and Fillat and Garetto (2010) FDI investments are subject to sunk costs beyond those encountered domestically.
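The position-weighted risk proxy just described reduces to a simple weighted average of spread differences. The spreads and shares below are illustrative, not Markit or BEA data:

```python
# Sketch of the risk-compensation proxy described above: the average
# difference between foreign and U.S. sovereign CDS spreads, weighted
# by (1999) USDIA position shares. All inputs are hypothetical.

def weighted_risk_premium(foreign_spreads_bp, us_spread_bp, shares):
    """Position-weighted average of foreign-minus-U.S. spreads (bp)."""
    total = sum(shares)
    return sum((s - us_spread_bp) * w
               for s, w in zip(foreign_spreads_bp, shares)) / total

spreads = [45.0, 120.0, 30.0]   # hypothetical 5-year sovereign CDS spreads, bp
shares = [0.5, 0.2, 0.3]        # hypothetical 1999 USDIA position shares
premium = weighted_risk_premium(spreads, us_spread_bp=20.0, shares=shares)
# (25 * 0.5) + (100 * 0.2) + (10 * 0.3) = 35.5 bp
```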
Fillat and Garetto (2010) estimate that compensation for these sunk costs adds 25% to MNE yields relative to the yields of domestic-only exporters. This translates to 120-145 basis points based on our USIUS estimates, roughly equal to the difference that remains between USDIA and USIUS yields after we adjust for taxes and risk. In sum, we estimate that compensation for taxes, risk, and sunk costs accounts for around 400 basis points of the 9.1% yield on USDIA. Now that we have reconciled the difference between USDIA and USIUS yields, we turn to FDIUS yields. Existing literature reports that the yield on FDIUS has been low relative to YTA (USIUS_max in Figure 4), and for much of the sample FDIUS also underperformed the net U.S. yield (USIUS_min). This underperformance was striking in the early 1990s and 2000s--totaling almost 600 basis points in 1991 and averaging over 300 basis points per year between 1988 and 2002. However, Figure 4 shows that since 2002 the gap has closed considerably, suggesting a permanent change has affected the relative profitability of FDIUS. One potential explanation for the comparatively low yield earned by FDIUS affiliates is their age. Several studies suggest that the relative youth of FDIUS affiliates has played a role in their low profitability relative to other U.S. firms (Lupo et al. 1978, Landefeld et al. 1992, Grubert et al. 1993, Laster and McCauley 1994, Grubert 1997, Mataloni 2000). Younger firms may underperform more experienced firms because of inexperience, startup costs, or interest expenses on debt used to fund acquisitions. To see how age affects FDIUS yields we construct several proxies for affiliate age using the equation: AGE_t = (Σ_{i=1}^{T} w^i · AGEVAR_{t-i}) / FDIUS position_t, (1) where AGE represents the "newness" of the FDIUS investment; specifically, the share of FDIUS that has occurred in the last T years. We use several types of investment in AGEVAR, including outlays to acquire or establish new FDIUS, increases in U.S.
affiliates' intercompany debt payables, and increases in parent equity. We also construct a measure of the relative age of FDIUS and USIUS using the differential between the growth rates of the respective positions. The weight variable w (≤ 1) represents effects such as learning which decay the importance of new investment over time. We sum weighted investment over the T prior years and scale by the FDIUS position. Estimates for AGE, shown in Figure 6, suggest that there have been three waves of new FDIUS investment during the last 30 years: 1987-1990, 1998-2001, and to a lesser extent 2008-2010. Glancing back at Figure 4, it is apparent that FDIUS underperformed USIUS during these three investment waves, suggesting that affiliate age does depress the FDIUS yield. To more precisely measure the relationship between AGE and FDIUS yields we regress FDIUS yields on USIUS yields and AGE from equation (1): FDIUS_t = a + c · USIUS_t + b · AGE_t + e_t. (2) A significant and negative b will confirm results from earlier studies that the underperformance is linked to firm age. The regression results, presented in Table 4, suggest that FDIUS performance is indeed related to new investment by foreign parents, as b is negative and significant in every specification. The adjusted-R2 values are quite high, ranging between 41% and 74%. New intercompany debt has the most explanatory power, suggesting that debt service costs play a large role, likely in the form of higher outside borrowing costs. The age effect subtracts 150 basis points on average from FDIUS (based on the first specification in Table 4), and in the absence of age effects the FDIUS yield increases to 5%--higher than USIUS_min, which averages 4.7% (Table 2). An FDIUS estimate where the effects of age have been removed, plotted in Figure 7, closely tracks USIUS_min, even during new investment waves. This evidence confirms the results of previous studies which concluded that age was an important factor in the comparatively poor performance of FDIUS.
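Under the parameters reported for Figure 6 (w = 1.0, T = 3), the age proxy in equation (1) can be sketched as follows. The investment and position series are invented for illustration:

```python
# Sketch of the affiliate-age proxy: weighted new investment over the
# prior T years, scaled by the contemporaneous FDIUS position.
# Series below are hypothetical, not BEA data.

def age_proxy(agevar, position, t, T=3, w=1.0):
    """AGE_t = (sum over i=1..T of w**i * AGEVAR_{t-i}) / position_t."""
    weighted = sum(w ** i * agevar[t - i] for i in range(1, T + 1))
    return weighted / position[t]

# A wave of new investment in years 3-5 pushes AGE up; it decays later
outlays  = [10, 12, 15, 40, 55, 60, 20, 15, 12, 10]
position = [200, 220, 240, 290, 350, 420, 450, 470, 485, 500]
age_wave = age_proxy(outlays, position, t=5)   # (55 + 40 + 15) / 420
age_calm = age_proxy(outlays, position, t=9)   # (12 + 15 + 20) / 500
```

With w below 1, older investment would count for less, matching the learning interpretation in the text.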
However, since 2002 FDIUS affiliates have matured and there is little underperformance. So far we have accounted for most of the difference between FDIUS and USIUS, in addition to accounting for most of the difference between USDIA and USIUS. We end this section with a discussion of the difference between USDIA and FDIUS. To recap, we estimate that compensation for taxes, risk, and sunk costs can account for as much as 400 basis points of the 9.1% average USDIA yield (Table 2), and that age subtracts 150 basis points from the FDIUS yield, which averages 3.5% (Tables 2 and 4). Taken together, these adjustments account for just about all of the 560 basis point difference between USDIA and FDIUS. Although evidence on the existence of transfer pricing effects is mixed, the results of one paper suggest transfer pricing might add further to the wedge between USDIA and FDIUS yields. Bernard et al. (2006) find that the prices of U.S. exports to related firms in 2004 were systematically lower than those to unrelated firms, while the prices of U.S. imports from related firms were systematically higher. This mispricing will have a downward effect on the earnings of firms located in the U.S. and an upward effect on the earnings of related firms located abroad. Unfortunately firm nationality is not reported in the customs data used in that study, so a direct link to USDIA or FDIUS earnings cannot be made. However, if half the $15.7 billion mispricing identified by the authors is attributed to USDIA and the other half to FDIUS, that would account for 80 basis points of the 480 basis point difference between USDIA and FDIUS yields in 2004.19 So while transfer pricing effects play a role in the DI yield differential, their effect is less than that of taxes or sunk costs. Looking ahead, we can say a few things about how much of the difference between USDIA and FDIUS we expect to persist. The performance of FDIUS affiliates has caught up to other U.S.
firms in recent years, probably because the capital stock has reached a comparable maturity level. So we suspect that FDIUS affiliates will continue to earn about the same yields as USIUS firms, or even outperform them because of the tendency of only the most productive firms to engage in FDI. Further, we do not have a reason to expect the yield of USDIA affiliates to decline--absent a change in U.S. tax laws or the perception of the relative risk of investing in the U.S. versus abroad. Taken together, this suggests that the difference between USDIA and FDIUS yields might remain near or slightly below the 2010 difference of 400 basis points. How this yield difference will translate into net income will depend on the relative amount of capital flows into USDIA and FDIUS affiliates and other changes in the values of the positions. In this paper we showed that compensation for taxes, risk, sunk costs, and age accounts for just about all of the difference between USDIA and FDIUS yields, which is behind the puzzling behavior of U.S. net income. Unless there is a change in the underlying factors driving the difference--the perception of investment in the U.S. as relatively safe and the relatively high U.S. tax rate--we expect the difference to remain near or slightly below the 400 basis points recorded in 2010. Therefore the U.S. will continue to, on net, earn income on its net liability position, which, in turn, will continue to provide a stabilizing force for the U.S. current account deficit. Our results provide evidence against misreporting of USDIA earnings (Gros 2006) and against the notion that the U.S. is earning abnormally high returns because of the role of the dollar as an international reserve currency (Gourinchas and Rey 2007). In sum, we agree with Bosworth et al. (2008) that the large difference between USDIA and FDIUS yields is not "an illusion of bad data." This study suggests several areas of future research.
One obvious extension is to verify all of our results using the firm-level data available on-site at BEA, as the existence of significant heterogeneity in the underlying firm data might result in different conclusions. Our results have implications for the sustainability of the U.S. current account deficit, so it would be interesting to see how they change the predictions of sustainability models such as those presented in Kitchen (2007) or Gourinchas and Rey (2007). Finally, our results can also inform policy discussions on the potential effect of changes in taxation of MNEs. Bernard, Andrew B., J. Bradford Jensen and Peter K. Schott, 2006. Transfer Pricing by U.S.-Based Multinational Firms. NBER Working Paper No. 12493. Borga, Maria and Raymond J. Mataloni, Jr., 2001. Direct Investment Positions for 2000: Country and Industry Detail. Survey of Current Business 81, 16-25. Bosworth, Barry, Susan M. Collins and Gabriel Chodorow-Reich, 2008. Returns on FDI: Does the U.S. Really Do Better? Brookings Trade Forum 2007: Foreign Direct Investment, Susan M. Collins, editor, Brookings Institution Press, Washington, D.C., 177-210. Bridgeman, Benjamin, 2008. Do Intangible Assets Explain High U.S. Foreign Direct Investment Returns? Bureau of Economic Analysis Working Paper 2008-06. Buiter, Willem, 2006. Dark Matter or Cold Fusion? Global Economics Paper no. 136, London: Goldman Sachs. Bureau of Economic Analysis, 2006. Foreign Direct Investment in the United States: 2002 Benchmark Survey, Final Results. Bureau of Economic Analysis, 2008. U.S. Direct Investment Abroad: 2004 Final Benchmark Data. Curcuru, Stephanie E., Tomas Dvorak, and Francis E. Warnock, 2008. Cross-Border Returns Differentials. Quarterly Journal of Economics 123, 1495-1530. Curcuru, Stephanie E., Charles P. Thomas, and Francis E. Warnock, 2009. Current Account Sustainability and Relative Reliability. in J. Frankel and C. Pissarides, ed. NBER International Seminar on Macroeconomics 2008. University of Chicago Press, 67-109. Curcuru, Stephanie E., Charles P.
Thomas, and Francis E. Warnock, 2011. Returns Differentials and the Income and Position Puzzles. Working paper. Desai, Mihir A., C. Fritz Foley, and James R. Hines Jr., 2001. Repatriation Taxes and Dividend Distortions. National Tax Journal 54, no. 4: 829-851. ________, 2003. Chains of Ownership, Regional Tax Competition, and Foreign Direct Investment. in Foreign Direct Investment in the Real and Financial Sector of Industrial Countries, edited by Heinz Herrmann and Robert Lindsay, 61-98. Heidelberg: Springer Verlag. ________, 2004. A Multinational Perspective on Capital Structure Choice and Internal Capital Markets. Journal of Finance 59(6): 2451-2488. ________, 2006. The Demand for Tax Haven Operations. Journal of Public Economics 90, no. 3: 513-531. ________, 2007. Dividend Policy inside the Multinational Firm. Financial Management 36(1): 5-26. Feldstein, Martin, 1994. Taxes, Leverage and the National Return on Outbound Foreign Direct Investment. NBER Working Paper No. 4689. Fillat, Jose L. and Stefania Garetto, 2010. Risk, Returns and Multinational Production. Working paper, Boston University. Forbes, Kristin, 2010. Why do Foreigners Invest in the United States? Journal of International Economics 80(1): 3-21. Gohrband, Christopher A. and Kristy L. Howell, 2010. U.S. International Financial Flows and the U.S. Net Investment Position: New Perspectives Arising from New International Standards. Paper presented at NBER-CRIW Conference on Wealth, Financial Intermediation and the Real Economy. Gourinchas, Pierre-Olivier, and Helene Rey, 2007. From World Banker to World Venture Capitalist: The U.S. External Adjustment and the Exorbitant Privilege. in R. Clarida, ed. G7 Current Account Imbalances: Sustainability and Adjustment, Chicago, University of Chicago Press, 11-55. Gourinchas, Pierre-Olivier, Hélène Rey, and Nicolas Govillot, 2010. Exorbitant Privilege and Exorbitant Duty. Bank of Japan IMES Discussion Paper No. 2010-E-20. Gros, Daniel, 2006.
Foreign Investment in the US, II: Being Taken to the Cleaners? CEPS Working Document No. 243, Centre for European Policy Studies, Brussels, April. Grubert, Harry, 1997. Another Look at the Low Taxable Income of Foreign-Controlled Companies in the United States. U.S. Treasury Department, Office of Tax Analysis Paper 74. Grubert, Harry, 1998. Taxes and the division of foreign operating income among royalties, interest, dividends and retained earnings. Journal of Public Economics 68(2): 269-290. Grubert, Harry, Timothy Goodspeed, and Deborah Swenson, 1993. Explaining the Low Taxable Income of Foreign-Controlled Companies in the United States. in Studies in International Taxation, edited by Alberto Giovannini, Glenn Hubbard, and Joel Slemrod, 237-275. Chicago: University of Chicago Press. Habib, Maurizio M., 2010. Excess returns on net foreign assets - the exorbitant privilege from a global perspective. European Central Bank Working Paper Series 1158. Hausmann, Ricardo and Federico Sturzenegger, 2006. Global Imbalances or Bad Accounting? The Missing Dark Matter in the Wealth of Nations. Harvard University, Center for International Development Working Paper 124. Helpman, Elhanan, Marc J. Melitz, and Stephen R. Yeaple, 2004. Exports Versus FDI with Heterogeneous Firms. The American Economic Review 94(1): 300-316. Hines Jr., James R., 1996. Dividends and Profits: Some Unsubtle Foreign Influences. Journal of Finance 51(2): 661-89. ________, 1999. Lessons from Behavioral Responses to International Taxation. National Tax Journal 52(2): 305-322. Hines Jr., James R. and R. Glenn Hubbard, 1990. Coming Home To America: Dividend Repatriations By U.S. Multinationals. in Taxation in the Global Economy, A. Razin and J. Slemrod, eds. University of Chicago Press, Chicago, 161-200. Howenstine, Ned G. and Ann M. Lawson, 1991. Alternative Measures of the Rate of Return on Direct Investment. Survey of Current Business 71(8): 44-45.
Hung, Juann H., and Angelo Mascaro, 2004. Return on Cross-Border Investment: Why Does U.S. Investment Abroad Do Better? Technical Paper no. 2004-17, Washington, D.C.: Congressional Budget Office, December. Ibarra-Caton, Marilyn, 2010. Direct Investment Positions for 2009: Country and Industry Detail. Survey of Current Business 90(7): 20-35. Jorion, Philippe, 1996. Returns to Japanese investors from US investments. Japan and the World Economy 8, 229-241. Kitchen, John, 2007. Sharecroppers or Shrewd Capitalists? Projections of the US Current Account, International Income Flows, and Net International Debt. Review of International Economics 15(5): 1036-1061. KPMG, 2010. KPMG's Corporate and Indirect Tax Rate Survey 2010. Landefeld, J. Steven, Ann M. Lawson, and Douglas B. Weinberg, 1992. Rates of Return on Direct Investment. Survey of Current Business 72, 79-86. Lane, Philip R., and Gian Maria Milesi-Ferretti, 2005. A Global Perspective on External Positions. NBER Working Paper No. 11589. Lane, Philip R., and Gian Maria Milesi-Ferretti, 2009. Where Did All the Borrowing Go? A Forensic Analysis of the U.S. External Position. Journal of the Japanese and International Economies 23(2): 177-199. Laster, David S. and Robert N. McCauley, 1994. Making sense of the Profits of Foreign Firms in the United States. Federal Reserve Bank of New York Quarterly Review, Summer-Fall, 44-75. Lupo, L.A., Arnold Gilbert, and Michael Liliestedt, 1978. The Relationship Between Age and Rate of Return of Foreign Manufacturing Affiliates of U.S. Manufacturing Parent Companies. Survey of Current Business 58, August, 60-66. Mataloni Jr., Raymond, 2000. An Examination of the Low Rates of Return of Foreign-Owned U.S. Companies. Survey of Current Business 80, March, 55-73. McGrattan, Ellen R. and Edward C. Prescott, 2010. Technology Capital and the US Current Account. American Economic Review 100: 1493-1522. Meissner, Christopher M., and Alan M. Taylor, 2006. Losing Our Marbles in the New Century?
The Great Rebalancing in Historical Perspective. NBER Working Paper No. 12580. Newey, W. K., and K. D. West, 1987. A simple, positive semi-definite, heteroskedasticity and autocorrelation consistent covariance matrix. Econometrica 55, 703-708. Obstfeld, Maurice, and Kenneth S. Rogoff, 2005. Global Current Account Imbalances and Exchange Rate Adjustments. Brookings Papers on Economic Activity 1, 67-123. Sharpe, William F., 1966. Mutual Fund Performance. Journal of Business 39, 119-138. Table 1: U.S. Direct Investment Abroad (USDIA) and All Countries Direct Investment Abroad (ACDIA) Yields for Selected Countries All values are average percentages over the sample period; share is of 2010 USDIA position. Sample includes countries which fully apply the current operating performance concept (COPC) to direct investment income reporting. The USDIA yield in each country is computed using BEA income and position data. BEA country-level positions are only available at historical cost; we use the ratio of the aggregate position at current cost to the aggregate position at historical cost for each year to adjust the position to a current-cost basis. ACDIA is the ratio of DI income payments reported in the IMF Balance of Payments for each country to the DI liabilities position for that country. The last line of the table presents yields weighted by the historical-cost share of USDIA investment in each country each year. ** and * indicate statistical significance at the 5% and 10% levels, respectively. Table 2: Summary Statistics for Yields, 1983-2010 Details of how the yield series were constructed are in Table A2. Direct investment income does not include current-cost adjustments and positions are valued at current cost. The Sharpe ratio is the ratio of average excess returns to standard deviation.
The last column is the chi-squared test statistic for the null hypothesis that the Sharpe ratio is equal to the USIUS Sharpe ratio indicated by the column heading; the probability that the null is rejected is shown. Asymptotic p-values computed from Newey and West (1987) standard errors are in brackets. ** and * indicate that the test for equal Sharpe ratios is rejected at the 5% and 10% levels, respectively. Table 3: Sovereign CDS Spreads Each value is the average difference between the CDS spread on 5-year sovereign debt and the CDS spread on 5-year U.S. Treasuries in basis points from 2004-2010. CDS spreads are from Markit. Share is of 1999 USDIA position calculated from BEA data. Weighted Avg. of 49 Countries: 70.4 Table 4: FDIUS Age Regressions This table shows coefficient estimates from the regression FDIUS_t = a + c · USIUS_t + b · AGE_t + e_t, where AGE_t = (Σ_{i=1}^{T} w^i · AGEVAR_{t-i}) / FDIUS position_t. The USIUS variable is either USIUS_max or USIUS_min from Table A2. AGEVAR is either new outlays (), gross debt flows (BOP Table 7a line 96 or 7b line 61), equity flows (BOP Table 7a line 92 or 7b line 57), or the difference between the annual growth rate of the FDIUS and USIUS_min positions (Table A2). RelAge is not scaled by the FDIUS position when AGE_t is constructed. Newey and West (1987) standard errors are in parentheses. Estimation period is 1983-2009 for regressions that include the outlay variable; 1983-2010 for all other regressions. ** and * indicate significance at the 5% and 10% levels, respectively. Figure 1: U.S. Cross-Border Investment Income and Position Net investment income is from the U.S. balance of payments and the net investment position from the U.S. international investment position, both published by BEA. Data for Figure 1 Figure 2: Income Yields and Capital Gains on U.S. Cross-Border Positions Income and capital gains are from Gohrband and Howell (2010) for 1990-2009 and from the U.S. balance of payments and international investment position published by BEA for 2010. Yields are computed by scaling income and capital gains with positions.
Direct investment positions valued at current-cost. Data for Figure 2 Figure 3: U.S. Direct Investment Abroad (USDIA) and All Countries Direct Investment Abroad (ACDIA) Yields The USDIA and ACDIA series are those shown in the last line of Table 1; see notes to Table 1 for a description. Data for Figure 3 Figure 4: Yields on U.S. Direct Investment Abroad (USDIA), Foreign Direct Investment in the United States (FDIUS), and U.S. Investment in the United States (USIUS) The USDIA series is the ratio of aggregate DI income receipts to the USDIA position reported by BEA. The FDIUS series is the ratio of aggregate DI income payments to the FDIUS position reported by BEA. The USIUS_max yield is the return (excluding interest payments) on tangible U.S. non-financial corporate assets excluding USDIA and FDIUS with tangible assets valued at replacement cost. The USIUS_min yield is the return on all U.S. non-financial corporate assets excluding USDIA and FDIUS with assets valued at replacement cost. The data series used to construct these yields are listed in Appendix Table A2. Direct investment income does not include current-cost adjustments and positions are valued at current-cost. Data for Figure 4 Figure 5: Tax-Adjusted USDIA Yields The USDIA series is the ratio of aggregate DI income receipts to the USDIA position reported by BEA. The top boundary of the range of after-tax USDIA yields subtracts from income estimated U.S. taxes on repatriated income (reported in the second line of Table 2); the bottom boundary subtracts from income U.S. taxes on all income (reported in the third line of Table 2). Direct investment income does not include current-cost adjustments and positions are valued at current-cost. USIUS yields are from Figure 4.
Data for Figure 5 Figure 6: Age of FDIUS Affiliates The chart shows several alternative proxies for the age of FDIUS given by AGE_t = (Σ_{i=1}^{T} w^i · AGEVAR_{t-i}) / FDIUS position_t for w = 1.0, T = 3; AGEVAR is new outlays (), gross debt flows (BOP Table 7a line 96 or 7b line 61), equity flows (BOP Table 7a line 92 or 7b line 57), or relative age (the difference between the annual growth rate of the FDIUS and USIUS_min positions; see Table A2 for definitions). RelAge is not scaled by the FDIUS position when AGE_t is constructed. Data for Figure 6 Figure 7: U.S. Domestic Yields (USIUS) and Foreign Direct Investment in the United States (FDIUS) Adjusted for Age-Effects The dashed line is the FDIUS yield predicted by the regression in the first line of Table 4, with the contribution of age removed. The USIUS_min yield is the return on all U.S. non-financial corporate assets excluding USDIA and FDIUS with assets valued at replacement cost. Data for Figure 7 In Table A1 we extend our comparison of USDIA and ACDIA yields to include countries that do not fully apply the COPC to earnings. These countries either include capital gains and losses in direct investment income, which could either overstate or understate the ACDIA yield, or exclude some reinvested earnings or interest on intercompany debt, which would tend to understate the ACDIA yield. The USDIA yield for these countries averages 8.3% per year, lower than the 9.1% per year reported in Table 2. This is because countries for which IMF BOP data are not available, such as Bermuda or the Cayman Islands, have higher USDIA yields than the reported countries. For this less comparable sample the USDIA yield averages only 0.3 percentage points per year higher than the ACDIA yield, and the difference is not statistically significant. Therefore our conclusion remains unchanged--U.S. investors earn about the same yields on their USDIA as investors from other countries earn on their FDI. Table A1a: U.S.
Direct Investment Abroad (USDIA) and All Countries Direct Investment Abroad (ACDIA) Yields for Selected Countries: Panel A: ACDIA Income Includes Capital Gains and Losses Table A1b: U.S. Direct Investment Abroad (USDIA) and All Countries Direct Investment Abroad (ACDIA) Yields for Selected Countries: Panel B: ACDIA is Missing Intercompany Debt Payments and/or Reinvested Earnings All values are average percentages over the sample period; share is of 2010 USDIA position. Sample includes countries which do not fully apply the current operating performance concept to direct investment income reporting. See notes to Table 1 for a description of the USDIA yields. The ACDIA yield is the ratio of total direct investment income payments reported by the IMF's BOP statistics to the liabilities position, with two exceptions: De Nederlandsche Bank data that include special financial institutions are used for the Netherlands starting in 2000, and returns for France are from a Banque de France report (). The last line of the table presents returns weighted by the historical cost share of USDIA investment in each country each year. ** and * indicate statistical significance at the 5% and 10% levels, respectively. Table A2: Yield Definitions FOF: Flow of Funds BOP: Balance of Payments NIPA: National Income and Product Accounts 1. Personal correspondence with Charles Thomas, June 1987. Return to text 2. Although there is a difference between the asset compositions of DI claims and liabilities, it contributes very little to the yield differential. Return to text 3. Investments in intangible capital are generally excluded from the U.S. national accounts because of difficulties in measuring their production and depreciation. BEA plans to start including some intangible assets related to research and development in the accounts in 2013. Return to text 4.
In related work Hausmann and Sturzenegger (2006) infer from the large net income receipts that USDIA intangible investment is much larger than FDIUS intangible investment, although Buiter (2006) challenges their methodology. Return to text
5. See for a description of the COPC and the survey results. Return to text
6. In 2009 reinvested earnings in USDIA holding company affiliates totaled $110 billion, or one-third of total earnings. Most of this income was generated by indirectly held affiliates. Excluding these reinvested earnings lowers the aggregate USDIA yield in 2009 from 9.7% to 6.4%. Return to text
7. We also estimated the yield earned by only non-U.S. investors in each country by subtracting the USDIA earnings and position in each country from IMF DI liabilities. The resulting yields for the 8 countries in the main sample were similar to those reported, but these estimates could not be constructed for the expanded sample for several countries because inconsistent reporting resulted in U.S. income receipts or positions reported by BEA which were larger than total DI payments or liabilities reported by that country. Return to text
8. Current-cost adjustments increase USDIA earnings and lower FDIUS earnings, and the differential between USDIA and FDIUS yields widens to 650 basis points. Return to text
9. The aggregate USDIA yield falls to 6.6%, and the aggregate differential drops to 125 basis points per year when yields are computed using the market value estimate of the position. We use aggregate income and positions to compute yields, which may mask significant heterogeneity in the underlying data. Unfortunately, those data are maintained by BEA and access to them by individuals from other government agencies, including the authors of this paper, is prohibited. Return to text
10.
The yield on all DI liabilities in Ireland calculated from IMF data slightly overstates the yield on those liabilities because recorded DI income payments are not net of interest income associated with lending from Irish affiliates to foreign parents. Return to text
11. BEA (2006) Table III.C.1 reports that current liabilities and long-term debt owed by majority-owned nonbank FDIUS affiliates totaled $2.7 trillion in 2002, of which $719 billion (or 27%) was owed to the foreign parent. In contrast, BEA (2008) Table III.C.1 reports that current liabilities and long-term debt owed by majority-owned nonbank USDIA affiliates totaled $4.2 trillion in 2004, of which $523 billion (or 12%) was owed to the U.S. parent. Return to text
12. The United States has a "worldwide taxation" policy which taxes income generated by U.S. MNEs regardless of where it is earned. In contrast, most other countries have a policy of "territorial taxation" and only tax income generated by domestic activities. See the section "International Taxation for Beginners" in Hines (1999) for an overview of tax issues. Return to text
13. Foreign tax rates are inferred from the 2004 benchmark survey (BEA 2008) and earlier surveys. An increasing number of multinational corporations include holding companies as intermediate firms between the parent company and foreign subsidiaries because several jurisdictions offer attractive tax treatment (DFH 2003, chart A in Ibarra-Caton 2010). See Figure 1 in DFH (2003) for common ownership structures used by firms located in tax havens. The aggregate foreign tax rate is a relatively low 14% because of the large share of intermediate holding companies that almost entirely avoid foreign taxes. In practice, the foreign tax credit may be smaller than our estimate because credits against U.S. taxes are given for only certain types of tax payments (DFH 2004). Return to text
14.
For example, Hines and Hubbard (1990) find that many firms repatriate earnings during the same period in which they inject equity, and that some firms with excess tax credits reinvest earnings. Similarly, DFH (2007) finds that the amount firms repatriate depends on domestic funds available to meet dividend payments to external shareholders and domestic investment needs. Return to text
15. U.S. MNEs reinvest a substantial fraction of USDIA earnings (60% on average from 1999-2009), most of which is reinvested by holding company affiliates (Ibarra-Caton 2010). While 60% is the average, Hines and Hubbard (1990) find significant heterogeneity between firms. The 60% average excludes reinvested earnings in 2005 because reinvested earnings were large and negative in that year, as firms took advantage of a temporary reduction in the U.S. tax liability on repatriated earnings contained in the American Jobs Creation Act of 2004. Return to text
16. Hung and Mascaro (2004) report a similar result using the USDIA (pre-tax) and FDIUS yields. Return to text
17. The weighted spread is about 45 basis points using 2003 or 2009 weights. Return to text
18. Hung and Mascaro (2004) estimated that 11% of USDIA was invested in AAA-rated Canada, 17% in BB-rated Latin American countries, 50% in AA-rated European countries, and the weighted-average rating estimate for all countries was BBB, using Standard & Poor's ratings and the 2003 positions. We follow Hung and Mascaro and use the difference between AAA and BBB corporate debt yields as an estimate of the additional risk of USDIA. Return to text
19. Bernard et al. (2006) estimate that U.S. exports to related parties in 2004 were under-reported by $1.9 billion, while U.S. imports from related parties were over-reported by $13.8 billion, for a total of $15.7 billion. Return to text
This version is optimized for use by screen readers. Descriptions for all mathematical expressions are provided in LaTeX format.
A printable pdf version is available.
Creating Check Box in Java Swing
A check box is a GUI component in Java Swing. In this section, you can learn how to create a check box in Java Swing. Check boxes are created in Swing by creating an instance of the JCheckBox class.
Emit a S_UDT record for typedefs. We still need to do something for class types.

Comment: We spent some time talking about this today, and I think there are two ways to do this:
- The hard way: compute type indices for all types used in the function's symbol substream before endFunction. Assert if someone calls getTypeIndex during .debug$S emission.
- The easy way: Don't handle the corner cases where people reference function-internal typedefs from outside that function, and just handle typedefs at file/namespace scope and function scope. This is David's current approach.
I think we should go with approach 2, but we should also remember file/namespace scope typedefs before committing this. I think it just requires an 'else if' in the existing code and a vector on CodeViewDebug. Together that will handle 99% of all typedef usage.
Ok. I am writing a program that tells your fortune. What it does is a random number generator makes a number, then from a data file full of fortunes, it picks the line corresponding with the number. What I can't get it to do is to tell the function for reading the file (ReadFile) what the random number is. Here is my code:
fyi: I'm programming this in textbased telnet linux.

#include <iostream.h>
#include <fstream.h>
#include <time.h>
#include "home/ampalfe/APClasses/apstring.h"

// Global Constants
const int NUM = 0;

// Function Prototypes
int Numbergenerator();
apstring ReadFile ( NUM );

int main()
{
    cout<<" ----The Super Fortune Teller----"<<endl<<endl;
    cout<<" warning: all fortunes generated that come true are purely coincidental and do not reflect the ability of our staff or this generator."<< endl;
    cout<<" Now.... To recieve your fortune<<endl<<endl<<endl<<endl;
    cout<< "This is your fortune.........."<<endl;
    Numbergenerator();
    ReadFile( NUM );
}

int Numbergenerator()
{
    srand ( unsigned (time(0)));
    NUM = rand()%2;
    return NUM;
}

apstring ReadFile(NUM)
{
    ifstream inFile("/home/ampalfe/Game/Fortunes.stuff");
    getline( inFile, NUM);
    return apstring;
}

any help would be apreciated. Thanks
ampalfe

edit:: oops, I forgot to tell you how the data file is set up..... it's
1 You will eat a lot today
2 You will be attracted to a co worker today
3 don't go outside, the weather is bad
etc. Thanks again
Java Real time system poor inline performance
807557 Dec 10, 2008 12:28 PM
I am looking at the performance of 1.5.0_16 and Java RTS 2.1. I'm seeing some very big differences I suspect are due to inlining, but here is one concrete example:

import java.util.Random;

public class DoubleToFloatBenchmark {

    private static final int INNER_LOOP = 10000;
    private static final int OUTER_LOOP = 1000;

    public static void main(String[] args) throws InterruptedException {
        Random random = new Random(0);
        double[] values = new double[INNER_LOOP];
        long[] results = new long[INNER_LOOP];
        for (int i = 0; i < values.length; i++) {
            values[i] = random.nextDouble();
        }
        test(values, results);
        test(values, results);
        test(values, results);
        test(values, results);
        test(values, results);
    }

    private static void test(double[] values, long[] results) throws InterruptedException {
        long time = Long.MAX_VALUE;
        for (int i = 0; i < OUTER_LOOP; i++) {
            long start = System.nanoTime();
            for (int j = 0; j < INNER_LOOP; j++) {
                results[i] = Double.doubleToLongBits(values[i]);
            }
            long end = System.nanoTime();
            time = Math.min(time, end - start);
        }
        System.out.format("time= %-,3.3fns\n", 1.0 * time / INNER_LOOP);
        Runtime.getRuntime().gc();
        Thread.sleep(10);
    }
}

Here is the output:

bash-3.00$ java -cp . DoubleToFloatBenchmark
time= 7.345ns
time= 5.196ns
time= 0.108ns
time= 0.108ns
time= 0.108ns

bash-3.00$ /opt/SUNWrtjv/bin/java -cp . DoubleToFloatBenchmark
time= 41.243ns
time= 41.297ns
time= 41.295ns
time= 41.293ns
time= 41.292ns

Any ideas on how to speed the RTJ version up? It's 400 times slower.

1. Re: Java Real time system poor inline performance
807557 Dec 11, 2008 12:09 AM (in response to 807557)
Mak, What you are seeing are the effects of the hotspot server compiler versus the client compiler.
Java RTS only supports the client compiler (even if you use -server you get -client). The server compiler can perform very aggressive optimizations, compared to the client compiler, because if it makes a wrong assumption it stops-the-world, deoptimizes things, recompiles them the right way (perhaps immediately, or perhaps leaving it for later dynamic compilation) and continues on its way. The client compiler is much less sophisticated and does not do these aggressive optimizations. For Java RTS the server compiler's mode of operation would completely kill predictability, so deopt can not be allowed and so the aggressive optimizations are also not allowed. Here are the results I get for client, server and then JRTS. This is the sort of results I'd expect to see: JRTS is approx 13% slower than J2SE client.

# /mirrors/j2se-mirrors/5.0u17/solaris-i586/bin/java -client DoubleToFloatBenchmark
time= 66.503ns
time= 65.146ns
time= 65.146ns
time= 65.146ns
time= 65.146ns

# /mirrors/j2se-mirrors/5.0u17/solaris-i586/bin/java -server DoubleToFloatBenchmark
time= 10.850ns
time= 7.711ns
time= 0.140ns
time= 0.139ns
time= 0.139ns

# rtj DoubleToFloatBenchmark
time= 74.191ns
time= 74.190ns
time= 73.739ns
time= 73.739ns
time= 73.739ns

Looking at your example, this is a classic problem with micro-benchmarking - see Cliff Click's "famous" JavaOne 2002 talk on "How not to write a microbenchmark": There are numerous similar articles following up on that showing how easy it is for the server compiler to throw away the precious code you are so desperately trying to measure the performance of. It's a real eye-opener. See Brian Goetz's article: In this code in your example:

for (int j = 0; j < INNER_LOOP; j++) {
    results[i] = Double.doubleToLongBits(values[i]);
}

the inner loop can be removed completely because the computation in the loop is independent of the loop variable j. (I'm not sure if that was intentional?) So let's manually delete that inner loop and see what we get (and stop dividing by INNER_LOOP). Here's the results again for client, server and jrts:

# /mirrors/j2se-mirrors/5.0u17/solaris-i586/bin/java -client DoubleToFloatBenchmark
time= 449.000ns
time= 450.000ns
time= 317.000ns
time= 308.000ns
time= 316.000ns

# /mirrors/j2se-mirrors/5.0u17/solaris-i586/bin/java -server DoubleToFloatBenchmark
time= 455.000ns
time= 455.000ns
time= 452.000ns
time= 455.000ns
time= 452.000ns

# rtj DoubleToFloatBenchmark
time= 506.000ns
time= 506.000ns
time= 503.000ns
time= 340.000ns
time= 340.000ns

Oh my gosh! JRTS and client become faster than server! ;-) But what are we now measuring ... ? I hope this clarifies things.
David Holmes
Edited by: davidholmes on Dec 11, 2008 10:08 AM Added link to Brian Goetz's article.

2. Re: Java Real time system poor inline performance
807557 Dec 11, 2008 10:58 AM (in response to 807557)
1) Thank you very much for your useful info, timely response and links
2) Microbenchmarks aren't very good if they have a bug :-)

--- DoubleToFloatBenchmark.java (revision 64)
+++ DoubleToFloatBenchmark.java Thu Dec 11 07:33:34 GMT 2008
@@ -26,7 +26,7 @@
     for (int i = 0; i < OUTER_LOOP; i++) {
         long start = System.nanoTime();
         for (int j = 0; j < INNER_LOOP; j++) {
-            results[i] = Double.doubleToLongBits(values[i]);
+            results[j] = Double.doubleToLongBits(values[j]);
         }
         long end = System.nanoTime();
         time = Math.min(time, end - start);

It was intention to loop through each element so it could not be optimized out, but an *i* looks a lot like a *j*
3) The version without the inner loop is really measuring the System.nanoTime() call, which is better in JRTS
4) I think the corrected version still looks like a valid microbenchmark, but would value your input as to why it is not. When run with the bug removed:

bash-3.00$ /usr/jdk/jdk1.5.0_16/bin/java -client -cp . DoubleToFloatBenchmark
time= 31.497ns
time= 32.483ns
time= 32.559ns
time= 32.724ns
time= 32.503ns
time= 32.994ns
time= 32.498ns

bash-3.00$ /usr/jdk/jdk1.5.0_16/bin/java -server -cp . DoubleToFloatBenchmark
time= 7.168ns
time= 5.481ns
time= 2.439ns
time= 2.438ns
time= 2.440ns
time= 2.440ns
time= 2.436ns

bash-3.00$ /opt/SUNWrtjv/bin/java -cp . DoubleToFloatBenchmark
time= 45.903ns
time= 45.856ns
time= 43.575ns
time= 43.567ns
time= 43.572ns
time= 43.569ns
time= 43.573ns

It's still about an 18 times difference. The client VM seems to not want to inline the native call. Is there any way to force this inlining?

3. Re: Java Real time system poor inline performance
807557 Dec 11, 2008 12:08 PM (in response to 807557)
The microbenchmark may be "valid" in that it will measure how the VM executes the code you've written, but the issue is more how you use the information from the microbenchmark to infer real application behaviour. If your app spends a large portion of its time doing this conversion then this may be an issue for your app's performance; if not then it probably won't. That's something only you can determine. Ultimately the issue is whether or not you meet your goals. There are significant differences between the abilities of the client and server compiler. Whether inlining is the issue in this case I can't say. There are some options for printing out what the compiler generates but I'm not sure to what extent they are applicable to JDK 5. There's little scope for influencing the compilation policies as well - one of our compiler folk would have to elaborate on what the available options are.
David Holmes

4. Re: Java Real time system poor inline performance
807557 Dec 11, 2008 3:10 PM (in response to 807557)
Thanks David, What I'm looking at is an evaluation of JRTS vs. jdk1.6 for a predictable low latency messaging app. The app does not exist, but I am building up benchmarks as I design and implement.
I don't know if JNI inlining (if that is the problem) will be a significant issue yet, but it may be. I guess my question relates to what the roadmap is for improving the raw performance of JRTS vs. J2SE?

As an aside, my opinion for what it's worth: it's surprising the JRTS team invested the resources to implement, support and maintain a Linux version. I would have thought investing in compiler improvements would be a bigger win. As a customer, it's a much easier buy decision if the J2SE and JRTS performance is comparable; then I'd just take a S10 migration hit, which would not be a big deal given we are talking Java. Better cross-selling opportunity too :-)
https://community.oracle.com/thread/1904636
User Interface

Much of the UI is still not accessible to modders, but as that changes, information about how to mod it will go here.

Creating a Sub Panel

See ExampleNewTabModMain.as in the Clicker Heroes 2\mods\examples folder for a mod made using the code described here.

This will go over how to create a custom sub panel that is opened by a tab at the top of the UI, like the "Items" or "Miscellaneous" sub panels. You will need to create a custom class for your new sub panel, then create an instance of this class and a tab that can open it in the onUICreated function.

First import ui.elements.SubPanel and create a class that extends SubPanel outside of the package{ } area of your mod:

import ui.elements.SubPanel;

class ExampleSubPanel extends SubPanel
{
    override public function update( time:Number ):void
    {
        // Your code here
    }

    override public function activate():void
    {
        // Your code here
    }

    override public function deactivate( refresh:Boolean = false ):void
    {
        // Your code here
    }
}

Within the new sub panel class you will want to override some of the default sub panel functions to make it display what you want when the tab is opened. This includes the update function, which is called every frame with the delta time in milliseconds passed to it. This can be used to update the display if the information shown on the sub panel can change while it is open. The activate and deactivate functions define what happens when the panel is opened or closed, respectively.
Here is an example that will add a simple text field to our new sub panel; it will require you to import flash.text.TextField:

public var textField:TextField;

override public function activate():void
{
    textField = new TextField();
    textField.text = "Hello World";
    textField.x = 10;
    textField.y = 100;
    display.addChild(textField);
}

override public function deactivate( refresh:Boolean = false ):void
{
    textField = null;
    display.removeChildren();
}

First a public variable is created so that it can be accessed by the following functions. The activate function creates a new TextField in that variable, sets its properties (the text to be displayed and the x,y coordinates for where to display it), and then adds that TextField as a child of the display. The deactivate function sets the TextField to null and removes all children that had been added to the display.

Adding a New Tab

See ExampleNewTabMod.as in the Clicker Heroes 2\mods\examples folder for a mod made using the code described here.

You can add a tab to the top of the main UI to open a sub panel within the onUICreated function of your mod's Main class. You will need to import flash.display.MovieClip and ui.CH2UI:

public var newSubPanel:ExampleSubPanel;

public function onUICreated():void
{
    newSubPanel = new ExampleSubPanel();
    newSubPanel.setDisplay(new MovieClip());
    CH2UI.instance.mainUI.mainPanel.registerTab(
        CH2UI.instance.mainUI.mainPanel.tabs.length,
        "Example Tab",
        newSubPanel,
        6,
        function():Boolean { return true; },
        function():Boolean { return false; },
        function() { }
    );
}

The above creates a new instance of our ExampleSubPanel and sets its display to a new movie clip. This is the display that children, like the TextField in the previous examples, are added to or removed from in the sub panel. Then a tab is created to open this new sub panel, with the following parameters:

- CH2UI.instance.mainUI.mainPanel.tabs.length, an int for which number panel this is; by default there are 6 panels, 0 through 5.
This particular line will set this tab to go after the last installed tab so that it won't overwrite another tab with the same number.
- "Example Tab", a string for the hover text that will display in-game.
- newSubPanel, the SubPanel instance that this tab will open; in this case, the one we just created.
- 6, an int that references the icon to use for the tab; values 1 through 6 use the default icons.
- function():Boolean{ return true; }, a function that determines whether the tab is visible; it should return true or false.
- function():Boolean{ return false; }, a function that determines whether the tab should glow; it should return true or false.
- function(){ }, a function that can hold code to run when a player clicks on the tab.
https://www.clickerheroes2.com/user_interface.php
2.6 prepatch is 2.6.17-rc2, announced by Linus on April 18. There are a lot of fixes in this release, but it also contains a simplified form of the scheduler starvation avoidance patch, some tweaks to the memory overcommit algorithm, the removal of the obsolete blkmtd and qlogicfc drivers, the removal of the unmaintained Sangoma WAN drivers, the splice() and tee() system calls, and pollable sysfs attributes. See the long-format changelog for the details.

For the record, it is worth noting that the prototypes for the splice() methods in the file_operations structure have changed again. This week's version:

    ssize_t (*splice_write)(struct pipe_inode_info *pipe, struct file *out,
                            loff_t *offset, size_t len, unsigned int flags);
    ssize_t (*splice_read)(struct file *in, loff_t *offset,
                           struct pipe_inode_info *pipe, size_t len,
                           unsigned int flags);

The offset parameter, describing where in the stream I/O should start, is new.

A few dozen patches (all fixes) have been merged into the mainline after the -rc2 release.

The current -mm tree is 2.6.17-rc1-mm3. Recent changes to -mm include an ACPI dock driver, i2c virtual adapter support, a number of memory management tweaks, a trusted platform module (TPM) driver update, and a new version of the zlib library.

Kernel development news

Quotes of the week

-- Dave Aitel
-- Stephen Smalley

Virtual time

Jeff's patch adds a new "time namespace" structure to the task structure. By default, all processes share the normal host system's idea of time. But a new option (CLONE_TIME) to the unshare() system call allows a process to disconnect from the system time. After such a call, that process - and any children it creates - will be able to keep its own time value. Setting a virtualized time value is, unlike changing the normal system time, an unprivileged operation.
Internally, a virtualized time is stored as a simple offset; whenever a process requests the current time, the offset is added to the current system time and the sum is returned. This approach has the advantages of being simple and fast; a process running with virtualized time also does not give up time adjustments made, for example, by NTP. On the other hand, this implementation does not support the ability to confuse processes by messing deeply with their idea of time - running time at a different rate, for example, or even backward. Chances are that this omission will not upset more than a small percentage of potential users of virtualized time, however.

Jeff's purpose is to speed up the gettimeofday() system call in User-mode Linux instances. If the kernel allows process subtrees to have their own time values, then User-mode Linux can simply use the host's gettimeofday() call, rather than intercepting that call and implementing it itself. Since gettimeofday() is one of the most frequently-used system calls, this optimization can make a significant difference.

One other change is required, however, for User-mode Linux to get the benefit from this change. UML performs much of its process control using ptrace(); in particular, it intercepts and interprets system calls with the PTRACE_SYSCALL operation. What is really needed for a fast gettimeofday() is the ability to not intercept that particular call. So Jeff's patch also extends ptrace() by adding a PTRACE_SYSCALL_MASK operation. This new operation can set a bitmask indicating which system calls should be intercepted, and which should be executed without stopping. The result, with a suitably patched UML, is a gettimeofday() call which runs at about 99% of the native process speed. That may well be good enough to make this patch a piece of the growing set of interfaces supporting virtualization and containers.

write(), thread safety, and POSIX

Andrew Morton quickly pointed out the source of this behavior.
Consider how write() is currently implemented: the system call reads the current file position from the file structure, performs the write at that position, and then stores the updated position back. There is no locking around this sequence, so it is possible for two (or more) threads performing simultaneous writes to obtain the same value for pos. They will each then write their data to the same file position, and the thread which writes last wins.

Putting some sort of lock (using the inode lock, perhaps) around the entire function would solve the problem and make write() calls thread-safe. The cost of this solution would be high, however: an extra layer of locking when almost no application actually needs it. Serializing write() operations in this way would also rule out simultaneous writes to the same file - a capability which can be useful to some applications.

So some developers have questioned whether this behavior should be fixed at all. It is not something which causes problems for over 99.9% of applications, and, for those which need to be able to perform this sort of simultaneous write, there are other options available. These include user-space locking or using the O_APPEND option. So, it is asked, why add unnecessary overhead to the kernel?

Linus responds that it is a "quality of implementation" issue, and that if there is a low-cost way of getting the system to behave the way users would like, it might as well be done. His proposal is to apply a lock to the file position in particular. His patch adds an f_pos_lock mutex to the file structure and uses that lock to serialize uses of and changes to the file position. This change will have the effect of serializing calls to write(), while leaving other forms (asynchronous I/O, pwrite()) unserialized.

The patch has not drawn a lot of comments, and it has not been merged as of this writing. Its ultimate fate will probably depend on whether avoiding races in this obscure case is truly seen to be worth the additional cost imposed on all users.
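The lost-update window described above can be modeled deterministically. This sketch uses illustrative names and no real file I/O; it simply forces the interleaving where both writers read the shared position before either stores it back, which is exactly the window a per-file lock such as the proposed f_pos_lock would close.

```c
#include <assert.h>
#include <string.h>

struct file_model {
    long f_pos;        /* shared file position */
    char data[64];     /* stand-in for the file contents */
};

/* one write(): copy bytes at the observed position, return the new position */
long model_write(struct file_model *f, long pos_seen, const char *buf, long len) {
    memcpy(f->data + pos_seen, buf, (size_t)len);
    return pos_seen + len;
}

/* the bad interleaving: A and B both read f_pos before either writes back */
void run_race(struct file_model *f) {
    long pos_a = f->f_pos;                          /* thread A reads the position */
    long pos_b = f->f_pos;                          /* thread B reads the same value */
    f->f_pos = model_write(f, pos_a, "AAAA", 4);
    f->f_pos = model_write(f, pos_b, "BBBB", 4);    /* lands on top of A's bytes */
}
```

After run_race the position has advanced by only one write's worth and the first writer's bytes are gone; holding a position lock across the read-write-store sequence would have serialized the two writes instead.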
The future of the Linux Security Module API

Last year, some developers were heard to mumble that perhaps LSM should be removed from the kernel. Since LSM was merged, only one serious security mechanism using it has emerged: SELinux. Since there is only one LSM user, and since SELinux can be thought of as a fairly generic security framework in its own right, it is not clear that there is a need for the LSM interface. The discussion died down last year, however, and there has been little talk of yanking out LSM.

Until now. In response to a current discussion on LSM hooks, James Morris has posted a patch adding LSM to the "feature removal" schedule. The end of LSM is not a distant event either: the proposed date is this coming June - the 2.6.18 kernel, in other words. If this patch goes through, LSM will be gone in the very near future.

The early indications suggested that it could go through: several kernel developers have argued in favor of the removal of LSM, while none asked for it to be retained. The only disagreement - mild - was over the removal date, with some arguing that 2.6.18 is too soon. Those in favor of an early removal, however, claim that last year's discussion should count as the usual one-year warning for this sort of change, and that there is no need to wait any longer.

One might well wonder what the hurry is to remove this API from the kernel. There is, in fact, more than just the "only one user" argument in circulation. James's patch includes this text:

So LSM becomes a general temptation to solve problems in the wrong way. Beyond the security levels module (which, among other things, is seen as having open vulnerabilities and no maintainer interest), the developers may be thinking of past episodes like the debate over the realtime security module or the Integrity Measurement Architecture, neither of which is best implemented as a security module.
The real issue, however, may be this one: The 2.6 kernel - intentionally - does not give loadable modules access to the system call table. But the LSM interface is almost as good - it gives a loadable module the opportunity to intercept almost any operation that the kernel may attempt to perform. The LSM hooks are supposed to limit themselves to internal record keeping and returning an allow/deny status to the kernel - but there is no way to enforce that sort of restriction. The GPL-only status of the LSM API does not help much either.

The people involved are wary of publicly pointing fingers at companies suspected of misusing the LSM interface. One example which can be found, however, is the kernel generalized event management module which was posted to the kernel-mentors list last year. When KGEM was loaded, it would shove aside any currently-loaded security policy and install itself in its place. It would then feed security-related events through to a (proprietary) user-space application, which would make decisions aimed at protecting Linux users from the pressing threat of virus attacks. There were a lot of issues over how this module was implemented, but using LSM to override existing security policies and provide hooks for proprietary code was considered especially distasteful.

These reasons and strong developer pressure notwithstanding, it is not clear that LSM will actually go away anytime soon. There is not yet a consensus that SELinux should be seen as the One True Security Policy; many potential users find its complexity hard to deal with and often simply turn it off. The power of SELinux is unquestioned, but its usability is another story. There are other users of the LSM API out there, they just have not been submitted for inclusion into the mainline. These include:

Some of the early discussion, however, suggests that AppArmor could have a hard path into the mainline.
In particular, its use of file pathnames as the core of its security policy has been strongly questioned. In a system capable of hard and soft links, multiple namespaces, shared subtrees, and more, the meaning of any specific pathname is far from clear. That is why SELinux uses extended attributes to apply labels directly to files, rather than relying on their pathnames.

Given that security is something other than a completely solved problem, it would be surprising if there were any single approach which was suitable for all users. So something may well emerge and qualify as the second user which keeps the LSM API in place. Or, at least, which keeps some sort of API in place. If LSM stays around, the kernel developers will probably make changes which make the API harder to abuse. These might include finding ways to restrict what LSM hooks can do and providing compile-time options to wire in a single security policy at kernel build time. So, while there is a reasonable chance that future kernels will include an LSM interface, it might be a rather different interface than the one there today. Any security module developers who want to have a say in how the interface evolves would be well advised to join the discussion soon.

Page editor: Jonathan Corbet
http://lwn.net/Articles/179829/
Code:

import ctypes
ctypes.windll.kernel32.SetConsoleTitleA("Pokemon Damage Calculator!")

print ("Welcome, here we will calculate battle damage caused by pokemon!")
level = int(raw_input("What is the level of your pokemon?"))
attack = int(raw_input("What is the attack stat of your pokemon?"))
power = int(raw_input("What is the power of your pokemon's attack?"))
defense = int(raw_input("What is the defense stat of your opponent's pokemon?"))
stab = float(raw_input("Does STAB apply? If yes make the variable 1.5, if no make the variable 1."))
resist = float(raw_input("Does the opposing pokemon have a weakness or resistance? If no then answer with 1; if yes: Resistance x4 = 0.25, x2 = 0.5. Weakness x2 = 2, x4 = 4."))
damage = ((((2 * level / 5 + 2) * attack * power / defense) / 50) + 2) * stab * resist * 100 / 100
critical = int(raw_input("Was the attack a critical? If no then answer with 1, if yes then 2."))
total = damage * critical
print "The damage the opponent's pokemon will receive is %s" % (total)
print ("Thanks for using this program!")
wait = raw_input("PRESS ENTER TO CONTINUE.")  # raw_input, not input: in Python 2, input() evaluates the line and fails on a plain Enter
http://www.python-forum.org/viewtopic.php?f=12&t=11172
Now that we have covered a lot of the introductory material for RabbitMQ, this part of the series will look at developing software to interact with the message broker as both a producer and a consumer. First we will take a look at the RabbitMQ client library. Then we will introduce the business scenario used for the sample applications. Before we start looking at the individual examples we will take a quick look at the common code shared between them. Then we will move on to the actual code examples themselves. The code for this series can be found here. These examples will include:

- Basic queues
- Worker queues
- Publishers and subscribers
- Direct routing of queues
- Topic-based publishers and subscribers
- Remote procedure calls

RabbitMQ client library

To develop software against RabbitMQ you will need to install the RabbitMQ client library for .NET. Before we look at how to install the client library, let's take a brief look at what it is. This series will not serve as an in-depth guide to the whole client library API; you can read a more in-depth document that explains the full library on the RabbitMQ site. This section will serve as an introduction to the library, and the examples in the rest of this series will help you cement your understanding further.

What is contained in the Client Library?

The RabbitMQ .NET client is an implementation of an AMQP client library for C# and other .NET languages. The client library implements the AMQP specifications 0-8 and 0-9. The API is closely modeled on the AMQP protocol specification with little additional abstraction, so if you have a good understanding of the AMQP protocol you will find the client library easy to follow. The core API interfaces and classes are defined in the RabbitMQ.Client namespace. The main API interfaces and classes are:

- IModel: represents an AMQP data channel and provides most of the AMQP operations.
- IConnection: represents an AMQP connection.
- ConnectionFactory: constructs IConnection instances.

Some other useful interfaces and classes include:

- ConnectionParameters: configures a ConnectionFactory.
- QueueingBasicConsumer: receives messages delivered from the server.
https://stephenhaunts.com/tag/rabbitmq/
On Thu, Feb 3, 2011 at 10:55 AM, Bjoern A. Zeeb <bzeeb-li...@lists.zabbadoz.net> wrote: > On Thu, 3 Feb 2011, Monthadar Al Jaberi wrote: > >> On Wed, Feb 2, 2011 at 8:06 PM, Julian Elischer <jul...@freebsd.org> >> wrote: >>> >>> On 2/2/11 10:05 AM, Monthadar Al Jaberi wrote: >>>> >>>> I just tried something that seems to work, but please dont hit me ^^;;; >>>> >>>> in wtap_ioctl I assigned curthread->td_vnet myself to point to a VNET >>>> (saved it when the module first loaded) (I have not created any jails >>>> yet)... and it works... I didnt put any CURVNET macros... >>> >>> td->td_vnet is exactly what the CURVNET_SET macro sets. >>> You should use the Macros because we may change the place where we store >>> it. >>> >>> The vnet for the current thread is picked up from several places >>> depending >>> on the context, >>> and it is cleared again when it is not needed. the V_xxx usages in the >>> code >>> end up being >>> in effect expanded to curthread->td_vnet.xxx, where each 'xxx' is sort of >>> like an element in a structure >>> but not quite. >>> >>> Now, theoretically we could just leave it set all the time but then it >>> would >>> be nearly impossible >>> to find places where we should have changed it, but forgot and just got >>> the >>> existing one. >>> >>> if you want to find the correct place to go, then look at the vnet of the >>> calling process >>> which should be in the process cred. or just use vnet0. >> >> Can I check it from user space? >> >>> >>> I don't understand why you saw a CRED_TO_VNET of 0 >>> I was under the impression that every process/thread in the system would >>> be >>> on vnet0 >>> in a vimage kernel. 
>> This is how my printf looks like:
>> struct thread *td = curthread;
>> struct vnet *v = TD_TO_VNET(td);
>> struct ucred *cred = CRED_TO_VNET(td->ucred);
>> struct vnet *td_vnet = td->td_vnet;
>
> here's your problem:
>
> struct vnet *vnet = cred->cr_prison->pr_vnet;

When I add CURVNET_SET(CRED_TO_VNET(curthread->td_ucred)); I get a panic too... But your suggestion works if I do like this:

curthread->td_vnet = curthread->td_ucred->cr_prison->pr_vnet;

CRED_TO_VNET(curthread->td_ucred) returns NULL

> >
>>>> >>>> On Wed, Feb 2, 2011 at 6:30 PM, Julian Elischer<jul...@freebsd.org> >>>> wrote: >>>>> >>>>> >>>>>> >>>>>> Try >>>>>> depends >>>>>> onnet >>>>> >>>>>> It's the type A) kind of change from above that will break eventually >>>>>> in the future. >>>>>> >>>>>> /bz >>>>>> >>>>> >>>> >>>> >>> >>> >> >> >> >> > > -- > Bjoern A. Zeeb You have to have visions! > <ks> Going to jail sucks -- <bz> All my daemons like it! > -- //Monthadar Al Jaberi _______________________________________________ freebsd-virtualization@freebsd.org mailing list To unsubscribe, send any mail to "freebsd-virtualization-unsubscr...@freebsd.org"
https://www.mail-archive.com/freebsd-virtualization@freebsd.org/msg00431.html
Josh is one of our EMEA evangelists who spends the majority of his time connecting with our community online and in person.

Relevant posts: Unity developers shine at Ludum Dare 30

Oscar, January 29, 2016, 3:28 PM
Will there be a code for a 30 day trial of Unity Pro like in previous GGJs???

TNGOG, January 19, 2016, 7:24 PM
I have an off-topic question, please. Unity3d is still being used for a lot of games. I'm using Firefox 46 and it's not working anymore. Are there any plans to fix this in new versions of Firefox 46? Or should I just download an old version? Is Unity planning something so we don't have to install Unity to play those games, or is everyone just moving to mobile?

Dwill, January 20, 2016, 4:29 AM
Browser plug-ins are going the way of the Dodo. You should instead export as WebGL. :)

Ippokratis Bournellis, January 19, 2016, 3:08 PM
To get rid of the warnings in Unity 5.3 when importing the Game Jam Menu Template:
1. In EventSystemChecker.cs, erase line 20.
2. In StartOptions.cs, insert the following line at the top:
   using UnityEngine.SceneManagement;
   and replace line 77 with:
   SceneManager.LoadScene(sceneToStart);

Matt Schell, January 28, 2016, 4:25 PM
Thanks Ippokratis. We're going to see if we can push a quick update to the Asset Store package to address those warnings. It's worth noting that the package does still work, even with the warnings; they are just letting you know that in 5.3 there's a new API for doing scene changes.

Rafaelp, January 19, 2016, 3:05 PM
I know this will just be deleted like the others, but it would be awesome if you could say anything about the plan for Visual Scripting in Unity. I know it's in the research stage, but it would be cool to know something about it. Any piece of information would be cool ;D

vulgerstal, January 20, 2016, 8:09 PM
Yes, this feature is a very requested one. And finally it's under review: A free temporary solution for you:

Ashkan, January 19, 2016, 9:57 AM
Guys, this post is motivational. Yes, it's assumed that you have heard about it and just need to be motivated.
The website can be found on Google easily, the date is the last Saturday-Sunday of the month, and you should register online and go to one of the sites available in your city, or create a site. It's awesome and you're gonna love it a hell of a lot. It's pure awesomeness!!!

Jared Rowe, January 19, 2016, 3:33 AM
Is it too late to join the game jam, or do people just kind of show up? I haven't done a game jam before, so I'm not quite sure how they work, but I don't see any sort of registration to participate in it on the site. I'll be doing the one in Seattle if that helps (although I haven't found the address of where it's being hosted either).

Matt Kalafut, January 18, 2016, 8:39 PM
My team and I are located in the Chicago suburbs. Would we be able to partake in this?

Schubkraft, January 18, 2016, 5:35 PM
January 29th-31st

vulgerstal, January 18, 2016, 8:16 PM
The moderator should put this link into the first paragraph of the post so people can go straight to the site.

Ray Nothnagel, January 18, 2016, 5:06
I guess the assumption is that we all know all these things about GGJ and have actively decided not to participate?

Poenga, January 18, 2016, 11:57 PM
If you're not able to copy and paste "Global Game Jam" into Google, please do us a favor and don't come.

Maurizio Margiotta, January 19, 2016, 8:31 AM
Looking at how kind and nice the people going to join are, it is better not to come. If you write an article on any event, these 3 points are a must. The same goes for creating games: you can't tell a player that if something is not explained in the tutorial they can google it; otherwise it is better not to play the game!

Kristyna Paskova, January 19, 2016, 10:30 AM
Hi Ray! Fair point, seems like the link disappeared somewhere in editing! Added it again now.

Ray Nothnagel, January 18, 2016, 5:05
https://blogs.unity3d.com/kr/2016/01/18/why-you-should-take-part-in-ggj-2/
Recursion in Ruby
The process in which a function calls itself directly or indirectly is called recursion, and the corresponding function is called a recursive function. Recursion makes…

- Ruby hook methods are called in reaction to something you do. They are usually used to extend the working of methods at run time. These…
- A directory is a location where files can be stored. For Ruby, the Dir class and the FileUtils module manage directories and the File class…
- There are four different types of variables in Ruby: local variables, instance variables, class variables and global variables. An instance variable in Ruby has a name…
- In programming, static keywords are primarily used for memory management. The static keyword is used to share the same method or variable of a class…
- The idea of representing significant details and hiding details of functionality is called data abstraction. The interface and the implementation are isolated by this programming…
- It is a way of processing a file such as creating a new file, reading content in a file, writing content to a file, appending…
- Ruby is an interpreted, high-level, general-purpose programming language. Ruby is dynamically typed and uses garbage collection. It supports multiple programming paradigms, including object-oriented, procedural and…
- Include is used to import module code. Ruby will throw an error when we try to access the methods of an imported module with the class…
- In Ruby, one does not have variable types as there are in other programming languages. Every variable is an "object" which can…
- Before studying Ruby Mixins, we should have knowledge of Object Oriented Concepts.
If we don’t, go through Object Oriented Concepts in Ruby .… Read More » Class Methods are the methods that are defined inside the class, public class methods can be accessed with the help of objects. The method is… Read More » Learning a first programming language is always special for everyone. We get attached to it and it sticks with us forever. You might have 10… Read More » In this article, we will learn how to initialize the array in Ruby. There are several ways to create an array. Let’s see each of… Read More » In this article, we will learn how to find minimum array element in Ruby. There are multiple ways to find minimum array element in Ruby.… Read More »
https://www.geeksforgeeks.org/category/programming-language/ruby/
NEW QUESTION: How can I get it to loop through an array until it reaches the value of -1? Can I do while(i != -1)? I think a for statement afterwards, but what do I put in the middle? for(j = 0; ?; j++)?

DISREGARD: I have to get the number of elements in an array using a function. This is what I have, and I have tried many variations and changed things around, but I cannot get it to return the correct number. Point me in the right direction please? I only attached the parts that are relevant to my question.

Code:

printf("Number of elements in Set 1 is %d", setCardinality(a)); /* pass array a to function setCardinality */

int setCardinality(int a[]) {
    int numberElements;
    numberElements = sizeof(a) / sizeof(int);
    return (numberElements);
}
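For what it's worth, here is a hedged sketch of both issues in this thread. It keeps the post's function name; the assumption (mine, not the poster's) is that the array ends with a -1 sentinel, as the "NEW QUESTION" suggests.

```c
#include <assert.h>

/* Inside a function, an array parameter decays to a pointer, so here
 * sizeof(a) is sizeof(int *), NOT the size of the caller's array. */
int brokenCardinality(int a[]) {
    return (int)(sizeof(a) / sizeof(int));   /* pointer size, not element count */
}

/* Works when the array is terminated with -1: this is essentially the
 * while (a[i] != -1) loop asked about above. */
int setCardinality(const int a[]) {
    int i = 0;
    while (a[i] != -1)
        i++;
    return i;
}

/* The sizeof trick is valid only where the array itself is in scope,
 * e.g. in the function that declares it: */
#define ELEMENT_COUNT(arr) ((int)(sizeof(arr) / sizeof((arr)[0])))
```

With int set1[] = { 5, 9, 2, 7, -1 }; declared in the caller, setCardinality(set1) counts 4 elements before the sentinel, ELEMENT_COUNT(set1) (used in the declaring scope) gives 5 including the sentinel, while brokenCardinality(set1) gives sizeof(int *)/sizeof(int), whatever that happens to be on the platform.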
https://cboard.cprogramming.com/c-programming/76693-sizeof-help.html
An experimental web frontend framework named after a lighthouse. It may be easiest to describe it in contrast to React.

- Uses JSX syntax just like React
- Function components only
- Component function is treated like a constructor, i.e. called just once per component lifecycle
- Dynamic content passed to components as observable properties
- Directly embed Observables in JSX, resulting in "surgical" DOM updates. No VDOM diffing needed.
- Written in TypeScript. Type safety considered a high priority.
- Supports Lonna, Bacon.js and RxJs for observables at the moment. You can select the desired library by imports (see below).
- Strongly inspired by Calmm.js. If you're familiar with Calmm, you can think of Harmaja as "Calmm, but with types and no React dependency".

Published on NPM:

The documentation here is lacking, and it will help if you're already familiar with Redux, Calmm.js and Bacon.js (or another reactive library such as RxJs). This document contains a lot of discussion on state management concepts such as unidirectional data flow, as well as existing implementations that I'm aware of. I present my views on these topics openly, with the goal of painting the whole picture of how I see application state management. So don't expect this to be a focused API document, but more like a research project. I'm very open to discussion and criticism, so correct me if I'm wrong. On the other hand, I hope you understand that many topics here are subjective and I'm presenting my own views of the day.

Thanks to Reaktor and my lovely co-Reaktorians for the support in the development of this library!

Key concepts

Reactive Property (also known as a signal or a behaviour) is an object that encapsulates a changing value. Please check out the Bacon.js intro if you're not familiar with the concept. In Harmaja, reactive properties are the main way of storing and passing application state.
An EventStream represents a stream of events that you can observe by calling its forEach method. A Bus is an EventStream that also allows you to push new events into it. In Harmaja, buses are used for conveying distinct events from the UI to state reducers: typically, an onClick or similar handler function pushes a new value into a Bus.

An Atom is a Property that also allows mutation using its set method. You can create an atom simply with atom("hello") and then use get and set for viewing and mutating its value. This may sound underwhelming, but an Atom is also a reactive property, meaning that its state can be observed and reacted to. In Harmaja particularly, you can embed atoms into your VDOM so that your DOM automatically reflects the changes in their value! Furthermore, you can use view(atom, "attributename") to get a new Atom that represents the state of the given attribute within the data structure wrapped by the original Atom. Currently Harmaja comes with its own Atom implementation.

State decomposition means selecting a part or a slice of a bigger state object. This may be familiar to you from Redux, where you mapStateToProps or useSelector to make your component react to changes in some parts of the application state. In Harmaja, you use reactive properties or Atoms for representing state and then select parts of it for your component using map or view, the latter providing a read-write interface.

State composition is the opposite of the above (but co-operates nicely with it) and means that you can compose state from different sources. This is also a familiar concept from Redux, if you have ever composed reducers. For example, you can use combine to compose two state atoms into a composite state Property.
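To make the Atom and view concepts concrete, here is a minimal, dependency-free sketch in plain TypeScript. This is illustrative only, not Harmaja's actual implementation; the MiniAtom name is hypothetical.

```typescript
// Illustrative only: a minimal Atom, not Harmaja's actual implementation.
type Listener<A> = (value: A) => void

class MiniAtom<A> {
    private listeners: Listener<A>[] = []
    constructor(private value: A) {}
    get(): A {
        return this.value
    }
    set(newValue: A): void {
        this.value = newValue
        this.listeners.forEach((l) => l(newValue))
    }
    // forEach calls the observer with the current value and on every change,
    // mirroring how a reactive Property behaves.
    forEach(listener: Listener<A>): void {
        this.listeners.push(listener)
        listener(this.value)
    }
    // view("key") returns a derived MiniAtom for one field, kept in sync both ways.
    view<K extends keyof A>(key: K): MiniAtom<A[K]> {
        const child = new MiniAtom(this.value[key])
        this.forEach((v) => {
            if (v[key] !== child.get()) child.set(v[key])
        })
        child.forEach((v) => {
            if (this.value[key] !== v) {
                this.set(Object.assign({}, this.value, { [key]: v }) as A)
            }
        })
        return child
    }
}

const item = new MiniAtom({ name: "do stuff", completed: false })
const name = item.view("name")
const seen: string[] = []
name.forEach((n) => seen.push(n))
name.set("do more stuff")
// seen is now ["do stuff", "do more stuff"], and item.get().name reflects the update
```

The point of the sketch: setting the "name" view updates the parent atom immutably, and subscribers of either end see the change, which is exactly the decomposition pattern described above.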
You can very well combine the above concepts: start with several state Atoms and EventStreams, then compose them all into a single "megablob" Property, and finally decompose from there to deliver the essential parts to each of your components.

## Usage

Install from NPM: `npm install harmaja` or `yarn add harmaja`.

Tweak your tsconfig.json for the custom JSX factory:

```json
{
    "compilerOptions": {
        "jsx": "react-jsx",
        "jsxImportSource": "harmaja/lonna"
    }
}
```

Use `react-jsxdev` for dev-mode builds, and `harmaja/bacon` or `harmaja/rxjs` as the import source if you are using Bacon.js or RxJs respectively instead of Lonna.

Then you can start using JSX, creating your application components and mounting them to the DOM:

```typescript
const App = () => <h1>yes</h1>
mount(<App />, document.getElementById("root")!)
```

If your build tool doesn't support the new react-jsx transform for JSX, you can use the old method:

```json
{
    "compilerOptions": {
        "jsx": "react",
        "jsxFactory": "h"
    }
}
```

When using the old transform, you'll need to import the h function from Harmaja in any files that use JSX, so that the TypeScript compiler can use it for creating DOM nodes:

```typescript
import { h } from "harmaja"
```

## Observable Library Selection

You can select the desired Observable library with imports. Currently Lonna, Bacon.js and RxJs are supported.

### Lonna Observables

To use the default Lonna Observables, install lonna from NPM and then:

```typescript
import { h } from "harmaja"
import * as L from "lonna"
```

Lonna includes Atoms and Lenses in addition to Properties, EventStreams and Buses, so you should use `import { atom, Atom } from "lonna"`.

### Bacon.js

To use Bacon.js Observables, install baconjs from NPM and then:

```typescript
import { h } from "harmaja/bacon"
import * as L from "baconjs"
```

Bacon.js doesn't include Atoms and Lenses, but Harmaja does, so you should use `import { atom, Atom } from "harmaja/bacon"`.
Note that you'll need to use the same variant in all of your Harmaja imports within your application. Mixing and matching two implementations across your application is a very bad idea.

### RxJs

To use RxJs Observables, install rxjs from NPM and then:

```typescript
import { h } from "harmaja/rxjs"
import * as Rx from "rxjs"
```

RxJs doesn't include Atoms and Lenses, but Harmaja does, so you should use `import { atom, Atom } from "harmaja/rxjs"`. Again, use the same variant in all of your Harmaja imports: mixing two implementations across your application is a very bad idea.

## API

Here's a brief API description. Read the chapters below for examples and "philosophy".

### Mounting, unmounting and lifecycle events

```typescript
import {
    mount,
    mountEvent,
    onMount,
    onUnmount,
    unmount,
    unmountEvent,
} from "harmaja"
```

### Refs

With a ref, you can get access to the actual DOM element created by Harmaja when the element is mounted to the DOM. This is similar to the ref concept in React. There are two styles of refs available: atom refs and function refs.

To use an atom ref, pass an atom to the ref prop of the element. The type of the atom must be a union of null and the type of the DOM element matching the Harmaja element. You can use the helper type RefType<ElementName>, exported from Harmaja, to automatically determine the correct type for a given element. When the Harmaja element is mounted in the DOM, the atom's value is set to the DOM element. Note that the atom's value will be set to null when the Harmaja element it is passed to is constructed, as well as when it is unmounted. Setting the atom will not have any effect on the DOM.

```typescript
const atom = L.atom<RefType<"span">>(null)
atom.forEach((el) => alert("Mounted " + el))

<span id="x" ref={atom}>
    Hello
</span>
```

To use function refs, pass in a function that takes a single parameter.
You can use the DomElementType<ElementName> type to get the correct type for the function parameter. When the Harmaja element is mounted to the DOM, this function will be called with the DOM element as its first parameter:

```typescript
<span id="x" ref={(el: DomElementType<"span">) => alert("Mounted " + el)}>
    Hello
</span>
```

### Fragments

Harmaja supports JSX Fragments. This feature requires TypeScript 4 or higher. In your tsconfig.json:

```json
{
    "compilerOptions": {
        "jsx": "react",
        "jsxFactory": "h",
        "jsxFragmentFactory": "Fragment"
    }
}
```

Then in your component code:

```typescript
import { h, Fragment } from "harmaja"

const App = () => (
    <h1>
        <>
            <span>hello</span>
            <span>world</span>
        </>
    </h1>
)
mount(<App />, document.getElementById("root")!)
```

There are larger examples here.

### Context

Harmaja has limited support for Context, which allows you to bind values in parent components and use them in children without passing references all the way down the stack. In some cases this is preferable for convenience, and better than using values as global variables. First, you'll need to introduce a "context key" in a globally shared const, like here:

```typescript
import * as H from "harmaja"

const MEANING_OF_LIFE = H.createContext<number>("MEANING_OF_LIFE")
```

This key can then be used for binding a value like this:

```typescript
const ComponentWithStaticContextUsage = () => {
    H.setContext(MEANING_OF_LIFE, 42)
    return (
        <div id="parent">
            <ContextUser label="meaning" />
        </div>
    )
}
```

Now the context value MEANING_OF_LIFE will be bound to the value 42 in the child components, namely ContextUser. You can then use the context value thus:
```typescript
const ContextUser = ({ label }: { label: string }) => {
    const contextValue = O.atom<number>(0)
    H.onContext(MEANING_OF_LIFE, contextValue.set)
    return (
        <label>
            {label}: {contextValue}
        </label>
    )
}
```

Restrictions of the Harmaja Context API:

- The Context API only gives you this kind of asynchronous access to values. (I'd speculate that it'll be impossible to get synchronous access without some major re-design.)
- You can only bind a value once, so the API cannot be used as a way to propagate changes to children. If you want to propagate changes, you can of course pass something like a Property as the context value.
- If you try to use an unbound context value with onContext, you'll get an exception.
- The API only works for components that emit a single JSX element as their root. If you emit a dynamic value or a Fragment, you'll get an exception. This is a limitation of the current implementation and may change later.

### ListView

```typescript
import { ListView } from "harmaja"
```

ListView implements an efficient view into read-only and read-write list data. It supports three different variants.

#### Read-only view to a Property

If you have:

```typescript
const items: Bacon.Property<A[]>
const renderObservable: (item: Bacon.Property<A>) => ChildNode
const getKey: (item: A) => string
```

Then you can render the items using ListView thus:

```typescript
<ListView observable={items} renderObservable={renderObservable} getKey={getKey} />
```

What ListView does here is observe items for changes and render each item using the renderer function. When the list of items changes (something is replaced, added or removed), it uses the given getKey function to determine whether to replace individual item views. Each item view gets a Property<A> so that it can update when the content of that particular item changes. See an example at examples/todoapp.

#### Read-write view to an Atom

ListView also supports read-write access using Atom.
So if you have:

```typescript
const items: Atom<A[]>
const renderAtom: (item: Atom<A>, remove: () => void) => ChildNode
const keyFunction: (item: A) => string
```

You can get read-write access to the items by using ListView thus:

```typescript
<ListView atom={items} renderAtom={renderAtom} getKey={keyFunction} />
```

As you can see, ListView provides a remove callback for Atom-based views, so that in your item view you can implement item removal by calling this function.

#### Static view

There's a third variation of ListView still, for read-only views:

```typescript
<ListView
    observable={items}
    renderItem={(item: TodoItem) => (
        <li>
            <Item item={item} />
        </li>
    )}
/>
```

In this variant, everything is replaced on any change to the list. Use it only for read-only views into small amounts of data.

## The rough edges

I'm not entirely happy with the ergonomics of Harmaja + Lonna yet. Here are some of the rough edges:

- Dealing with polymorphism. See this example, line 51. The explicit cast is nasty.
- Lonna type inference, or the lack thereof. Lonna uses overload signatures, so TypeScript type inference cannot keep up when using, for instance, map/filter.

## Pitfalls, be aware!

### Unwanted reloads

My component reloads all the time! Make sure you've eliminated duplicates in the Property that you use for switching components:

```typescript
<div>
    {L.view(someProperty, (thing) =>
        thing.state === "success" ? <HugeComponent /> : <ErrorComponent />
    )}
</div>
```

In the above, the nested components will be re-constructed each time someProperty gets a value. To eliminate duplicate values, do your mapping in two steps, first extracting the discriminator value and then constructing components only when the discriminator changes:

```typescript
<div>
    {L.view(
        someProperty,
        (t) => t.state === "success",
        (success) => (success ? <HugeComponent /> : <ErrorComponent />)
    )}
</div>
```

### Dangling subscriptions

When embedding observables into the DOM, Harmaja automatically subscribes and unsubscribes to the source observable. So this is ok:

```typescript
const scrollPos = L.toStatelessProperty(L.fromEvent(window, "scroll"), () =>
    Math.floor(window.scrollY)
)

const ScrollPosDisplay = () => {
    return (
        <div
            style={{
                position: "fixed",
                right: "20px",
                background: "black",
                color: "white",
                padding: "10px",
            }}
        >
            {
                scrollPos /* This is ok! Harmaja will unsubscribe if the component is unmounted */
            }
        </div>
    )
}
```

When this component is unmounted, it will stop listening to updates in the global scrollPos property. But you are in trouble if you want to add a side-effect to scrollPos, like:

```typescript
const ScrollPosDisplay = () => {
    scrollPos.forEach((pos) => console.log(pos))
    // ...
}
```

Now this side-effect will continue executing after your component is unmounted. To fix this, you can scope it to the component lifecycle like this:

```typescript
import { componentScope } from "harmaja"

const ScrollPosDisplay = () => {
    scrollPos
        .pipe(L.applyScope(componentScope()))
        .forEach((pos) => console.log(pos))
    // ...
}
```

And you're good to go! See the full example at examples/side-effects.

### The pitfall with componentScope()

When you create stateful Properties or Atoms (i.e. ones that are based on Properties but add some local state, such as filter), you need to specify a Scope that defines the lifetime of the Property/Atom. Harmaja's componentScope is very suitable, as it activates the Property on component mount and deactivates it on unmount. The gotcha is that while the component constructor is running, the stateful Property is not in scope yet (the component is not mounted, and we should not activate before mount, or we get a resource leak). So you can subscribe to stateful Properties in the constructor, but you cannot get their value yet.
If you do, the get() call will throw an error saying "not in scope yet".

## Examples

Part of my process has been validating my work with some examples I've previously used for comparing different React state management solutions. Here I quickly list some examples, but I beg you to read the full story below, which visits each of these examples in a problem context instead of just throwing a bucket of code in your face.

- Todo App with unidirectional data flow: examples/todoapp. I've added some annotations. In this example, application state is reduced from different events (add/remove/complete todo item).
- Todo App with Atoms: examples/todoapp-atoms. It's rather less verbose, because with Atoms you can decompose and manipulate substate directly using atom.set instead of using events and reducers.
- Finally, a bit more involved example featuring a "CRM": examples/consultants. It features some harder problems like dealing with asynchronous (and randomly failing!) server calls, as well as edit/save/cancel.

These examples are also covered in the chapters below, with some context.

## Unidirectional data flow

Unidirectional data flow, popularized by Redux, is a leading state management pattern in web frontends today. In short, it means that you have a (usually essentially) global data store or stores that represent pretty much the entire application state. Changes to this state are not effected directly by UI components but instead by dispatching events or actions, which are then processed by reducers and applied to the global state. The state is treated as an immutable object, and every time a reducer applies a new change to state, it effectively creates an entire new state object.
In TypeScript, you could represent these concepts in the context of a Todo App like this:

```typescript
type Item = {}
type Event = { type: "add"; item: Item } | { type: "remove"; item: Item }
type State = { items: Item[] }
type Reducer = (currentState: State, event: Event) => State

interface Store {
    dispatch(event: Event)
    subscribe(observer: (event: Event) => void)
}
```

In this scenario, UI components subscribe to the Store and dispatch events to effect state changes. The Store applies its Reducer to incoming events and then notifies the observer components of the updated state.

The benefits are (to many, nowadays) obvious. These come from the top of my mind:

- Reasoning about state changes is straightforward, as only reducers change state. You can statically backtrack all possible causes of a change to a particular part of application state.
- The immutable global state object makes persisting and restoring application state easier, and makes it possible to create an audit trail of all events and state history. It also makes it easier to pass the application state for browser-side hydration after a server-side render.
- Generally, reasoning about application logic is easier if there is a pattern, instead of a patchwork of ad hoc solutions.

Implementations such as Redux allow components to react to a selected part of the global state (instead of all changes) to avoid expensive updates. With React hooks, you can conveniently just useSelector(state => pick interesting parts) and you're done.

It's not a silver bullet though. Especially when using a single global store with React / Redux:

- There is no solution for local or scoped state. Sometimes you need scoped state that applies, for instance, to the checkout process of your web store. Or to widely used components such as an address selector. Or for storing pending changes to, say, user preferences before applying them to the global state.
- This leads to either using React local state or some "corner" of the global state for these transient pieces of state.
- Refactoring state from local to global is tedious and error-prone, because you use an entirely different mechanism for each.
- You cannot encapsulate functionalities (store checkout) into self-sustaining components, because they are dependent on reducers which live somewhere else completely.

Other interesting examples of unidirectional data flow include Elm and Cycle.js.

## Unidirectional data flow with Harmaja

In Harmaja, you can implement unidirectional data flow too. Sticking with the Todo App example, you define your events as buses:

```typescript
import * as L from "lonna"

type AppEvent = { action: "add"; name: string } | { action: "remove"; id: Id }
const appEvents = L.bus<AppEvent>()
```

Bus objects allow you to dispatch an event by calling their push method. From the events, the application state can be reduced using L.scan thus:

```typescript
const initialItems: TodoItem[] = []

function reducer(items: TodoItem[], event: AppEvent): TodoItem[] {
    switch (event.action) {
        case "add":
            return items.concat(todoItem(event.name))
        case "remove":
            return items.filter((i) => i.id !== event.id)
    }
}

const allItems = appEvents.pipe(L.scan(initialItems, reducer, L.globalScope))
```

The L.globalScope parameter specifies the lifetime of the allItems property, i.e. how long it will be kept up to date. When using globalScope, the property updates never stop. When creating stateful Properties within Harmaja components, you can instead use componentScope() (from `import { componentScope } from "harmaja"`) to stop updates after the component has been unmounted.

You can, if you like, then encapsulate all this into something like:

```typescript
interface TodoStore {
    dispatch: (action: AppEvent) => void
    items: L.Property<TodoItem[]>
}
```

...so you have an encapsulation of this piece of application state, and you can pass this store to your UI components.
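As a concrete sketch of such a reducer-based store, here is a plain TypeScript version with no Harmaja/Lonna dependency. It is illustrative only; createStore is a hypothetical helper, not part of Harmaja's API.

```typescript
// Illustrative sketch of the reducer-based store described above.
// Plain TypeScript, no framework dependency; createStore is hypothetical.
type Id = number
type TodoItem = { id: Id; name: string }
type AppEvent = { action: "add"; name: string } | { action: "remove"; id: Id }

let nextId = 0
const todoItem = (name: string): TodoItem => ({ id: nextId++, name })

function reducer(items: TodoItem[], event: AppEvent): TodoItem[] {
    switch (event.action) {
        case "add":
            return items.concat(todoItem(event.name))
        case "remove":
            return items.filter((i) => i.id !== event.id)
    }
}

// A store pairs a dispatch function with an observable current state,
// mirroring the TodoStore interface in the text.
function createStore(initial: TodoItem[]) {
    let state = initial
    const observers: Array<(items: TodoItem[]) => void> = []
    return {
        dispatch(event: AppEvent) {
            state = reducer(state, event)
            observers.forEach((o) => o(state))
        },
        get: () => state,
        subscribe: (o: (items: TodoItem[]) => void) => observers.push(o),
    }
}

const store = createStore([])
store.dispatch({ action: "add", name: "write docs" })
store.dispatch({ action: "add", name: "review PR" })
store.dispatch({ action: "remove", id: 0 })
// store.get() now contains just the "review PR" item
```

The real Harmaja version replaces the hand-rolled observer list with a Bus and an L.scan-derived Property, but the data flow (events in, reduced immutable state out) is the same.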
You can also define the buses and the derived state properties inside your components if you want scoped state. Apart from the limited Context support described above, everything is either passed explicitly or defined in a global scope, at least for now. As noted, if you declare stateful Properties in a component, use componentScope() instead of globalScope to prevent resource leaks.

## From store to view

In unidirectional data flow setups, there's always a way to reflect the store state in your UI. For instance:

- In react-redux you can use the useSelector hook for extracting the parts of global state your component needs.
- In Elm and Cycle.js the whole state is always rendered from the root, and you trust the framework to be efficient in VDOM diffing.

Pretty soon after React started gaining popularity, my colleagues at Reaktor (and I later) started using a "Bacon megablob" architecture where events and the "store" are defined exactly as in the previous chapter, using Buses and a state Property. Thanks to React's relatively good performance with VDOM and diffing, it's in most cases a viable choice. Sometimes, though, it may prove too heavy to render everything every time. If you have a "furry" (wide and deeply nested) data model, doing a full render on every keystroke just might not work. This has caused pain, and various optimizations (often involving local state) have been written.

However, it may make more sense to adopt the useSelector-like approach: instead of rendering the whole VDOM on all changes, listen to relevant changes in your components and render on change. I wrote about one React Hooks based approach on the Reaktor blog earlier.

Now if we consider the case of Harmaja, the situation is different from any React based approach.
First of all, Harmaja has neither VDOM diffing nor Hooks. But the fact that you can pass reactive properties as props fits the bill very nicely, so in the case of a TodoItem view, you can:

```typescript
import { React, mount } from "../.."

const ItemView = ({ item }: { item: L.Property<TodoItem> }) => {
    return (
        <span>
            <span className="name">{L.view(item, (i) => i.name)}</span>
        </span>
    )
}
```

The first big difference to Redux is that instead of asking for stuff from the global state in your component implementation, you actually require the relevant data in the function signature (how revolutionary!). This rather obviously makes the component easier to reason about, use in different contexts, test and so on. Just from the function signature you can easily deduce that this component will render a TodoItem and reflect any changes that are effected on that particular TodoItem (because the input is a reactive property).

In the implementation, L.view is used to get a Property<string>, which can then be directly embedded into the resulting DOM, because Harmaja natively renders Properties. When the value of name changes, the updated value is applied to the DOM.

Think: you can pick a part of your Store and use it as a Store. This removes the need for the component to know where its data lives in the global store. In react-redux, all components that actually react to store changes need to know the "location" of their data in the store to be able to get it using useSelector. In contrast, using the Property abstraction you can easily map the data out of the store and hand it to your components.

Another big difference is that store data and local data are the same: there is no separate mechanism for dealing with local state. Instead, you can declare more Properties in your component constructors as you go, to flexibly define data stores at different application layers.
This arguably makes it easier to make changes too, as you don't need to switch mechanisms when moving state from local to global. More on this topic below.

Anyway, let's put the Todo App together right now! To simplify a bit, if we were just rendering the first TodoItem (there's a chapter on array rendering further down), the root element of the application could look like this:

```typescript
const App = () => {
    const firstItem: L.Property<TodoItem> = L.view(allItems, (items) => items[0])
    return <ItemView item={firstItem} />
}
```

Then you can mount it and it'll start reacting to changes in the store:

```typescript
mount(<App />, document.getElementById("root")!)
```

Although I prefer components that get all of their required inputs in their constructor (this is called dependency injection), there's nothing to prevent you from accessing global "stores" from your components as well. See the full Todo App example here.

## Composable read and write

So it's easy to decompose data for viewing, so that you can compose your application out of components. But what about writes? Do they compose too? It would certainly be nice not to have to worry about every single detail in the high level "main reducer". Instead, I find it an attractive idea to deal on a higher abstraction level. Let's try! It's intuitive to start with this:

```typescript
updateTodoItem: L.Bus<TodoItem>()
todoItems: L.Property<TodoItem[]> // impl redacted
```

So instead of having to care about all the possible modifications to items on this level, there's a single updateTodoItem event that can be used to perform any update. As shown earlier, decomposition works nicely, as you can call L.view(item, i => i.someField) to get views into its component parts. Now let's revisit ItemView from the previous section and add an onChange callback:

```typescript
import { React, mount } from "../.."

const ItemView = ({
    item,
    onChange,
}: {
    item: L.Property<TodoItem>
    onChange: (i: TodoItem) => void
}) => {
    const onNameChange = (newName: string) => {
        /* wat */
    }
    return (
        <span>
            <TextInput
                value={L.view(item, (i) => i.name)}
                onChange={onNameChange}
            />
        </span>
    )
}

const TextInput = ({
    value,
    onChange,
}: {
    value: L.Property<string>
    onChange: (s: string) => void
}) => {
    return (
        <input value={value} onInput={(e) => onChange(e.currentTarget.value)} />
    )
}
```

I added a simple TextInput component that renders the given Property<string> into an input element and calls its onChange listener. Yes, with Harmaja you can embed reactive properties into DOM element props just like that.

Now the question is how to implement onNameChange, as well as the myriad similar functions you may need in your more complex applications. The tricky thing is that in the onNameChange function you don't really have the current full TodoItem at hand. Instead you have a Property<TodoItem>, which does not provide a method for extracting its current value. The reason for this omission is that reactive properties are meant to be used in a reactive manner, i.e. by subscribing to them. If you don't subscribe, the property isn't necessarily kept up to date with its underlying data source. Yet, in this case we know that the property has a value and is active (the TextInput subscribes to it to reflect changes). So we can use a little hack: the getCurrentValue function that Harmaja uses under the hood for rendering observables synchronously, and which it generously exports as well. So we can do this:

```typescript
const onNameChange = (newName: string) => {
    onChange({ ...getCurrentValue(item), name: newName })
}
```

So it's fully doable: you can use a higher level of abstraction in the top-level reducer and deal with individual field updates in "mid-level" components such as the ItemView. Yet it's far from elegant, especially if you've ever worked with Atoms and Lenses in Calmm.js. Read on.
## Welcome to the Atom Age

So you're into decomposing read-write access to data. This is where atoms come in handy. An Atom<A> simply represents a two-way interface to data by extending Property<A> and adding a set: (newValue: A) method for changing the value. Let's try it by changing our TextInput to:

```typescript
import { Atom, atom } from "lonna"

const TextInput = ({ value }: { value: Atom<string> }) => {
    return (
        <input value={value} onInput={(e) => value.set(e.currentTarget.value)} />
    )
}
```

This is the full implementation. Because an Atom encapsulates both the view to the data (by being a Property) and the callback for data updates (through the set method), it can often be the sole prop an "editor" component needs.

To create an Atom in our unidirectional data flow context, we can construct a "dependent atom" from a Property and a set function like so:

```typescript
const ItemView = ({
    item,
    onChange,
}: {
    item: L.Property<TodoItem>
    onChange: (i: TodoItem) => void
}) => {
    const itemAtom: Atom<TodoItem> = atom(item, onChange)
    return (
        <span>
            <TextInput value={L.view(itemAtom, "name")} />
        </span>
    )
}
```

And that's also the full implementation! I hope this demonstrates the power of the Atom abstraction. The view method there is particularly interesting (I've redacted some methods and the support for array keys for brevity):

```typescript
export interface Atom<A> extends L.Property<A> {
    set(newValue: A): this
    get(): A
}

function view<K extends keyof A>(a: Atom<A>, key: K): Atom<A[K]>
```

The same view method that gives you read-only views into Properties can be used to create another Atom that gives read-write access to one field of the TodoItem, and does this in a type-safe manner (you get compiler errors if you misspell a field name). Finally, we have an abstraction that makes read-write data decomposition a breeze! Adding more editable fields is no longer a chore. And all this still with unidirectional data flow, immutable data and type-safety.
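To build intuition for how such a field-level read-write view can work, here's a minimal, dependency-free lens sketch in plain TypeScript. Illustrative only: the prop and compose helper names are hypothetical, not Harmaja's or Lonna's API.

```typescript
// Illustrative sketch of the Lens idea behind field views; not Harmaja's API.
interface Lens<A, B> {
    get(root: A): B
    set(root: A, newValue: B): A
}

// A lens focusing on one field of an object (hypothetical helper).
function prop<A, K extends keyof A>(key: K): Lens<A, A[K]> {
    return {
        get: (root) => root[key],
        set: (root, newValue) =>
            Object.assign({}, root, { [key]: newValue }) as A,
    }
}

// Lenses compose: a lens A -> B and a lens B -> C give a lens A -> C.
function compose<A, B, C>(outer: Lens<A, B>, inner: Lens<B, C>): Lens<A, C> {
    return {
        get: (root) => inner.get(outer.get(root)),
        set: (root, v) => outer.set(root, inner.set(outer.get(root), v)),
    }
}

type TodoItem = { name: string; completed: boolean }
type State = { selected: TodoItem }

const selectedName = compose(
    prop<State, "selected">("selected"),
    prop<TodoItem, "name">("name")
)
const s0: State = { selected: { name: "do stuff", completed: false } }
const s1 = selectedName.set(s0, "done stuff")
// selectedName.get(s1) === "done stuff"; s0 is left untouched (immutable update)
```

Note that set returns a new root object instead of mutating, which is what keeps lens-based editing compatible with immutable unidirectional data flow.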
The view method is actually based on Lenses, a concept that has been used in the functional programming world for quite a while. Yet I haven't heard much talk about using Lenses in web application state management, except for Calmm.js and, before that, the Bacon.Model library. I could rant about lenses all night long, but for now I'll just show you the Atom-specific signatures of the view method:

```typescript
export function view<A, K extends keyof A>(
    a: Atom<A>,
    key: K
): K extends number ? Atom<A[K] | undefined> : Atom<A[K]>
export function view<A, B>(a: Atom<A>, lens: L.Lens<A, B>): Atom<B>
```

This reveals two things. First, it supports numbers for accessing array elements. Most importantly, though, you can create a view to an Atom with an arbitrary Lens, which is a really simple abstraction:

```typescript
export interface Lens<A, B> {
    get(root: A): B
    set(root: A, newValue: B): A
}
```

But let's move on.

## Local state

So far, it's all been about unidirectional data flow, where there's a single source of truth: a single reactive Property that's reduced from one or more event streams. Yet sometimes it makes sense to use some local state too. That's when standalone Atoms come into play. To use our ItemView as a standalone component, you can change it to use the Atom interface just like the lower level TextInput component:

```typescript
const ItemView = ({ item }: { item: Atom<TodoItem> }) => {
    return (
        <span>
            <TextInput value={L.view(item, "name")} />
        </span>
    )
}
```

And use it in your App like this:

```typescript
const App = () => {
    const item: Atom<TodoItem> = atom({
        id: 1,
        name: "do stuff",
        completed: false,
    })
    return <ItemView item={item} />
}
```

See, I created an independent Atom in the App component; it's practically local state to App now. Remember that in Harmaja, just like in Calmm.js, component functions are to be treated like constructors.
This means that the local variables created in the function live as long as the component is mounted, and can thus be used for local state (unlike in React, where they would be re-initialized on every VDOM render). That's all there is to local state in Harmaja, really. State can be defined globally or at any level of the component tree. When you use Atoms, you can define them locally or accept them as props. You can even add a fallback:

```typescript
const ItemView = ({
    item = atom(emptyTodoItem),
}: {
    item?: Atom<TodoItem>
}) => {
    /// ...
}
```

This component could be used with an external atom (often a view into a larger chunk of app state) or without it, in which case it would have its own private state. And it's turtles all the way down, by the way: you can define your full application state as an Atom and then view your way into the details. An example of a fully Atom-based application state can be seen at examples/todoapp-atoms.

## Arrays

An efficient and convenient way of working with arrays of data is a necessary step to success. When there's a substantial number of items (say 1000) of some substantial complexity, performance is no longer trivial. React VDOM diffing will get its users to some point, but when that's not enough, you'll need to make sure that frequent operations (change to a single item, append of a new item, depending on the use case) do not force the full array VDOM to be re-rendered. This is fully possible with, for instance, react-redux: just make sure the component that renders the array doesn't re-render unless the array size changes.

In Harmaja there's no VDOM diffing, so relying on that is not an option. Therefore a performant and ergonomic array view is key, and I've included a ListView component for just that. Imagine you're building a Todo App again (who isn't!) and you have the same data model that was introduced in the "Unidirectional data flow" chapter above. To recap, it's this:
type TodoItem = { name: string id: number completed: boolean } const addItemBus = new L.Bus<TodoItem>() const removeItemBus = new L.Bus<TodoItem>() const allItems: L.Property<TodoItem[]> = L.update( globalScope, [], [addItemBus, (items, item) => items.concat(item)], [removeItemBus, (items, item) => items.filter((i) => i.id !== item.id)] )

Rendering read-only arrays

To render the TodoItems represented by the allItems property, you can use ListView thus: <ListView observable={allItems} renderObservable={(item: L.Property<TodoItem>) => <ItemView item={item} />} getKey={(a: TodoItem) => a.id} /> const ItemView = ({ item }: { item: L.Property<TodoItem> }) => { // implement view for individual item } What ListView does here is observe allItems for changes and render each item using the ItemView component. When the list of items changes (something is replaced, added or removed), it uses the given getKey function to determine whether to replace individual item views. With the given getKey implementation it replaces views only when the id field doesn't match, i.e. the view no longer represents the same item. Each item view gets a Property<TodoItem> so that it can update when the content of that particular TodoItem changes. See the full implementation in examples/todoapp.

Rendering read-write arrays using Atoms

ListView also supports read-write access using Atom.
So if you have const allItems: Atom<TodoItem[]> = atom([]) you can have read-write access to the items by using ListView thus: <ListView atom={allItems} renderAtom={(item, removeItem) => { return ( <li> <ItemView {...{ item, removeItem }} /> </li> ) }} getKey={(a) => a.id} /> As you can see, ListView provides a removeItem callback for Atom-based views, so that in your ItemView you can implement removal simply thus: const Item = ({ item, removeItem, }: { item: Atom<TodoItem> removeItem: () => void }) => ( <span> <span className="name">{L.view(item, "name")}</span> <a onClick={removeItem}>remove</a> </span> ) This item view implementation only gives a read-only view with a remove link. To make the name editable, you could now easily use the TextInput component we created earlier: const Item = ({ item, removeItem, }: { item: Atom<TodoItem> removeItem: () => void }) => ( <span> <TextInput value={L.view(item, "name")} /> <a onClick={removeItem}>remove</a> </span> ) See the full atomic implementation of the TodoApp in examples/todoapp-atom.

Rendering read-only arrays as static views

There's still a third variation of ListView, for read-only views: <ListView observable={allItems} renderItem={(item: TodoItem) => ( <li> <Item item={item} /> </li> )} /> So if you provide renderItem instead of renderObservable or renderAtom, you can use a view that renders a plain TodoItem. This means that the item view cannot react to changes in the item data and simply renders the static data it is given. So, when an item's content changes, the item view will be replaced by ListView. You can optimize this variant a bit by supplying a getKey function to avoid full repaints when an item is added or removed.

Component lifecycle

When components subscribe to data sources, it's vital to unsubscribe on unmount to prevent resource leaks.
In traditional React, you used the component lifecycle methods componentDidMount and componentWillUnmount to subscribe and unsubscribe. This kind of manual resource management is, in my experience, very error-prone. The useEffect hook gives better tools for the job. Still, you have to remember to return a cleanup function (see example). When dealing with data sources such as Observables, Promises or the Redux Store, it's better to use a higher level of abstraction to avoid doing cleanup manually. And, because all of them are in fact generic abstractions, you can build/steal/borrow generic utilities for this. The useSelector hook in react-redux is a good example: it gives you the data you need without bothering you with cleanup. Similarly, you can build hooks for dealing with Observables, as I discovered in my blog post in 2018. In Harmaja, there are no hooks. State management is built on Observables, and subscriptions to observables are managed automatically based on component lifecycle. Details follow! As told above, components in Harmaja are functions that are treated as constructors. The return value of a Harmaja component is actually a native HTMLElement. When you embed observables into the DOM, Harmaja creates a placeholder node (an empty text node) in the DOM tree and replaces it with the real content when the observable yields a value. Whenever it subscribes to an observable, it attaches the resultant unsub function to the created DOM node so that it can perform cleanup later. When Harmaja replaces any DOM node, it recursively seeks all attached unsubs in the node and its children and calls them to free resources related to the removed nodes. See Dangling Subscriptions for dealing with side-effects and scoping observables into the component lifetime (which will ensure that resources are freed after the component is unmounted).
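The node-attached cleanup scheme described above can be sketched framework-free. This is an illustrative model only, not Harmaja's actual code; the names `TreeNode`, `attachUnsub` and `disposeTree` are hypothetical:

```typescript
type Unsub = () => void

// A minimal tree node standing in for a DOM element.
interface TreeNode {
  children: TreeNode[]
  unsubs: Unsub[]
}

// Attach a cleanup function to a node (the framework would do this
// whenever it subscribes to an observable rendered at that position).
function attachUnsub(node: TreeNode, unsub: Unsub): void {
  node.unsubs.push(unsub)
}

// When a node is removed, recursively run every attached cleanup
// in its descendants first, then in the node itself.
function disposeTree(node: TreeNode): void {
  for (const child of node.children) disposeTree(child)
  for (const unsub of node.unsubs) unsub()
  node.unsubs = []
}
```

Removing any subtree then disposes every subscription beneath it in one pass, which is the property that makes automatic lifecycle management possible without hooks.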
Promises and async requests

I don't think a state management solution is complete until it has a strategy for dealing with asynchronous requests, typically involving Promises. Common scenarios include - Fetching extra data from the server when mounting a component. Gets more complicated if you need to re-fetch in case some component prop changes - Fetching data in response to a user action, i.e. the search scenario. This boils down to the first scenario if you have a SearchResults component that fetches data in response to a changed query string - Storing changed data to the server. Complexity arises from the need to disable UI controls while saving, handling errors gracefully etc. Bonus points for considering whether this is a local or a global activity - and where the transient state should be stored. In Harmaja, reactive Properties and EventStreams are used for dealing with asynchronous requests. Promises can be conveniently wrapped into these. Let's have a look at an example.

The search example

Let's consider the search example. Starting from the SearchResults component, it might look like this: type SearchState = | { state: "initial" } | { state: "searching"; searchString: string } | { state: "done"; results: string[]; searchString: string } const SearchResults = ({ state }: { state: L.Property<SearchState> }) => { // ? } I didn't want to make this too simple, because simple things are always easy to do. In this case, we want to - Show the results if any - Show "nothing found" in case the result is an empty array - Show an empty component in case there's nothing to show (state=initial) - Show "Searching..." when a search is in progress, or show the previous search results with opacity: 0.5 in case there are any For starters, we might try a simplistic approach: const SearchResultsSimplest = ({ state } : { state: L.Property<SearchState> }) => { const currentResults: L.Property<string[] | null> = L.view(state, s => s.state === "done" ?
s.results : null) const message: L.Property<string | null> = L.view(currentResults, r => r !== null && r.length === 0 ? "Nothing found" : null) return <div> { message } <ul><ListView observable={currentResults} renderItem={ result => <li>{result}</li>} /></ul> </div> } The list of results and a message string are derived from the state using view (state decomposition in action). Then we can easily include the "Searching" indicator using the same technique. But showing previous results while searching requires some local state, because that's not included in state. Fortunately, reactive properties provide good tools for this. For instance, we can derive a latestResults property that keeps holding the latest results even while a new search is in progress. Then we can determine the message string to show to the user, based on state and the currently shown results: const message = L.view(state, latestResults, (s, r) => { if (s.state == "done" && r.length === 0) return "Nothing found" if (s.state === "searching" && r.length === 0) return "Searching..." return "" }) Here's another way of using L.view: creating a new Property that reflects the latest values of the given two properties (state, latestResults), applying the given mapping function to the values. The opacity: 0.5 style can be applied similarly using L.view, and the final SearchResults component looks like this: const SearchResults = ({ state }: { state: L.Property<SearchState> }) => { // latestResults derived from state as described above const message = L.view(state, latestResults, (s, r) => { if (s.state == "done" && r.length === 0) return "Nothing found" if (s.state === "searching" && r.length === 0) return "Searching..." return "" }) const style = L.view(state, latestResults, (s, r) => { if (s.state === "searching" && r.length > 0) return { opacity: 0.5 } return {} }) return ( <div> {message} <ul style={style}> <ListView observable={latestResults} renderItem={(result) => <li>{result}</li>} /> </ul> </div> ) } But this was supposed to be about dealing with asynchronous requests! Let's get to the main Search component now.
declare function search(searchString: string): Promise<string[]> // implement using fetch() function searchAsProperty(s: string): L.Property<string[]> { return L.fromPromise( search(s), () => [], (xs) => xs, (error) => [] ) } const Search = () => { const searchString = L.atom("") const searchStringChange: L.EventStream<string> = searchString.pipe( L.changes, L.debounce(500, componentScope()) ) const searchResult = searchStringChange.pipe( L.flatMapLatest<string, string[]>(searchAsProperty) ) const state: L.Property<SearchState> = L.update( componentScope(), { state: "initial" } as SearchState, [ searchStringChange, (state, searchString) => ({ state: "searching", searchString }), ], [ searchResult, searchString, (state, results, searchString) => ({ state: "done", results, searchString, }), ] ) return ( <div> <h1>Cobol search</h1> <TextInput value={searchString} /> <SearchResults state={state} /> </div> ) } Lots of interesting details above! First of all, I started with an Atom to store the current searchString. Then I plugged the earlier defined TextInput in place. The actual search function is redacted and could easily be implemented using Axios or fetch. I added a simple wrapper, searchAsProperty, that returns the search results as a Property instead of a Promise. This is easy using L.fromPromise. The searchResult EventStream is created using flatMapLatest, which spawns a new EventStream or Property for each input event using the searchAsProperty helper and keeps listening for results from the latest created stream (that's where the "latest" part in the name comes from). Then I've introduced a reducer, once again using L.update, and come up with the state property. This setup is now local to the Search component, but could be moved into a separate store module if it turned out that it's needed in a larger scope. One more notice: on the last line of the reducer, I've included an extra parameter, i.e. the searchString property.
This is a convenient way to plug the latest value of a Property into the equation in a reducer. In each of the patterns in L.update you should have one EventStream and zero or more Properties. Only the EventStream will trigger the update; the Properties are there only so that you can use their latest values in the equation. One common pattern in searching is throttling (or debouncing). You don't want to send a potentially expensive query to your server on each keystroke. When using Lonna, you can choose between debounce and throttle. To use a 300 millisecond debounce, the change looks like this: const searchStringChange: L.EventStream<string> = searchString .changes() .debounce(300) See the full search implementation at examples/search. More on dealing with async requests at examples/consultants.

Detaching and syncing state

I quite often find myself wanting to have some local state for editing something that comes from the global state, so that the local changes are not automatically pushed to the global state. I wrote the following helper for this: export function editAtom<A>(source: L.Property<A>): L.Atom<A> { const localValue = L.atom<A | undefined>(undefined) const value = L.view(source, localValue, (s, l) => l !== undefined ? l : s ) return L.atom(value, localValue.set) } This method gives you an Atom that reflects the global state until a local change is made, and after that reflects the local state. You can do const globalState: Atom<string> const localState = editAtom(globalState) Now in your component you can work with the localState atom freely. When you want to commit the value back to the global state, you can globalState.set(localState.get()) The topic is also covered in examples/todoapp-backend.

Motivation and background

For a long time I've been pondering different state management solutions for React.
My thinking in this field is strongly affected by the fact that I'm pretty deep into Observables and FRP (functional reactive programming) and authored the Bacon.js library back in the day. I've seen many approaches to frontend state management and haven't been entirely satisfied with any of them. This has led to spending lots of time considering how I could apply FRP to state management in an "optimal" way. So one day I had some spare time and couldn't go anywhere, so I started drafting what would be my ideal "state management solution". I wrote down the design goals, which are in no particular priority order at the moment. - G1 Intuitive: construction, updates, teardown - G2 Safe: no accidental updates to nonexisting components etc. - G3 Type-safe (Typescript) - G4 Immutable data all the way - G5 Minimum magic (no behind-the-scenes watching of js object property changes etc) - G6 Small API surface area - G7 Small runtime footprint - G8 Easy mapping of a (changing) array of data items to UI elements - G9 Easy to interact with code outside the "framework": don't get in the way, this is just programming - GA Minimal boilerplate - GB Composability, state decomposition (Redux is composing, Calmm.js with lenses is decomposing) - GC Easy and intuitive way of creating local state (and pushing it up the tree when the need arises) - GD Performant updates with minimal hassle. No rendering the full page when something changes Calmm.js, by Vesa, is pretty close! It uses Atoms and Observables for state management and treats React function components essentially as constructors. This approach makes it straightforward to introduce, for example, local state variables as regular javascript variables in the "constructor" function. It treats local and global state similarly and makes it easy to refactor when something needs to change from local to higher up in the tree. Yet, it's not type-safe and is hard to make so.
Especially the highly flexible partial.lenses proves hard. Also, when looking at it more closely, it goes against the grain of how React is usually used, which will make it a bit awkward for React users. Suddenly you have yet another kind of component at your disposal, one which expects you not to call it again on each render. In fact, I felt that Calmm.js doesn't really need anything from React, which is more in the way than helpful. A while ago, Vesa once again gave a mind-blowing demonstration of how he had adapted the Calmm approach to WPF using C#. This opened my eyes to the fact that you don't need a VDOM diffing framework to do this. It's essentially just about calling component constructors and passing reactive variables down the tree. After some hours of coding I had ~200 lines of Typescript which already rendered function components and allowed embedding reactive values into the VDOM, replacing actual DOM nodes when the reactive value changed. After some more hours of coding I have a prototype-level library that you can also try out. Let me hear your thoughts!

Status

This is an experimental library. I have no idea whether it will evolve into something that you would use in production. Feel free to try and contribute, though! I'll post the crucial shortcomings as Issues. Next challenge: - JSX typings, including allowing Properties as attribute values. Currently using React's typings, which are not correct and cause compiler errors that require using any here and there More work - Support a list of elements as a render result - Remove the span wrapper from the smart array - Render directly as DOM elements instead of creating VDOM (when typings are there)
https://www.npmjs.com/package/harmaja
kdeui: KMainWindow Class Reference

KDE top level main window. #include <kmainwindow.h>

Detailed Description: A KMainWindow per default is created with the WDestructiveClose flag, i.e. it is automatically destroyed when the window is closed. If you do not want this behavior, specify 0 as the widget flag in the constructor. - See also: - KApplication Definition at line 98 of file kmainwindow.h. Member Enumeration Documentation Flags that can be passed in an argument to the constructor to change the behavior. NoDCOPObject tells KMainWindow not to create a KMainWindowInterface. This can be useful in particular for inherited classes, which might want to create more specific DCOP interfaces. It's a good idea to use KMainWindowInterface as the base class for such interfaces though (to provide the standard mainwindow functionality via DCOP). - Enumerator: - Definition at line 148 of file kmainwindow.h. - See also: - setupGUI() - Enumerator: - Definition at line 546 of file kmainwindow.h. Constructor & Destructor Documentation Construct a main window. - Parameters: - KMainWindow *kmw = new KMainWindow (...); Definition at line 167 of file kmainwindow.cpp. Overloaded constructor which allows passing some KMainWindow::CreationFlags. - Since: - 3.2 Definition at line 173 of file kmainwindow.cpp. Destructor. Will also destroy the toolbars and menubar if needed. Definition at line 314 of file kmainwindow.cpp. Member Function Documentation - Returns: - A KAccel instance bound to this mainwindow. Used automatically by KAction to make keybindings work in all cases. Definition at line 1171 of file kmainwindow.cpp. Example: KIconLoader &loader = *KGlobal::iconLoader(); QPixmap pixmap = loader.loadIcon( "help" ); toolBar(0)->insertButton( pixmap, 0, SIGNAL(clicked()), this, SLOT(appHelpActivated()), true, i18n("Help") ); Definition at line 600 of file kmainwindow.cpp. Definition at line 829 of file kmainwindow.cpp.
Read settings for statusbar, menubar and toolbar from their respective groups in the config file and apply them. - Parameters: - Definition at line 834 of file kmainwindow.cpp. - Returns: - the group used for setting-autosaving. Only meaningful if setAutoSaveSettings() was called. This can be useful for forcing a save or an apply, e.g. before and after using KEditToolbar. - Since: - 3.1 Definition at line 1044 of file kmainwindow.cpp. - Returns: - the current autosave setting, i.e. true if setAutoSaveSettings() was called, false by default or if resetAutoSaveSettings() was called. - Since: - 3.1 Definition at line 1039 of file kmainwindow.cpp. Session Management: Try to restore the toplevel widget as defined by the number (1..X). If the session did not contain so high a number, the configuration is not changed and false is returned. That means clients could simply do the following: if (kapp->isRestored()){ int n = 1; while (KMainWindow::canBeRestored(n)){ (new childMW)->restore(n); n++; } } else { // create default application as usual } Definition at line 352 of file kmainwindow.cpp. Reimplemented from QMainWindow. Definition at line 1136 of file kmainwindow.cpp. Returns the className() of the toplevel window with the given number which should be restored. This is only useful if your application uses different kinds of toplevel windows. Definition at line 364 of file kmainwindow.cpp. Reimplemented to call the queryClose() and queryExit() handlers. We recommend that you reimplement the handlers rather than closeEvent(). If you do it anyway, ensure to call the base implementation to keep queryExit() running. Definition at line 634 of file kmainwindow.cpp. Show a standard configure toolbar dialog. This slot can be connected directly to the action to configure shortcuts.
This is very simple to do by adding a single line: KStdAction::configureToolbars( guiFactory(), SLOT( configureToolbars() ), actionCollection() ); - Since: - 3.3 Definition at line 431 of file kmainwindow.cpp. Create a GUI given a local XML file. If xmlfile is NULL, then it will try to construct a local XML filename like appnameui.rc, where 'appname' is your app's name. If that file does not exist, then the XML UI code will only use the global (standard) XML file for layout purposes. Note that when passing true for the conserveMemory argument, subsequent calls to guiFactory()->addClient/removeClient may not work as expected. Also, retrieving references to containers like popup menus or toolbars using the container method will not work. - Parameters: - Definition at line 491 of file kmainwindow.cpp. Sets whether KMainWindow should provide a menu that allows showing/hiding of the statusbar (using KToggleStatusBarAction). The menu / menu item is implemented using xmlgui. It will be inserted in your menu structure in the 'Settings' menu. Note that you should enable this feature before calling createGUI() (or similar). If an application maintains the action on its own (i.e. never calls this function), a connection needs to be made to let KMainWindow know when the status (hidden/shown) of the statusbar has changed. For example: connect(action, SIGNAL(activated()), kmainwindow, SLOT(setSettingsDirty())); Otherwise the status (hidden/shown) of the statusbar might not be saved by KMainWindow. - Since: - 3.2 Definition at line 796 of file kmainwindow.cpp. menuBar()->insertItem( i18n("&Help"), customHelpMenu() ); - Parameters: - - Returns: - A standard help menu. Definition at line 341 of file kmainwindow.cpp. For internal use only. Definition at line 884 of file kmainwindow.cpp. - Since: - 3.1 Reimplemented from KXMLGUIBuilder. Definition at line 1239 of file kmainwindow.cpp. List of members of KMainWindow class.
- Since: - 3.4 Definition at line 1235 of file kmainwindow.cpp. Definition at line 424 of file kmainwindow.cpp. Returns true if there is a menubar. - Since: - 3.1 Definition at line 1066 of file kmainwindow.cpp. Retrieve the standard help menu. It contains entries for the help system (activated by F1), an optional "What's This?" entry (activated by Shift+F1), an application-specific dialog box, and an "About KDE" dialog box. Example (adding a standard help menu to your application): KPopupMenu *help = helpMenu( <myTextString> ); menuBar()->insertItem( i18n("&Help"), help ); - Parameters: - - Returns: - A standard help menu. Definition at line 324 of file kmainwindow.cpp. Reimplementation of QMainWindow::hide(). Definition at line 391 of file kmainwindow.cpp. For internal use only. Used from Konqueror when reusing the main window. Definition at line 985 of file kmainwindow.cpp. - Returns: - true if a -geometry argument was given on the command line, and this is the first window created (the one on which this option applies) Definition at line 980 of file kmainwindow.cpp. Return true when the help menu is enabled. Definition at line 576 of file kmainwindow.cpp. - Since: - 3.1 Definition at line 791 of file kmainwindow.cpp. Returns a pointer to the menu bar. If there is no menu bar yet, one will be created. Definition at line 1071 of file kmainwindow.cpp. Definition at line 1178 of file kmainwindow.cpp. Parse the geometry from the geometry command line argument. Definition at line 279 of file kmainwindow.cpp. KApplication::sessionSaving() Definition at line 669 of file kmainwindow.cpp. KApplication::shutDown(). Default implementation returns true. Returning false will cancel the exiting. In the latter case, the last window will remain visible. If KApplication::sessionSaving() is true, refusing the exit will also cancel KDE logout. - See also: - queryClose() KApplication::sessionSaving() Definition at line 664 of file kmainwindow.cpp.
The counterpart of saveGlobalProperties(). Read the application-specific properties in again. Definition at line 678 of file kmainwindow.cpp. Read your instance-specific properties. Definition at line 881 of file kmainwindow.h. Definition at line 805 of file kmainwindow.cpp. Disable the auto-save-settings feature. You don't normally need to call this, ever. Definition at line 1032 of file kmainwindow.cpp. Definition at line 1060 of file kmainwindow.cpp. Restore the session specified by number. Returns false if this fails, otherwise returns true and shows the window. You should call canBeRestored() first. If show is true (default), this widget will be shown automatically. Definition at line 411 of file kmainwindow.cpp. For inherited classes. Note that the group must be set before calling, and that a -geometry on the command line has priority. Definition at line 934 of file kmainwindow.cpp. This slot should only be called in case you reimplement closeEvent() and are using the "auto-save" feature. In all other cases, setSettingsDirty() should be called instead to benefit from the delayed saving. - See also: - setAutoSaveSettings - Since: - 3.2 void MyMainWindow::closeEvent( QCloseEvent *e ) { // Save settings if auto-save is enabled, and settings have changed if ( settingsDirty() && autoSaveSettings() ) saveAutoSaveSettings(); .. } Definition at line 1049 of file kmainwindow.cpp. Definition at line 674 of file kmainwindow.cpp. Save settings for statusbar, menubar and toolbar to their respective groups in the config file config. - Parameters: - Definition at line 716 of file kmainwindow.cpp. Rebuilds the GUI after KEditToolbar changed the toolbar layout. - See also: - configureToolbars() Definition at line 439 of file kmainwindow.cpp. Save your instance-specific properties. The function is invoked when the session manager requests your application to save its state.
You must not change the group of the kconfig object, since KMainWindow uses one group for each window. Please reimplement this function in child classes. Note: No user interaction is allowed in this function! Definition at line 876 of file kmainwindow.h. Definition at line 692 of file kmainwindow.cpp. For inherited classes. Note that the group must be set before calling. Definition at line 902 of file kmainwindow.cpp. Call this to enable "auto-save" of toolbar/menubar/statusbar settings (and optionally window size). If the *bars were moved around/shown/hidden when the window is closed, saveMainWindowSettings( KGlobal::config(), groupName ) will be called. - Parameters: - Definition at line 1017 of file kmainwindow.cpp. Makes a KDE compliant caption. - Parameters: - Definition at line 586 of file kmainwindow.cpp. Makes a KDE compliant caption. - Parameters: - Definition at line 581 of file kmainwindow.cpp. Definition at line 417 of file kmainwindow.h. Enables the build of a standard help menu when calling createGUI(). The default behavior is to build one; you must call this function to disable it. Definition at line 571 of file kmainwindow.cpp. For internal use only. Definition at line 1225 of file kmainwindow.cpp. Make a plain caption without any modifications. - Parameters: - Definition at line 591 of file kmainwindow.cpp. Apply a state change. Enable and disable actions as defined in the XML rc file; can "reverse" the state (disable the actions which should be enabled, and vice versa) if specified. Definition at line 990 of file kmainwindow.cpp. Sets whether KMainWindow should provide a menu that allows showing/hiding the available toolbars (using KToggleToolBarAction). In case there is only one toolbar configured, a simple 'Show <toolbar name here>' menu item is shown. The menu / menu item is implemented using xmlgui. It will be inserted in your menu structure in the 'Settings' menu.
If your application uses a non-standard xmlgui resource file, then you can specify the exact position of the menu / menu item by adding a <Merge name="StandardToolBarMenuHandler" /> line to the settings menu section of your resource file (usually appname.rc). Note that you should enable this feature before calling createGUI() (or similar). You can enable/disable it at any time if you pass false to the conserveMemory argument of createGUI. - Since: - 3.1 Definition at line 769 of file kmainwindow.cpp. For inherited classes. Definition at line 1007 of file kmainwindow.cpp. For inherited classes. Definition at line 1012 of file kmainwindow.cpp. Configures the current window and its actions in the typical KDE fashion. The options are all enabled by default but can be turned off if desired through the params or if the prerequisites don't exist. defaultSize: the default size of the window. Typically this function replaces createGUI(). - See also: - StandardWindowOptions - Since: - 3.5 Definition at line 449 of file kmainwindow.cpp. Configures the current window and its actions in the typical KDE fashion. The options are all enabled by default but can be turned off if desired through the params or if the prerequisites don't exist. Typically this function replaces createGUI(). - See also: - StandardWindowOptions - Since: - 3.3 Definition at line 445 of file kmainwindow.cpp. For internal use only. - Since: - 3.3.1 Definition at line 273 of file kmainwindow.cpp. Reimplementation of QMainWindow::show(). Definition at line 381 of file kmainwindow.cpp. This slot does nothing. It must be reimplemented if you want to use a custom About Application dialog box. This slot is connected to the About Application entry in the menu returned by customHelpMenu. Example: void MyMainLevel::setupInterface() { .. menuBar()->insertItem( i18n("&Help"), customHelpMenu() ); .. } void MyMainLevel::showAboutApplication() { <activate your custom dialog> } Definition at line 688 of file kmainwindow.cpp.
- Returns: - the size the mainwindow should have so that the central widget will be of the given size. - Deprecated: - You normally don't need this; the recommended way to achieve a certain central widget size is as follows: - Override sizeHint() in the central widget so that it returns the desired size. - Call updateGeometry() in the central widget whenever the desired size changes. This ensures that the new sizeHint() is properly propagated to any parent layout. - Now call adjustSize() in the mainwindow to resize the mainwindow such that the central widget will become the desired size. Definition at line 1183 of file kmainwindow.cpp. Apply a state change. Enable and disable actions as defined in the XML rc file; can "reverse" the state (disable the actions which should be enabled, and vice versa) if specified. - Since: - 3.1 Definition at line 618 of file kmainwindow.cpp. Apply a state change. Enable and disable actions as defined in the XML rc file. - Since: - 3.1 Definition at line 610 of file kmainwindow.cpp. Definition at line 1083 of file kmainwindow.cpp. Returns a pointer to the toolbar with the specified name. This refers to toolbars created dynamically from the XML UI framework. If the toolbar does not exist, one will be created. - Parameters: - - Returns: - A pointer to the toolbar Definition at line 1141 of file kmainwindow.cpp. - Returns: - An iterator over the list of all toolbars for this window. Definition at line 1156 of file kmainwindow.cpp. Returns a pointer to the mainwindow's action responsible for the toolbars menu. - Since: - 3.1 Definition at line 264 of file kmainwindow.cpp. Reimplemented from KXMLGUIBuilder. Reimplemented in KDockMainWindow. Definition at line 1242 of file kmainwindow.cpp. Member Data Documentation Definition at line 385 of file kmainwindow.h. The documentation for this class was generated from the following files:
http://api.kde.org/3.5-api/kdelibs-apidocs/kdeui/html/classKMainWindow.html
Stackdriver Logging API client library
Project description
Stackdriver Logging API: Writes log entries and manages your Stackdriver Logging configuration.
Quick Start
In order to use this library, you first need to go through the following steps:
- Select or create a Cloud Platform project.
- Enable billing for your project.
- Enable the Stackdriver Logging API.
Supported Python Versions
Python >= 3.5
Deprecated Python Versions
Python == 2.7. Python 2.7 support will be removed on January 1, 2020.
Mac/Linux
pip install virtualenv
virtualenv <your-env>
source <your-env>/bin/activate
<your-env>/bin/pip install google-cloud-logging
Windows
pip install virtualenv
virtualenv <your-env>
<your-env>\Scripts\activate
<your-env>\Scripts\pip.exe install google-cloud-logging
Using the API
from google.cloud import logging_v2

client = logging_v2.LoggingServiceV2Client()
entries = []
- Read the Client Library Documentation to see other available methods on the client.
- Read the Product documentation to learn more about the product and see How-to Guides.
Project details
Release history
Release notifications
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/google-cloud-logging/
First things first: how it works
If we ever want to have our apps both on the Windows Phone and the Windows 8 stores, we might want to use some kind of syncing mechanism between the two. Since neither can access the other's isolated data directly, some sort of file transfer between the two apps will be necessary.
There are, of course, several possibilities here, and the most popular one is using an intermediate web server, located "somewhere". However, this has a few disadvantages: if the user's internet access is restricted due to various circumstances, your apps might not interact properly any more. While we like to think that the world wide web is generally available for everyone at any given moment, this is not always true. The user might not like paying for that data transfer, and you have to host some web site, which is unlikely to be free.
However, users usually have a network of their own, called an intranet or ad-hoc network, most of the time without even realizing it. For instance, if a user has a router or an access point with several devices connected (wirelessly or not), the devices in that network form some sort of "internet", and the protocols associated with the internet will work regardless of world wide web accessibility. Instead of using an intermediate server, we can use direct communication between the devices on the same network, through the TCP protocol.
The TCP protocol is the building foundation of the internet. Pretty much any application which accesses the internet uses TCP to get some data from distant servers (most of the time the request is routed through several servers, but that is a story for another day). The TCP protocol consists of a server (listener) and a client, the two entities exchanging data through the various stages of the request. The request must always be initiated by the client by opening a connection, using the server IP address and a specified port.
The server cannot simply connect to the client. The main advantage of the TCP protocol is that all packets of data sent by either of the entities will always be received by the other. If a packet is lost along the way, the sender will keep resending it until the receiver acknowledges delivery. This protocol basically behaves like a stream of data, hence all TCP sockets are also known as "stream sockets". A socket is basically the low-level object used in connections between devices.
If the data queued for transmission is too big to fit in a single packet, the network driver will split said data into packets and send them one after another until all data is successfully sent; the entire process is abstracted away from the inquisitive eyes of the developer. TCP sockets can be opened through a wide range of connections: RFCOMM Bluetooth, devices that share the same access point (one device can be connected wirelessly and the other through Ethernet and it would still work), even when the phone is connected through USB to the tablet/computer. Of course, you can use this to connect to any PC on the planet (and probably outside of the planet?), if you know the port number and IP address.
For the purpose of this article, the computer will act as server and the phone as client. Computer refers to anything that runs Windows Runtime apps, including ARM tablets, x86 tablets and the good old PC running Windows 8. Until the release of Windows Phone 8, the phones could only be clients. Now the phones can be servers as well. The funny part is that the line between server and client is not really distinguishable. Either device can be server or client, but trying to do both at the same time may prove unstable.
So let's get ourselves familiar with the classes we will work with, by consulting the MSDN documentation: Stream Socket, Stream Socket Listener. Notice the documentation even covers a step by step guide on how to do things properly.
Ok, now that we got bored enough with the documentation, it's time to get to work. We shall start with the listener.
Create a Windows 8 app, and create the UI as you see fit. It is advisable to create a new class to store the whole thing, so it is easier to access later on. The file we want to send contains some random misc data about the app's state. Imagine we want the phone app to continue whatever work the computer app was doing, so we store reconstruction data in said file. We will hide the file in the Roaming folder.

string fileName = "MyFileName.txt";

You create the file using this line of code (in case you wonder why we do not await this async call, it is because we simply don't need to: it will just create the file on a background thread. You must await it if you want to use the file in a follow-up method):

ApplicationData.Current.RoamingFolder.CreateFileAsync(fileName, CreationCollisionOption.ReplaceExisting);

And you get it using this line of code:

var myfile = await ApplicationData.Current.RoamingFolder.GetFileAsync(fileName);

You can write things in the file using the methods in the FileIO class. Make sure you actually write something in the file so the size is bigger than 0. You can wrap the code you use to write things in that file in a for or while loop, so that you can artificially inflate the file size, if you want to observe the behavior of stream sockets with files that do not fit in one packet.
Now that you've prepped the file, it is time to set up the listener. Since we can't use async calls in the class constructor, we must create a new task for this purpose, and call it after using the class constructor.
The variables at work here…

public StreamSocketListener ServerListener = new StreamSocketListener();
StreamSocket socket;

public async Task InitStuff()
{
    var f = ServerListener.Control;
    f.QualityOfService = SocketQualityOfService.LowLatency;
    ServerListener.ConnectionReceived += ServerListener_ConnectionReceived;
    await ServerListener.BindEndpointAsync(null, "13001");
}

The variable "f" is used to set the QualityOfService property for the stream socket listener. We set it to low latency to make sure the transfer occurs as fast as possible. We must also make sure the packets are processed fast enough, to avoid a time out (this will actually be our job too). After we set the event handler for received connections, we simply start listening.

await ServerListener.BindEndpointAsync(null, "13001");

The line above does just that. It defines the host name and port on which to listen for connections. Since we do not have a host name for our phone, we simply make it null to accept connections from any host.
Jumping to the connection received event handler:

async void ServerListener_ConnectionReceived(StreamSocketListener sender, StreamSocketListenerConnectionReceivedEventArgs args)

The field args contains the socket used in the request. We shall assign said socket to the stream socket we declared earlier.

socket = args.Socket;

The stream socket class exposes two streams: an input stream and an output stream. The input stream is the stream of incoming data sent by the remote socket. The output stream is the stream of outgoing data to the remote socket. Since this is a fairly simple request-and-serve communication, we don't really care what the phone sends us. It will always demand the file in the same fashion, so we only play with the output stream.

var outputstream = socket.OutputStream;
DataWriter writer = new DataWriter(outputstream);

We will use the writer object to write data to the output stream.
Now, we open our file, read its contents, and then use the data writer to write those bytes to the output stream.

var myfile = await ApplicationData.Current.RoamingFolder.GetFileAsync(fileName);
var streamdata = await myfile.OpenStreamForReadAsync();

The first stage of the transfer involves sending the length of the file in bytes:

if (stage == 0)
{
    stage++;
    writer.WriteBytes(UTF8Encoding.UTF8.GetBytes(streamdata.Length.ToString()));
    await writer.StoreAsync();
}

The second stage code is as follows. You should use a variable to count the stage index. If you want, you can also read what the client is sending you. Check out the stream socket sample for Windows 8.1, which gives a pretty detailed example on how to read input data. How you figure out the stage is really not important.

if (stage == 1)
{
    byte[] bb = new byte[streamdata.Length];
    streamdata.Read(bb, 0, (int)streamdata.Length);
    writer.WriteBytes(bb);
    await writer.StoreAsync();
    stage = 0;
}

Notice the call to the StoreAsync() method. This call flushes the data writer's buffers and pours all the data into the output stream. And this is pretty much all we have to do. The system takes care of the rest. Another thing to note is how we send the data: we first write the length of the file, then we write the file itself. Keep this in mind, we will use this later on.
Now, this code presents itself with one big problem: the entire thing is not really type safe. We send raw bytes to the client, and the client also sends us raw bytes. For the purpose of transferring a single file from one device to another, we don't have to worry about type safety that much. But be warned: if you want something more complex, like a message based communication, you will need to take into account the fact that what you get are only bytes, and you need to create the parsing system yourself.
Now, it's time for the client code. This time we will use the Silverlight runtime. This code will work on both Windows Phone 7 and Windows Phone 8.
The Socket class is contained in the System.Net.Sockets namespace. Create a new class, and add the following fields to it:

MemoryStream ArrayOfDataTransfered;
string _serverName = string.Empty;
private int _port = 13001;
long FileLength = 0;
int PositionInStream = 0;
int Stage = 0;

The user will need to input the server name and port number. Because the connection is established in a home or private network, the actual IP address of the server can change. Using the name of the computer directly solves this problem. It would be a good idea to have some sort of settings page where the user can set up this connection.
The following lines of code should be located in some initialization method, like the class constructor. The flow of the process looks like this: connect > send > receive. If you want to transfer more than one file, you will restart the entire process.

if (String.IsNullOrWhiteSpace(serverName))
    throw new ArgumentNullException("serverName");
if (portNumber < 0 || portNumber > 65535)
    throw new ArgumentOutOfRangeException("portNumber");
_serverName = serverName;
_port = portNumber;

public void SendData(string data);

This method just sends data to the remote server. This is the breakdown of the method:
First, we will need this object; we will use it later on. The SocketAsyncEventArgs represents a socket operation. Operations can be either send, receive or connect.

SocketAsyncEventArgs socketEventArg = new SocketAsyncEventArgs();

Then, we will need the server end point; construct the object using the port number and server name.
DnsEndPoint hostEntry = new DnsEndPoint(_serverName, _port);

The next step is creating the socket, setting various properties, and then connecting to the remote server.

Socket sock = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
socketEventArg.Completed += new EventHandler<SocketAsyncEventArgs>(SocketEventArg_Completed);
socketEventArg.RemoteEndPoint = hostEntry;
socketEventArg.UserToken = sock;

and then we connect:

sock.ConnectAsync(socketEventArg);

The event handler for the SocketAsyncEventArgs looks like this:

void SocketEventArg_Completed(object sender, SocketAsyncEventArgs e)
{
    switch (e.LastOperation)
    {
        case SocketAsyncOperation.Connect:
            ProcessConnect(e);
            break;
        case SocketAsyncOperation.Receive:
            ProcessReceive(e);
            break;
        case SocketAsyncOperation.Send:
            ProcessSend(e);
            break;
        default:
            throw new Exception("Invalid operation completed");
    }
}

When the operation is a connection request, we will simply use the SendData method defined above to send data to the server, something along the lines of:

byte[] buffer = Encoding.UTF8.GetBytes(dataIn);
e.SetBuffer(buffer, 0, buffer.Length);
Socket sock = e.UserToken as Socket;
sock.SendAsync(e);

where dataIn is a random string we want to send to the server. This is useful if you ever want to request more than a single file from the server, or if the transfer is done in more than one stage. The SendAsync method sends raw bytes of data to the remote connected socket.
The ProcessSend method looks like this:

if (e.SocketError == SocketError.Success)
{
    // Read data sent from the server
    Socket sock = e.UserToken as Socket;
    sock.ReceiveAsync(e);
}
else
{
    ResponseReceivedEventArgs args = new ResponseReceivedEventArgs();
    args.response = e.SocketError.ToString();
    args.isError = true;
    OnResponseReceived(args);
}

The only interesting thing here is the call to ReceiveAsync(e). This method is basically the core of the file transfer. We will want to get the data in two stages. The first stage sends us the size of the file.
The chances of not getting the file size in one piece are slim to non-existent (just convert the biggest possible 64-bit number into megabytes and see for yourself). The second stage sends us the file itself.

var dataFromServer = Encoding.UTF8.GetString(e.Buffer, 0, e.BytesTransferred);
Stage++;
FileLength = long.Parse(dataFromServer);

The second stage is where all the interesting things happen. Basically, this is the most important part of the process. First, we will read the bytes in the buffer.

ArrayOfDataTransfered.Write(e.Buffer, 0, e.BytesTransferred);
PositionInStream += e.BytesTransferred; // count the bytes received so far

The PositionInStream field simply counts how many bytes we got so far. If its value is lower than FileLength, it means the file is not yet here. Otherwise it means the file transfer is done.

if (PositionInStream < FileLength)
{
    Socket socks = e.UserToken as Socket;
    socks.ReceiveAsync(e);
}
else
{
    Stage = 0;
    // save the file
}

If the file is not yet here, it means it was too big to fit in one packet. Which means more packets are on their way and we simply have to read from the input buffer of the network adapter, by calling the ReceiveAsync(e) method again.
Now all you have to do is copy the memory stream to an isolated storage file stream, and the transfer will be complete. Since we are using a memory stream, you should probably prevent users from sending files bigger than 100 MB, otherwise it might trigger an Out of Memory exception. In that case, you can write the data directly to an isolated storage file stream if you so desire. You also need to take into account the security of the socket connection if you send sensitive data.
PS: You also need to add capabilities to your apps: ID_CAP_NETWORKING for Windows Phone, and Private Networks for Windows Store apps.
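The two-stage protocol above (the length as a UTF-8 string first, then the raw file bytes) is easy to prototype outside the Windows Runtime. Below is a hedged sketch in plain Python sockets, not the article's C# code; all names are illustrative, and unlike the article it adds a small client acknowledgement between the two stages so the length digits and the first file bytes cannot coalesce in the TCP byte stream:

```python
import socket
import threading

FILE_BYTES = b"reconstruction data " * 50  # stand-in for the roaming file

def serve_once(listener):
    """Accept one client on an already-bound listener and run the two stages."""
    conn, _ = listener.accept()
    with conn:
        conn.recv(64)                                       # the client's request
        conn.sendall(str(len(FILE_BYTES)).encode("utf-8"))  # stage 1: file length
        conn.recv(64)                                       # ack keeps the stages apart
        conn.sendall(FILE_BYTES)                            # stage 2: the file itself

def fetch(address):
    """Connect, request, read the length, then accumulate bytes until done."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.connect(address)
        sock.sendall(b"send me the file")
        length = int(sock.recv(64).decode("utf-8"))
        sock.sendall(b"ok")
        received = b""
        while len(received) < length:      # more packets are on their way
            chunk = sock.recv(4096)
            if not chunk:
                break
            received += chunk
        return received

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))            # 0 = let the OS pick a free port
listener.listen(1)
server = threading.Thread(target=serve_once, args=(listener,))
server.start()
data = fetch(listener.getsockname())
server.join()
listener.close()
```

The explicit ack is a design choice of this sketch: TCP is a stream, so without it a single recv on the client could return the length and part of the file together; the article's version relies on the client pacing its reads instead.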
http://social.technet.microsoft.com/wiki/contents/articles/20495.how-to-transfer-files-between-a-windows-store-app-and-any-windows-phone-app.aspx
How can I create a recipe that will populate its attributes using the fields from an instance of an object in a generic way? As an example, consider the following recipe:

component = $auth_docker

docker_image component.name do
  registry component.registry
  tag component.tag
  action :pull
end

When you have 50-odd recipes that look like this, maintaining them really gets overwhelming. In Python, I would probably implement a solution that would look a bit like this:

docker_image = DockerImage(**$auth_docker)

Or, I would create some sort of helper function to build it for me:

def generate_docker_image_lwrp(attributes):
    lwrp = DockerImage()
    lwrp.registry = attributes.registry
    lwrp.tag = attributes.tag
    return lwrp

The goal is to reduce maintenance on the recipes. For instance, this morning I wanted to add Chef's "retries" attribute to all recipes that pull an image. I had to edit all of them - I don't want that. I should have been able to a) add the attribute to the stack's JSON, b) edit the Ruby wrapper class so that instances of it (i.e.: $auth_docker) get the "retries" field, then c) add the retries attribute to the lwrp-generator. Since all recipes would use the same generator, recipes wouldn't need to be edited at all.
Is this possible using Chef, in a way that 'notifies' still works?
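Not an answer from the thread, but one hedged sketch of the generator idea in plain Ruby (the Struct, constant, and method names here are illustrative, not real Chef API; in a real recipe the same loop would call the resource's attribute setters inside the docker_image block):

```ruby
# Drive a resource's attribute list from one shared constant, so adding
# "retries" later is a one-line change instead of editing 50 recipes.
ATTRS = %i[registry tag retries].freeze

# Illustrative stand-in for the wrapper class hydrated from the stack's JSON.
Component = Struct.new(:name, :registry, :tag, :retries)

# Collect the non-nil fields of a component into a hash. Inside a recipe the
# loop body would instead call the resource DSL, e.g. `send(attr, value)`.
def docker_image_attributes(component)
  ATTRS.each_with_object({}) do |attr, acc|
    value = component.public_send(attr)
    acc[attr] = value unless value.nil?
  end
end
```

Because the loop only sets attributes that are populated, the same generator works for components that predate a newly added field.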
http://www.howtobuildsoftware.com/index.php/how-do/jxg/ruby-chef-aws-opsworks-configure-providers-from-variables-in-a-generic-way
- NAME
- DESCRIPTION
- USAGE
- METHODS
- PROPERTIES
- SHORTCUT METHODS
- TIE INTERFACE
- REMOVAL STRATEGY METHODS
- UTILITY METHODS
- SEE ALSO
- DIFFERENCES FROM CACHE::CACHE
- AUTHOR
NAME
Cache - the Cache interface
DESCRIPTION
files on a filesystem, or in memory).
USAGE
METHODS
- my $cache_entry = $c->entry( $key )
  Return a 'Cache::Entry' object for the given key. This object can then be used to manipulate the cache entry in various ways. The key can be any scalar string that will uniquely identify an entry in the cache.
- $c->purge()
  Remove all expired data from the cache.
- $c->clear()
  Remove all entries from the cache - regardless of their expiry time.
- my $num = $c->count()
  Returns the number of entries in the cache.
- my $size = $c->size()
  Returns the size (in bytes) of the cache.
PROPERTIES
When a cache is constructed these properties can be supplied as options to the new() method.
- default_expires
  The current default expiry time for new entries into the cache. This property can also be reset at any time.
  my $time = $c->default_expires();
  $c->set_default_expires( $expiry );
- removal_strategy
  my $strategy = $c->removal_strategy();
- size_limit
  The size limit for the cache.
  my $limit = $c->size_limit();
- load_callback
  The load callback for the cache. This may be set to a function that will get called anytime a 'get' is issued for data that does not exist in the cache.
  my $callback = $c->load_callback();
  $c->set_load_callback($callback_func);
- validate_callback
  The validate callback for the cache. This may be set to a function that will get called anytime a 'get' is issued for data that does not exist in the cache.
  my $callback = $c->validate_callback();
  $c->set_validate_callback($callback_func);
SHORTCUT METHODS
These methods all have counterparts in the Cache::Entry package, but are provided here as shortcuts. They all default to just wrappers that do '$c->entry($key)->method_name()'. For documentation, please refer to Cache::Entry.
- my $bool = $c->exists( $key )
- $c->set( $key, $data, [ $expiry ] )
- my $data = $c->get( $key )
- my $size = $c->size( $key )
- $c->remove( $key )
- $c->expiry( $key )
- $c->set_expiry( $key, $time )
- $c->handle( $key, [ $mode, [ $expiry ] ] )
- $c->validity( $key )
- $c->set_validity( $key, $data )
- $c->freeze( $key, $data, [ $expiry ] )
- $c->thaw( $key )
TIE INTERFACE
REMOVAL STRATEGY METHODS
- my $size = $c->remove_oldest()
  Removes the oldest entry in the cache and returns its size.
- my $size = $c->remove_stalest()
  Removes the 'stalest' (least used) object in the cache and returns its size.
- $c->check_size( $size )
UTILITY METHODS
These methods are only for use internally (by concrete Cache implementations).
- my $time = Cache::Canonicalize_Expiration_Time($timespec)
  Converts a timespec as described for Cache::Entry::set_expiry() into a unix time.
SEE ALSO
Cache::Entry, Cache::File, Cache::RemovalStrategy
DIFFERENCES FROM CACHE::CACHE
- The get/set methods DO NOT serialize complex data types. Use freeze/thaw instead (but read the notes in Cache::Entry).
- The get_object / set_object methods are not available, but have been superseded by the more flexible entry method and Cache::Entry class.
- There is no concept of 'namespace' in the basic cache interface, although implementations (eg. Cache::Memory) may choose to provide them. For instance, File::Cache does not provide this - but different namespaces can be created by varying cache_root.
- In the current Cache implementations purging is done automatically - there is no need to explicitly enable auto purge on get/set. The purging algorithm is no longer implemented in the base Cache class, but is left up to the implementations and may thus be implemented in the most efficient way for the storage medium.
- Cache::File no longer supports separate masks for entries and directories. It is not a very secure configuration and presents numerous issues for cache consistency and is hence deprecated. There is still some work to be done to ensure cache consistency between accesses by different users.
AUTHOR
https://metacpan.org/pod/Cache
I thought that when the callback was still busy when the timer expires, the function would be called again and stacked in a run queue.

Code:

import pyb
import micropython

timer = pyb.Timer(1, freq = 1)
count = 0

def print_(parm):
    print(parm)

def count_cb(tim):
    global count
    count += 1

def timer_cb(tim):
    global count
    tim.callback(count_cb)
    micropython.schedule(print_, count)
    if count % 5 == 0:
        pyb.delay(5000)
    else:
        pyb.delay(100)
    count += 1
    tim.callback(timer_cb)

timer.callback(timer_cb)
pyb.delay(60 * 1000)
timer.callback(None)

I know that micropython.schedule works like this. I've had a runtime error when I wasn't servicing them fast enough.
What's the rationale for supplying the timer to its callback?
https://forum.micropython.org/viewtopic.php?t=6281
Created on 2007-12-21 18:53 by roudkerk, last changed 2008-12-13 14:59 by loewis. This issue is now closed.
I got a report that one of the tests for the processing package was failing with

Fatal Python error: Invalid thread state for this thread

when run with a debug interpreter. This appears to be caused by the interaction between os.fork() and threads. The following attached program reliably reproduces the problem for me on Ubuntu edgy for x86. All that happens is that the parent process starts a subthread then forks, and then the child process starts a subthread.
With the normal interpreter I get

started thread -1211028576 in process 18683
I am thread -1211028576 in process 18683
started thread -1211028576 in process 18685
I am thread -1211028576 in process 18685

as expected, but with the debug interpreter I get

started thread -1210782816 in process 18687
I am thread -1210782816 in process 18687
started thread -1210782816 in process 18689
Fatal Python error: Invalid thread state for this thread
[5817 refs]

Notice that the child process is reusing a thread id that was being used by the parent process at the time of the fork. The code raising the error seems to be in pystate.c:

PyThreadState *
PyThreadState_Swap(PyThreadState *new)
{
    PyThreadState *old = _PyThreadState_Current;
    _PyThreadState_Current = new;
    /* It should not be possible for more than one thread state
       to be used for a thread.  Check this the best we can in debug
       builds. */
#if defined(Py_DEBUG) && defined(WITH_THREAD)
    if (new) {
        PyThreadState *check = PyGILState_GetThisThreadState();
        if (check && check != new)
            Py_FatalError("Invalid thread state for this thread");
    }
#endif
    return old;
}

It looks as though PyGILState_GetThisThreadState() is returning the thread state of the thread from the parent process which has the same id as the current thread. Therefore the check fails.
I think the thread local storage implementation in thread.c should provide a function _PyThread_ReInitTLS() which PyOS_AfterFork() can call. I think _PyThread_ReInitTLS() just needs to remove and free each "struct key" in the linked list which does not match the id of the calling thread. The included patch against python 2.5.1 fixes the problem for me.
Bug day task: we need this in before 2.6 is released. Gregory, go ahead and apply and see if it can stop the hell in the buildbots.
Updated version of roudkerk's patch. Adds the new function to pythread.h and is based off of current trunk. Note that Parser/intrcheck.c isn't used on my box, so it's completely untested.
roudkerk's original analysis is correct. The TLS is never informed that the old thread is gone, so when it sees the same id again it assumes it is the old thread, which PyThreadState_Swap doesn't like.
Incidentally, it doesn't seem necessary to reinitialize the lock. Posix duplicates the lock, so if you hold it when you fork, your child will be able to unlock it and use it as normal. Maybe there's some non-Posix behaviour or something even more obscure from when #401226 was done? (Reinitializing is essentially harmless though, so in no way should this hold up release.)
I applied this in r64212.
The fix is required to run multiprocessing on Python 2.4 and 2.5, see #4451. I suggest we fix the issue in 2.5.3.
The fork-thread-patch-2 patch doesn't work on Python 2.5. I'm getting a segfault on my system:

test_connection (multiprocessing.tests.WithProcessesTestConnection) ...
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fa2e999f6e0 (LWP 10594)]
0x000000000052065f in PyCFunction_Call (func=Cannot access memory at address 0x7ffeffffffd8
) at Objects/methodobject.c:73
73              return (*meth)(self, arg);

Linux on AMD64 with Python 2.5 svn --with-pydebug.
I was wrong and the patch is right. Something is wrong in multiprocessing's connection_recvbytes_into() function for the old buffer protocol.
Somehow PyArg_ParseTuple(args, "w#|" ...) fucks up the process' memory.
Martin, are you fine with the patch? fork-thread-patch-2 still applies with some fuzz.
Looks fine to me, somebody please backport it. I'm concerned about the memory leak that this may cause, though (IIUC): all the values are garbage, provided that they are pointers to dynamic memory in the first place. I think in the long run, we should allow storing a pointer to a release function along with the actual value to release.
Since this is a Python 2.5.3 issue, I'm lowering to deferred blocker until after 3.0 and 2.6.1 are released.
Committed fork-thread-patch-2 as r67736 into the 2.5 branch.
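The attached program isn't reproduced in the report, but its described shape (parent starts a subthread, forks, child starts a subthread, both printing thread ids) can be sketched like this on POSIX. This is a hedged reconstruction using the modern threading API, not the original 2.5-era test file:

```python
import os
import threading

def thread_info():
    """Start a fresh thread and report its ident together with the pid."""
    result = {}
    def worker():
        result["ident"] = threading.get_ident()
        result["pid"] = os.getpid()
    t = threading.Thread(target=worker)
    t.start()
    t.join()
    return result["ident"], result["pid"]

def demo():
    # Parent: start a subthread, then fork.
    ident, pid = thread_info()
    print("started thread %d in process %d" % (ident, pid))
    child = os.fork()  # POSIX only; the child inherits the parent's TLS keys
    if child == 0:
        # Child: start a subthread. Its ident can collide with the id the
        # parent's subthread had at fork time, which is what trips the
        # Py_DEBUG check in PyThreadState_Swap on unpatched interpreters.
        ident, pid = thread_info()
        print("started thread %d in process %d" % (ident, pid))
        os._exit(0)
    os.waitpid(child, 0)
```

On a fixed interpreter demo() just prints two "started thread" lines; on an unpatched debug build of the era it could abort with the fatal error above.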
http://bugs.python.org/issue1683
Solution: A simple brute force approach would require checking, for each pair of points in the matrix, whether the points form the top left and bottom right corners of a rectangle (all 1's), and computing its area. Since there are O(N^2) points, there are O(N^4) pairs of points. Checking whether there is a rectangle between them and computing its area takes O(N^2) comparisons. Thus the total run-time is O(N^6), too bad !!!
Definitely we can re-use some of our computations. For example, for a pair of points (X1, Y1) and (X2, Y2), use the intermediate points (X1, Y) and (X2, Y) for some Y in the range [Y1, Y2], to determine whether there is a rectangle between (X1, Y1) and (X2, Y2).
Rectangles
There is a rectangle between (X1, Y1) and (X2, Y2) if for some Y in the range [Y1, Y2], there is a rectangle between (X1, Y1) and (X2, Y) (red dots) and a rectangle between (X1, Y) and (X2, Y2) (yellow dots). The run-time with this optimization is O(N^4), still bad !!!
But we got the idea that this requires some kind of dynamic programming approach. Instead of sticking with the above approach of checking whether there exists a rectangle or not, let's modify our approach by assuming that there is a rectangle starting at a point (X, Y) with the value 1 in the matrix, because there will always be a rectangle starting at a point having a value of 1 (even if it's a rectangle of size 1x1).
Rows in matrix w.r.t. a single column
Let's say that currently we are at the top left cell with a 1, as in the above diagram; then any rectangle starting at that cell will have a maximum height equal to the maximum number of continuous 1's along the same column starting at that cell. In the above diagram it is 5. But for each of those rows, the number of continuous 1's along the width can be different (as shown in the diagram), which suggests that multiple possible rectangles could be formed.
For only the first row, area of rectangle = 1x5 = 5.
For the first 2 rows, area of rectangle = 2x5 = 10 (because the width of the rectangle has to be the minimum over all the rows).
For the first 3 rows, area of rectangle = 3x3 = 9.
For the first 4 rows, area of rectangle = 4x3 = 12.
For the first 5 rows, area of rectangle = 5x3 = 15.
Note that the above areas are all for rectangles starting at the top left cell. Rectangles can start anywhere in the matrix at a cell with value 1. Among all possible rectangles in the above diagram, the maximum area is 15, which coincidentally starts at the top left cell and ends in the last row.
Longest running histogram
In the above diagram, the maximum area of a rectangle is 3x7 = 21, starting at the start of the 2nd row and ending at the end of the 4th row.
Thus for each cell (X, Y) in the matrix, compute the width towards the right (same row), starting from the column of that cell (i.e. the number of continuous 1's along the same row towards the right, starting from (X, Y)). Then iterate towards the bottom of the matrix starting from (X, Y) along the same column. While iterating each row towards the bottom, compute the width of each row (similar to the starting row). If the cell value of a bottom row (same column) is 1, then continue to go down; else stop iterating.
If the width of this row is greater than the current minimum width, then, considering the minimum width, compute the area of the rectangle with height equal to the distance of this row from the start row and width equal to the current minimum width. For e.g. in the first matrix diagram, if the blue row is our starting row and the orange row is our current row, then height = 4 and width = 3. Else if the width of a bottom row is less than the current minimum width, then update the current minimum width and compute the area of the rectangle.
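The enumeration above boils down to a running minimum over row widths. This tiny helper (my naming, not from the original post) takes the widths of consecutive rows starting at a cell and returns the best rectangle area anchored at that cell:

```python
def best_area_from_widths(widths):
    """widths[k] = number of continuous 1's in row k, starting at the anchor column."""
    best = 0
    min_width = float("inf")
    for height, width in enumerate(widths, start=1):
        min_width = min(min_width, width)   # the narrowest row bounds the rectangle
        best = max(best, height * min_width)
    return best

# Widths from the diagram (5, 5, 3, 3, 3) give areas 5, 10, 9, 12, 15.
print(best_area_from_widths([5, 5, 3, 3, 3]))  # -> 15, matching the walk-through
```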
Python code for the above approach:

class Solution(object):
    def maximalRectangle(self, matrix):
        if len(matrix) == 0:
            return 0
        max_len_rt = [[0] * (len(matrix[0]) + 1) for _ in range(len(matrix) + 1)]
        max_area = 0
        for row in reversed(range(len(matrix))):
            for col in reversed(range(len(matrix[0]))):
                if matrix[row][col] == "1":
                    x = max_len_rt[row][col + 1] + 1
                    max_len_rt[row][col] = x
                    max_area = max(max_area, x)
                    for row2 in range(row + 1, len(matrix)):
                        if matrix[row2][col] == "1":
                            y = max_len_rt[row2][col]
                            if y <= x:
                                area = (row2 - row + 1) * y
                                x = y
                            else:
                                area = (row2 - row + 1) * x
                            max_area = max(max_area, area)
                        else:
                            break
        return max_area

Note that in order to compute the width along the right side starting from a particular cell, we are using a dynamic programming approach. If a cell value is 1, then the width along the right side starting from the cell is equal to:

width(X, Y) = 1 + width(X, Y+1), if matrix[X][Y] = 1

Since we are iterating starting from the cell at the bottom right corner, when we compute width(X, Y) we have already computed width(X, Y+1). In the above code 'max_len_rt' is our width variable. The run-time of the above code is O(N^3), since for each cell we do one more iteration towards the bottom. The above code is accepted by the LeetCode online judge. But some of the solutions presented by submitters run in O(N^2) time complexity. So I took up the additional effort of solving this problem in O(N^2). To do that, we will revisit one of our earlier LeetCode problem solutions: the Largest Rectangle in Histogram problem. But why?
Note that the matrix diagrams presented above, when rotated by 90 degrees anti-clockwise, present themselves as histograms of width 1.
Matrix transformed into Histograms.
Finding the largest rectangle in the matrix is equivalent to finding the largest rectangle in a histogram for each cell in the matrix, and then taking the max. The above is one such histogram for the cell in blue at the bottom left corner.
The height of a bin is the same as the row width defined in our earlier approach.

In our solution to the largest rectangle in a histogram problem, for a histogram with N bins, the time complexity to search for the longest "compatible" bins along both the left and the right side of a bin was O(logN), and thus the total time taken for N such bins was O(N*logN). In fact, the total time can be improved to O(N) by using a different technique, which we did not discuss there.

The idea is to find the closest incompatible bin for each bin. So for a bin of height h1: if the next bin, of height h2, is smaller than h1, then we are done. Otherwise, if h2 > h1, we jump to the location of the closest incompatible bin for h2. Since h2 > h1, the closest incompatible bin for h2 must be located at or before the closest incompatible bin for h1. If the height h3 of the closest incompatible bin for h2 is less than h1, then we are done; otherwise we repeat the above process with h3.

(Figure: closest incompatible bin)

At first this does not look like an O(N) solution, because for each bin we are hopping across bins on the right or the left. But careful analysis reveals that the total number of hops is also O(N); in the worst case the number of hops is 2N. Below is the Python code for finding the longest "compatible" distance along the right side for each bin, using the logic described above:

```python
def get_max_flow(bins):
    flows = [-1] * len(bins)
    for idx in reversed(range(len(bins))):
        if idx == len(bins) - 1 or bins[idx] > bins[idx + 1]:
            flows[idx] = idx + 1
        else:
            y = idx + 1
            while True:
                x = flows[y]
                if x == len(bins) or bins[idx] > bins[x]:
                    flows[idx] = x
                    break
                else:
                    y = x
    return flows
```

In order to convince myself that the run-time of the above code is indeed O(N), I benchmarked the code with random inputs of sizes 1000 to 10000 and obtained the timing analysis below.

(Figure: timing analysis of the above code)
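The hop chain is easier to see on a concrete histogram. The function is repeated verbatim below so the snippet runs on its own; flows[i] ends up holding the index of the closest strictly shorter bin to the right of bin i, or len(bins) when no such bin exists:

```python
def get_max_flow(bins):
    # flows[i] = index of the closest bin to the right of i that is strictly
    # shorter than bins[i]; len(bins) if every bin to the right is >= bins[i].
    flows = [-1] * len(bins)
    for idx in reversed(range(len(bins))):
        if idx == len(bins) - 1 or bins[idx] > bins[idx + 1]:
            flows[idx] = idx + 1
        else:
            y = idx + 1
            while True:
                x = flows[y]  # hop to the closest incompatible bin for y
                if x == len(bins) or bins[idx] > bins[x]:
                    flows[idx] = x
                    break
                else:
                    y = x
    return flows

bins = [2, 1, 5, 6, 2, 3]
flows = get_max_flow(bins)
print(flows)  # [1, 6, 4, 4, 6, 6]
```

For example, bin 2 (height 5) hops once: its right neighbour (height 6) is taller, so it jumps to that neighbour's incompatible bin at index 4 (height 2), which is shorter than 5, so flows[2] = 4.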
(Figure: linear trend)

Note that the trend is linear, with multiple spikes in between caused by the variation between O(N) and O(2N) hop counts for different input patterns. Thus convinced that finding the largest rectangle in a histogram indeed takes O(N) time, I applied the same algorithm to each cell in the matrix having a value of 1. The total time complexity of finding the longest compatible heights along each direction for all cells of the matrix is O(N^2) (using dynamic programming). Below is the final code for finding the largest rectangle in the matrix:

```python
class Solution(object):
    def get_column_flow(self, matrix, max_len_rt, direction=1):
        flow = [[0] * (len(matrix[0])) for _ in range(len(matrix))]
        n, m = range(len(matrix)), range(len(matrix[0]))
        r_iter = n[::-1] if direction == 1 else n
        c_iter = m[::-1] if direction == 1 else m
        edge_row = len(matrix) - 1 if direction == 1 else 0
        for row in r_iter:
            for col in c_iter:
                if matrix[row][col] == "1":
                    if row == edge_row or max_len_rt[row][col] > max_len_rt[row + direction][col]:
                        flow[row][col] = row + direction
                    else:
                        y = row + direction
                        while True:
                            x = flow[y][col]
                            if x == edge_row + direction or max_len_rt[row][col] > max_len_rt[x][col]:
                                flow[row][col] = x
                                break
                            else:
                                y = x
        return flow

    def maximalRectangle(self, matrix):
        if len(matrix) == 0:
            return 0
        max_len_rt = [[0] * (len(matrix[0]) + 1) for _ in range(len(matrix) + 1)]
        for row in reversed(range(len(matrix))):
            for col in reversed(range(len(matrix[0]))):
                if matrix[row][col] == "1":
                    max_len_rt[row][col] = max_len_rt[row][col + 1] + 1
        fwd_flow = self.get_column_flow(matrix, max_len_rt, 1)
        bwd_flow = self.get_column_flow(matrix, max_len_rt, -1)
        max_area = 0
        for row in range(len(matrix)):
            for col in range(len(matrix[0])):
                if matrix[row][col] == "1":
                    width = max_len_rt[row][col]
                    a = fwd_flow[row][col] - row
                    b = row - bwd_flow[row][col]
                    height = a + b - 1
                    max_area = max(max_area, width * height)
        return max_area
```

Looks like we have achieved our goal of O(N^2) complexity.
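As a quick sanity check (my addition, not part of the original post), a naive reference implementation that follows the "grow downwards while tracking the minimum row width" idea from the start of the article agrees with the known answer on the classic LeetCode example:

```python
def maximal_rectangle_brute(matrix):
    # Brute-force reference: for every top-left cell, extend the rectangle
    # downwards row by row, tracking the minimum row width seen so far.
    if not matrix:
        return 0
    rows, cols = len(matrix), len(matrix[0])
    best = 0
    for r in range(rows):
        for c in range(cols):
            width = float("inf")
            for r2 in range(r, rows):
                w = 0
                while c + w < cols and matrix[r2][c + w] == "1":
                    w += 1
                if w == 0:
                    break  # column of 1's ends here; no taller rectangle
                width = min(width, w)
                best = max(best, (r2 - r + 1) * width)
    return best

print(maximal_rectangle_brute([
    ["1", "0", "1", "0", "0"],
    ["1", "0", "1", "1", "1"],
    ["1", "1", "1", "1", "1"],
    ["1", "0", "0", "1", "0"],
]))  # 6
```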
http://www.stokastik.in/leetcode-maximal-rectangle/
Question

You're trying to determine whether to expand your business by building a new manufacturing plant. The plant has an installation cost of $12 million, which will be depreciated straight-line to zero over its four-year life. If the plant has projected net income of $1,854,300, $1,907,600, $1,876,000, and $1,329,500 over these four years, what is the project's average accounting return (AAR)?
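The page shows no worked solution. As a sketch of the standard AAR formula (average net income divided by average book value, where straight-line depreciation to zero makes the average book value half the installation cost):

```python
# Average accounting return = average net income / average book value.
net_income = [1_854_300, 1_907_600, 1_876_000, 1_329_500]
avg_net_income = sum(net_income) / len(net_income)   # 1,741,850
avg_book_value = (12_000_000 + 0) / 2                # straight-line to zero
aar = avg_net_income / avg_book_value
print(round(aar, 4))  # 0.2903, i.e. about 29.03%
```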
http://www.solutioninn.com/youre-trying-to-determine-whether-to-expand-your-business-by
Created on 2010-05-18.18:33:04 by mcieslik, last changed 2015-01-14.00:48:23 by santa4nt.

It takes ~300x longer to create instances of array.array in Jython 2.5.1 vs Python 2.6 and Python 3.1, e.g. the following:

```python
from array import array
array('b', large_string)
```

$ python2.6 profile_array.py
0.0104711055756
$ python3.1 profile_array.py
0.00699281692505
$ jython profile_array.py
3.00600004196
$ jython --version
Jython 2.5.1

Did you measure total program time?

The 3s of jython profile_array.py do **NOT** include the JVM start-up time, so it is 'wall-clock' time of the loop. This is what is in the attached script:

```python
start = time()
for i in range(10000):
    array('b', large_string)
stop = time()
```

The problem here is that we copy the string. In 2.6 this can be avoided by supporting a string to back an array. This can (and should) be part of a general support for memoryview.

Better title - "Jython ____" is just noise here.

The reported performance problem is still seen in 2.7.0 beta 4. In reviewing CPython 2.7's arraymodule.c, I don't see any support for copy-on-write semantics to do this speedup. Instead it's just a straightforward memcpy in the frombytes function. So the additional overhead here has a simple root cause: unlike CPython, Jython uses the same method, PyArray.fromStream, to read from an input stream into a given array. Although the read should be reasonably fast/inlineable (but with more overhead than simply looping through the string), the write performance into the array is very slow, since it uses java.lang.reflect.Array, in this case java.lang.reflect.Array#setByte. Some simple specialization would speed things up considerably, much as was done in CPython.

Changing misleading title! (Copy-on-write would still be interesting, and perhaps more feasible on Jython.)

@zyasoft Something like the patch I have in mind? I can get a better profile number with this naive "bulk" put() implementation sans copy-on-write optimization, but it's modest at best.
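For reference, a self-contained approximation of the benchmark loop discussed in the thread. The actual contents and size of large_string in the attached script are not given in the issue, so both are assumptions here, and the loop is scaled down to keep the run short:

```python
# Hypothetical, scaled-down version of the attached profile_array.py.
from array import array
from time import time

large_string = b"x" * 10_000  # assumption: some large byte buffer

start = time()
for _ in range(1_000):
    array('b', large_string)  # each construction copies the whole buffer
elapsed = time() - start
print(elapsed)
```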
http://bugs.jython.org/issue1612
So the issue is, every time I close the window my video is showing in, it hangs and I have to force-quit Python. I couldn't figure out for the life of me what the heck was going on and finally, FINALLY, after a million years of googling, I realised… the code doesn't work in Jupyter Notebook (faints).

In any case - thanks to the guy who raised that and stopped my endless blind searching for solutions!!!!!

I am now running the code in a .py file and it works!! Follow the code below (credits to this guy:) if you want to record a live video. It opens the camera, records the video, closes the window when you press 'q', and saves the video in .avi format. By the way, I am on macOS High Sierra 10.13.4, Python 3.6.5, OpenCV 3.4.1. If you want to read a file instead, replace cap = cv2.VideoCapture(0) with cap = cv2.VideoCapture('yourfilename.avi').

```python
import cv2
import numpy as np

# Open the default camera. (The pasted snippet was missing these lines;
# cap.get(3) and cap.get(4) return the frame width and height.)
cap = cv2.VideoCapture(0)
frame_width = int(cap.get(3))
frame_height = int(cap.get(4))

# Define the codec and create a VideoWriter object; the output is stored
# in the 'outpy.avi' file.
out = cv2.VideoWriter('outpy.avi', cv2.VideoWriter_fourcc('M', 'J', 'P', 'G'),
                      10, (frame_width, frame_height))

while True:
    ret, frame = cap.read()
    if ret == True:
        # Write the frame into the file 'outpy.avi'
        out.write(frame)

        # Display the resulting frame
        cv2.imshow('frame', frame)

        # Press Q on keyboard to stop recording
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    # Break the loop
    else:
        break

# When everything is done, release the video capture and video write objects
cap.release()
out.release()

# Close all the frames
cv2.destroyAllWindows()
```
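A side note on the key-press check in that loop: cv2.waitKey can return a value with extra high bits set on some platforms (modifier or lock-key state), so the code masks off everything but the low byte before comparing with ord('q'). A cv2-free illustration, where the raw value below is a made-up example of such a return code:

```python
# Why "& 0xFF": keep only the low byte of the key code before comparing.
raw = 0x200071          # hypothetical raw return value with extra high bits
key = raw & 0xFF        # low byte only
print(key, ord('q'))    # 113 113
```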
https://www.yinglinglow.com/blog/2018/06/23/opencv-video
In previous articles I explained multiple ways to generate a DropDownList in MVC using HTML helpers and hard-coded items, multiple ways to pass data from controller to view in MVC with examples, and an example of Create, Read, Update, Delete operations using ASP.NET MVC and Entity Framework.

Implementation: Let's create a sample MVC application to dynamically fill a dropdownlist from a SQL Server database. First of all, create a SQL Server database and name it "MySampleDataBase", and in this database create a table named "Department".

You can create the above-mentioned database and table, and insert data into the table, by pasting the following script into the SQL Server query editor:

```sql
CREATE DATABASE MySampleDataBase
GO
USE [MySampleDataBase]
GO
CREATE TABLE Department
(
    DepartmentId INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
    DepartmentName VARCHAR(50) NOT NULL
)
```

Now insert some data into the table using the following queries:

```sql
INSERT INTO Department(DepartmentName) VALUES('HR')
INSERT INTO Department(DepartmentName) VALUES('SALES')
INSERT INTO Department(DepartmentName) VALUES('ACCOUNTS')
INSERT INTO Department(DepartmentName) VALUES('IT')
```

Now our database and table are ready. Let's create the demo application:

Step 1: Open Visual Studio (I am using Visual Studio 2012 for this application). Go to File Menu -> New Project. Select your preferred language, either Visual C# or Visual Basic, from the left pane; for this tutorial we will use Visual C#. Select ASP.NET MVC 4 Web Application. Name the project "MvcBindDropDownList". Specify the location where you want to save this project, as shown in the image below, and click the OK button.

Step 2: A "New ASP.NET MVC 4 Project" dialog box with various project templates will open. Select the Internet template from the available templates, select Razor as the view engine, and click the OK button. It will automatically add the required files and folders to the solution, so a default running MVC application is ready. But our aim is to bind a dropdownlist from the database using Entity Framework.
Step 3: Now right-click the project name (MvcBindDropDownList) in Solution Explorer. Select Add -> New Item. Select ADO.NET Entity Data Model and name it "MySampleDataModel.edmx", as shown in the image below.

Step 4: A new Entity Data Model Wizard dialog box will open. Select Generate Model from Database and click Next.

Step 5: Select your database from the server as shown in the image below. The wizard will automatically create a connection string in the web.config file with the name "MySampleDataBaseEntities"; just check your web.config file placed in the root folder. You can also change the name of the connection string, but leave it as it is for now.

Step 6: Select the database objects that you wish to add to the model, as shown in the image below. In our case it is the "Department" table. Leave the model namespace "MySampleDataBaseModel" as it is. Click the Finish button.

Step 7: The EDMX and model files are added to the solution as shown in the image below (highlighted).

Step 8: Now in the Home controller (Controllers folder\HomeController.cs), remove all the auto-generated code and paste the following:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;
using MvcBindDropDownList.Models;

namespace MvcBindDropDownList.Controllers
{
    public class HomeController : Controller
    {
        public ActionResult Index()
        {
            MySampleDataBaseEntities db = new MySampleDataBaseEntities();
            ViewBag.Departments = new SelectList(db.Departments, "DepartmentId", "DepartmentName");
            return View();
        }
    }
}
```

Explanation:

--> In the first line of the above Index action, I created an object of the MySampleDataBaseEntities class:

MySampleDataBaseEntities db = new MySampleDataBaseEntities();

Notice that Entity Framework has automatically added a class named "MySampleDataBaseEntities". To locate the class, open the Models folder -> expand MySampleDataModel.edmx -> expand MySampleDataModel.Context.tt -> double-click the MySampleDataModel.Context.cs file.
So this "MySampleDataBaseEntities" class will help us connect to our database, and through this object we can get the data from the Department table. For example, db.Departments will return all the departments contained in the Department table.

--> Then in the second line I specified "DepartmentId" as the DataValueField and "DepartmentName" as the DataTextField, just as we specify in ASP.NET while binding a dropdownlist, and stored the resulting SelectList in the ViewBag's dynamic property Departments (the property name can be anything, but here I named it Departments).

--> Then in the third line I returned the view.

Step 9: In your Index.cshtml view (Views folder\Home folder\Index.cshtml), remove all the auto-generated stuff and paste the following:

```cshtml
@{
    ViewBag.Title = "Home Page";
}

Select Department: @Html.DropDownList("Departments", "Select")
```

Explanation: In the above code, the DropDownList helper accepts two parameters. The first parameter ("Departments") is compulsory and must be the same as the ViewBag property name (ViewBag.Departments). The second parameter, optionLabel, is, as the name says, optional; it is generally used for the first option in the DropDownList, which in our case is "Select". Now run the application and check.
https://www.webcodeexpert.com/2015/01/how-to-dynamically-bind-aspnet-mvc.html