text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
Connecting and reading from MySQL954674 Aug 8, 2012 7:23 PM I have written previous programs that connect and read entries from MySQL using C#, but I'm sort of new to Java and having trouble doing this. I'm basically trying to connect to the database and only retrieve computer names and nothing else. This is the code I've got so far and can't figure out what I'm doing wrong as I keep getting: Cannot connect to database server. If anyone has any suggestions that would be great. ThanksIf anyone has any suggestions that would be great. Thanks public class ComputerCheck { /** * @param args * @throws ClassNotFoundException * @throws IllegalAccessException * @throws InstantiationException * @throws SQLException */ public static void main(String[] args) throws InstantiationException, IllegalAccessException, ClassNotFoundException, SQLException { Connection conn = null; try { String url = "jdbc:mysql://servername.domain.uni.edu"; String username = "root"; String password = "pass"; String dbName = "database"; Class.forName("com.mysql.jdbc.Driver").newInstance(); conn = DriverManager.getConnection(url + dbName, username, password); System.out.println("Database Connection Established"); Statement s = conn.createStatement(); s.executeQuery("SELECT name FROM table"); ResultSet rs = s.getResultSet (); int count = 0; while (rs.next()) { String computerName = rs.getString("name"); System.out.println("Computer Name: " + computerName); ++count; } rs.close(); s.close(); System.out.println(count + " rows were retrieved"); } catch (Exception e) { System.err.println ("Cannot connect to database server"); } conn.close(); } } This content has been marked as final. Show 8 replies 1. Re: Connecting and reading from MySQLDrClap Aug 8, 2012 7:39 PM (in response to 954674)1 person found this helpful 951671 wrote:That's because your code outputs that regardless of what the actual problem was. Not surprising you can't tell what the actual problem was. So change your code to do this in the catch block: I keep getting: Cannot connect to database server. e.printStackTrace(); 2. Re: Connecting and reading from MySQL954674 Aug 9, 2012 4:32 PM (in response to DrClap)Getting this now: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure 3. Re: Connecting and reading from MySQLsabre150 Aug 9, 2012 4:55 PM (in response to 954674)1 person found this helpful 951671 wrote:At which line was this exception thrown? Getting this now: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure Note - the stack trace will give you the line number and you can then use that to find the line within your source. 4. Re: Connecting and reading from MySQLmaheshguruswamy Aug 9, 2012 4:56 PM (in response to 954674)The URL should be like jdbc:mysql://servername.domain.uni.edu/database . The way you have it in your code, the URL being passed to the connection manager is jdbc:mysql://servername.domain.uni.edudatabase. Edited by: maheshguruswamy on Aug 9, 2012 9:56 AM 5. Re: Connecting and reading from MySQL954674 Aug 9, 2012 5:09 PM (in response to maheshguruswamy)jdbc:mysql://servername.domain.uni.edu/database That was it. I was looking at couple of websites and the way the had database listed was declaring a string variable with database name and concatenate it to the url, which is what I did but obviously didn't work. Thanks 6. Re: Connecting and reading from MySQL954674 Aug 9, 2012 5:56 PM (in response to 954674)Also would the same syntax work for connecting to SQL server? 7. Re: Connecting and reading from MySQLmaheshguruswamy Aug 9, 2012 6:27 PM (in response to 954674)No...check the individual jdbc driver providers for information on connection string. For sql server, check the msdn docs person found this helpful 8. Re: Connecting and reading from MySQL954674 Aug 9, 2012 6:39 PM (in response to maheshguruswamy)Thanks, I'll check that out.
https://community.oracle.com/thread/2427135
CC-MAIN-2017-22
refinedweb
617
57.06
22 February 2010 15:00 [Source: ICIS news] HOUSTON (ICIS news)--Here are some of the top stories from ICIS news ?xml:namespace> (Please click on the link to read the full text.) Interview: US Huntsman to boost CAPX to $250-$275m in 2010 US-based chemical firm Huntsman will boost capital spending (CAPX) to $250m-$275m (€183m-€201m) in 2010, up sharply from 2009 levels, a company official said on Friday. US producer Huntsman’s fourth quarter net income plummets to $66m US producer Huntsman's net income for the fourth quarter of 2009 plummeted to $66m (€48.8m), from $598m for the same period in 2008 which included $815m in income related to Hexion’s terminated takeover of the company, it said on Friday. Interview: Bio-succinic acid can beat petchems on price US benzene buyers wary of spike in spot market Terra Industries swings to $3.8m Q4 loss, sales drop 47% Terra Industries posted a 2009 fourth-quarter net loss of $3.8m (€2.8m), versus a profit of $165.7m in the year-earlier period, largely because of $77.8m in special charges and a 47% decline in sales, the US fertilizer firm said on Thursday. US Rockwood swings to $10.8m Q4 profit, sales rise 7.4% US ethanol healthy but higher blends, tax breaks critical – RFA The US ethanol industry is in a healthy state but must tackle some critical challenges in 2010, including raising gasoline blend levels and preserving tax incentives that are due to expire at year-end, the Renewable Fuels Association (RFA) said on Tuesday. Update: Lyondell reaches $450m deal to settle creditor case LyondellBasell on Tuesday announced that a $450m (€333m) settlement had been reached to satisfy a dispute with unsecured creditors, pushing the petrochemicals major closer to an emergence from US Chapter 11 bankruptcy
http://www.icis.com/Articles/2010/02/22/9336507/americas-top-stories-weekly-summary.html
CC-MAIN-2014-15
refinedweb
306
59.33
How not to code By eschrock on Mar 31, 2005) snprintf(func_name, sizeof (func_name), "do_%s", cmd); func_ptr = (int (\*)(int, char \*\*)) dlsym(RTLD_DEFAULT, func_name); if (func_ptr == NULL) { fprintf(stderr, "Unrecognized command %s", cmd); usage(); } return ((\*func_ptr)(argc, argv)); So when you type "a.out foo", the command would sprint: for (i = 0; i < sizeof (func_table) / sizeof (func_table[0]); i++) { if (strcmp(func_table[i].name, cmd) == 0) return (func_table[i].func(argc, argv)); } fprintf(stderr, "Unrecognized command %s", cmd); usage(); do_foo() using cscope. This serves as a good reminder that the most clever way of doing something is usually not the right answer. Unless you have a really good reason (such as performance), being overly clever will only make your code more difficult to maintain and more prone to error. Update - Since some people seem a little confused, I thoght I'd elaborate two points. First off, there is no loadable library. This is a library linked directly to the application. There is no need to asynchronously update the commands. Second, the proposed function table does not have to live separately from the code. It would be quite simple to put the function table in the same file with the function definitions, which would improve maintainability and understability by an order of magnitude.
https://blogs.oracle.com/eschrock/date/200503
CC-MAIN-2015-48
refinedweb
212
63.29
Red Hat Bugzilla – Bug 89698 Statically linked programs segfault Last modified: 2016-11-24 10:11:37 EST Description of problem: A static executable linked against glibc-2.2.x or older will segfault when calling gethostbyname with glibc-2.3.x. Compile this program with static linking under redhat 6.x or 7.x and try to run it on redhat 9: #include <netdb.h> main() { gethostbyname("localhost"); } It will segfault. This makes it almost impossible to distribute binaries for redhat linux. If I compile dynamically on redhat 7.x, users of 9 won't have the right version of libgmp or a dozen other libraries. And users of 6.x won't have the right version of glibc. Compile dynamically on 9, and 7.x uses won't have the right glibc version or a dozen other libraries. Compile statically, and it still doesn't work because all the libnss stuff isn't backward compatible. Version-Release number of selected component (if applicable): glibc-2.3.2-27.9 First of all, static linking does *not* guarantee the binary runs everywhere if the program uses NSS. To the contrary, it makes it less likely. Second, I cannot reproduce any problem. Running statically linked binary on RHL7.2 works just fine on a RHL9 system. You have to be much more specific. What services are used? The steps to reproduce are quite simple. On a system with glibc-2.2.5-43 (or glibc-2.2.4-32) and gcc-2.96-113 compile the following two line program with the gcc option -static (no other options are necessary): #include <netdb.h> main() {gethostbyname("localhost");} Then run the program on a redhat 9 system with glibc-2.3.2-27.9. The program will segfault. /etc/hosts is the default (files, dns). The backtrace from gdb is: #0 0x08082901 in _dl_relocate_object () at ../sysdeps/i386/dl-machine.h:348 #1 0x0806e3bf in dl_open_worker (a=0xbfffed10) at dl-open.c:294 #2 0x08052a1b in _dl_catch_error (objname=0xbfffed08, errstring=0xbfffed0c, operate=0x806dfa8 <dl_open_worker>, args=0xbfffed10) at dl-error.c:152 #3 0x0806e4fb in _dl_open (file=0xbfffee80 "libnss_files.so.2", mode=1, caller=0x0) at dl-open.c:407 #4 0x0805ab5e in do_dlopen (ptr=0xbfffee58) at dl-libc.c:78 #5 0x08052a1b in _dl_catch_error (objname=0xbfffee50, errstring=0xbfffee54, operate=0x805ab48 <do_dlopen>, args=0xbfffee58) at dl-error.c:152 #6 0x0805aa51 in __libc_dlopen (__name=0xbfffee80 "libnss_files.so.2") at dl-libc.c:42 #7 0x080588ba in __nss_lookup_function (ni=0x80afd28, fct_name=0x80a7c60 "gethostbyname_r") at nsswitch.c:340 #8 0x0805922a in __nss_lookup (ni=0xbfffef90, fct_name=0x80a7c60 "gethostbyname_r", fctp=0xbfffef94) at nsswitch.c:147 #9 0x0804d27f in __gethostbyname_r (name=0x8097ec8 "localhost", resbuf=0x80ae3d8, buffer=0x80af630 "", buflen=1024, result=0xbfffefd4, h_errnop=0xbfffefd8) at ../nss/getXXbyYY_r.c:168 #10 0x0804d077 in gethostbyname (name=0x8097ec8 "localhost") at ../nss/getXXbyYY.c:131 #11 0x080481f3 in main () #12 0x080482da in __libc_start_main (main=0x80481e0 <main>, argc=1, ubp_av=0xbffff074, init=0x80480b4 <_init>, fini=0x8097ea0 <_fini>, rtld_fini=0, stack_end=0xbffff06c) at ../sysdeps/generic/libc-start.c:129 If the libnss_files.so from glibc 2.3 isn't backward compatible, then why does it have the same major version number? With the version number the same, it is _impossible_ to run a program compiled statically with glibc 2.2 on a glibc 2.3 system. I have told you that a) I cannot reproduce the problem. I'm using that glibc-2.2.4-32 and gcc-2.96-112.7.2 and b) that statically linked problems have no right to assume compatibility if they are using NSS. Anyway, since I cannot reproduce any problem I have to assume it's some local bogosity on your system. I am an author of the distributed computing project Seventeen or Bust () and this problem has been the main reason I have stopped supporting Linux. I got tired of having to recompile dozens of binaries for every version of glibc when they just end up seg faulting due to buggy code I can't fix in NSS. This problem has existed for over a year. This report is not a localized problem to him. I find that any statically linked program which uses gethostbyname() will segfault on any machine except one built with the exact same version of libc. This makes it impossible to distribute static binaries for Linux which require networking support. Incredibly frustrating. Oddly enough, this problem disappears if gethostbyname is passed an IP address in dot notation (ie 127.0.0.1) instead of a hostname (ie localhost). Here is a bt from one of the many machines I can cause this problem on: Program received signal SIGSEGV, Segmentation fault. 0x080b08f5 in _dl_relocate_object () (gdb) bt #0 0x080b08f5 in _dl_relocate_object () #1 0x080a7643 in dl_open_worker () #2 0x08092253 in _dl_catch_error () #3 0x080a7837 in _dl_open () #4 0x08093306 in do_dlopen () #5 0x08092253 in _dl_catch_error () #6 0x080931f9 in __libc_dlopen () #7 0x0808c4c6 in __nss_lookup_function () #8 0x0808cb4e in __nss_lookup () #9 0x0808d337 in __nss_hosts_lookup () #10 0x08070fbc in gethostbyname_r () #11 0x08070db7 in gethostbyname () Cheers, Louie So why you link statically instead of dynamically? That's almost always a bad idea. If you look at e.g. Solaris, you cannot link a statically linked application using gethostbyname at all (and for a reason). That routine in /usr/lib/libnsl.a uses dlopen/dlsym/dlerror and Solaris provides no libdl.a library. In GLIBC, you can link such programs but they are guaranteed to work only if run against the same glibc as they have been linked against. Current glibc even issues a link time warning about it. If you need to link some specific library into the program, you should use -Bstatic -lthatlibrary -Bdynamic instead and keep libraries included in glibc linked in dynamically. That way symbol versioning ensures binary compatibility. I have two reason to link statically instead of dynamically. 1. Binary portability. If you dynamically link against glibc-2.3, then the binary won't run on a system with glibc-2.2, glibc-2.0, libc5, or any future glibc. You now have the exact same problem of needing a different binary for every different glibc version. 2. Security. A dynamically linked glibc makes it very easy to override a C library function with a custom version. This makes it a lot easier to fake out a licensing system or cheat at on online game. Yes you can still modify binaries or modify the kernel, but LD_PRELOAD=flexlm_crack.so is a lot easier. You're wrong about "or any future glibc". If you link dynamically against glibc-2.3, it will run against any future glibc (assuming it doesn't poke into glibc internals etc.). If you link against say glibc-2.1, it will run against glibc-2.2, glibc-2.3 and later glibcs as well. If you link statically but your program is not self-contained (e.g. because it uses NSS/iconv/locales), then you certainly don't get any portability advantages, just disadvantages, because the binary portability is suddenly not with the library you linked against and any future versions, but just the single one you linked against. If you think that when you link statically against say glibc 2.0 your program will run on a libc5 system, it will not. What you perhaps could do is ship all the NSS modules/locale definitions etc. you use together with the statically linked binary and tweak LD_LIBRARY_PATH in the statically linked program to point to the library with the modules (and libc.so/ld.so etc.). But that can be fairly huge. Plus you risk not including some NSS module needed on the target system. Or use a small dynamically linked helper application for NSS etc. from the statically linked program. We are also seeing this bug when running statically linked code. It showed up after upgrading glibc rpm sets from 2.3.2-11.9 to 2.3.2-27.9.7 Why did we upgrade glibc? Beacuse RedHat says it will "resolve vulnerabilities and address several bugs". Looks like it introduces or exposes some too. Why do we run statically linked code? Because it is a highly specialised piece of software, we don't have the source code, and the authors will not change the way it is distributed. We don't pay for it (so no leverage there) but it is essential. Looks like we downgrade glibc until it is fixed. This issue has cost us, our customers, and our prospective customers too much time and money. The bottom line is that we cannot support Redhat Linux 7.3. Whether we can support some other distribution remains to be seen. We are very reluctantly canceling Linux support for our products. (Aside: we have all the respect in the world for the Linux and Redhat programmers. I have been writing computer programs since 1962, and I know how difficult it is. Even in the days when computer programs occupied only 1000 bits, the things hardly ever worked.) In case anyone is interested: The trouble starts with a call to check out a license feature using FlexLm. This makes MacroVision look bad, but it isn't MacroVision's fault. Running on Redhat 7.3, statically linked, we see this: Starting program: xxx.vhd // Built Tue Nov 23 2004 Linux 2.4.18-3 // Running.. Program received signal SIGSEGV, Segmentation fault. 0x081cbf61 in _dl_relocate_object () (gdb) bt #0 0x081cbf61 in _dl_relocate_object () #1 0x081c69b3 in dl_open_worker () #2 0x081c558f in _dl_catch_error () #3 0x081c6ba7 in _dl_open () #4 0x081a2a06 in do_dlopen () #5 0x081c558f in _dl_catch_error () #6 0x081a28f9 in __libc_dlopen () #7 0x0819e9be in __nss_lookup_function () #8 0x0819f046 in __nss_lookup () #9 0x0819f23f in __nss_passwd_lookup () #10 0x0819c4b0 in getpwuid_r () #11 0x0819c2df in getpwuid () #12 0x0814e314 in lc_username () #13 0x081675ad in l_conn_msg () #14 0x0815702f in l_try_connect () #15 0x08156e37 in l_connect_host_or_list () #16 0x08156cb9 in l_connect () #17 0x0813fa89 in checkout_from_server () #18 0x0813e2e5 in lm_start_real () #19 0x0813de4b in l_checkout () #20 0x0813dcb1 in lc_checkout () #21 0x0814e934 in lp_checkout () ... And here is info on our Linux and libraries (I've removed people's names): Red Hat Linux release 7.3 (Valhalla) Kernel 2.4.18-3 on an i686 ---------------------- ./libc-2.2.5.so GNU C Library stable release version 2.2.5, by et al. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. Compiled by GNU CC version 2.96 20000731 (Red Hat Linux 7.3 2.96-110). Compiled on a Linux 2.4.9-9 system on 2002-04-15. Available extensions: GNU libio by crypt add-on version 2.1 by and others The C stubs add-on version 2.1.2. linuxthreads-0.9 by BIND-8.2.3-T5B NIS(YP)/NIS+ NSS modules 0.19 by Glibc-2.0 compatibility add-on by libthread_db work sponsored by Report bugs using the `glibcbug' script to <bugs@gnu.org>. ---------------- libdl-2.2.5.so ---------------- gcc -v Reading specs from /usr/lib/gcc-lib/i386-redhat-linux/2.96/specs gcc version 2.96 20000731 (Red Hat Linux 7.3 2.96-110).
https://bugzilla.redhat.com/show_bug.cgi?id=89698
CC-MAIN-2018-34
refinedweb
1,841
68.57
The Enumerable Enumerator is something I threw together to address a need to iterate and sequence through enumerations. I find this class useful when working with hardware and dealing with such things as bit fields. I also found application for this class in managing state and sequencing through states. For example, I may have an enumeration of bits: enum AlarmBits { Door=1, Window=2, Security=4, CoinJam=8, BillJam=16, } and I simply want to iterate through the enumeration to test each bit. Or, I may have a set of states: enum States { Slow, Medium, Fast } and in this case, I want to be able to change state in a forward or backwards direction but not exceed the state limits. As a side benefit, I found that using the methods in this class resulted in more robust application code, as I could change the enumeration without having to go back and update the application code. If I add an alarm bit or a state, I'm only changing the enumeration, not the code that implements iterating through the enumeration or managing a state. The class Enumerator consists of the following properties and methods. All the examples use this enum: Enumerator enum Test { A=1, B=2, C=3, D=4, } You will note that all the methods are static because they operate on the enumeration type as opposed to an instance of the enumeration. The use of generics improves type checking (for example, the current value and the minimum value must be of the same type) and avoids downcasting of the returned value in the application code. Returns the first value in the enumerator. As with all these methods, the enumerations are sorted by value, which may not be the same as the ordinal value. public static T First { get { return ((T[])Enum.GetValues(typeof(T)))[0]; } } Example: Console.WriteLine("First = " + Enumerator<Test>.First); The output is: First = A Returns the last enumeration value. public static T Last { get { T[] vals = (T[])Enum.GetValues(typeof(T)); return vals[vals.Length-1]; } } Console.WriteLine("Last = " + Enumerator<Test>.Last); Last = D This method returns the previous enumeration value, or returns the first value if the enumeration cannot be decremented further. I find this method, and the corresponding NextOrLast method, useful to work with enumerators that represent state. NextOrLast public static T PreviousOrFirst(T val) { T[] vals = (T[])Enum.GetValues(typeof(T)); int idx = Array.FindIndex<T>(vals, delegate(T v) { return v.Equals(val); }); return idx == 0 ? vals[0] : vals[idx - 1]; } Console.WriteLine("Previous Or First of A = " + Enumerator<Test>.PreviousOrFirst(Test.A)); Console.WriteLine("Previous Or First of B = " + Enumerator<Test>.PreviousOrFirst(Test.B)); Outputs: Previous Or First of A = A Previous Or First of B = A This method is the compliment of the PreviousOrFirst method. PreviousOrFirst public static T NextOrLast(T val) { T[] vals = (T[])Enum.GetValues(typeof(T)); int idx = Array.FindIndex<T>(vals, delegate(T v) { return v.Equals(val); }); return idx == vals.Length-1 ? vals[idx] : vals[idx + 1]; } Console.WriteLine("Next Or Last of C = " + Enumerator<Test>.NextOrLast(Test.C)); Console.WriteLine("Next Or Last of D = " + Enumerator<Test>.NextOrLast(Test.D)); Next Or Last of C = D Next Or Last of D = D It seemed reasonable to also supply a couple methods that are bounded by a supplied enum value rather than the first or last enum value. public static T PreviousBounded(T val, T min) { T[] vals = (T[])Enum.GetValues(typeof(T)); int curIdx = Array.FindIndex<T>(vals, delegate(T v) {return v.Equals(val);}); int minIdx = Array.FindIndex<T>(vals, delegate(T v) {return v.Equals(min);}); return curIdx <= minIdx ? vals[minIdx] : vals[curIdx - 1]; } Console.WriteLine("Enumerator low limit (C->B) = " + Enumerator<Test>.PreviousBounded(Test.C, Test.B)); Console.WriteLine("Enumerator low limit (B->B) = " + Enumerator<Test>.PreviousBounded(Test.B, Test.B)); Console.WriteLine("Enumerator low limit (A->B) = " + Enumerator<Test>.PreviousBounded(Test.A, Test.B)); Enumerator low limit (C->B) = B Enumerator low limit (B->B) = B Enumerator low limit (A->B) = B public static T NextBounded(T val, T max) { T[] vals = (T[])Enum.GetValues(typeof(T)); int curIdx = Array.FindIndex<T>(vals, delegate(T v) { return v.Equals(val); }); int maxIdx = Array.FindIndex<T>(vals, delegate(T v) { return v.Equals(max); }); return curIdx >= maxIdx ? vals[maxIdx] : vals[curIdx + 1]; } Console.WriteLine("Enumerator high limit (B->C) = " + Enumerator<Test>.NextBounded(Test.B, Test.C)); Console.WriteLine("Enumerator high limit (C->C) = " + Enumerator<Test>.NextBounded(Test.C, Test.C)); Console.WriteLine("Enumerator high limit (D->C) = " + Enumerator<Test>.NextBounded(Test.D, Test.C)); Enumerator high limit (B->C) = C Enumerator high limit (C->C) = C Enumerator high limit (D->C) = C The original goal was simply to be able to iterate over the enumeration (changed as per the suggestions in the comments): public static IEnumerable<T> Items() { return (T[])Enum.GetValues(typeof(T)); } foreach (Test t in Enumerator<Test>.Items()) { Console.WriteLine(t + " = " + Convert.ToInt32(t)); } A = 1 B = 2 C = 3 D = 4 So, it's a simple class but it was fun to write, provides some useful functionality, and helps to decouple the enumeration implementation from the application logic, which is something I always find useful. This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below. A list of licenses authors might use can be found here wurakeem wrote:I don't mean to be pedantic about unimportant things but the category of the article does not specify that this is .NET 2.0 code. wurakeem wrote:I assumed it was .NET 1.1 compatible which is something I could've.
http://www.codeproject.com/Articles/16216/The-Enumerable-Enumerator?fid=354436&df=90&mpp=10&sort=Position&spc=None&select=1738777&tid=1752121
CC-MAIN-2015-14
refinedweb
959
50.63
I am trying to create a program that changes the object colour on click from white to black or from white to black depending of the previous colour. I would want the program change the colour only if the object is a rectangle. How can I make the this happen? Here is my code: import tkinter as tk root = tk.Tk() cv = tk.Canvas(root, height=800, width=800) cv.pack() def onclick(event): item = cv.find_closest(event.x, event.y) current_color = cv.itemcget(item, 'fill') if current_color == 'black': cv.itemconfig(item, fill='white') else: cv.itemconfig(item, fill='black') cv.bind('<Button-1>', onclick) cv.create_line(50, 50, 60, 60, width=2) cv. create_rectangle(80, 80, 100, 100) root.mainloop() Here are three common solutions to this problem: You can ask the canvas for the type of the object: item_type = cv.type(item) if item_type == "rectangle": # this item is a rectangle else: # this item is NOT a rectangle Another solution is to give each item one or more tags. You can then query the tags for the current item. First, include one or more tags on the items you want to be clickable: cv. create_rectangle(80, 80, 100, 100, tags=("clickable",)) Next, check for the tags on the item you're curious about, and check to see if your tag is in the set of tags for that item: tags = cv.itemcget(item, "tags") if "clickable" in tags: # this item has the "clickable" tag else: # this item does NOT have the "clickable" tag A third option is to attach the bindings to the tags rather than the canvas as a whole. When you do this, your function will only be called when you click on item with the given tag, eliminating the need to do any sort of runtime check: cv.tag_bind("clickable", "<1>", onclick)
https://codedump.io/share/qUZG4VKe0hh7/1/pythontkinter---identify-object-on-click
CC-MAIN-2016-44
refinedweb
306
75.91
Created on 2017-06-20 19:15 by Guillaume Sanchez, last changed 2021-06-29 16:33 by jwilk. . We missed 3.7 train. I'm sorry about I couldn't review it. But I have many shine features I want in 3.7 and I have no time to review all. Especially, I need to understand tr29. It was hard job to me. I think publishing this (and any other functions relating to unicode) to PyPI is better than waiting 3.8. It make possible to discuss API design with working code, and make it "battle tested" before adding it to standard library. Hi, Unicodey person here, I'm involved in Unicode itself and also maintain an implementation of this particular spec[1]. So, firstly, > "a⃑".center(width=5, fillchar=".") If you're trying to do terminal width stuff, extended grapheme clusters *will not* solve the problem for you. There is no algorithm specified in Unicode that does this, because this is font dependent. Extended grapheme clusters are better than code points for this, however, and will not ever produce *worse* results. It's fine to expose this, but it's worth adding caveats. Also, yes, please do not expose a direct indexing function. Aside from almost all Unicode algorithms being streaming algorithms and thus inefficient to index directly, needing to directly index a grapheme cluster is almost always a sign that you are making a mistake. >. There's only one language I can think of that uses extended grapheme clusters as its default notion of "character": Swift. Swift is largely designed for UI stuff, and it makes sense in this context. This is also baked in very deeply to the language (e.g. their Character class is a thin wrapper around String, since grapheme clusters can be arbitrarily large). You'd need a pretty major paradigm shift for python to make a similar change, and it doesn't make as much sense for python in the first place. Starting off with a library published to pypi makes sense to me. [1]:. Agreed. I won't pretend to be able to predict what Python 5.0 will bring *wink* but there's too much history around the "code point = character" notion for the language to change now. If the language can expose a grapheme iterator, then people can experiment with grapheme-based APIs in libraries. (By grapheme I mean "extended grapheme cluster", but that's a mouthful. Sorry linguists.) What do you think of these as a set of grapheme primitives? (1) is_grapheme_break(string, i) Return True if a grapheme break would occur *before* string[i]. (2) graphemes(string, start=0, end=len(string)) Iterate over graphemes in string[start:end]. (3) graphemes_reversed(string, start=0, end=len(string)) Iterate over graphemes in reverse order. I *think* is_grapheme_break would be enough for people to implement their own versions of graphemes and graphemes_reversed. Here's an untested version: def graphemes(string, start, end): cluster = [] for i in range(start, end): c = string[i] if is_grapheme_break(string, i): if i != start: # don't yield the empty cluster at Start Of Text yield ''.join(cluster) cluster = [c] else: cluster.append(c) if cluster: yield ''.join(cluster) Regarding is_grapheme_break, if I understand the note here: one never needs to look at more than two adjacent code points to tell whether or not a grapheme break will occur between them, so this ought to be pretty efficient. At worst, it needs to look at string[i-1] and string[i], if they exist. > one never needs to look at more than two adjacent code points to tell whether or not a grapheme break will occur between them, so this ought to be pretty efficient. That note is outdated (and has been outdated since Unicode 9). The regional indicator rules (GB12 and GB13) and the emoji rule (GB11) require arbitrary lookbehind (though thankfully not arbitrary lookahead). I think the ideal API surface is an iterator and nothing else. Everything else can be derived from the iterator. It's theoretically possible to expose an is_grapheme_break that's faster than just iterating -- look at the code in unicode-segmentation's _reverse_ iterator to see how -- but it's going to be tricky to get right. Building the iterator on top of is_grapheme_break is not a good idea. >. Is the idea here that we'd take on a new dependency on the compiled `unicode-segmentation` binary, rather than adding Rust into our build system? Does `unicode-segmentation` support all platforms that CPython supports? I was under the impression that Rust requires llvm and llvm doesn't necessarily have the same support matrix as CPython (I'd love to be corrected if I'm wrong on this). (Note: I don't actually know what the process is for taking on new dependencies like this, just trying to point at one possible stumbling block.) > Does `unicode-segmentation` support all platforms that CPython supports? It's no-std, so it supports everything the base Rust compiler supports (which is basically everything llvm supports). And yeah, if there's something that doesn't match with the support matrix this isn't going to work. However, I suggested this more for the potential PyPI package. If you're working this into CPython you'd have to figure out how best to include Rust stuff in your build system, which seems like a giant chunk of scope creep :) For including in CPython I'd suggest looking through unicode-segmentation and writing a C version of it. We use a python script[1] to generate the data tables, this might be something y'all can use. Swift's UAX 29 implementation is also quite interesting, however it's baked in deeply to the language so it's less useful as a starting point. [1]:
https://bugs.python.org/issue30717
CC-MAIN-2022-05
refinedweb
969
63.8
#include "Teuchos_ParameterList.hpp" #include "Teuchos_StandardParameterEntryValidators.hpp" #include "Teuchos_Array.hpp" #include "Teuchos_Version.hpp" #include "Optika_GUI.hpp" #include "Teuchos_XMLParameterListHelpers.hpp" #include "Teuchos_FancyOStream.hpp" #include "Teuchos_VerboseObject.hpp" Go to the source code of this file. Now here's were things get a little differnent. Instead of just calling Optika::getInput(), we're actually going to create and OptikaGUI object. It will be the vehical through which we customize the GUI. We'll pass it the ParameterList in which we want it to story user input. Now we can start configuring what our GUI will look like. Let's start with the window title. We can set the information that will be displayed in the about dialog for the gui. If you have an icon you'd like to use as the WindowIcon you can do that too. Just specify the path to the file containig the icon. Supported formats will vary from system to system and your QT installation, but the following should always work: -BMP -GIF -JPG -JPEG -MNG -PNG -PBM -PGM -PPM -TIFF -XBM -XPM -SVG Now if you really wanna dive into your GUI design you can use QT style sheets. Since optika is build on top of QT, you can use QT style sheets to style the various widgets used by Optika. The main widges Optika uses thay you'll probably wanna style are: -QTreeView -QDialog -QMainWindow -QPushButton -QSpinBox -QMenu -QMenuBar You might need to look at some of the Optika source code to really get fine-grained styling control. Once your stylesheet is made, all you need to do is specify the filepath to the Optika_GUI object. Also note the style sheet provided in this example is exceptionally ugly. Now that we're all ready to go, we just call the exec funtion on our Optika_GUI object. This will get the user input and put it in our pramater list. That's it! You can make even more awesome GUIs now!. Definition at line 50 of file example/CustomGUIExample/main.cpp.
http://trilinos.sandia.gov/packages/docs/r10.10/packages/optika/browser/doc/html/example_2CustomGUIExample_2main_8cpp.html
CC-MAIN-2014-15
refinedweb
333
67.76
Contents Function basicsEdit Function definitions in Scala are quite flexible and still somewhat concise. Let's look at a simple function definition: def simpleFunction = 3 println(simpleFunction) //Prints "3". In the above, the "def" keyword is used to start the definition of the function. After the "def" keyword, the name of the function follows, in this case "simpleFunction". "=" is used to indicate that the function has a return value. After the "=" comes an expression of some sort, indicating the body of the function, which in this case is a 3. We then call the function by simply writing its name, and print the result. Note that since the function has no parameters, we don't use "()" to call it. Since scopes are expressions, and the last statement in the scope is the result value of the scope, we can write a more advanced function as follows: def someFunction = { val a = 42 - 13*2 a*a } println(someFunction) //Prints "256". The final statement "a*a" is an expression, and thus becomes the result value of the scope, and since the scope is the expression of the function, it becomes the return value of the function. If the final statement in the scope had been a non-expression statement, the return type would have been "Unit", indicating no return value, as seen here: def printNameFunction = println("Jack Sparrow") printNameFunction //Prints "Jack Sparrow". Since the function has no return value, but only has side-effects, it is considered good style to indicate this in the function definition. Instead of using "=" to indicate the body of the function, it is possible to just use a scope: def printNameFunction2 {println("Captain Haddock")} printNameFunction2 //Prints "Captain Haddock". Independent of what value the last expression have, a function that forgoes the "=" will always have the return type "Unit". The above functions are all a bit boring, since they don't take any arguments. Functions are given argument lists by appending a "()" after the name, like so: def cubed(a:Int) = a*a*a def pythagoras(a:Double, b:Double) = math.sqrt(a*a + b*b) println(cubed(-3)) //Prints "-27". println(pythagoras(3, 4)) //Prints "5.0". Each argument in the function is separated by a ",", and calling the functions requires using the "()" and putting the arguments in the right order. Note that the type annotations for the arguments are mandatory. As you may have noted in the above, the return type was never explicitly declared. The reason is that the Scala compiler can often infer the right return type. If for some reason you want to declare it explicitly, it can be done by putting it between the last parameter list and the start of the body (denoted by either "=" or "{"), like so: def squared(a:Int):Int = a*a A common case where the result type needs to be explicitly declared (as of Scala 2.9.1) is for recursive functions, namely functions that call themselves. The basic example is the factorial function: def factorial(n:Int):Int = { if (n == 0) 1 else n*factorial(n-1) } //ERROR: Does not compile! /*def factorial(n:Int) = { if (n == 0) 1 else n*factorial(n-1) }*/ Function namesEdit TODO: Syntax regarding names of functions. Prefix functionsEdit TODO: Syntax regarding prefix and infix functions. Function precedenceEdit TODO: Syntax regarding function precedence as well as ":". Function typesEdit TODO: Types of functions, for instance "Int => Double" or "(String => String) => String".
https://en.m.wikibooks.org/wiki/Scala/Functions
CC-MAIN-2017-09
refinedweb
570
61.56
help me I'm new programmer[code] #include<iostream> #include<cctype> using namespace std; char type_of_char() { char x=' ';... help me I'm new programmerHi every body , I hope everyone doing well . I just need your help pleas if you can and I will be th... please help me --I'm sorry really I don't know I forget ,sorry again and thank u ^^ please help me --#include<iostream> using namespace std; int maltiple(int y , int x) { cout<<"Enter tow entger nu... please help me --oh thank u so much but I don't understand how can I use the function with it ?? This user does not accept Private Messages
http://www.cplusplus.com/user/waad_abdulla/
CC-MAIN-2017-04
refinedweb
110
71.65
This chapter describes how to use the demonstration program designed for Oracle COM Automation Feature for Java. This chapter contains these topics: Overview of Oracle COM Automation Feature for Java Demos Oracle COM Automation Feature. Each COM Automation server, such as Word and Excel, provides more advanced capabilities than what is offered through the demo APIs. To take advantage of these advanced features, you must design and code your own Java classes. In this release, COM Automation has provided the Microsoft Word Java Demo, which exchanges data between an Oracle Database instance and Microsoft Word. The following sections describe how to install the Microsoft Word Java demo and the APIs that it exposes. This demo is provided as an example of the types of solutions that can be built with Oracle Database that creates the call specification for the demo. Microsoft Word must be installed on the local computer before you install this demo. To install the demo: Run the loadjava tool from the command line: loadjava -force -resolve -user hr ORACLE_BASE\ORACLE_HOME\com\java\demos\TestWORD.class Password: password Start SQL*Plus. C:\> sqlplus /NOLOG Connect to the Oracle Database instance as the user who will use the Microsoft Word demo. For example: SQL> connect hr Enter password: password Run the TestWORD.sql script to create the call specification: SQL> @ORACLE_BASE\ORACLE_HOME\com\java\demos\TestWORD.sql See Also:Oracle Database Java Developer's Guide for further information about the loadjavatooldemoj.doc to see its contents. The public class TestWORD API as described in "Core Functionality" , provides a wrapper around the Word.Basic COM Automation class as well as some sample code that demonstrates how to use the wrapper. This code was written to be run on the Oracle database server. To create a custom application that uses this wrapper: Instantiate an object of this class. Create the Word.Basic object by calling the CreateWordObject method. Create a new Microsoft Word document with the FileNew method, or open an existing document with the FileLoad method. Use the FormatFontSize, InsertText, and InsertNewLine methods to add text and formatting to the document. Save the document with the FileSaveAs or the FileSave method. Call the FileClose method when you are finished with the document. Call the DestroyWordObject method when you are finished with the Word.Basic object. The following subsections describe the APIs that the Microsoft Word Java demo exposes. These APIs are primitive and do not expose all the functionalities that Microsoft Word exposes through COM Automation. This API is the constructor. It does nothing. Syntax public TestWORD() Creates the Word.Basic COM object. Syntax public void CreateWordObject(java.lang.String servername) This API destroys the Word.Basic COM object. Syntax public void DestroyWordObject() This API creates a new Microsoft WORD document. Syntax public void FileNew() Remarks This API is a wrapper for the FileNewDefault COM method of the Word.Basic COM object. This API loads an existing Microsoft WORD document. Syntax public void FileLoad(java.lang.String filename) Remarks This API is a wrapper for the FileOpen COM method of the Word.Basic COM object. This API sets the font size. Syntax public void FormatFontSize(long fontsize) Remarks This API is a wrapper for the FormatFont COM method of the Word.Basic COM object. This API inserts text into the Microsoft Word document. Syntax public void InsertText(java.lang.String textstr) Remarks This API is a wrapper for the Insert COM method of the Word.Basic COM object. This API inserts a new line into the Microsoft Word document. Syntax public void InsertNewLine() Remarks This API is a wrapper for the InsertPara COM method of the Word.Basic COM object. This API saves the Microsoft Word document using a specified name. Syntax public void FileSaveAs(java.lang.String filename) Remarks This API is a wrapper for the FileSaveAs COM method of the Word.Basic COM object. This API saves the Microsoft Word document. Syntax public void FileSave() Remarks This API is a wrapper for the FileSave COM method of the Word.Basic COM object. This API closes the Microsoft Word document, and exits Microsoft Word. Syntax public void FileClose() Remarks This API is a wrapper for the FileClose and FileExit COM methods of the Word.Basic COM object.
http://docs.oracle.com/cd/E18283_01/appdev.112/e10591/ch5java.htm
CC-MAIN-2015-48
refinedweb
708
52.15
Using GDI To Draw Graphics This tutorial takes you through the basics of using GDI+ to draw your own graphics to a form or control. Frequently there just isn't a widget that does what you need. Using GDI+ to draw your own user interface components helps to open up the possibilities for user interaction in your application. GDI+ requires a basic understanding of how to use primitives (lines, ellipses, and rectangles) in a coordinate system to compose your desired output. The GDI+ Coordinate System If you've created visual C# applications you've already used the GDI+ coordinate system. A component's Location property is the X and Y coordinate relative to the parent form. Several data structures exist in the System.Drawing namespace to store coordinates, dimensions, and areas. - The Point object stores an integer X and Y coordinate. - The PointF object stores a floating-point X and Y coordinate. - The Size object stores an integer Width and Height value. - The SizeF object stores a floating-point Width and Height value. - The Rectangle object is an aggregation of a Point (X, Y) and a Size (Width, Height) to give a sense of area. - The RectangleF object is an aggregation of a PointF (X, Y) and SizeF (Width, Height) to give a sense of area. Every Form or Control has a main panel called a Client Area, where sub-controls are placed, or the control itself is drawn. Think of this panel as a blank canvas for you to paint on. In order to specify the location of primitives, you must specify an X and Y coordinate. - These coordinates are relative to the client area of the form or control, so the top lefthand corner is (0, 0). * X coordinates increase from Left to Right, Y coordinates increase from Top to Bottom. - You can use negative X and Y coordinates to place something beyond the top or left boundaries of the panel. - The furthest right and bottom coordinate can be found by accessing the panel's Width and Height properties respectively. - A control's Bounds property returns a Rectangle object representing the area of the control. C# provides methods for translating Control Coordinates to Screen Coordinates. Screen Coordinates are absolute coordinates on the screen where the top lefthand corner is (0, 0). For systems that utilize more than one monitor, the screen area is assumed to be a continuous rectangular area that spans both monitors. - The PointToScreen function takes a Point object (assumed relative to the control) and converts it to absolute coordinates on the screen, returning another Point object. - The PointToClient function takes a Point object (assumed relative to the screen) and convertis it to a coordinate relative to the control, returning another Point object. The Paint Event Forms and Controls have a Paint event which is called every time the control needs to be redrawn. For instance, when part of a control is occluded by some other object on the screen, the area that has been occluded needs to be redrawn when the occluding object is removed - thus the control is asked to repaint itself. The Paint event is called whenever this repaint is requested, thus it is the place where you should place your code to draw the control. - You can force a repaint event by calling the control's Refresh() function. - Repainting of the current control and all sub-controls can be suspended by calling the SuspendLayout() function, and resumed again by calling ResumeLayout(). The Graphics Interface Forms and controls each have a CreateGraphics() function, which returns an instance of System.Drawing.Graphics, an object used to conduct drawing to the client area of the control. OR The Graphics object also has a handy function for clearing the entire area of the canvas with a given color. You may wish to specify your own color, or use the default control color. g.Clear( Control.BackColor ); By default, the Graphics object draws without antialiasing - it is computationally cheaper, but results in jagged edges on drawn shapes. To turn on antialiasing, do the following function call before any drawing operations: Drawing Styles Several objects exist to control color for different types of drawing: - The Color enumeration has a list of named web colors. This can be nice for quick access to colors whose names that you know (Red, Green, Blue, Yellow, Purple...), but you can create your own color from RGB values using the Color.FromArgb( r, g, b );, where r, g, and b are integers between 0 (lowest intensity) and 255 (highest intensity). - For lines/outlines, the Pen object stores a Color and thickness in pixels for the line. You can also use the Pens enumeration which gives a 1 pixel thick pen in any color from the Color enumeration. - For solids/fills, the SolidBrush object represent a solid Color. You can also use the Brushes enumeration which gives a SolidBrush in any color from the Color enumeration. - For text, the Font object represents the Font Face, Point Size, and optional font styles such as Bold and Italic. Styles are applied using a boolean OR (| operator) on the FontStyles enumeration. Drawing Primitives The Graphics object has methods for drawing operations. - The Graphics.DrawLine Function draws a line from: - A Pen, a starting (X,Y) and ending (X,Y). - A Pen, a starting Point, and an ending Point. - The Graphics.DrawRectangle Function draws the outline of a rectangle from: - A Pen and X, Y, Width, Height values. - A Pen, a Point, and a Size. - A Pen and a Rectangle - The Graphics.FillRectangle Function draws a solid rectangle from: - A Brush and X, Y, Width, Height values. - A Brush , a Point, and a Size. - A Brush and a Rectangle - The Graphics.DrawEllipse Function draws the outline of an ellipse. Note that the (X,Y) coordinates indicate the top left anchor for the ellipse, NOT the center point as you would probably expect. This is a minor annoyance that you must deal with. This primitive is drawn using the same parameters as from DrawRectangle. - The Graphics.FillEllipse Function draws a solid ellipse. Once again the (X,Y) coordinates indicate the top left anchor for the ellipse, NOT the center point. This primitive is drawn using the same parameters as from FillRectangle. g.DrawLine( new Pen( Color.Blue, 3 ), new Point(0, 0), new Point(100, 100) ); g.DrawRectangle( Pens.Red, 10, 10, 20, 20 ); // Note the bottom corner is at (30,30). g.DrawRectangle( Pens.Red, new Point(10, 10), new Size(20, 20) ); g.DrawRectangle( Pens.Red, new Rectangle(10, 10, 20, 20) ); g.FillRectangle( new SolidBrush( Color.Lime ), 50, 50, 20, 20 ); g.DrawEllipse( new Pen( Color.Purple, 10 ), 100, 100, 100, 100 ); g.FillEllipse( Brushes.Yellow, 10, 100, 20, 30 ); Drawing Text The Graphics.DrawString Function draws text to the canvas using: - The String of text that you wish to draw. - A Font object. - A Brush. - A PointF object indicating the top lefthand corner of the text bounds OR a RectangleF representing a rectangular clipping area for the text. - OPTIONALLY: A System.Drawing.StringFormat object, which contains the style for horizontal and vertical alignment, text direction, clipping, line spacing, etc.. StringFormat sf = new StringFormat(); sf.Alignment = StringAlignment.Center; // Horizontally center. Near = left align, Far = right align. sf.LineAlignment = StringAlignment.Center; // Vertically center. Near = top align, Far = bottom align. g.DrawRectangle( Pens.Red ); g.DrawString( "Hello world!", new Font("Times", 12, FontStyle.Regular), new SolidBrush( Color.Black ), new RectangleF(0, 0, 100, 100), sf ); // Text will wrap to horizontal bounds, clip to vertical bounds You can measure the width of a graphically rendered string in pixels as follows: Using Images When you want to use an image in your control rather than drawing it manually, GDI+ lets you easily paste images to the canvas. First load the desired image into your application as follows: For simple image drawing, take a look at the Graphics.DrawImageUnscaled function. For more advanced options, such as image scaling, use Graphics.DrawImage. There are many overloads for both of these functions, so I leave it to the reader to investigate which overload satisfies their desired outcome. GDI+ also lets you draw to images as if they were a canvas. To do this, you must create a Graphics object from the Image as follows: You can use all the same GDI+ drawing functions as mentioned above - it functions exactly the same as drawing to a control's client area except your drawings are stored to an off-screen buffer. HINT: You can use Images as off-screen buffers for double-buffering your controls. Double buffering can be used to reduce flickering when drawing. Drawing en Masse Hard-coding GDI+ commands can only take you so far. Say you have a control that draws a 10x10 grid of circles - you're not going to enjoy writing out 100 DrawEllipse commands, especially when you have to manually enter coordinates for each one. Naturally you can use a nested loop to iterate over X and Y positions, but what if they don't conform to some pattern. It's time to dredge up the old Software Engineering Factory Design pattern. Let's use an object oriented approach to store properties such as location, color, size etc. for primitives. We can have each one implement an interface so it is a simple matter to call on the object to render itself. Take a look at the following code: public interface IDrawable { void Draw( Graphics g ); } public class Ellipse : IDrawable { public Color c = Color.Black; public Point l = Point(0, 0); public Size s = Size(0, 0); public int stroke = 0; public Ellipse( Color c, Point l, Size s, int stroke ) { this.c = c; this.l = l; this.s = s; this.stroke = stroke; } public void Draw( Graphics g ) { if ( stroke == 0 ) { g.FillEllipse( SolidBrush( c ), l, s ); } else { g.DrawEllipse( new Pen( c, stroke ), l, s ); } } } We can create similar objects that implement the Drawable interface for rectangles and lines. We can then store an ArrayList of these objects, and draw them whenever the Paint event is triggered. ... alDrawingObjects.Add( new Ellipse( Color.Red, new Point(100, 100), new Size(10, 10), 5 ) ); alDrawingObjects.Add( new Ellipse( Color.Blue, new Point(50, 50), new Size(20, 20), 0 ) ); ... Graphics g = this.CreateGraphis(); foreach ( IDrawable d in alDrawingObjects ) { d.Draw( g ); }
https://grouplab.cpsc.ucalgary.ca/cookbook/index.php/VisualStudio/UsingGDIToDrawGraphics
CC-MAIN-2021-31
refinedweb
1,722
65.52
Hello, I need help with figuring out the logic to print the day number of the year. The directions for my homework are: Write a program that prints the day number of the year, given the date is in the form: month day year. For example, if the input is 1 1 05, the day number is 1; if the input is 12 25 05, the day number is 359. The program should check for a leap year . I have a basic outline of the program but i am having trouble with figuring out the logic to display the day number of the year given the date. This is what i have so far Code java: import javax.swing.JOptionPane; public class Practice4 { public static void main(String[] args) { String monthStr; String dayStr; int inputMonth; int inputDay; String yearStr2; String dayStr2; int inputyear2; int inputDay2; String[] Months = {"January", "February", "March", "April", "May", "June", "July", "August", "September", "October", "November", "December"}; int[] Days = {31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31}; monthStr = JOptionPane.showInputDialog(null, "Enter the Month: "); inputMonth = Integer.parseInt(monthStr); dayStr = JOptionPane.showInputDialog(null, "Enter the Day: "); inputDay = Integer.parseInt(dayStr); yearStr2 = JOptionPane.showInputDialog(null, "Enter the year: "); inputyear2 = Integer.parseInt(yearStr2); JOptionPane.showMessageDialog(null, );//To display the day number of the year public static int dayNumber(int dayCount)//Method for logic to find day number { for( )//Need help figuring out logic to print day number of the year { boolean isLeapYear = (year % 4 == 0 && year % 100 != 0)||(year % 400 == 0); //accounting for leap year(probably doesnt go here) } } } }
http://www.javaprogrammingforums.com/%20whats-wrong-my-code/37077-help-logic-find-day-number-year-printingthethread.html
CC-MAIN-2014-41
refinedweb
262
62.48
Parallel Programming Crash Course. There are two broad categories of parallel programs: shared memory and message passing. You likely will see both types being used in various scientific arenas. Shared-memory programming is when all of the processors you are using are on a single box. This limits you as to how big your problem can be. When you use message passing, you can link together as many machines as you have access to over some interconnection network. Let's start by looking at message-passing parallel programming. The most common version in use today is MPI (Message Passing Interface). MPI is actually a specification, so many different implementations are available, including Open MPI, MPICH and LAM, among others. These implementations are available for C, C++ and FORTRAN. Implementations also are available for Python, OCaml and .NET. An MPI program consists of multiple processes (called slots), running on one or more machines. Each of these processes can communicate with all other processes. Essentially, they are in a fully connected network. Each process runs a full copy of your program as its executable content and runs independently of the others. The parallelism comes into play when these processes start sending messages to each other. Assuming you already have some MPI code, the first step in using it is to compile it. MPI implementations include a set of wrapper scripts that handle all of the compiler and linker options for you. They are called mpicc, mpiCC, mpif77 and mpif90, for C, C++, FORTRAN 77 and FORTRAN 90, respectively. You can add extra options for your compiler as options to the wrapper scripts. One very useful option is -showme. This option simply prints out the full command line that would be used to invoke your compiler. This is useful if you have multiple compilers and/or libraries on your system, and you need to verify that the wrapper is doing the right thing. Once your code is compiled, you need to run it. You don't actually run your program directly. A support program called mpirun takes care of setting up the system and running your code. You need to tell mpirun how many processors you want to run and where they are located. If you are running on one machine, you can hand in the number of processors with the option -np X. If you are running over several machines, you can hand in a list of hostnames either on the command line or in a text file. If this list of hostnames has repeats, mpirun assumes you want to start one process for each repeat. Now that you know how to compile and run your code, how do you actually write an MPI program? The first step needs to initialize the MPI subsystem. There is a function to do this, which in C is this: int MPI_Init(&argc, &argv); Until you call this function, your program is running a single thread of execution. Also, you can't call any other MPI functions before this, except for MPI_Initialized. Once you run MPI_Init, MPI starts up all of the parallel processes and sets up the communication network. After this initialization work is finished, you are running in parallel, with each process running a copy of your code. When you've finished all of your work, you need to shut down all of this infrastructure cleanly. The function that does this is: int MPI_Finalize(); Once this finishes, you are back to running a single thread of execution. After calling this function, the only MPI functions that you can call are MPI_Get_version, MPI_Initialized and MPI_Finalized. Remember that once your code goes parallel, each processor is running a copy of your code. If so, how does each copy know what it should be doing? In order to have each process do something unique, you need some way to identify different processes. This can be done with the function: int MPI_Comm_rank(MPI_Comm comm, int *rank); This function will give a unique identifier, called the rank, of the process calling it. Ranks are simply integers, starting from 0 to N–1, where N is the number of parallel processes. You also may need to know how many processes are running. To get this, you would need to call the function: int MPI_Comm_size(MPI_Comm comm, int *size); Now, you've initialized the MPI subsystem and found out who you are and how many processes are running. The next thing you likely will need to do is to send and receive messages. The most basic method for sending a message is: int MPI_Send(void *buf, int count, MPI_Datatype type, ↪int dest, int tag, MPI_Comm comm); In this case, you need a buffer (buf) containing count elements of type type. The parameter dest is the rank of the process that you are sending the message to. You also can label a message with the parameter tag. Your code can decide to do something different based on the tag value you set. The last parameter is the communicator, which I'll look at a little later. On the receiving end, you would need to call: int MPI_Recv(void *buf, int count, MPI_Datatype type, ↪int source, int tag, MPI_Comm comm, MPI_Status *status); When you are receiving a message, you may not necessarily care who sent it or what the tag value is. In those cases, you can set these parameters to the special values MPI_ANY_SOURCE and MPI_ANY_TAG. You then can check what the actual values were after the fact by looking at the status struct. The status contains the values: status->MPI_source status->MPI_tag status->MPI_ERROR Both of these functions are blocking. This means that when you send a message, you end up being blocked until the message has finished being sent. Alternatively, if you try to receive a message, you will block until the message has been received completely. Because these calls block until they complete, it is very easy to cause deadlocks where, for example, two processes are both waiting for a message to arrive before any messages get sent. They end up waiting forever. So if you have problems with your code, these calls usually are the first places to look. These functions are point-to-point calls. But, what if you want to talk to a group of other processes? MPI has a broadcast function: int MPI_Bcast(void *buf, int count, MPI_Datatype type, ↪int root, MPI_Comm comm); This function takes a buffer containing count elements of type type and broadcasts to all of the processors, including the root process. The root process (from the parameter root) is the process that actually has the data. All the others receive the data. They all call MPI_Bcast, and the MPI subsystem is responsible for sorting out who has the data and who is receiving. This call also sends the entire contents of the buffer to all the processes, but sometimes you want each process to work on a chunk of the data. In these cases, it doesn't make sense to send the entire data buffer to all of them. There is an MPI function to handle this: int MPI_Scatter(void *send, int sendcnt, MPI_Datatype type, void *recv, int recvcnt, MPI_Datatype type, int root, MPI_Comm comm); In this case, they all call the same function, and the MPI subsystem is responsible for sorting out which is root (the process with the data) and which are receiving data. MPI then divides the send buffer into even-size chunks and sends it out to all of the processes, including the root process. Then, each process can work away on its chunk. When they're done, you can gather up all the results with: int MPI_Gather(void *send, int sendcnt, MPI_Datatype type, void *recv, int recvcnt, MPI_Datatype type, int root, MPI_Comm comm); This is a complete reversal of MPI_Scatter. In this case, all the processes send their little chunks, and the root process gathers them all up and puts them in its receive buffer. Taking all of the information from above and combining it together, you can put together a basic boilerplate example: #include <mpi.h> // Any other include files int main(int argc, char **argv){ int id,size; // all of your serial code would // go here MPI_Init(&argc, &argv); MPI_Comm_rank(MPI_COMM_WORLD, &id); MPI_Comm_size(MPI_COMM_WORLD, &size); // all of your parallel code would // go here MPI_Finalize(); // any single-threaded cleanup code // goes here exit(0); } Hopefully, you now feel more comfortable with MPI programs. I looked at the most basic elements here, but if you feel inspired, you should grab a good textbook and see what other functions are available to you. If not, you at least should be able to read existing MPI code and have a good idea of what it's trying to do. As always, if you'd like to see a certain area covered in this space, feel free to let me know. Setup a prepaid account I would like to setup a prepaid account so that payment can automatically deducted from this prepaid account. Sort of like an iTunes account where you purchase a card to add money to the account when it gets low. That way you get your money at the time of purchase. If it is possible to establish such an account, please, please notify me. Why MPI instead of OPENMP? Perhaps it will be nice to have an updated view of MPI vs OPENMP as quad-processors in a single unit are very common nowadys. administrators point of view in parallel environment Hello, Thanks allot for this insightful article on parallel environments! I'm working for a company where clusters are our product, so mpi mpich and related programs/techniques are quite important to me. And then not so much wat you can do with it, but more what are the differences and and benefits of one over the other. I would love to see more articles on parallel programing from an administrators point of view. Thanks again! Not as easy as it sounds. You should mention that if you use a network to run parallel programs using messages, the performance is heavily limited by the cost of establishing connections and passing the messages, so the speed-up you would get won't be as good as if running all the processes on the same box, even for small scale problems. Only if the connection and Only if the connection and setup time out stripes the processing time. If it does, "you're doing it wrong" or have picked too small of a data set. Actually, this is not true. A Actually, this is not true. A small data-set does not void parallel programming principle as long as the result is worth it. Perhaps, finding recurring strings of integers in "small data-sets" might not sound beneficial - nevertheless it has been done. Or, for more connection, set-up time and other non-sense just look @SETI. Missing some setup/configuration Having worked with LAM/MPI clusters 10 years ago, I know you're missing some significant setup documentation (and I'm sure/hoping setup has become a little easier since then). A link to current setup stuff would be nice. Nice Very nice your program but I think that it would be cool to have this app on the iPhone This looks pretty cool. I'm This looks pretty cool. I'm going to have to try this out on my spare time.
http://www.linuxjournal.com/content/parallel-programming-crash-course?quicktabs_1=2
CC-MAIN-2015-18
refinedweb
1,908
70.13
Back to index import "SALOME_PACOExtension.idl"; Definition at line 38 of file SALOME_PACOExtension.idl. Copy a file from a remote host (container) to a local file. Create a new servant instance of a component. Component library must be loaded. Create a new servant instance of a component with environment variables specified. Component library must be loaded. Create a PyScriptNode in the container. Create a Salome_file. returns a Salome_file object if origFileName exists and is readable else returns null object. Unload component libraries from the container. Find a servant instance of a component. Create a fileTransfer. returns a fileTransfer object used to copy files from the container machine to the clients machines. Only one fileTransfer instance is created in a container. Returns the hostname of the container. Returns the PID of the container. Kill the container. Returns True if the container has been killed. Kept for Superv compilation but can't work, unless oneway... TO REMOVE ! Loads a new component class (dynamic library). Find a servant instance of a component, or create a new one. Loads the component library if needed. Only applicable to multiStudy components. Determines whether the server has been loaded or not. Remove the component servant, and deletes all related objects. name of the container log file (this has been set by the launcher) Definition at line 158 of file SALOME_Component.idl. Name of the container. Definition at line 152 of file SALOME_Component.idl. working directory of the container Definition at line 155 of file SALOME_Component.idl.
https://sourcecodebrowser.com/salome-kernel/6.5.0/interface_engines_1_1_p_a_c_o___container.html
CC-MAIN-2017-51
refinedweb
249
54.79
java-jar-executable-manifest-main-class Importance of Main Manifest Attribute in a Self-Executing JAR 1. Overview Every executable Java class has to contain a main method. Simply put, this method is a starting point of an application. To run our main method from a self-executing JAR file, we have to create a proper manifest file and pack it along with our code. This manifest file has to have a main manifest attribute that defines the path to the class containing our main method. In this tutorial, we’ll show how to pack a simple Java class as a self-executing JAR and demonstrate the importance of a main manifest attribute for a successful execution. 2. Executing a JAR Without the Main Manifest Attribute To get more practical, we’ll show an example of unsuccessful execution without the proper manifest attribute. Let’s write a simple Java class with a main method: public class AppExample { public static void main(String[] args){ System.out.println("AppExample executed!"); } } To pack our example class to a JAR archive, we have to go to the shell of our operating system and compile it: javac -d . AppExample.java Then we can pack it into a JAR: jar cvf example.jar com/baeldung/manifest/AppExample.class Our example.jar will contain a default manifest file. We can now try to execute the JAR: java -jar example.jar Execution will fail with an error: no main manifest attribute, in example.jar 3. Executing a JAR With the Main Manifest Attribute As we have seen, JVM couldn’t find our main manifest attribute. Because of that, it couldn’t find our main class containing our main method. Let’s include a proper manifest attribute into the JAR along with our code. We’ll need to create a MANIFEST.MF file containing a single line: Main-Class: com.baeldung.manifest.AppExample Our manifest now contains the classpath to our compiled AppExample.class. Since we already compiled our example class, there’s no need to do it again. We’ll just pack it together with our manifest file: jar cvmf MANIFEST.MF example.jar com/baeldung/manifest/AppExample.class This time JAR executes as expected and outputs: AppExample executed! 4. Conclusion In this quick article, we showed how to pack a simple Java class as a self-executing JAR, and we demonstrated the importance of a main manifest attribute on two simple examples. The complete source code for the example is available over on GitHub. This is a Maven-based project, so it can be imported and used as-is.
https://getdocs.org/java-jar-executable-manifest-main-class/
CC-MAIN-2020-34
refinedweb
432
56.35
-- | -- Module : Data.Edison.Seq.ListSeq -- Copyright : Copyright (c) 1998 Chris Okasaki -- License : MIT; see COPYRIGHT file for terms and conditions -- -- Maintainer : robdockins AT fastmail DOT fm -- Stability : stable -- Portability : GHC, Hugs (MPTC and FD) -- -- This module packages the standard prelude list type as a -- sequence. This is the baseline sequence implementation and -- all methods have the default running times listed in -- "Data.Edison.Seq", except for the following two trivial operations: -- -- * toList, fromList @O( 1 )@ -- module Data.Edison.Seq.ListSeq ( -- * Sequence Type Seq, -- * Sequence Operations empty,singleton,lcons,rcons,append,lview,lhead,lheadM,ltail,ltailM, rview,rhead,rheadM,rtail,rtailM, null,size,concat,reverse,reverseOnto,fromList,toList,map,concatMap, fold,fold',fold1,fold1',foldr,foldr',foldl,foldl',foldr1,foldr1',foldl1,foldl1', reducer,reducer',reducel,reducel',reduce1,reduce1', copy,inBounds,lookup,lookupM,lookupWithDefault,update,adjust, mapWithIndex,foldrWithIndex,foldrWithIndex',foldlWithIndex,foldlWithIndex', take,drop,splitAt,subseq,filter,partition,takeWhile,dropWhile,splitWhile, zip,zip3,zipWith,zipWith3,unzip,unzip3,unzipWith,unzipWith3, strict,strictWith, -- * Unit testing structuralInvariant, -- * Documentation moduleName ) where import Prelude hiding (concat,reverse,map,concatMap,foldr,foldl,foldr1,foldl1, filter,takeWhile,dropWhile,lookup,take,drop,splitAt, zip,zip3,zipWith,zipWith3,unzip,unzip3,null) import qualified Control.Monad.Identity as ID import qualified Prelude import Data.Edison.Prelude import qualified Data.List import Data.Monoid import qualified Data.Edison.Seq as S ( Sequence(..) ) -- signatures for exported functions moduleName :: String empty :: [a] singleton :: a -> [a] lcons :: a -> [a] -> [a] rcons :: a -> [a] -> [a] append :: [a] -> [a] -> [a] lview :: (Monad rm) => [a] -> rm (a, [a]) lhead :: [a] -> a lheadM :: (Monad rm) => [a] -> rm a ltail :: [a] -> [a] ltailM :: (Monad rm) => [a] -> rm [a] rview :: (Monad rm) => [a] -> rm (a, [a]) rhead :: [a] -> a rheadM :: (Monad rm) => [a] -> rm a rtail :: [a] -> [a] rtailM :: (Monad rm) => [a] -> rm [a] null :: [a] -> Bool size :: [a] -> Int concat :: [[a]] -> [a] reverse :: [a] -> [a] reverseOnto :: [a] -> [a] -> [a] fromList :: [a] -> [a] toList :: [a] -> [a] map :: (a -> b) -> [a] -> [b] concatMap :: (a -> [b]) -> [a] -> [b] fold :: (a -> b -> b) -> b -> [a] -> b fold' :: (a -> b -> b) -> b -> [a] -> b fold1 :: (a -> a -> a) -> [a] -> a fold1' :: (a -> a -> a) -> [a] -> a foldr :: (a -> b -> b) -> b -> [a] -> b foldl :: (b -> a -> b) -> b -> [a] -> b foldr1 :: (a -> a -> a) -> [a] -> a foldl1 :: (a -> a -> a) -> [a] -> a reducer :: (a -> a -> a) -> a -> [a] -> a reducel :: (a -> a -> a) -> a -> [a] -> a reduce1 :: (a -> a -> a) -> [a] -> a foldl' :: (b -> a -> b) -> b -> [a] -> b foldl1' :: (a -> a -> a) -> [a] -> a reducer' :: (a -> a -> a) -> a -> [a] -> a reducel' :: (a -> a -> a) -> a -> [a] -> a reduce1' :: (a -> a -> a) -> [a] -> a copy :: Int -> a -> [a] inBounds :: Int -> [a] -> Bool lookup :: Int -> [a] -> a lookupM :: (Monad m) => Int -> [a] -> m a lookupWithDefault :: a -> Int -> [a] -> a update :: Int -> a -> [a] -> [a] adjust :: (a -> a) -> Int -> [a] -> [a] mapWithIndex :: (Int -> a -> b) -> [a] -> [b] foldrWithIndex :: (Int -> a -> b -> b) -> b -> [a] -> b foldlWithIndex :: (b -> Int -> a -> b) -> b -> [a] -> b foldlWithIndex' :: (b -> Int -> a -> b) -> b -> [a] -> b take :: Int -> [a] -> [a] drop :: Int -> [a] -> [a] splitAt :: Int -> [a] -> ([a], [a]) subseq :: Int -> Int -> [a] -> [a] filter :: (a -> Bool) -> [a] -> [a] partition :: (a -> Bool) -> [a] -> ([a], [a]) takeWhile :: (a -> Bool) -> [a] -> [a] dropWhile :: (a -> Bool) -> [a] -> [a] splitWhile :: (a -> Bool) -> [a] -> ([a], [a]) zip :: [a] -> [b] -> [(a,b)] zip3 :: [a] -> [b] -> [c] -> [(a,b,c)] zipWith :: (a -> b -> c) -> [a] -> [b] -> [c] zipWith3 :: (a -> b -> c -> d) -> [a] -> [b] -> [c] -> [d] unzip :: [(a,b)] -> ([a], [b]) unzip3 :: [(a,b,c)] -> ([a], [b], [c]) unzipWith :: (a -> b) -> (a -> c) -> [a] -> ([b], [c]) unzipWith3 :: (a -> b) -> (a -> c) -> (a -> d) -> [a] -> ([b], [c], [d]) strict :: [a] -> [a] strictWith :: (a -> b) -> [a] -> [a] structuralInvariant :: [a] -> Bool moduleName = "Data.Edison.Seq.ListSeq" type Seq a = [a] empty = [] singleton x = [x] lcons = (:) rcons x s = s ++ [x] append = (++) lview [] = fail "ListSeq.lview: empty sequence" lview (x:xs) = return (x, xs) lheadM [] = fail "ListSeq.lheadM: empty sequence" lheadM (x:xs) = return x lhead [] = error "ListSeq.lhead: empty sequence" lhead (x:xs) = x ltailM [] = fail "ListSeq.ltailM: empty sequence" ltailM (x:xs) = return xs ltail [] = error "ListSeq.ltail: empty sequence" ltail (x:xs) = xs rview [] = fail "ListSeq.rview: empty sequence" rview xs = return (rhead xs, rtail xs) rheadM [] = fail "ListSeq.rheadM: empty sequence" rheadM (x:xs) = rh x xs where rh y [] = return y rh y (x:xs) = rh x xs rhead [] = error "ListSeq.rhead: empty sequence" rhead (x:xs) = rh x xs where rh y [] = y rh y (x:xs) = rh x xs rtailM [] = fail "ListSeq.rtailM: empty sequence" rtailM (x:xs) = return (rt x xs) where rt y [] = [] rt y (x:xs) = y : rt x xs rtail [] = error "ListSeq.rtail: empty sequence" rtail (x:xs) = rt x xs where rt y [] = [] rt y (x:xs) = y : rt x xs null = Prelude.null size = length concat = foldr append empty reverse = Prelude.reverse reverseOnto [] ys = ys reverseOnto (x:xs) ys = reverseOnto xs (x:ys) fromList xs = xs toList xs = xs map = Data.List.map concatMap = Data.List.concatMap fold = foldr fold' f = foldl' (flip f) fold1 f [] = error "ListSeq.fold1: empty sequence" fold1 f (x:xs) = foldr f x xs fold1' f [] = error "ListSeq.fold1': empty sequence" fold1' f (x:xs) = foldl' f x xs foldr = Data.List.foldr foldl = Data.List.foldl foldr' f e [] = e foldr' f e (x:xs) = f x $! foldr' f e xs foldl' f e [] = e foldl' f e (x:xs) = e `seq` foldl' f (f e x) xs foldr1 f [] = error "ListSeq.foldr1: empty sequence" foldr1 f xs = fr xs where fr [x] = x fr (x:xs) = f x $ fr xs fr _ = error "ListSeq.foldr1: bug!" foldr1' f [] = error "ListSeq.foldr1': empty sequence" foldr1' f xs = fr xs where fr [x] = x fr (x:xs) = f x $! fr xs fr _ = error "ListSeq.foldr1': bug!" foldl1 f [] = error "ListSeq.foldl1: empty sequence" foldl1 f (x:xs) = foldl f x xs foldl1' f [] = error "ListSeq.foldl1': empty sequence" foldl1' f (x:xs) = foldl' f x xs reducer f e [] = e reducer f e xs = f (reduce1 f xs) e reducer' f e [] = e reducer' f e xs = (f $! (reduce1' f xs)) $! e reducel f e [] = e reducel f e xs = f e (reduce1 f xs) reducel' f e [] = e reducel' f e xs = (f $! e) $! (reduce1' f xs) reduce1 f [] = error "ListSeq.reduce1: empty sequence" reduce1 f [x] = x reduce1 f (x1 : x2 : xs) = reduce1 f (f x1 x2 : pairup xs) where pairup (x1 : x2 : xs) = f x1 x2 : pairup xs pairup xs = xs -- can be improved using a counter and bit ops! reduce1' f [] = error "ListSeq.reduce1': empty sequence" reduce1' f [x] = x reduce1' f (x1 : x2 : xs) = x1 `seq` x2 `seq` reduce1' f (f x1 x2 : pairup xs) where pairup (x1 : x2 : xs) = x1 `seq` x2 `seq` (f x1 x2 : pairup xs) pairup xs = xs copy n x | n <= 0 = [] | otherwise = x : copy (n-1) x -- depends on n to be unboxed, should test this! inBounds i xs | i >= 0 = not (null (drop i xs)) | otherwise = False lookup i xs = ID.runIdentity (lookupM i xs) lookupM i xs | i < 0 = fail "ListSeq.lookup: not found" | otherwise = case drop i xs of [] -> fail "ListSeq.lookup: not found" (x:_) -> return x lookupWithDefault d i xs | i < 0 = d | otherwise = case drop i xs of [] -> d (x:_) -> x update i y xs | i < 0 = xs | otherwise = upd i xs where upd _ [] = [] upd i (x:xs) | i > 0 = x : upd (i - 1) xs | otherwise = y : xs adjust f i xs | i < 0 = xs | otherwise = adj i xs where adj _ [] = [] adj i (x:xs) | i > 0 = x : adj (i - 1) xs | otherwise = f x : xs mapWithIndex f = mapi 0 where mapi i [] = [] mapi i (x:xs) = f i x : mapi (succ i) xs foldrWithIndex f e = foldi 0 where foldi i [] = e foldi i (x:xs) = f i x (foldi (succ i) xs) foldrWithIndex' f e = foldi 0 where foldi i [] = e foldi i (x:xs) = f i x $! (foldi (succ i) xs) foldlWithIndex f = foldi 0 where foldi i e [] = e foldi i e (x:xs) = foldi (succ i) (f e i x) xs foldlWithIndex' f = foldi 0 where foldi i e [] = e foldi i e (x:xs) = e `seq` foldi (succ i) (f e i x) xs take i xs | i <= 0 = [] | otherwise = Data.List.take i xs drop i xs | i <= 0 = xs | otherwise = Data.List.drop i xs splitAt i xs | i <= 0 = ([], xs) | otherwise = Data.List.splitAt i xs subseq i len xs = take len (drop i xs) strict l@[] = l strict l@(_:xs) = strict xs `seq` l strictWith f l@[] = l strictWith f l@(x:xs) = f x `seq` strictWith f xs `seq` l filter = Data.List.filter partition = Data.List.partition takeWhile = Data.List.takeWhile dropWhile = Data.List.dropWhile splitWhile = Data.List.span zip = Data.List.zip zip3 = Data.List.zip3 zipWith = Data.List.zipWith zipWith3 = Data.List.zipWith3 unzip = Data.List.unzip unzip3 = Data.List.unzip3 unzipWith f g = foldr consfg ([], []) where consfg a (bs, cs) = (f a : bs, g a : cs) -- could put ~ on tuple unzipWith3 f g h = foldr consfgh ([], [], []) where consfgh a (bs, cs, ds) = (f a : bs, g a : cs, h a : ds) -- could put ~ on tuple -- no invariants structuralInvariant = const True -- declare the instance instance S.Sequence [] where {lcons = lcons; rcons = rcons; null = null; lview = lview; lhead = lhead; ltail = ltail; lheadM = lheadM; ltailM = ltailM; rview = rview; rhead = rhead; rtail = rtail; rheadM = rheadM; rtailM = rtailM; size = size; concat = concat; reverse = reverse; reverseOnto = reverseOnto; fromList = fromList; toList = toList; fold = fold; fold' = fold'; fold1 = fold1; fold1' = fold1'; foldr = foldr; foldr' = foldr'; foldl = foldl; foldl' = foldl'; foldr1 = foldr1; foldr1' = foldr1'; foldl1 = foldl1; foldl1' = foldl1'; reducer = reducer; reducel = reducel; reduce1 = reduce1; reducel' = reducel'; reducer' = reducer'; reduce1' = reduce1'; copy = copy; inBounds = inBounds; lookup = lookup; lookupM = lookupM; lookupWithDefault = lookupWithDefault; update = update; adjust = adjust; mapWithIndex = mapWithIndex; foldrWithIndex = foldrWithIndex; foldrWithIndex' = foldrWithIndex'; foldlWithIndex = foldlWithIndex; foldlWithIndex' = foldlWithIndex'; take = take; drop = drop; splitAt = splitAt; subseq = subseq; filter = filter; partition = partition; takeWhile = takeWhile; dropWhile = dropWhile; splitWhile = splitWhile; zip = zip; zip3 = zip3; zipWith = zipWith; zipWith3 = zipWith3; unzip = unzip; unzip3 = unzip3; unzipWith = unzipWith; unzipWith3 = unzipWith3; strict = strict; strictWith = strictWith; structuralInvariant = structuralInvariant; instanceName s = moduleName}
http://hackage.haskell.org/package/EdisonAPI-1.2.2/docs/src/Data-Edison-Seq-ListSeq.html
CC-MAIN-2015-35
refinedweb
1,689
56.08
Hello! This is my first post, and my first program. I need help with it because it is throwing up some error that I can't seem to fix... It is 2 errors, both about reaching the end of the file while parsing. The program is for changing the sign in front of my school, because we spend about 10 minutes seeing what letters we can salvage from the previous message, and this program is supposed to do that by removing the spaces from two entered strings (The New and Previous message), removing the white space, turning it into an array of Characters, then checking one letter of the first statement with all of the second, and removing the like ones (Leaving you with two arrays of what you need to bring and what will be left over) Here is the code: Downloadable .java file:Downloadable .java file:Code:import java.util.Scanner; import java.io.*; public class marqueemaker { static Scanner sc = new Scanner(System.in); public static void main (String[] args) { String input; String output; System.out.println("Enter the new message"); input = sc.nextString; System.out.println("Enter the old message"); output = sc.nextString; String inputnospace = input.replaceAll("\\s",""); String outputnospace = output.replaceAll("\\s",""); char[] inputArray = input.toCharArray(); char[] outputArray = output.toCharArray(); for(int i=1; i<input.length; i++){ for(int j=1; i<output.length; i++){ if(inputArray[i] == outputArray[j]) { outputArray.splice(j, 1); } else { } System.out.println("Bring the following letters:"); for(int k=1; k<outputArray.length; k++) { System.out.print(outputArray[k] + ", "); }
http://forums.devshed.com/java-help/952303-basic-help-program-please-certified-noob-last-post.html
CC-MAIN-2017-26
refinedweb
260
60.61
How to create visualizations in Python One of the most important things you can do with Python (as far as data processing goes) is creating visualizations with your data. First off, make sure you have Python installed on your machine, if not please read this guide that I made, next up make sure you have “pip” installed on your machine, if not please read this guide that I also made. Awesome, we’re ready to start! Data Setup At this point let’s go ahead and install the packages that we need. Let’s go ahead and install this package with one of the following commands: #FOR PYTHON 2 USE THIS pip install matplotlib#FOR PYTHON 3 USE THISpip3 install matplotlib#IF YOU'RE USING ANACONDA USE THISconda install matplotlib Awesome, now that we have this package installed, be sure to have the pandas package installed as well, install this package by using the following command: #FOR PYTHON 2 USE THIS pip install pandas#FOR PYTHON 3 USE THISpip3 install pandas#IF YOU'RE USING ANACONDA USE THISconda install pandas We also need the numpy package for this case, unless you’re bringing in your own dataset, here is how you install this package: #FOR PYTHON 2 USE THIS pip install numpy#FOR PYTHON 3 USE THISpip3 install numpy#IF YOU'RE USING ANACONDA USE THISconda install numpy Awesome, let’s go ahead and create our dataset, like I said before we can use a different dataset by importing through pandas, if not let’s create our own with the following code: import pandas as pd import matplotlib import numpy as npdata = pd.DataFrame(np.random.rand(100, 4), columns=['One','Two','Three','Four']) Awesome, our dataset will look something like this: Next up, let’s go ahead and plot these points! We start off by taking the name of our dataframe (which is data in this example) and add the matplotlib function “.plot.bar”, this is the code to do so: data.plot.bar() Awesome! This is our output: We can change so many parameters within our bar graph, here is a website that shows a bunch of different combinations you can use with matplotlib. Next up, let’s say we wanted to change the color of this bar graph, let’s start off by rewriting our code to use the color command: data.plot.bar(color = "green") our new output should look like this: Now let’s say we don’t want a bar graph, instead we wanted a graph that covers the whole area, let’s do so by change the “bar” to “area”: data.plot.area(color = “green”) This is the output: Congrats! You’ve now created some visualizations within Python! Matplotlib is an extremely functional tool that isn’t really hard to use, enjoy creating visualizations!
https://preettheman.medium.com/how-to-create-visualizations-in-python-a91c2134618f?source=post_internal_links---------1----------------------------
CC-MAIN-2021-21
refinedweb
471
57.23
Hi Gryffyn, I was having the same issue as mentatf and found that the patched given by GloW wasn't working for me. So I had a look and found another patch from, which has solved the issue for me. The patch can be found at. For convenience I have created a new PKGBUILD: Search Criteria Package Details: gcc44 4.4.7-6 Dependencies (6) - binutils (binutils-git, binutils-hjl-git, binutils-tune-bfd-hash) - cloog (cloog-git) - elfutils - mpfr - zlib (libz, zlib-asm) - setconf (make) Required by (3) - matlab-r2012a (requires gcc44) (optional) - matlab-r2012b (requires gcc44) (optional) - matlab-r2013a (requires gcc44) (optional) Sources (4) - gcc-hash-style-both.patch - gcc_pure64.patch - - siginfo_t_fix.patch Latest Comments hv15 commented on 2016-03-02 16:09 Hi Gryffyn, I was having the same issue as mentatf and found that the mentatf commented on 2015-11-25 17:11 @marker5a the patch given below doesn't work for me, the same error message posted by @agwblack remains. The steps I've followed are: 1) download the snapshot 2) extract it in some directory 3) add fix_gcc5.patch in it 4) add 3 lines in the PKGBUILD: 'fix_gcc5.patch', '"the checksum"' and "patch -Np0 -i "$srcdir/fix_gcc5.patch" 5) makepkg -sri edit: in the meantime, I manage to install it by downgrading gcc to 4.9. marker5a commented on 2015-10-21 21:03 @GloW, thanks for handling that. I was trying to figure out ways to get the patch posted... didn't even think about pastebin :) Pardon the n00b question, is there a more elegant way to get patches submitted for aur packages? Or is that still a wip with the transition to git for managing the packages? Anyways, thanks again for your help GloW! GloW commented on 2015-10-21 15:36 here is a corrected version of fix_gcc5.patch : GloW commented on 2015-10-21 07:22 @marker5a can't really use your patch, since identation is brokeN marker5a commented on 2015-10-14 19:41 @agwblack, dieghen89... I did some digging and found a solution. You need to apply the following patch which adds a new patch file to the package and PKGBUILD. The patch addresses the issue of compiling this package using the newer versions of gcc diff --git a/PKGBUILD b/PKGBUILD index 77967f1..ec8abc8 100644 --- a/PKGBUILD +++ b/PKGBUILD @@ -20,11 +20,14 @@ options=('!libtool' '!buildflags' 'staticlibs') source=("" 'gcc-hash-style-both.patch' 'gcc_pure64.patch' - 'siginfo_t_fix.patch') + 'siginfo_t_fix.patch' + 'fix_gcc5.patch') sha256sums=('5ff75116b8f763fa0fb5621af80fc6fb3ea0f1b1a57520874982f03f26cd607f' 'a600550d3d2b2fb8ee6a547c68c3a08a2af7579290b340c35ee5598c9bb305a5' '2d369cf93c6e15c3559c3560bce581e0ae5f1f34dc86bca013ac67ef1c1a9ff9' - '4df866dcfd528835393d2b6897651158faf6d84852158fbf2e4ffc113ec7d201') + '4df866dcfd528835393d2b6897651158faf6d84852158fbf2e4ffc113ec7d201' + 'e4f1f0f7616ad5c72db4b0b02bce33c79b101277b705ec90da957dee14091d5e') + prepare() { cd "gcc-$pkgver" @@ -34,6 +37,7 @@ prepare() { # Do not run fixincludes sed -i -e 's:\./fixinc\.sh:-c true:' gcc/Makefile.in + patch -Np0 -i "$srcdir/fix_gcc5.patch" patch -Np0 -i "$srcdir/gcc-hash-style-both.patch" patch -Np1 -i "$srcdir/siginfo_t_fix.patch" if [[ "$CARCH" == "x86_64" ]]; then diff --git a/fix_gcc5.patch b/fix_gcc5.patch new file mode 100644 index 0000000..4959a77 --- /dev/null +++ b/fix_gcc5.patch @@ -0,0 +1,99 @@ +diff --git a/gcc/toplev.c b/gcc/toplev.c +index 267df59..4836238 100644 +--- gcc/toplev.c ++++ gcc/toplev.c +@@ -532,11 +532,11 @@ read_integral_parameter (const char *p, const char *pname, const int defval) + return atoi (p); + } + +-/* When compiling with a recent enough GCC, we use the GNU C "extern inline" +- for floor_log2 and exact_log2; see toplev.h. That construct, however, +- conflicts with the ISO C++ One Definition Rule. */ ++#if GCC_VERSION < 3004 + +-#if GCC_VERSION < 3004 || !defined (__cplusplus) ++/* The functions floor_log2 and exact_log2 are defined as inline ++ functions in toplev.h if GCC_VERSION >= 3004. The definitions here ++ are used for older versions of gcc. */ + + /* Given X, an unsigned number, return the largest int Y such that 2**Y <= X. + If X is 0, return -1. */ +@@ -549,9 +549,6 @@ floor_log2 (unsigned HOST_WIDE_INT x) + if (x == 0) + return -1; + +-#ifdef CLZ_HWI +- t = HOST_BITS_PER_WIDE_INT - 1 - (int) CLZ_HWI (x); +-#else + if (HOST_BITS_PER_WIDE_INT > 64) + if (x >= (unsigned HOST_WIDE_INT) 1 << (t + 64)) + t += 64; +@@ -568,7 +565,6 @@ floor_log2 (unsigned HOST_WIDE_INT x) + t += 2; + if (x >= ((unsigned HOST_WIDE_INT) 1) << (t + 1)) + t += 1; +-#endif + + return t; + } +@@ -581,14 +577,10 @@ exact_log2 (unsigned HOST_WIDE_INT x) + { + if (x != (x & -x)) + return -1; +-#ifdef CTZ_HWI +- return x ? CTZ_HWI (x) : -1; +-#else + return floor_log2 (x); +-#endif + } + +-#endif /* GCC_VERSION < 3004 || !defined (__cplusplus) */ ++#endif /* GCC_VERSION < 3004 */ + + /* Handler for fatal signals, such as SIGSEGV. These are transformed + into ICE messages, which is much more user friendly. In case the +diff --git a/gcc/toplev.h b/gcc/toplev.h +index e62aa727..cca6867 100644 +--- gcc/toplev.h ++++ gcc/toplev.h +@@ -169,14 +169,17 @@ extern void decode_d_option (const char *); + extern bool fast_math_flags_set_p (void); + extern bool fast_math_flags_struct_set_p (struct cl_optimization *); + ++/* Inline versions of the above for speed. */ ++#if GCC_VERSION < 3004 ++ + /* Return log2, or -1 if not exact. */ + extern int exact_log2 (unsigned HOST_WIDE_INT); + + /* Return floor of log2, with -1 for zero. */ + extern int floor_log2 (unsigned HOST_WIDE_INT); + +-/* Inline versions of the above for speed. */ +-#if GCC_VERSION >= 3004 ++#else /* GCC_VERSION >= 3004 */ ++ + # if HOST_BITS_PER_WIDE_INT == HOST_BITS_PER_LONG + # define CLZ_HWI __builtin_clzl + # define CTZ_HWI __builtin_ctzl +@@ -188,17 +191,18 @@ extern int floor_log2 (unsigned HOST_WIDE_INT); + # define CTZ_HWI __builtin_ctz + # endif + +-extern inline int ++static inline int + floor_log2 (unsigned HOST_WIDE_INT x) + { + return x ? HOST_BITS_PER_WIDE_INT - 1 - (int) CLZ_HWI (x) : -1; + } + +-extern inline int ++static inline int + exact_log2 (unsigned HOST_WIDE_INT x) + { + return x == (x & -x) && x ? (int) CTZ_HWI (x) : -1; + } ++ + #endif /* GCC_VERSION >= 3004 */ + + /* Functions used to get and set GCC's notion of in what directory gajjanag commented on 2015-08-15 12:57 Thanks a lot for maintaining this old version. @agwblack, dieghen89: Built and tested successfully with oldest ARM (arch rollback machine) version of GCC, namely 4.8.1 (modulo the linker errors, see the recent discussion). agwblack commented on 2015-07-20 14:37 I have the following error: ../../gcc/toplev.c: At top level: ../../gcc/toplev.c:536:1: error: redefinition of ‘floor_log2’ floor_log2 (unsigned HOST_WIDE_INT x) ^ In file included from ../../gcc/toplev.c:58:0: ../../gcc/toplev.h:190:1: note: previous definition of ‘floor_log2’ was here floor_log2 (unsigned HOST_WIDE_INT x) ^ ../../gcc/toplev.c:571:1: error: redefinition of ‘exact_log2’ exact_log2 (unsigned HOST_WIDE_INT x) ^ In file included from ../../gcc/toplev.c:58:0: ../../gcc/toplev.h:196:1: note: previous definition of ‘exact_log2’ was here exact_log2 (unsigned HOST_WIDE_INT x) ^ Makefile:2470: recipe for target 'toplev.o' failed make[3]: *** [toplev.o] Error 1 dieghen89 commented on 2015-07-10 20:13 I have the following error: ../../gcc/tree-call-cdce.c: At top level: ../../gcc/tree-call-cdce.c:928:3: warning: enum constant defined in struct or union is not visible in C++ [-Wc++-compat] GIMPLE_PASS, ^ In file included from ../../gcc/tree-dump.h:26:0, from ../../gcc/tree-call-cdce.c:39: ../../gcc/tree-pass.h:102:5: note: enum constant defined here GIMPLE_PASS, ^ rm gcc.pod It seems something related to gcc5, do somebody have this problem too? xyproto commented on 2013-11-08 15:10 Updated the package and added 'staticlibs' to the options array. Thanks, mauro2. mauro2 commented on 2013-11-07 22:48 I needed to add 'staticlibs' to options. See pelluch commented on 2013-05-06 17:41 Nice! Thanks for the work on keeping this package up-to-date and functional, as some software just refuses to update to a new version of gcc. xyproto commented on 2013-05-06 11:41 Updated the package to enable libgomp. xyproto commented on 2013-05-05 00:04 pelluch, just leftovers from when I adopted the package, I will look at it. pelluch commented on 2013-05-04 00:28 Just curious, any reason as to why you use the flag "--disable-libgomp"? This caused a few builds that use omp to fail for me in matlab. Using "--enable-libgomp" works for me, although I have to add rm -rf "$pkgdir/usr/share/info" in the package() function in order to avoid conflicts with the files already there. xyproto commented on 2013-04-25 17:54 Updated the PKGBUILD, thanks kristianlm2 and progtologist! xyproto commented on 2013-04-25 17:39 Updated the PKBUILD. It doesn't compile here, even after adding '!buildflags', unfortunately. But, if it works for some people, it's a start. progtologist commented on 2013-04-25 14:10 Fix for PKGBUILD is: options=('!libtool' '!buildflags') Anonymous comment on 2013-04-23 19:44 I removed the fortify-source in /etc/makepkg.conf and it seems to be working now. I did not edit this file myself, so I guess a lot of people will be hitting this. Don't know how to fix from PKGBUILD, though. --- /etc/makepkg.conf +++ #<buffer makepkg.conf> @@ -29,7 +29,7 @@ -CPPFLAGS="-D_FORTIFY_SOURCE=2" +CPPFLAGS="" xyproto commented on 2013-04-15 14:52 If someone is desperate to compile gcc44, it might work if gcc45 or gcc46 is used to compile it with, as a workaround. xyproto commented on 2013-04-12 12:50 gcc44 doesn't compile here either. Not entirely sure how to make it compile again. Anonymous comment on 2013-04-11 20:12 I have the same problem in either gcc44 or gcc45. I've upgraded my system to gcc48 with some other packages, not sure it is the factor. When I built gcc45, I forced it to include some headers, like limit.h, sys/state.h ...etc in the specified in "configure"(depend on some FLAGS in default, and --enable-language=c,c++ since object C may cause some error). I also downgrade some packages upgraded together with gcc48. It can pass these errors, but it finally failed since no enough space, and I didn't try again since I need gcc44, not gcc45. I try to do the same thing with gcc44, but different configure file... xyproto commented on 2013-04-11 13:29 Will try to reproduce the problem. tritonas00 commented on 2013-04-11 13:18 same problem here, as urys xyproto commented on 2013-04-09 20:14 urys, do you have weird CFLAGS or CXXFLAGS? Have you edited /etc/makepkg.conf? xyproto commented on 2013-04-09 20:13 urys, do you have weird CFLAGS or CXXFLAGS? Anonymous comment on 2013-04-09 15:28 ../../libiberty/fibheap.c: In function ‘fibheap_delete_node’: ../../libiberty/fibheap.c:38:24: error: ‘LONG_MIN’ undeclared (first use in this function) #define FIBHEAPKEY_MIN LONG_MIN ^ ../../libiberty/fibheap.c:258:36: note: in expansion of macro ‘FIBHEAPKEY_MIN’ fibheap_replace_key (heap, node, FIBHEAPKEY_MIN); ^ ../../libiberty/fibheap.c:38:24: note: each undeclared identifier is reported only once for each function it appears in #define FIBHEAPKEY_MIN LONG_MIN ^ ../../libiberty/fibheap.c:258:36: note: in expansion of macro ‘FIBHEAPKEY_MIN’ fibheap_replace_key (heap, node, FIBHEAPKEY_MIN); ^ ../../libiberty/fibheap.c: In function ‘fibheap_delete’: ../../libiberty/fibheap.c:269:5: warning: incompatible implicit declaration of b../../libiberty/fibheap.c:360:3: warning: incompatible implicit declaration of uilt-in function ‘memset’ [enabled by default] make[3]: *** [fibheap.o] Error 1 make[3]: Leaving directory `/tmp/yaourt-tmp-urys/aur-gcc44/src/gcc-4.4.7/build/ ibiberty' make[2]: *** [all-stage1-libiberty] Error 2 make[2]: Leaving directory `/tmp/yaourt-tmp-urys/aur-gcc44/src/gcc-4.4.7/build' make[1]: *** [stage1-bubble] Error 2 make[1]: Leaving directory `/tmp/yaourt-tmp-urys/aur-gcc44/src/gcc-4.4.7/build' make: *** [all] Error 2 what's wrong? xyproto commented on 2013-03-25 13:26 Thank you, ReedWood. Updated the package to depend on elfutils instead. ReedWood commented on 2013-03-25 12:07 libelf is not available in the aur. Installing elfutils and removing libelf from the dependency array in the packagebuild of gcc44 solved it for me. If this is the correct fix, please update the package. GordonGR commented on 2013-03-18 13:10 That would make sense, but no. I generally use aurinstaller unless something wrong (or unless it's a heavy build) in which case I do it manually. gnloch commented on 2013-03-18 10:16 @GordonGR Are you building with yaourt? It uses the /tmp, which usually is limited to 2.1GB GordonGR commented on 2013-03-17 01:09 It took two full hours, but it completed successfully with the use of one thread. I guess it is a memory issue after all. I'll run a memcheck. Thank you! xyproto commented on 2013-03-16 22:11 Does it work if you build it with "make -j1" instead of just "make"? This uses only one thread and much less memory, if should be a memory issue. It's worth a shot? It will take ages to compile, though. xyproto commented on 2013-03-16 22:10 Does it work if you build it with "make -j1" instead of just "make"? This uses only one thread and also uses less memory, if it's a memory issue. It's worth a shot. It will take ages to compile, though. :) GordonGR commented on 2013-03-16 22:05 It is. I can build all kind of programs, apart from gcc. I will reinstall base-devel and run a memory check tomorrow, for good measure, but I seriously doubt that's the issue. Thanks for looking into it, in any case. xyproto commented on 2013-03-16 22:01 And base-devel is installed too? Are you able to test it on a different installation? GordonGR commented on 2013-03-16 21:53 I doubt it. Just to be sure, I rebooted to a clean session, opened a console and nothing else and ran makepkg. It failed to the same point. xyproto commented on 2013-03-16 15:27 GordanGR, are you out of memory? GordonGR commented on 2013-03-16 13:56 Not building for me -- log here: I think it's the same problem I have with gcc44 multilib, by the way. Both packages used to build fine until some time ago, I can't understand :S xyproto commented on 2013-03-15 10:57 Thank you, vladmihaisima. Updated the package. Anonymous comment on 2013-03-14 22:46 With texinfo 5.0-1 it fails with the following error: if [ xinfo = xinfo ]; then \ makeinfo --split-size=5000000 --split-size=5000000 --split-size=5000000 --no-split -I . -I ../../gcc/doc \ -I ../../gcc/doc/include -o doc/cpp.info ../../gcc/doc/cpp.texi; \ fi ../../gcc/doc/cppopts.texi:761: @itemx must follow @item make[3]: *** [doc/cpp.info] Error 1 Workaround (will not generate some docs): - add the following line in the PKGBUILD before configure (currently line 43) sed -i 's/BUILD_INFO=info/BUILD_INFO=/' ../gcc/configure - comment or remove the for loop in package() (currently lines 70-73) xyproto commented on 2012-11-14 01:39 Updated to 4.4.7. xyproto commented on 2012-02-26 14:43 TamCore, works fine here. Have you installed base-devel first? Are all packages up to date? TamCore commented on 2012-02-26 11:38 .a when searching for -lgcc /usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-unknown-linux-gnu/4.4.6/libgcc.a when searching for -lgcc /usr/bin/ld: cannot find -lgcc When compiling an 32bit project. Any suggestions? xyproto commented on 2012-02-12 16:42 Updated to gcc 4.4.6. xyproto commented on 2011-12-02 15:54 Since this package has been an orphan for three weeks now, I'll adopt it. GordonGR commented on 2011-11-21 14:21 Cloog or no cloog, it doesn't compile for me either. After a lot of building I get ar: libbackend.a: No space left on device and it stops :( xyproto commented on 2011-11-10 12:57 This package has been flagged out of date from 22 Feb 2011 to 10 Nov 2011. Fixed the PKGBUILD and orphaned the package. Sincerely yours, a random TU. xyproto commented on 2011-11-10 12:28 Please depend on cloog instead of cloog-ppl>=0.15.8 Anonymous comment on 2011-11-05 18:40 Moreover: [herbert@myhost ~]$ gcc -v Using built-in specs. COLLECT_GCC=gcc COLLECT_LTO_WRAPPER=/usr/lib/gcc/x86_64-unknown-linux-gnu/4.6.2/lto-wrapper Target: x86_64-unknown-linux-gnu Configured with: /build/src/gcc-4.6 --enable-multilib --disable-libssp --disable-libstdcxx-pch --enable-checking=release Thread model: posix gcc version 4.6.2 (GCC) [herbert@myhost ~]$ gcc-4.4 -v Using built-in specs. Target: x86_64-unknown-linux-gnu Configured with: ../configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --libdir=/usr/lib --libexecdir=/usr/lib --program-suffix=-4.4 --enable-shared --enable-languages=c,c++,fortran,objc,obj-c++ --enable-__cxa_atexit --disable-libstdcxx-pch --disable-multilib --disable-libgomp --disable-libmudflap --disable-libssp --enable-clocale=gnu --with-tune=generic --with-cloog --with-ppl --with-system-zlib Thread model: posix gcc version 4.4.4 (GCC) [herbert@myhost ~]$ So obviously I have gcc 4.4 -- which I didn't have before -- now, and I also have 4.6. Anonymous comment on 2011-11-05 18:37 I was stupid enough to close the terminal in which makepkg was compiling, so I couldn't check if it compiled correctly. Anyhow, pacman seemed to think so: [herbert@durk gcc44]$ sudo pacman -U gcc44-4.4.4-2-x86_64.pkg.tar.xz resolving dependencies... looking for inter-conflicts... Targets (1): gcc44-4.4.4-2 Total Download Size: 0.00 MB Total Installed Size: 65.60 MB Proceed with installation? [Y/n] Y (1/1) checking package integrity [############################################################################] 100% (1/1) checking for file conflicts [############################################################################] 100% (1/1) installing gcc44 [############################################################################] 100% [herbert@durk gcc44]$ So removing the dependency seems to work PLEASE NOTE: THIS IS NOT FROM ANYONE WHO IS IN ANYWAY OFFICIALLY ASSOCIATED WITH ARCH OR AUR! I AM JUST AN ARCH-USER. Anonymous comment on 2011-11-05 17:43 I could not find anything like cloog-ppl either. I have cloog from the community supported repo's installed: [root@durk herbert]# pacman -Ss cloog core/cloog 0.16.3-1 [installed] Library that generates loops for scanning polyhedra [root@durk herbert]# To install gcc44, I removed the 'cloog-ppl>=0.15.8' entry from 'depends' variable in the PKGBUILD-file, and makepkg is compiling it now. No errors so far. Could someone from the (arch intersect gcc)-community have a look at this? Thank you very much for reading and I will get back on whether makepkg compiled gcc44 successfully! mrbit commented on 2011-10-28 17:57 cloog-ppl ??????????????? unknown commented on 2011-09-16 07:43 Please orphan so we can update the pkgbuild ourselves. Anonymous comment on 2011-07-03 08:47 I have large problems installing this. I removed the --disable-libgomp flag, as I need OpenMP, but I get the error: Configuring stage 2 in ./intl configure: creating cache ./config.cache checking whether make sets $(MAKE)... yes checking for a BSD-compatible install... /bin/install -c x86_64-unknown-linux-gnu-gcc... /tmp/yaourt-tmp-martin/aur-gcc44/src/gcc-4.4.4/build/./prev-gcc/xgcc -B/tmp/yaourt-tmp-martin/aur-gcc44/src/gcc-4.4.4/build/./prev-gcc/ -B/usr/x86_64-unknown-linux-gnu/bin/ checking for C compiler default output file name... configure: error: in `/tmp/yaourt-tmp-martin/aur-gcc44/src/gcc-4.4.4/build/intl': configure: error: C compiler cannot create executables See `config.log' for more details. make[2]: *** [configure-stage2-intl] Error 77 make[2]: Leaving directory `/tmp/yaourt-tmp-martin/aur-gcc44/src/gcc-4.4.4/build' make[1]: *** [stage2-bubble] Error 2 make[1]: Leaving directory `/tmp/yaourt-tmp-martin/aur-gcc44/src/gcc-4.4.4/build' make: *** [all] Error 2 ======================================================================================== The config.log from the intl folder can be found on: My guess is that the host=x86_64-unkown-linux-gnu might be the problem, but have no idea how to fix it. I also get the exact same error without removing the -disable-libgomp flag, or trying to compiler gcc43 from aur. Anonymous comment on 2011-06-16 15:13 Yes, this works fine by just changing the depend from 'cloog-ppl' to 'cloog' in the PKGBUILD Anonymous comment on 2011-06-14 23:03 you can just use cloog from core Void-995 commented on 2011-06-08 16:21 It works well without cloog-ppl (with little tricks around PKGBUILD), but it would be better if cloog-ppl become avaible on AUR. sl1pkn07 commented on 2011-04-22 22:39 cloog-ppl not longer available on AUR Barthalion commented on 2011-03-19 09:28 For you maybe yes, but newest version (4.5) have much worst performance and optimization, and I'm using this to compiling everything. Anonymous comment on 2011-03-01 10:21 I guess we need to see what gcc is needed for the new cuda-sdk 4.0 before anything else. sl1pkn07 commented on 2011-02-23 00:48 barthalion. this is for cuda-sdk. only compile with this Barthalion commented on 2011-02-22 11:34 4.4.5 was released 4 months ago. maxi_jac commented on 2010-08-24 23:37 It worked perfectly for me. Thanks Anonymous comment on 2010-07-09 15:12 Not building on my machine with the message: lib/libppl_c.so: undefined reference to `std::ctype<char>::_M_widen_init() const@GLIBCXX_3.4.11
https://aur.archlinux.org/packages/gcc44/?setlang=ru&comments=all
CC-MAIN-2017-09
refinedweb
3,522
59.09
16 April 2012 17:10 [Source: ICIS news] WASHINGTON (ICIS)--US retail sales rose by 0.8% in March from February to $411.1bn (€312.4bn), the Commerce Department said on Monday, a positive sign for the ?xml:namespace> In its monthly report, the department noted that while the pace of spending growth last month was less than the 1% gain reported for February, it was still better than the more modest 0.6% expansion in January. The March level of spending by households also marked a solid 6.5% improvement compared with the same month of 2011. However, some of the March gain in consumer spending was in part attributed to a nearly 15% increase in outlays for gasoline, which reflects recent advances in US fuel costs. But last month also saw strong 14% improvements in household spending for building materials along with garden equipment and supplies as homeowners geared up for the North American spring season. Consumer spending is the principal driving force of the Any improvement in the rate of consumer spending has near-term positive impact on the nation’s economy, and the successive improvements in January through March are seen as evidence that consumers are gaining more confidence in the economy’s long-term prospects. (
http://www.icis.com/Articles/2012/04/16/9550854/us-retail-sales-rise-0.8-in-march-a-positive-boost-for-economy.html
CC-MAIN-2014-35
refinedweb
210
59.84
@echo off echo compiling C++ using -ansi -pedantic-errors -Wall g++ -ansi -pedantic-errors -Wall %1 %2 %3If you want this batch file, simply copy it from this page and save it on your system in a file named gccp.bat. The file shold be located in a directory that is part of your path satement. #include <iostream> using namespace std; int main() { cout << "Hello World!" << endl; return 0; }Again, you should be able to simply copy this little program from this page, and use a text editor to save it as temp.cpp. gccp temp.cppIf you have not made any errors, the program will compile uneventfully, and a program file named a.exe will be created in the same directory. [Save][SystemComKeep:gccp [arach_filename]]Press Apply and the macro becomes available (and is saved by Arachnophilia for future use)..
https://arachnoid.com/cpptutor/setup_windows.html
CC-MAIN-2017-47
refinedweb
141
69.21
I would like to propose a __metafunc__ that would work similar to metaclasses but for functions. My use case is for duck punching all functions defined in a module. This tends to be difficult as a postprocess after __import__ (you have to modify method.im_func and deal with closures and functions in the module namespace that might be references to functions defined elsewhere - it's a mess). If instead when defining a function it did a lookup of __metafunc__ and called it (like a decorator) after a function was defined then I could import the module in a globals that includes my __metafunc__. Feel free to tear this idea apart, improve it, or dash it on the rocks. Perhaps some of you will also see use cases for this or something like it? -- Zachary Burns (407)590-4814 Aim - Zac256FL Production Engineer (Digital Overlord) Zindagi Games -------------- next part -------------- An HTML attachment was scrubbed... URL: <>
https://mail.python.org/pipermail/python-ideas/2010-March/006923.html
CC-MAIN-2016-50
refinedweb
154
69.31
Raimond posted this comment asking a question about my approach to XSD versioning: I see how this model can solve a bunch of the problems, but how to deal with removal of allowed elements? Lets say we have a V1: <Customer> <Name>John Doe</Name> </Customer> In V2 it is discovered that it should actually be <Customer> <FirstName>John</FirstName> <LastName>John</LastName> </Customer> I don't see how this case would fit with the proposed model. He's right, the model I describe does not deal with removal of elements. I divide the versioning problem into three types of changes: You can do (1) without concern. You can do (2) using the model I described. For (3), you typically need to define a new set of types in a new namespace (there might be some serializer-level trickery to avoid this, but this is the general rule). I'm okay with this breakdown for two reasons. First, I think that 1 and 2 are far more common. Second, I have a way to handle 3 using a new targetNamespace but multiplexing both the old and the new namespace to a single object model. That would deal with the removal problem. I'll post code to do that too. There is another way around this problem too. Raimond asked about the removal of “allowed” elements, not “required” elements. I'm a big fan of making all elements optional, with the common exception of a required identity element. In other words, if you're designing a schema for use by multiple services and you don't know which services will be able to provide which pieces of data, you're better of designing a very loose schema that focuses on describing the shape of the data if it's present. If the <name> element were optional (allowed but not required) in Raimond's example, then not using it in V2 is not a problem. In other words, you could move from this V1 definition: <complexType name=”CustomerType”> <sequence> <element name=”Name” type=”string” minOccurs=”0” /> </sequence></complexType> to this V2 definition: <complexType name=”CustomerType”> <sequence> <element name=”Name” type=”string” minOccurs=”0” /> <element name=”FirstName” type=”string” minOccurs=”0” /> <element name=”LastName” type=”string” minOccurs=”0” /> </sequence></complexType> For this to work, you have to define the rules for which elements take precedence and which are ignored if both are present. Yes, I might go that far to avoid a sweeping schema change. AayQmh <a href="kocmzksocuah.com/.../a>, [url=]rkaklpqmzqyr[/url], [link=]kixyqoygkqpp[/link],
http://www.pluralsight.com/community/blogs/tewald/archive/2006/04/19/22111.aspx
crawl-002
refinedweb
424
60.45
Hi, i am new to cprogramming and i want to do this thing. all in all i have 2 problems 1)how do you add answers? Code:A+B=ans1 C+D=ans2 ans1+ans2=ans3//i cant do this part 2)Next problem is that i cant add switches is there something i am doing wrong? i wanted to add cases. Code:#include <cstdlib> #include <iostream> using namespace std; int main(int argc, char *argv[]) { int A,B,C; double total; int cash,product; double change; switch (product){ case 1: printf("how many product A: \n"); cin>>A; break; case 2: printf("how many product B: \n"); cin>>B; break; case 3: printf("how many product C: \n"); cin>>C; break; } total=(case 1*10)+(case 2*20)+(case 3*30); \\ unit price of product A is 10, product B is 20, product C is 30 printf("total amout due is: %lf \n", total); printf("enter amount paid by customer: \n"); cin>>cash; do{ change=total-cash;\\still not working printf("total change will be: \n", change); } while(total>=cash); system("PAUSE"); return EXIT_SUCCESS; }
https://cboard.cprogramming.com/cplusplus-programming/125450-adding-problem.html
CC-MAIN-2017-04
refinedweb
185
58.35
Couple of questions. I want to create a service that returns a number of objects with a number of sub-objects associated with it. As an example. All the orders for a customers with each order having many order lines items. I'm able to call the service and get all the orders, but I have to comment out the line items because the service indicates that it can't retrieve the metadata when the items are un-commented. Here is what I have and the 2 questions. How would this be coded (client/server)? Does the List in the orders have to be an array? Will Price be calculated and passed to the client? [DataContract] public class Order { [DataMember] public int OrderID; [DataMember] public int CustID; [DataMember] List <OrderLineItem> OrderLineItems; } [DataContract] public class OrderLineItem { [DataMember] View Complete Post SQL Server Management Objects offer developers a robust toolset for backing up and restoring databases, and issuing DDL commands, as John Papa explains. John Papa MSDN Magazine June 2007 In this column see how to bind a custom list of business entities using the binding tools in the .NET Framework 2.0. MSDN Magazine February 2007 There Paul DiLascia MSDN Magazine December 2004
http://www.dotnetspark.com/links/45558-representing-objects-within-objects.aspx
CC-MAIN-2017-22
refinedweb
202
56.15
In this tutorial I show you the if statement, relational operators, logical operators, ternary operator and the switch statement. If you haven’t watched part 1 watch it first here Java Video Tutorial. As you watch this video, you will be able to follow it easier if you have the commented code in front of you. It follows the video below. While this tutorial has been slow it will get much more complicated quickly. So, make sure you understand everything I cover in the beginning. If you like videos like this share it Code From the Video public class LessonThree { public static void main(String[] args) { // Creates a random number between 0 and 50 int randomNumber = (int) (Math.random() * 50); /* Relational Operators: * Java has 6 relational operators * > : Greater Than * < : Less Than * == : Equal To * != : Not Equal To * >= : Greater Than Or Equal To * <= : Less Than Or Equal To */ // If randomNumber is less than 25, execute the code between {} and then stop checking if (randomNumber < 25) { System.out.println("The random number is less than 25"); } // If randomNumber wasn't less than 25, then check if it's greater than it. If so, execute the code between {} and then stop checking else if (randomNumber > 25) { System.out.println("The random number is greater than 25"); } // Checks if randomNumber equals 25 else if (randomNumber == 25) { System.out.println("The random number is equal to 25"); } // Checks if randomNumber is not equal to 25 else if (randomNumber != 15) { System.out.println("The random number is not equal to 15"); } // Checks if randomNumber is less than or equal to 25 else if (randomNumber <= 25) { System.out.println("The random number is less than or equal to 25"); } // Checks if randomNumber is greater than or equal to 25 else if (randomNumber >= 25) { System.out.println("The random number is greater than or equal to 25"); } // If none of the above were correct print out the random number else { System.out.println("The random number is " + randomNumber); } // Prints out the random number System.out.println("The random number is " + randomNumber); /* Logical Operators: * Java has 6 (!(false)) { System.out.println("I turned false into true"); } if ((false) && (true)) { System.out.println("\nBoth are true"); } // There is also a & logical operator it checks the second boolean result even if the first comes back false if ((true) || (true)) { System.out.println("\nAt least 1 are true"); } // There is also a | logical operator it checks the second boolean result even if the first comes back true if ((true) ^ (false)) { System.out.println("\n1 is true and the other false"); } int valueOne = 1; int valueTwo = 2; // The Conditional or Ternary Operator assigns one or another value based on a condition // If true valueOne is assigned to biggestValue. If not valueTwo is assigned int biggestValue = (valueOne > valueTwo) ? valueOne : valueTwo; System.out.println(biggestValue + " is the biggest\n"); char theGrade = 'B'; /* When you have a limited number of possible values a switch statement makes sense * The switch statement checks the value of theGrade to the values that follow case * If it matches it executes the code between {} and then break ends the switch statement * default code is executed if there are no matches * You are not required to use the break or default statements * The expression must be an int, short, byte, or char */ switch (theGrade) { case 'A': System.out.println("Great Job"); break; // Ends the switch statement case 'b': // You can use multiple case statements in a row case 'B': System.out.println("Good Job, get an A next time"); break; case 'C': System.out.println("OK, but you can do better"); break; case 'D': System.out.println("You must work harder"); break; default: System.out.println("You failed"); break; } } } nice tutorial, keep it up… Another one will be done tomorrow if possible, please add some videos for java frameworks(i.e Struct’s, spring etc.) in this java video tutorial series.. Thank you System.out.println(‘Yeah’); //doesnt work with single quotes. Why so? When you use single quotes you can only use one character. That’s just the way the language works. I don’t particularly like it either In java once and for all a character is delimited by 2 quotes (”) and a string of characters is delimited by 2 double quotes (“”)… I know related to other languages it can be confusing… I have to just drop a comment to say thanks for these videos. They have taught me more in a month than I have learned from any book with the last year. I really appreciate that you go beyond the basics and cover enough material to help both a beginning programmer and an experienced programmer. You’re very welcome 🙂 It makes me very happy to hear that I’m helping so many people. Many more videos are in the works. I won’t be done with Java for at least another year Love the videos … Learning as we speak. I did notice that this will not call unless you use an if statement instead of else if inside the the original if loop // Checks if randomNumber equals 25 031 else if (randomNumber == 25) 032 { 033 System.out.println(“The random number is equal to 25”); 034 } I just made this stream of code to talk about what each of the conditionals do. I guess it just comes naturally to talk while writing the code? Talk it out … 🙂 works for me ! Great vid! I have more questions! Is it necessary to have a break; in the default case? Or any of the other cases for that matter? I’d assume it would continue to check the other cases and nothing would happen. Would having the switch do those unnecessary checks be something that would affect the performance of the program overall? Are Conditional operators better for a program’s performance? If I was programming I would have been thinking to write If (valueOne > valueTwo) { int biggestValue = valueOne; } else { int biggestValue = valueTwo; } rather than int biggestValue = (valueOne > valueTwo) ? valueOne : valueTwo; Yes you don’t need the break and you are correct in regards to what would happen without it. The compiler is normally smart enough to handle any performance issues that come up depending on which conditional tools you use. Great video! I have question, a lot of people said that in order to become master in programming you have to practice coding a lot, and make a program. and I want to practice this, but I dont have any idea what to program, or what to build with the basic java that I have gained from your tutorial. I really want to improve my programming skill. Do you have any suggestion ? thank you! Thank you 🙂 You may benefit from watching my object oriented design tutorial. It covers how to turn a problem into finished code. You don’t have to completely understand the code. What is more important is understanding the mindset of programming. Feel free to ask more questions. int randomNumber = (int) (Math.random() * 50); Can you explain how does this represent numbers 1-49? I thought it would take a random number and multiply it by 50. Math.random returns a number between 0.0 and less than 1.0. Then I force it into being an int which gives me a number between 0 and 50 Hi, really nice tutorials, still learning. I noticed that on this website code is a little bit weird, comments really don’t work the best, almost all code i green. Is it the website or is it a mistake? But still great work. Keep it up. Thank you 🙂 I’m not sure what you mean by the code is green. I’ll do my best to help. So, would I need papers, a calculator, and patience when coding? Java acts like a pet dog to you, and to me it is like deciphering binary 1’s and 0’s. Also, what can I do to practice? Any questions or practice problems somewhere? It will come with practice. It is best to print out the code and take notes in your own words. Write down questions you have and I’ll answer them. It may help you dramatically to learn UML. I have a UML tutorial. Many years ago after learning UML everything started to make sense. Also in the beginning you’ll copy and paste code and heavily use code completion. You’ll get it 🙂
http://www.newthinktank.com/2011/12/java-video-tutorial-3/?replytocom=25791
CC-MAIN-2020-24
refinedweb
1,404
64.81
In C A function pointer is a type of pointer. When dereferenced, a function pointer can be used to invoke a function and pass it arguments just like a normal function.. Function pointers are often used to replace switch statements. In the following program I'll show you how to do this job. #include <stdio.h> #include <stdlib.h> #include <math.h> void runOper2(float a, float (*func)(float)); void runOper(float a, char oper); float Sqr(float a); float Sqrt(float a); float Log(float a); int main(int argc, char **argv); float Sqr(float a) { return a*a; } float Sqrt(float a) { return sqrt(a); } float Log(float a) { return log(a); } void runOper(float a, char oper) { float result; switch(oper) { case 's' : result = Sqr(a) break; case 'q' : break; case 'l' : result = Log(a) break; } printf("result = %f\n", result); } void runOper2(float a, float (*func)(float)) { float result = func(a); printf("result = %f\n", result); } // Main program int main(int argc, char **argv) { runOper(2, 's'); runOper2(2, &Sqr); runOper(2, 'q'); runOper2(2, &Sqrt); runOper(2, 'l'); runOper2(2, &Log); return(EXIT_SUCCESS); } The two calls runOper and runOper2 are equivalent but using runOper2 you shall write less code. Callback Functions Another use for function pointers is setting up "listener" or "callback" functions that are invoked when a particular event happens. One example is when you're writing code for a. Gg1
http://www.xappsoftware.com/wordpress/2011/07/11/function-pointers-in-c/
CC-MAIN-2020-10
refinedweb
237
57.4
Real Time Pitch Shift on iOSAuthentic7 Oct 28, 2013 6:59 AM Hello Everyone, I am building a video/audio playback application for iOS and am having some trouble. I have already worked through most of my problems and now am stuck on something that is just killing me. I am using Andre Michelle's pitch shift for managing the audio samples used on my application. Unfortunately when playing back on an iOS device I am getting extrememly bad sound playback. On the web and on the PC playback is smooth and clean. However when I play it back on an iOS device, I am getting a studdering effect (processor power?). I have read elsewhere that bytearray may be the cause of the trouble as it does not perform well on iOS, but there has to be a way to fix this no? Any help on the subject would be very helpful. I have also read that using vector arrays would be an option, but, that would be a lot of reprogramming and to be honest, i dont understand the vector arrays well enough to implement them. You can see an example of the app in action (i will only leave it live for a the next month or till I get a result that works). kindesigns.com/venti - Click on the blue box to get to the problemed audio player. It works fine here, on the ipad not so much. Here is a link to the pitch shift code I am using... I have been at this project for months and now am stopped in place with no way to finish for my client. Thanks in advance for any help... really... thanks. 1. Re: Real Time Pitch Shift on iOSsinious Oct 28, 2013 10:39 AM (in response to Authentic7)1 person found this helpful Have you tried Adobe's pitch shift example? Aside that, Vectors are simply an array that is strongly typed. You tell it what kind of data the array can hold and because it expects to only ever hold this kind of data, the dynamic casting overhead of a typical 'Array' is removed. Here's a little info on Vector performance versus ByteArray: In that you can see the performance of ByteArray.writeDouble (or writeFloat) versus a Vector.<Number>. It's quite a performance leap. A vector is as easy to use as learning the simple syntax: var myVector:Vector.<Type> = new Vector.<Type>(); In your case you'd want to use floating point so you would use a Vector.<Number>. So: var myNumberVector.Vector.<Number> = new Vector.<Number>(); Now you can use it similar to an Array. myNumberVector.push(123.45678); Vectors are useful for type strictness. You will get an error if you put any other 'type' in it because it's not an Array: myNumberVector.push('Hello World'); // error The error is useful so you can track down if some invalid data is assigned. A simple huge loop can show you if the data is lengthy a vector under this extremely simple circumstance can be good compared to a (cheating) ByteArray performing a similar operation (I'm sure someone else has a better example test): New AS3 doc: import flash.utils.getTimer; import flash.utils.ByteArray; // references var v:Vector.<Number> = new Vector.<Number>(); var b:ByteArray = new ByteArray(); var i:int; // counter // start time var startTime:uint = getTimer(); // loop 10 million times pushing the number 123.456 into the vector for (i = 0; i < 10000000; i++) { v.push(123.456); } // perform an addition of index + index+1 on vector 10 million times // (stop at 9,999,999 because we're adding i+1) for (i = 0; i < 9999999; i++) { v[i] += v[i+1]; } trace("End of Vector time: " + (getTimer() - startTime) + "ms"); startTime = getTimer(); // same thing, but ByteArray for (i = 0; i < 10000000; i++) { b.writeFloat(123.456); } // value read at position var fVal:Number; for (i = 0; i < 9999999; i++) { b.position = i * 4; fVal = b.readFloat(); // since it's just a test and I know the value is the same // 4 bytes away I'll just re-use for a 'best case scenario' // but if I must reposition and readFloat twice performance // goes down b.writeFloat(fVal + fVal); } trace("End of ByteArray time: " + (getTimer() - startTime) + "ms"); Traces on my machine: End of Vector time: 1230ms End of ByteArray time: 1806ms By cheat I mean you can see the comment. I'm not even taking the time to read the next indexed number from the next 4 byte (32bit) position then resetting position back to add together the 2 different positions, I'm re-using the same value. If I do that, the time gets more distant: var fVal:Number; for (i = 0; i < 9999999; i++) { b.position = i * 4 + 4; fVal = b.readFloat(); b.position = i * 4; b.writeFloat(b.readFloat() + fVal); } Traces: End of Vector time: 1224ms End of ByteArray time: 2476ms 2. Re: Real Time Pitch Shift on iOSAuthentic7 Oct 28, 2013 11:13 AM (in response to sinious) Yea i think i found much of the same information but have no idea where I should implement it. I have seen a few people compare the speeds between vectors and arrays. I just couldnt understand how i would adjust my code without breaking andres package. I actually gave it a go but ended up making nothing work so I backed off and reverted. I also remeber that byteArray was needed to stream. I am a 3D artist doing app development, so I am not a "pro" by any means at writing AS3. Im getting there though =) this project has been a great teacher for me. lol. So, as far as I can tell, this... l0 = _target.readFloat(); r0 = _target.readFloat(); l1 = _target.readFloat(); r1 = _target.readFloat(); ...is where my problem lies.... Any idea what I would need to adjust to make it "vector"? Again this is all real time read/write as the pitch shift is adjustable on the fly. --Full function is below... full package is here : private function sampleData( event: SampleDataEvent ): void { //-- REUSE INSTEAD OF RECREATION _target.position = 0; //-- SHORTCUT var data: ByteArray = event.data; var scaledBlockSize: Number = BLOCK_SIZE * _rate; var positionInt: int = _position; var alpha: Number = _position - positionInt; var positionTargetNum: Number = alpha; var positionTargetInt: int = -1; //-- COMPUTE NUMBER OF SAMPLES NEED TO PROCESS BLOCK (+2 FOR INTERPOLATION) var need: int = Math.ceil( scaledBlockSize ) + 2; //-- EXTRACT SAMPLES var read: int = _mp3.extract( _target, need, positionInt ); var n: int = read == need ? BLOCK_SIZE : read / _rate; var l0: Number; var r0: Number; var l1: Number; var r1: Number; for( var i: int = 0 ; i < n ; ++i ) { //-- AVOID READING EQUAL SAMPLES, IF RATE < 1.0 if( int( positionTargetNum ) != positionTargetInt ) { positionTargetInt = positionTargetNum; //-- SET TARGET READ POSITION _target.position = positionTargetInt << 3; //-- READ TWO STEREO SAMPLES FOR LINEAR INTERPOLATION l0 = _target.readFloat(); r0 = _target.readFloat(); l1 = _target.readFloat(); r1 = _target.readFloat(); } //-- WRITE INTERPOLATED AMPLITUDES INTO STREAM data.writeFloat( l0 + alpha * ( l1 - l0 ) ); data.writeFloat( r0 + alpha * ( r1 - r0 ) ); //-- INCREASE TARGET POSITION positionTargetNum += _rate; //-- INCREASE FRACTION AND CLAMP BETWEEN 0 AND 1 alpha += _rate; while( alpha >= 1.0 ) --alpha; } //-- FILL REST OF STREAM WITH ZEROs if( i < BLOCK_SIZE ) { while( i < BLOCK_SIZE ) { data.writeFloat( 0.0 ); data.writeFloat( 0.0 ); //trace("writing 0's"); ++i; } } //-- INCREASE SOUND POSITION _position += scaledBlockSize; } as you can see its a rather complicated function for a noobie =) I cant even tell where it is told to actually make the sound play. I know its there, but for the life of me, i cant tell when the data stream is actually converted to an audio stream. 3. Re: Real Time Pitch Shift on iOSsinious Oct 28, 2013 12:47 PM (in response to Authentic7)1 person found this helpful If you read the SampleDataEvent documentation you'll see that when this handler is used Flash will turn a Sound object (the mp3) into a stream that is fed through the handler function. In every event your handler function receives you have a chance to read and alter the data before Flash plays it through the Sound class. The data is supplied as a ByteArray to begin with. Read the description here to get a better understanding: Event.html The change occurs above when you see the ByteArray being read, those values are altered and then written back (right under the comment "//-- WRITE INTERPOLATED AMPLITUDES INTO STREAM". A computer is just a calculator. It doesn't understand what it's processing is actually audio. Via the Sound class, it does know how to interperate the ByteArray into actual sound however. So your job is to alter the values of the sound properly. Note that the block size can have performance changing characteristics. They recommend using a larger block size like 8192 so the code executes less times over all the data. You can try adjusting the BLOCK_SIZE value of that class between 2048, 4096 and 8192 to see if it helps remove some of your performance issues. Being the format given by the Sound class works with is ByteArray and the work being done to the stream is extremely simple I think converting the data to vector just to perform the same operation would actually hurt your performance. If you're just doing pitch I don't believe it will help you. If the Sound class could hand the data over in a Vector then it would be a different story.. Here's a bit more information on the sample file posted above and how Adobe made the pitch seem higher (by reducing samples): 4. Re: Real Time Pitch Shift on iOSAuthentic7 Nov 3, 2013 5:22 AM (in response to sinious) So, still at it. some but little progress. I am changing the block size and as you suggested, it does in fact change the playback quality. Unfortunately, it does not seem to erradicate the problem entirely. The "pops" happen farther away when dealing with a larger block size when I up the Block Size to 8192. I have also tried to change the bit rate of the audio samples from 192 to 92 figuring less data would help. Unfortunately it didnt really do much of anything. I was really hopeful that it would have a substantial effect, but alas, it did not. Bummer. The last point you made... ." Can you explain this a bit further? I am not sure what you mean. Thanks a billion. 5. Re: Real Time Pitch Shift on iOSsinious Nov 4, 2013 9:22 AM (in response to Authentic7)1 person found this helpful Seems to me while the sound got a little better the extra CPU it freed up might be utilized by something else, putting you right back in the same position. You might want to try an isolated example project with nothing but the sound in there to see if the CPU is really overworked and causing this. If you only process sound and it doesn't have artifacts in that case you know you just need to look for a way to free up the processor more during pitch shifts in your app. What I mean by instantiation is every time you use the function it instantiates a new ByteArray object. Little things like that can be removed by moving that to a class-member that's re-used each time the function is fired since it's fired so rapidly. e.g. var _soundByteArray:ByteArray = new ByteArray(); function sampleData(e:SampleDataEvent):void { // just assign an already existing ByteArray, don't create _soundByteArray = event.data; // continue.. } If a function is fired rapidly then you'll want to do everything possible to cut down on the amount it does. One of those things is unnecessary object instantiation. Every little bit helps. 6. Re: Real Time Pitch Shift on iOSHoward Wang4 Jul 8, 2014 3:41 PM (in response to sinious) Hi, The Adobe example is not really pitch shift. It's time scale to make the song play faster or slower. Audio time-scale/pitch modification - Wikipedia, the free encyclopedia A clear definition is pitch shift is in the following wiki: Pitch shift - Wikipedia, the free encyclopedia The complete example of pitch shift is here That's a very CPU intensive job. Regards, Howard
https://forums.adobe.com/thread/1323318
CC-MAIN-2018-39
refinedweb
2,042
72.05
Hash: SHA256 Hello, I'm interested why we still don't have this flag in our CFLAGS? It seems that other distributions like openSUSE enable it by default and it helps in many cases to avoid over-linking (for example, see thread about poppler). Are there any reasons not to add it? Re: RFC: -Wl,--as-needed by defaultBy =?UTF-8?Q?Bj=c3... at 11/13/2017 - 06:52 Am Montag, den 13.11.2017, 11:02 +0100 schrieb Igor Gnatenko: Hello Igor, that specific flag should be in LDFLAGS, but there are reasons, we do NOT have it in there, because it will likely break any binaries built from or containing FORTRAN sources. They will simply SEGFAULT, because `-Wl,--as-needed` causes some needed runtime libraries NOT being linked with them, because the linker thinks they are not needed / would over link. Cheers, Björn Re: RFC: -Wl,--as-needed by defaultBy =?UTF-8?Q?Tomas... at 11/13/2017 - 18:01 On 13 November 2017 at 10:52, Björn 'besser82' Esser < ... at fedoraproject dot org> wrote: [..] If it is still any issue related to the Fortran it is nothing more than just plain bug. However AFAIK only reason of any issues related to use -Wl,--as-needed is using WRONG list -l<foo> parameters (lack of some -l<foo>) and this needs to be not treated by apply some workaround but but by apply necessary fixes in -l<foo> linking parameters. I've been personally involved in fixing many such problems in more than last decade and I was one of the many similar people fixing similar issues across many packages. If it is still something left it will be IMO not more than few packages in all Fedora packages. In some packages using --as-needed will cause reduction of SONAME dependencies by more than half!!! In other words -Wl,--as-needed should be used everywhere WITHOUT exceptions. Easiest way apply this globally in Fedora is add --as-needed in /usr/lib/rpm/redhat/redhat-hardened-ld spec file by apply patch: --- /usr/lib/rpm/redhat/redhat-hardened-ld.orig +++ /usr/lib/rpm/redhat/redhat-hardened-ld @@ -2,4 +2,4 @@ + %{!static:%{!shared:%{!r:-pie}}} *link: -+ -z now ++ -z now --as-needed BTW: many packages are fiddling with LDFLAGS adding redundantly -Wl,-z,now. All those modifications can be now dropped. As well all packages which already are injecting -Wl,--as-needed can be cleaned. [tkloczko@domek SPECS.fedora]$ grep LDFLAGS *| grep -c -- --as-needed 150 [tkloczko@domek SPECS.fedora]$ grep LDFLAGS *| grep -c -- -z,now 106 On top of this IMO it would be good to add in binutils ld modification which will be reporting all dropped during linking -l<foo> as a warning Such warnings could be used on keep clean build frameworks. kloczek Re: RFC: -Wl,--as-needed by defaultBy Dridi Boukelmoune at 11/27/2017 - 04:21 On Mon, Nov 13, 2017 at 11:01 PM, Tomasz Kłoczko <kloczko. ... at gmail dot com> wrote: Off the top of my head I can see glibc leading to "over" linking: if upstream is not GNU/Linux-only it may need -lm, -lrt, -lsocket, -lnls etc for other platforms. For autotools-based projects depending on how you do the functions detection it may or may not add the -l* flags to $LIBS if provided by the libc. Dridi Re: RFC: -Wl,--as-needed by defaultBy Jakub Jelinek at 11/27/2017 - 04:46 On Mon, Nov 27, 2017 at 09:21:16AM +0100, Dridi Boukelmoune wrote: There are several problems with forceful --as-needed: 1) forcing it everywhere is a workaround to broken tools that add -l* options just in case (like auto*, libtool, pkg-config) 2) until recently, there was no way to enable the option for just one library and restore it afterwards, so many spots, including, but not limited to, gcc driver itself, do --as-needed -lgcc_s --no-as-needed, which if somebody forces --as-needed at the start of the linker command line has the effect that everything up to the -lgcc_s on the command line behaves --as-needed, and everything after it doesn't. Since 3 years ago ld offers the --push-state/--pop-state options, so one can use --push-state --as-needed -lgcc_s --pop-state instead and that at the end restores the previous --as-needed behavior. So, before turning it by default, you'd need to check everywhere where --no-as-needed is used and see if it is something where --push-state/--pop-state should be used. 3) forcing --as-needed breaks some legitimate uses where some work is done e.g. in the constructors of some library, but there is no need to use any symbols, or if you know some library needs to be initialized early but will be used only from dlopened libraries (e.g. AddressSanitizer and ThreadSanitizer can be examples of this), or the library extends the binary and provides necessary symbols for a plugin that will be dlopened. Jakub Re: RFC: -Wl,--as-needed by defaultBy Adam Jackson at 11/27/2017 - 17:30 pkg-config isn't broken here. Individual pc files might be. It'd be pleasant if ld could warn you about excess linkage. libtool will throw away your attempts to use --as-needed and --push- state anyway, because libtool is trash. - ajax Re: RFC: -Wl,--as-needed by defaultBy =?UTF-8?Q?Tomas... at 11/13/2017 - 18:46 Phil K at 11/13/2017 - 22:54 One concern is that -Wl,--as-needed requires greater accuracy with the ordering of objects and libraries as you link. Also, if a package uses a library indirectly, i.e. A uses C via B: A -> B -> C,--as-needed will peel away C and break A unless A explicitly mentions its need for C. Of course that should be the case already, but you're going to see a number of weaknesses in the variouspackages revealed by adding --as-needed. On Monday, November 13, 2017 5:48 PM, Tomasz Kłoczko <kloczko. ... at gmail dot com> wrote: Florian Weimer at 11/14/2017 - 03:07 On 11/14/2017 03:54 AM, Philip Kovacs wrote: I think ld no longer links against symbols in indirect dependencies. #include <openssl/evp.h> int main() { return (int) &EVP_rc4; } /usr/bin/ld: /tmp/ccV4cmYY.o: undefined reference to symbol 'EVP_rc4@@OPENSSL_1_1_0' //usr/lib64/libcrypto.so.1.1: error adding symbols: DSO missing from command line collect2: error: ld returned 1 exit status Thanks, Florian Re: RFC: -Wl,--as-needed by defaultBy King InuYasha at 11/14/2017 - 07:32 On Tue, Nov 14, 2017 at 2:07 AM, Florian Weimer < ... at redhat dot com> wrote: I've only seen this with OpenSSL, so I think it's specific to that library. Re: RFC: -Wl,--as-needed by defaultBy Panu Matilainen at 11/14/2017 - 07:45 On 11/14/2017 01:32 PM, Neal Gompa wrote: Implicit linking is not openssl specific at all, this is fairly old news by now as this change occurred in Fedora 13: <a href="" title=""></a> <a href="" title=""></a> Re: RFC: -Wl,--as-needed by defaultBy King InuYasha at 11/14/2017 - 07:50 On Tue, Nov 14, 2017 at 6:45 AM, Panu Matilainen < ... at redhat dot com> wrote: Then something isn't working correctly, because then libcomps builds should be failing in Fedora. It doesn't. It fails in *every other Linux distribution* that I've built it for (Mageia, openSUSE, OpenMandriva, Solus, and others...) unless I patch it to deliberately link with zlib (it fails with similar issues noted with OpenSSL). Re: RFC: -Wl,--as-needed by defaultBy Florian Weimer at 11/14/2017 - 08:08 On 11/14/2017 12:50 PM, Neal Gompa wrote: That's likely something else entirely. I don't even see anything in Fedora's libcomps which would need zlib. Thanks, Florian Re: RFC: -Wl,--as-needed by defaultBy King InuYasha at 11/14/2017 - 08:14 On Tue, Nov 14, 2017 at 7:08 AM, Florian Weimer < ... at redhat dot com> wrote: It's something related to libxml2, if I remember correctly. I couldn't work out exactly what was going wrong, but linking zlib directly fixes the link errors. Re: RFC: -Wl,--as-needed by defaultBy Igor Gnatenko at 11/14/2017 - 08:39 Hash: SHA256 On Tue, 2017-11-14 at 07:14 -0500, Neal Gompa wrote: Then it is problem of libxml2 or other libraries, it has nothing to do with libcomps. Itself it doesn't use anything zlib-related. Re: RFC: -Wl,--as-needed by defaultBy Daniel P. Berrange at 11/13/2017 - 07:06 On Mon, Nov 13, 2017 at 11:52:14AM +0100, Björn 'besser82' Esser wrote: What % of our distro involves fortran though ? Could this be as simple as enabling it by default, but having an easy way via an RPM macro to opt-out of it in the handleful of packages that matter wrt fortran. Regards, Daniel Re: RFC: -Wl,--as-needed by defaultBy Michael Catanzaro at 11/13/2017 - 08:21 On Mon, Nov 13, 2017 at 5:06 AM, Daniel P. Berrange < ... at redhat dot com> wrote: If Debian/Ubuntu/openSUSE (didn't know about openSUSE) can all handle it, I'm sure Fedora can find a way. I'll just add: it's pretty annoying that, right now, when I touch linker flags, there's no easy way to know if my application will link or not on other distros, because Fedora is more permissive. Michael Re: RFC: -Wl,--as-needed by defaultBy King InuYasha at 11/13/2017 - 08:27 On Mon, Nov 13, 2017 at 7:21 AM, Michael Catanzaro <mike. ... at gmail dot com> wrote: In Mageia, we have probably some of the strictest linker flags of all the distributions (that I know of). Many applications that build or link fine in Fedora need fixing in Mageia because we don't permit underlinking[1] or overlinking[2], and we apply several fixers in configure and brp scripts. We've also designed our flags setup so that you can set a macro to override as needed, so for things like FORTRAN, you can change this easily. I'd like to see these make their way into Fedora. [1]: <a href="" title=""></a> [2]: <a href="" title=""></a>
http://www.devheads.net/linux/fedora/development/rfc-wl-needed-default.htm
CC-MAIN-2018-17
refinedweb
1,731
58.42
#include <db.h> int DB->get_type(DB *db, DBTYPE *type); The DB->get_type function stores the type of the underlying access method (and file format) into the memory referenced by type. The returned value is one of DB_BTREE, DB_HASH, DB_RECNO, or DB_QUEUE. This value may be used to determine the type of the database after a return from DB->open with the type argument set to DB_UNKNOWN. The DB->get_type interface may be called only after the DB->open interface has been called. The DB->get_type function may fail and return a non-zero error for the following conditions: Called before DB->open was called. The DB->get_type function may fail and return a non-zero error for errors specified for other Berkeley DB and C library or system functions. If a catastrophic error has occurred, the DB->get_type function may fail and return DB_RUNRECOVERY, in which case all subsequent Berkeley DB calls will fail in the same way.
http://pybsddb.sourceforge.net/api_c/db_get_type.html
crawl-001
refinedweb
159
58.72
Details - Type: Improvement - Status: Closed - Priority: Major - Resolution: Fixed - Affects Version/s: None - Fix Version/s: Scala 2.11.0-M2 - Component/s: Misc Library - Description Currently, to use Scala as a scripting engine, one has to do: import scala.tools.nsc.*; Interpreter n=new Interpreter(new Settings()); n.bind("label", "Int", new Integer(4)); n.interpret("println(2+label)"); // didn't event try to check success or error n.close(); It would be nice if one could do instead: import javax.script.*; ScriptEngine e = new ScriptEngineManager().getEngineByName("scala"); e.getContext().setAttribute("label", new Integer(4), ScriptContext.ENGINE_SCOPE); try { engine.eval("println(2+label)"); } catch (ScriptException ex) { ex.printStackTrace(); } Plus, as was pointed elsewhere, the compiler used in the background by the interpreter should not need a filesystem (for instance if scripting is to be used on handheld devices etc.) Activity Initial progress here: there's this: (though I haven't tried it) Replying to [comment:18 SethTisue]: > there's this: > (though I haven't tried it) That's the same on I mentioned here: This is currently out of scope for the Scala team. We don't see much demand for JSR-233; if there was a community library could fill the gap. I didn't know about JSR 223 before. It would be good to supply Scala as a Java scripting engine if possible. However, there is a question about whether it can pratically be done. Interpreter.bind takes a type as an argument. The setAttribute() call in the example code does not. Is there another setAttribute() call that could be used to pass in a type? If not, it requires some deep though to figure out how to get Scala to fit into what JSR 223 expects. Treating label as type "Any" is unkikely to give a good scripting experience. By the way, if you want to know whether an interpret attempt succeeds when using Interpreter.interpret(), simply look at the return value.
https://issues.scala-lang.org/browse/SI-874
CC-MAIN-2014-10
refinedweb
326
58.99
On Wed, Nov 29, 2000 at 09:14:52AM -0800, William A. Rowe, Jr. wrote: > Question... > > I'm again jumping betwixt and between 1.3 and 2.0 - and have > a really simple issue with SDBM... why are we namespace > protecting and modifing this code, given that we now have > apu_dbm as a wrapper? Because it is linked into the process, and the symbols are visible from other things that get pulled into Apache. We namespace protect everything that might be visible, whether it is intended for use by other modules or not. Since the stuff goes into the same process space as other, arbitrary code, we need to protect it (and protect others from it). > I'd like to see SDBM go back to a (relatively) pure and > original implementation, and have our wrapper add any namespace > protection and other features (pools, etc.) There is no such thing as a "pure and original implementation." The code was developed in the early 1990s, made bug free, and effectively abandoned to the Public Domain. I scrounged a copy from somewhere, integrated fixes from Perl, swapped fixes with Ralf (he uses it in mod_ssl), brought the code up to ANSI standards (you wouldn't believe some of the crap that was in there), and then APR-ized it when it went into Apache 2.0 (to get portable file mgmt, portable file locking, and the protection of pools for allocation). I don't see any purpose served by losing any of those mods. Cheers, -g -- Greg Stein,
http://mail-archives.apache.org/mod_mbox/httpd-dev/200011.mbox/%3C20001129095210.U25840@lyra.org%3E
CC-MAIN-2018-34
refinedweb
255
71.85
Summary. By John Arthorne, OTI August 23, 2002 Updated November 23, 2004 for Eclipse 3.0 The primary method for Eclipse plug-ins to be notified of changes to resources is by installing a resource change listener. These listeners are given after-the-fact notification of what projects, folders and files changed during the last resource changing operation. This provides a powerful mechanism for plug-ins to keep their domain state synchronized with the state of the underlying workspace. Since listeners are told exactly what resources changed (and how they changed), they can update their model incrementally, which ensures that the time taken by the update is proportional to the size of the change, not the size of the workspace. Listeners must implement the IResourceChangeListener interface, and are registered using the method addResourceChangeListener on IWorkspace. It is also important to remove your resource change listener when it is no longer needed, using IWorkspace.removeResourceChangeListener. During a resource change notification, the workspace is locked to prevent further modification while the notifications are happening. This is necessary to ensure that all listeners are notified of all workspace changes. Otherwise, a change made by one listener would have to be broadcast to all other listeners, easily creating the possibility of an infinite loop. There is a special exception to this rule for the PRE_BUILD and POST_BUILD event types that will be discussed later on. Before we get into the details, let's start with a simple example that shows how to add and remove a resource change listener: IWorkspace workspace = ResourcesPlugin.getWorkspace(); IResourceChangeListener listener = new IResourceChangeListener() { public void resourceChanged(IResourceChangeEvent event) { System.out.println("Something changed!"); } }; workspace.addResourceChangeListener(listener); //... some time later one ... workspace.removeResourceChangeListener(listener); So when exactly are these change events broadcasted? In our preliminary sketch, we said that they occur after a "resource changing operation". What does this mean? Certain methods in the resources plug-in API directly modify resources in the workspace. The most common examples are creating, copying, moving and deleting files and folders, and modifying file contents. Methods that change resources have the following key phrase in their API Javadoc: * This method changes resources; these changes will be reported * in a subsequent resource change event. Every method in the resources API that contains such a phrase will trigger the broadcast of a resource change event to all listeners. The only exception is when the operation doesn't actually change anything in the workspace, for example if the operation fails or is canceled before any real changes occur. In this case no change events are broadcast. It is important to note that the broadcast does not necessarily occur immediately after the method completes. This is because a resource changing operation may be nested inside of another operation. In this case, notification only occurs after the top-level operation completes. For example, calling IFile.move may trigger calls to IFile.create to create the new file, and then IFile.delete to remove the old file. Since the creation and deletion operations are nested inside the move operation, there will only be one notification. Clients of the resources API are strongly encouraged to follow this nested operation behavior, also called batched changes, for their own high-level operations. This is achieved by wrapping the operation code inside an instance of IWorkspaceRunnable, and passing it to IWorkspace.run(IWorkspaceRunnable). Wrapping high-level operations inside an IWorkspaceRunnable can lead to a substantial performance improvement, because it ensures that only one resource change broadcast occurs, instead of potentially thousands. Below is an example of an operation that is nested using the IWorkspaceRunnable mechanism. In this case, a single resource change event will be broadcast, indicating that one project and ten files have been created. To keep it simple, progress monitoring and exception handling have been omitted from this example. IWorkspace workspace = ResourcesPlugin.getWorkspace(); final IProject project = workspace.getRoot().getProject("My Project"); IWorkspaceRunnable operation = new IWorkspaceRunnable() { public void run(IProgressMonitor monitor) throws CoreException { int fileCount = 10; project.create(null); project.open(null); for (int i = 0; i < fileCount; i++) { IFile file = project.getFile("File" + i); file.create(null, IResource.NONE, null); } } }; workspace.run(operation, null); Since Eclipse 3.0, it is no longer guaranteed that an IWorkspaceRunnable will prevent notifications for the entire duration of an operation. The workspace can now decide to perform notifications during an operation to ensure UI responsiveness. This is particularly important when several workspace modifying operations are running simultaneously. The use of IWorkspaceRunnable is still strongly encouraged, functioning as a strong hint to the workspace that a set of changes is occurring that can be batched. Also in Eclipse 3.0, a background equivant to IWorkspaceRunnable was introduced. WorkspaceJob will batch a set of workspace changes that occur inside a Job running in the background. Read the Concurrency infrastructure documentation for more details on jobs. A very powerful feature of the resource change infrastructure is that listeners will even be notified of changes that occur outside the workspace API. If some external editor or tool makes changes to resources in the workspace directly from the filesystem, resource change listeners will still receive the same notification describing exactly what changed and how they changed. The drawback is that since most operating systems don't have such a resource change mechanism of their own, the eclipse workspace may not "discover" the change until later on. Specifically, the workspace will not send the notification until someone performs a IResource.refreshLocal operation on a resource subtree that has changed in the filesystem. After the refreshLocal operation, the workspace will send resource change notification to all listeners, describing everything that has changed since the last local refresh. Now that we know how to add listeners and when to expect them to be called, let's take a closer look at what these change events look like. The object passed to a resource change listener is an instance of IResourceChangeEvent. The most important bits of information in the event are the event type, and the resource delta. The event type is simply an integer that describes what kind of event occurred. Listeners are typically mainly interested in the POST_CHANGE event type, and that is the one we will focus on here. The resource delta is actually the root of a tree of IResourceDelta objects. The tree of deltas is structured much like the tree of IResource objects that makes up the workspace, so that each delta object corresponds to exactly one resource. The top-most delta object, provided by the event object, corresponds to the IWorkspaceRoot resource obtained by IWorkspace.getRoot. The resource delta hierarchy will include deltas for all affected resources that existed prior to the resource changing operation, and all affected resources that existed after the operation. Think of it as the union of the workspace contents before and after a particular operation, with all unchanged sub-trees pruned out. Each delta object provides the following information: In the case where a resource has moved, the delta for the destination also supplies the path it moved from, and the delta for the source supplies the path it moved to. This allows listeners to accurately track moved resources. To give an example of the structure of a resource delta, assume we begin with a workspace with the following contents: Now, say we perform a workspace operation that does all of the following changes: It is worth giving a bit more detail about what the delta change flags ( IResourceDelta.getFlags()), are all about. More than one flag may be applicable for a given resource, in which case the flag values are masked together to form a single flag integer. The following table summarizes the different flags and what they signify: Earlier on we said that the principal type of event is the POST_CHANGE event. However, there are some circumstances where listening to other event types is necessary. You can register for particular event types using the API method IWorkspace.addResourceChangeListener(IResourceChangeListener, int), where the supplied integer is a bit-mask of all event types you want your listener to receive. The first category of special events are pre-change notifications. Since the POST_CHANGE event is broadcast after-the-fact, some valuable information that the listener needs may be missing. In these cases, use the PRE_CLOSE and PRE_DELETE events, broadcast before a project is closed or deleted. The project in question can be obtained from IResourceChangeEvent.getResource. There is no resource delta for these event types. These events allow listeners to do important cleanup work before a project is removed from memory. These events do not allow listeners to veto the impending operation. The other special event types are associated with the workspace build mechanism. Incremental project builders sometimes require special initialization code to be executed before all builds happen, and/or special post-processing after all builds are complete. For these reasons, there are PRE_BUILD and POST_BUILD event types. These events are similar to POST_CHANGE, as they also provide a resource delta tree describing what has changed since the start of the operation. These events are broadcast periodically even when autobuild is turned off. Since Eclipse 3.0, these build events no longer occur in the same thread that modified the workspace. Instead, they always occur in the same thread in which the actual build occurs. Since autobuild occurs in a background thread in Eclipse 3.0, so do the surrounding pre- and post-build events. Another special characteristic about these events is that listeners are allowed to modify resources during the notification. This feature should be used sparingly, however, as changing resources during the change event sequence will add extra overhead to every operation. The event types that include resource deltas ( POST_CHANGE, PRE_BUILD, and POST_BUILD), notify listeners of all changes that have happened in the workspace between two discrete points in time. It is sometimes difficult to understand what time interval is covered by each of these three event types. Conceptually, you can think of the resources plug-in "remembering" what the workspace looked like at certain points in time, and a resource delta describes the differences between two of these points. The following outline of a workspace operation describes what time interval is covered by each of the event types: POST_CHANGElisteners of all changes between PRE_BUILDlisteners of all changes between POST_BUILDlisteners of all changes between POST_CHANGElisteners of all changes between It is important to note that the time intervals covered by the different event types are overlapping. It is generally not useful for a single listener to listen to more than one of these event types at once. Choose the event type that applies for your situation, and register for only that event type when adding your listener. If you do need to listen to more than one event type, keep in mind that the provided deltas will describe overlapping sets of changes. This is one of the reasons why POST_CHANGE is the most generally useful event type, because its delta will always describe all changes since the last POST_CHANGE event notification. PRE_BUILD and POST_BUILD events, on the other hand, will not always receive notification of every change. Performance. Listeners should be lightweight and fast. Change notifications can occur quite frequently during typical use of the platform, so your listener must do its work in a timely fashion. This is not an appropriate place, for example, to be contacting servers or performing disk I/O. If you have expensive operations that can be triggered as a result of resource changes, consider posting the work to a background thread, or queuing the work until a time that is more convenient for your users. For example, the "Java Development Tools" (JDT) plug-ins provide a search engine that indexes Java source and JAR files to allow for more efficient searches. When these files change, the search engine must rebuild the indexes for these files. The JDT plug-ins use a resource change listener to collect the list of changed resources, and then posts that list to a background thread that rebuilds the indexes. In the interest of making resource change listeners faster, some convenience methods exist on IResourceDelta and IResourceChangeEvent that allow you to do your updates faster. If you are only interested in changes to a single resource (or a very small set of resources), you can use IResourceDelta.findMember(IPath) to quickly locate the resource you are interested in updating. The supplied path is considered as relative to the path of the resource delta that it is called on. Another common case is resource change listeners that are only interested in processing changes to markers ( IMarker objects). These listeners can use IResourceChangeEvent.findMarkerDeltas to quickly collect all changed markers of a given type. There is also a visitor mechanism ( IResourceDelta.accept(IResourceDeltaVisitor)) for easily processing all changed resources in a given sub-tree. However, visitors should only be used where appropriate. Using a visitor to process two or three resources doesn't make sense, as the overhead of visiting the entire delta tree is incurred for no good reason. Since it's so easy to write a visitor, there is a tendency for programmers to use them too liberally, even in cases where they only want to process a well-defined subset of a tree. The return value from the visitor's visit method is used to indicate if that resource's children should be traversed. This can be used to short-circuit the traversal to avoid visiting sub-trees that you know you are not interested in. Thread safety. There are some multi-threading issues to keep in mind when writing listeners. First, you have no control over what thread your listener will run in. Workspace operations can occur in any thread, and resource change listeners will run in whatever thread that triggered the operation. So, if some of your update code must be run in a particular thread, you'll have to make sure your code gets posted to that thread. The most common example of this is UI updates. With the Standard Widget Toolkit (SWT), the UI toolkit that is included with Eclipse, there is only a single UI thread per display. If your resource change listener needs to update the UI, you will need to use the methods syncExec or asyncExec in class org.eclipse.swt.widgets.Display to post the update code to the UI thread. If any of your update code runs asynchronously (i.e., you used asyncExec or some similar mechanism to post your code to another thread), there is another consideration to keep in mind. The resource delta objects supplied to your listener are designed to "expire" when the resourceChanged method returns. So, if you pass references to IResourceDelta objects to another thread, they may cause failures if they are accessed after the listener method has returned back in the other thread. The reason for this resource delta "expiry date", is to ensure that listeners don't hold onto resource delta references indefinitely. These delta structures are potentially quite large, and if a listener holds onto them, it essentially causes a memory leak because these structures can no longer be garbage collected. Let's tie together everything we've learned so far with a working example. This example shows a resource change listener that listens for changes to text files in a particular project's documentation directory. The list of changed files is collected, and then an update is posted to a JFace TableViewer that contains an index of text files. The listener would normally process added and removed text files in a similar way, but for the sake of space we'll just deal with changed files here. public class DocIndexUpdater implements IResourceChangeListener { private TableViewer table; //assume this gets initialized somewhere private static final IPath DOC_PATH = new Path("MyProject/doc"); public void resourceChanged(IResourceChangeEvent event) { //we are only interested in POST_CHANGE events if (event.getType() != IResourceChangeEvent.POST_CHANGE) return; IResourceDelta rootDelta = event.getDelta(); //get the delta, if any, for the documentation directory IResourceDelta docDelta = rootDelta.findMember(DOC_PATH); if (docDelta == null) return; final ArrayList changed = new ArrayList(); IResourceDeltaVisitor visitor = new IResourceDeltaVisitor() { public boolean visit(IResourceDelta delta) { //only interested in changed resources (not added or removed) if (delta.getKind() != IResourceDelta.CHANGED) return true; //only interested in content changes if ((delta.getFlags() & IResourceDelta.CONTENT) == 0) return true; IResource resource = delta.getResource(); //only interested in files with the "txt" extension if (resource.getType() == IResource.FILE && "txt".equalsIgnoreCase(resource.getFileExtension())) { changed.add(resource); } return true; } }; try { docDelta.accept(visitor); } catch (CoreException e) { //open error dialog with syncExec or print to plugin log file } //nothing more to do if there were no changed text files if (changed.size() == 0) return; //post this update to the table Display display = table.getControl().getDisplay(); if (!display.isDisposed()) { display.asyncExec(new Runnable() { public void run() { //make sure the table still exists if (table.getControl().isDisposed()) return; table.update(changed.toArray(), null); } }); } } } Observe how the findMember convenience method is used to find the child delta for the documentation folder, and then a visitor is used to collect the changes in that sub-tree. It is safe to use asyncExec here, because we have already pulled the relevant information out of the resource deltas. You should always use the isDisposed check inside an asyncExec. Even if the table exists at the time the asyncExec is called, there may be another item in the event queue that will dispose the table before this event can be processed. When plug-in writers implement their first resource change listener, they often encounter a dilemma caused by Eclipse's lazy plug-in loading behavior. Since plug-in activation may occur well after the workspace has started up, there is no opportunity add a resource change listener when the workspace is first started. This causes a "blind spot" for listeners, because they cannot process changes that occur between the time of workspace creation and the time when their plug-in is activated. The solution to this problem is to take advantage of the save participant mechanism. Save participants implement the ISaveParticipant interface, and are installed using IWorkspace.addSaveParticipant. The main purpose of save participants is to allow plug-ins to save their important model state at the same time that the workspace saves its state. This ensures that the persisted workspace state stays synchronized with any domain model state that relies on it. Once a save participant is registered with the workspace, subsequent calls to addSaveParticipant will return an ISavedState object. By passing a resource change listener to ISavedState.processResourceChangeEvents, participants are given the opportunity to process the changes that have occurred since the last save occurred. This fills in the "blind spot" between workspace startup and activation of the plug-in that the listener belongs to. To find out about other facilities provided by the save participant mechanism, read the API Javadoc for ISaveParticipant, ISavedState, and ISaveContext. Builders are another mechanism provided by the platform core for processing resource changes. Where change listeners are intended as a light-weight update mechanism, builders are designed to be a more powerful and flexible way of processing resource changes. Although builders are discussed in greater detail in a companion builder article, it is useful to know about the major differences between resource change listeners and builders: These differences aside, the principle behind project builders and resource change listeners is the same. Builders are provided with a similar IResourceDelta hierarchy that describes what has changed since the last time that builder was called. This gives builders a chance to incrementally update the resources they operate on in response to changes made by others. For more details on builders, see the eclipse.org article Project Natures and Builders. The Eclipse workspace provides a powerful suite of tools to allow plug-ins to keep notified and up to date when resources change. By installing a resource change listener, plug-ins are incrementally notified after any set of changes to the workspace, and are supplied with a resource delta tree that describes all the changes that have happened. Resource change listeners can also be notified when projects are about to be deleted or closed, or before and after auto-builds happen. The save participant mechanism can be used for notification about what happened before your plug-in was activated. The builder framework, discussed in more detail in another article, provides a more powerful mechanism for processing changed resources in a project. Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.
http://www.eclipse.org/articles/Article-Resource-deltas/resource-deltas.html
crawl-002
refinedweb
3,428
53.51
csFrameDataHolder< T > Class Template Reference Helper template to retrieve an instance of some type that has not yet been used in a frame. More... #include <cstool/framedataholder.h> Detailed Description template<class T> class csFrameDataHolder< T > Helper template to retrieve an instance of some type that has not yet been used in a frame. Retrieval in subsequent frames will reuse already created instances, if appropriate (that is, the associated frame number differs from the provide current frame number). Definition at line 38 of file framedataholder.h. Member Function Documentation Remove all allocated instances. - Remarks: - Warning! Don't use if pointers etc. to contained data still in use! - By default. does not clear the allocated data instantly but rather upon the next frame (ie when the frameNumber parameter passed to GetUnusedData() differs from the last). To change this behaviour, set instaClear to true. Definition at line 141 of file framedataholder.h. Retrieve an instance of the type T whose associated frame number differs from frameNumber. In created, it is returned whether a new instance was created (value is true) or an existing one was reused (value is false). Can be used to e.g. determine whether some initialization work can be saved. Definition at line 97 of file framedataholder.h. The documentation for this class was generated from the following file: - cstool/framedataholder.h Generated for Crystal Space 2.1 by doxygen 1.6.1
http://www.crystalspace3d.org/docs/online/api/classcsFrameDataHolder.html
CC-MAIN-2016-26
refinedweb
234
58.99
Warning: Cannot install find, don't know what it is. Try the command i /find/ to find objects with matching identifiers. Warning: Cannot install snmp, don't know what it is. Try the command i /snmp/ [download] Cheers, JohnGG /me belatedly hangs up a "Please don't feed the trolls." sign. The cake is a lie. The cake is a lie. The cake is a lie. The cake is a lie. Been playing too much Portal lately? ...so I think people are starting to come around to my way of computer hacking. Methinks you have confused the terms hacking(verb) and hack(noun) Hacking in the 1960s and 1970s was used to describe an individual working with computers who was technically gifted. Many people in the industry still adhere to this definition, though it is not what is generally meant when used in the media or by those outside of the industry. Hack describes an unskilled person or one who lacks technical acumen. It can also be used to describe a particularly ugly piece of code meant to be used as a temporary measure. I realize that they're very similar, so it's an easy mistake to make. Unless of course, you are using the newfangled definition of hacking which refers to computer related crime - in which case is this some kind of admission of guilt? </troll_food> Try the command i /snmp/ [download] For those who have not caught on, Kevin_Raymer is one of our pet trolls. He shows up every now and then just to amuse us. My guess is that *hacking* is just his day job, but he really dreams of making it big as a comedian. Personally, I'd rather have Kevin_Raymer show up from time to time with mildly amusing posts like this than have Cop return (e.g. see Re: Perl is dead). 1. Keep it simple 2. Just remember to pull out 3 in the morning 3. A good puzzle will wake me up Many. I like to torture myself 0. Socks just get in the way Results (289 votes). Check out past polls.
http://www.perlmonks.org/?node_id=654862
CC-MAIN-2016-44
refinedweb
351
75.5
Building SwiftUI Video Game DB App using IGDB Remote API Published at Aug 9, 2019 SwiftUI is a new UI framework introduced by Apple at WWDC 2019 which can be used to build user interface for all Apple platforms from watchOS, tvOS, iOS, and macOS using single unified API and tools. It is using declarative syntax to describe the user interface. The smallest component, View is a protocol to conform for building custom view so it can be reused and composed together to build layout. It has many built in View from List, Stacks, NavigationView, TabView, Form, TextField, and many more that can be used out of the box to create app. One of the coolest feature for developers even UX designers is the new built in Xcode Live design tools that can be used to build live UI as we type the code, the compiler instantly recompiles the application to reflect the change in the UI. It also supports drag and drop of the View component to build your user interface. Preview is also configurable, from fonts, localization, and Dark Mode. Right now, this feature is only supported when you run Xcode 11 on macOS 10.15 Catalina. What we will learn to build In this article, we will be building a real life practical SwiftUI app using IGDB API with the following main features: - Fetch latest games from IGDB API so it can be displayed into a scrollable list. - Display the title, information, and image thumbnail of each game in each row of the list - Navigate to game detail containing the detailed information and screenshoots of the game. We will learn about how SwiftUI handles the flow of data along as we build the app. To be able to use the IGDB API, you need to sign up for the API key for free from the link at IGDB: Free video Game Database API Prerequisites to be able to build and run the app: - Latest installation of Xcode 11 Beta (Beta 5 as of this article is written). - macOS 10.14.6 or latest. I already setup a starter project to download, it has several components: - IGDB-SWIFT-API pod. IGDB wrapper for Swift used to make API request to IGDB API. - GameService/GameStore. The API that we will use to request latest games and game detail using IGDB wrapper. - Game. The model that represent a video game title. - Platform. An enum that represent video game platform such as PS4, Xbox One, and Nintendo Switch. You can download it from the GitHub repository link at alfianlosari/SwiftUI-GameDB-Starter After you download it, navigate to the project directory using your Terminal and run pod install . Open the project xcworkspace, then open the GameStore.swift file and paste your IGDB API key in line 19 inside the $0.userKey variable assignment. That’s all for the setup, let’s begin to build our app! Building Game List View Let’s build our first SwiftUI View, the GameListView. Create a new file called GameListView.swift . In the body, we use NavigationView to embed the List . For starting, we put Text containing Hello List. struct GameListView: View { var body: some View { NavigationView { List { Text("Hello List") } } } } Go to SceneDelegate.swift , update the initialization of UIHostingController by passing the GameListView in the rootView parameter. This will uses GameListView as the root view of the app. Try to build and run the app in simulator to see your SwiftUI app running! ...func scene(_ scene: UIScene, willConnectTo session: UISceneSession, options connectionOptions: UIScene.ConnectionOptions) { ... window.rootViewController = UIHostingController(rootView: GameListView()) ... } Flow of Data in SwiftUI Introduction The flow of data in SwiftUI is different than the flow of data that we usually use in imperative programming (MVC, VIPER, etc). The SwiftUI is more similar to MVVM, which observes states of your data model and update the View whenever the states value is updated. It also uses Bindings for bidirectional communications, for example we can bind the state of variable with boolean value to a Switch control so the whenever user the switch the value of the variable will get updated. SwiftUI uses property wrapper for variable state declaration that will be stored and connected to the SwiftUI View. Whenever those properties get updated, View will update its appearance to match the state of the data model in the variable. Here are several type of basic property wrappers for data available to use in SwiftUI: - @State . Storage of the property will be managed by SwiftUI. Whenever the value updates, the View will update the appearance inside the declaration of body. It’s the source of truth for the data inside a View. - @Binding. A binding property from a state, it can be used to pass value of a state to other View down in the hierarchy using the $ prefix operator. The View that gets the binding will be able to mutate the value and the View that has the state will be updated to reflect the changes in the data. Example, passing down the bind of boolean state to a Switch control that will be updated whenever the user toggles the switch on/off. - @ObservedObject . This is a property that represent the model in the View. The class needs to conform to ObservableObject protocol and invoke objectWillChange whenever the properties are updated. SwiftUI will observes the changes in the model and update the appearance inside the declaration of the body. In Xcode 11 Beta 5, we can declare the properties with @Published property wrapper for the object to magically publish the update to the View. - @EnvironmentObject. It acts just like the @ObservedObject , but in this case we can retrieve the object from the deepest child View up to the top ancestor/root View. Fetching List from IGDB API and Observe with SwiftUI Next, we will fetch the list of games from IGDB API. Let’s create a GameList class that conforms to ObservableObject protocol. We declare several properties such as isLoading for the View to display a loading indicator if the value is true and games array containing the list of games that we will retrieve using IGDB API. Those properties are declared using the @Published property wrapper so it can notify the View when the value is updated. The gameService property will be used to retrieve the game list from IGDB API. Next, we declare a single method called reload that accepts a parameter for the platform that we will use to fetch the video game. It has ps4 as its default value. In the implementation, we set the games array to empty and set isLoading state to true, then invoke the fetchPopularGames method from gameService passing the platform. In the completion handler, we set isLoading to false and check the result enum. If it is success, we retrieve the associated value containing the games and assign it the games instance property. class GameList: ObservableObject { @Published var games: [Game] = [] @Published var isLoading = false var gameService = GameStore.shared func reload(platform: Platform = .ps4) { self.games = [] self.isLoading = true gameService.fetchPopularGames(for: platform) { [weak self] (result) in self?.isLoading = false switch result { case .success(let games): self?.games = games case .failure(let error): print(error.localizedDescription) } } } } Next, let’s integrate the GameList model into the GameListView . We simply add a property using the @ObservedObject property wrapper for the gameList. Inside the Group layout, we check if the gameList isLoading is set to true, then we simply display a Text with loading as the message. Otherwise, we initialize List passing the games array from the gameList . Game model already conforms to Identifiable protocol so it can be used as a diffing whenever the array gets updated. For now, we just display the name of the game using Text in each row of the list. struct GameListView: View { @ObservedObject var gameList: GameList = GameList() var body: some View { NavigationView { Group { if gameList.isLoading { Text("Loading") } else { List(gameList.games) { game in Text(game.name) } } } }.onAppear { self.gameList.reload() } } } To trigger the loading of the API, we use the onAppear modifier in the NavigationView . The closure inside will be invoked, only when the view gets rendered on the screen. Try to build and run the app to see the list of games fetched from the remote API and displayed on the list! Building LoadingView by hosting UIKit UIActivityIndicatorView in SwiftUI As of right now, SwiftUI doesn’t have an activity indicator View that we can use to show loading indicator like the UIActivityIndicatorView in UIKit.We can use UIKit component in our app by conforming to UIViewControllerRepresentable to wrap UIViewController and UIViewRepresentable for UIView. Let’s create LoadingView wrapper for UIActivityIndicatorView. This LoadingView conforms to UIViewRepresentable and need to implement 2 methods. In the makeUIView: , we initialize UIActivityIndicatorView and return it. Then in updateUIView, we start animating the activity indicator. It’s using associatedtype under the hood for the UIView generic placeholder inside the protocol. import SwiftUI import UIKit struct LoadingView: UIViewRepresentable { func makeUIView(context: UIViewRepresentableContext<ProgressView>) -> UIActivityIndicatorView { let activityIndicator = UIActivityIndicatorView(style: .medium) return activityIndicator } func updateUIView(_ uiView: UIActivityIndicatorView, context: UIViewRepresentableContext<ProgressView>) { uiView.startAnimating() } } Just replace the Text with loading message with our new LoadingView to see it is working in perfectly! //... if gameList.isLoading { LoadingView() } else { //... } Fetch Remote Image from URL with SwiftUI Before we start to improve the UI of the row in the List, we need to create an ImageLoader ObservableObject for loading image in the View. It’s has a static NSCache dictionary that we can use to create a simply image caching mechanism using the url string as the key and UIImage as the value. The only @Published property that we use is the image property that exposes the UIImage to be observed. In the downloadImage(url:) method, we retrieve the absolute string of the url, then retrieve the image from the cache using the url if it is exists. If not exists, we just use URLSession to retrieve the image using the url, assign it to the cache, then assign it to the image property. class ImageLoader: ObservableObject { private static let imageCache = NSCache<AnyObject, AnyObject>() @Published var image: UIImage? = nil public func downloadImage(url: URL) { let urlString = url.absoluteString if let imageFromCache = ImageLoader.imageCache.object(forKey: urlString as AnyObject) as? UIImage { self.image = imageFromCache return } URLSession.shared.dataTask(with: url) { (data, res, error) in guard let data = data, let image = UIImage(data: data) else { return } DispatchQueue.main.async { [weak self] in ImageLoader.imageCache.setObject(image, forKey: urlString as AnyObject) self?.image = image } }.resume() } } Building Game Row View Next, we create a GameRowView that will represent each row of the game in our List. It has 2 property, one is the game property and the other one is the ImageLoader property that we will use with @ObserverdObject property wrapper. Inside the body, we use ZStack to overlay the views on top of another. The most bottom view will be the Image, then on top of that we use VStack to stack the Texts vertically. We display the name, release data, and publisher of the game. We assert the ImageLoader if the image is exists, then we display the Image and resize it properly using SwiftUI modifiers. Try to play around with the value of modifiers to customize the View!. At last, we add onAppear modifier in the row, so it will only fetch the image when the row is displayed on the screen. import SwiftUI struct GameRowView: View { var game: Game @ObservedObject var imageLoader: ImageLoader = ImageLoader() var body: some View { ZStack(alignment: .bottomLeading) { if (imageLoader.image != nil) { GeometryReader { geometry in Image(uiImage: self.imageLoader.image!) .resizable(resizingMode: Image.ResizingMode.stretch) .aspectRatio(contentMode: .fill) .frame(maxWidth: geometry.size.width) .clipped() } } VStack(alignment: .leading) { Text(game.name) .font(.headline) .foregroundColor(Color.white) .lineLimit(1) Text(game.releaseDateText) .font(.subheadline) .foregroundColor(Color.white) .lineLimit(2) Text(game.company) .font(.caption) .foregroundColor(Color.white) .lineLimit(1) } .frame(maxWidth: .infinity, alignment: .bottomLeading) .padding(EdgeInsets.init(top: 16, leading: 16, bottom: 16, trailing: 16)) .background(Rectangle().foregroundColor(Color.black).opacity(0.6).blur(radius: 2.5)) } .background(Color.secondary) .cornerRadius(10) .shadow(radius: 20) .padding(EdgeInsets.init(top: 8, leading: 0, bottom: 8, trailing: 0)) .frame(height: 300) .onAppear { if let url = self.game.coverURL { self.imageLoader.downloadImage(url: url) } } } } Replace the Text inside the List with GameRowView passing the game as the parameter for initalizer. //.. List(self.gameList.games) { (game: Game) in GameRowView(game: game) } Finally, remove the GameRowView_Previews right now so our project can be build and run!. We do this because we don’t have a stub game to pass in Debug for the live preview to render. Building and Navigate to Game Detail View Next, we create GameDetailView . This view will display the full image poster of the game, summary, storyline, and other additional information. It will accept the id of the game as the parameter so we can use it to fetch the game metadata from the API when the view get pushed on the navigation stack. For now let’s display the id in the screen with Text . Also, just remove the GameDetailView_Previews for now so it can be build and run. struct GameDetailView: View { var gameId: Int var body: some View { Text(String(gameId)) } } Next, we add NavigationLink and pass the GameDetailView to the destination parameter inside the list in GameListView like so. //... List(self.gameList.games) { (game: Game) in NavigationLink(destination: GameDetailView(gameDetail: GameDetail(gameService: GameStore.shared), gameId: game.id)) { GameRowView(game: game) } } Try to build and run the app, then tap on the row in the list to navigate to GameDetailView successfully 😋! Fetching Game Detail from IGDB API and Observe with SwiftUI Just like the GameListView , we need to fetch the detail of the game from the IGDB API. We create GameDetail object and conforms to ObservableObject protocol. We’ll declare 2 properties using @Published property wrapper, one for loading state and the other one for storing the fetched game. In the reload method, we pass the id of the game as the parameter. We fetch the detail of the game using the GameService object. In the completion, we assign the game property with the game we successfully fetched from the API. class GameDetail: ObservableObject { @Published var game: Game? = nil @Published var isLoading = false var gameService: GameService = GameStore.shared func reload(id: Int) { self.isLoading = true self.gameService.fetchGame(id: id) {[weak self] (result) in self?.isLoading = false switch result { case .success(let game): self?.game = game case .failure(let error): print(error.localizedDescription) } } } } Improving Game Detail View UI Let’s put the GameDetail model in use inside the GameDetailView . We simply declare @ObservedProperty to store the GameDetail , also we need to the ImageLoader as well to load the poster image or the game. struct GameDetailView: View { @ObservedObject var gameDetail = GameDetail() @ObservedObject var imageLoader = ImageLoader() var gameId: Int //... } In the body of the view, we create a List and put the Image on the top followed by a section containing all of the texts for storyline, summary, etc. We only load the image when the game is successfully fetched. You can see all the declarative implementation of the UI from the snippet below. In this snippet you can see that we can create composable small views to build a larger view. The poster view and game section are views. struct GameDetailView: View { @ObservedObject var gameDetail = GameDetail() @ObservedObject var imageLoader = ImageLoader() var gameId: Int var body: some View { Group { if (self.gameDetail.game != nil) { List { PosterView(image: self.imageLoader.image) .onAppear { if let url = self.gameDetail.game?.coverURL { self.imageLoader.downloadImage(url: url) } } GameSectionView(game: self.gameDetail.game!) } } else { ProgressView() } } .edgesIgnoringSafeArea([.top]) .onAppear { self.gameDetail.reload(id: self.gameId) } } } struct PosterView: View { var image: UIImage? var body: some View { ZStack { Rectangle() .foregroundColor(.gray) .aspectRatio(500/750, contentMode: .fit) if (image != nil) { Image(uiImage: self.image!) .resizable() .aspectRatio(500/750, contentMode: .fit) } } } } struct GameSectionView: View { var game: Game var body: some View { Section { Text(game.summary) .font(.body) .lineLimit(nil) if (!game.storyline.isEmpty) { Text(game.storyline) .font(.body) .lineLimit(nil) } Text(game.genreText) .font(.subheadline) Text(game.releaseDateText) .font(.subheadline) Text(game.company) .font(.subheadline) } } } Build and run the app to see the game information in its glory 🎮😁 One more thing! TabView in SwiftUI For the grand finale, we will embed 3 GameListView into a TabView . Each of the view will display games for specific platform such as PS4, Xbox One, and Nintendo Switch. Before we add the view, let’s refactor GameListView to accept platform as the parameter in the initializer. Also in the onAppear , we pass the platform to the GameList reload method so it can fetch the game list for specific platform. struct GameListView: View { @ObservedObject var gameList: GameList = GameList() var platform: Platform = .ps4 var body: some View { NavigationView { Group { if gameList.isLoading { ProgressView() } else { List(self.gameList.games) { (game: Game) in NavigationLink(destination: GameDetailView(gameId: game.id)) { GameRowView(game: game) } } } } .navigationBarTitle(self.platform.description) } .onAppear { if self.gameList.games.isEmpty { self.gameList.reload(platform: self.platform) } } } } We create a GameRootView . In the body implementation, we use TabView , the use ForEach passing all of our platform enum cases , and inside the closure we declare the GameListView assigning the platform to the initializer. We also use modifiers to set the tag, tab item image asset name and text. PS: i already included the asset logo for each platform in xcassets. import SwiftUI struct GameRootView: View { var body: some View { TabView { ForEach(Platform.allCases, id: \.self) { p in GameListView(platform: p).tag(p) .tabItem { Image(p.assetName) Text(p.description) } } } .edgesIgnoringSafeArea(.top) } } Finally, go to SceneDelegate.swift and replace the assignment of root view for UIHostingController with GameRootView. ...func scene(_ scene: UIScene, willConnectTo session: UISceneSession, options connectionOptions: UIScene.ConnectionOptions) { ... window.rootViewController = UIHostingController(rootView: GameRootView()) ... } Build and run your project to see the tab view containing all the platforms! Conclusion That’s it fellow Swifters! our SwiftUI app is finished and ready to use to fetch your latest video games information. SwiftUI really makes building the user interface our app pretty simple and straightforward using it’s declarative syntax. The data is flowing using state, binding, and observable without us need to do it imperatively as we used to be when using UIKit. SwiftUI is also compatible with UIKit component using UIViewRepresentable and UIViewControllerRepresentable, so we can still use the good old UIKit component in our SwiftUI app!. Just one more final thing, i am currently waiting for Dragon Quest XI S, Death Stranding, and Final Fantasy VII Remake to be released 😋. Until then, let’s keep on gaming and keep the lifelong learning goes on!. Happy Swifting!!!. You can download the completed project from the GitHub repository at alfianlosari/SwiftUI-GameDB-Completed
https://www.alfianlosari.com/posts/building-swiftui-gamedb-app-igdbp-api/
CC-MAIN-2021-17
refinedweb
3,111
56.35
This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project. Hey, I'm having a little trouble understanding the conversion gcc is doing in a statement with the following types: int64 = int32 * uint32 This is running on a 64bit machine with gcc (GCC) 4.4.2 20091027 (Red Hat 4.4.2-7) Following simple code snippet: ? 1 #include <stdint.h> ? 2 #include <iostream> ? 3 ? 4 using namespace std; ? 5 ? 6 int main(int argc, char* argv[]) ? 7 { ? 8???? int32_t? neg? = -1; ? 9???? uint32_t mult = 64; ?10 ?11???? int64_t res? = neg * mult; ?12???? int32_t res2 = neg * mult; ?13 ?14???? cout << res << endl << res2 << endl; ?15 ?16???? uint64_t mult64 = 64; ?17 ?18???? res? = neg * mult64; ?19???? res2 = neg * mult64; ?20 ?21???? cout << res << endl << res2 << endl; ?22 ?23???? return 0; ?24 } with the output 4294967232 -64 -64 -64 When compiled it gives an understandable warning main.cpp: In function ‘int main(int, char**)’: main.cpp:19: warning: conversion to ‘int32_t’ from ‘uint64_t’ may alter its value still that operation will generate the output I expect. Is there some strange casting rule that I'm not following properly? cheers, Tony
http://gcc.gnu.org/ml/gcc/2010-01/msg00511.html
CC-MAIN-2015-35
refinedweb
195
78.85
# Clickhouse next to Zabbix or how to collect logs next to monitoring If you use Zabbix to monitor your infrastructure objects but have not previously thought about collecting and storing logs from these objects then this article is for you. ![](https://habrastorage.org/r/w1560/getpro/habr/upload_files/08f/303/552/08f303552c57104a9fac02d6a97fc352.png)### Why collect logs and store them? Who needs it? Logging or, as they say, recording logs allows you to give answers to questions, what? where? and under what circumstances happened in your IT environment several hours, days, months, even years ago. A constantly appearing error is difficult to localize, even knowing when and under what circumstances it manifests itself, without having logs in hand. In the logs not only generic information about the occurrence of certain errors is written but also more detailed information about the errors that helps to understand the causes of these occurrences.  After you have launched your product and expect results from it, you need to constantly monitor the operation of the product in order to prevent failures before users can see them.  And how to analyze causes of failures of a server whose logs are no longer available (an intruder covering up the traces or any other force majeure circumstances)? - Only centralized storage of logs will help in this case.  If you are a software developer and for debugging you have to “walk” through and look at all the logs on the server then convenient viewing, analysis, filtering and search of logs in the web interface will simplify your actions and you will be able to concentrate more on the task of debugging your code. Analysis of logs helps to identify conflicts in the configuration of services, to determine the source of occurrence of errors, as well as to detect information security alerts.  Together with monitoring services, logging helps to significantly save engineers' time when investigating certain incidents.  ### Great variety of solutions for recording logs The advantages and disadvantages of popular solutions for collecting logs have already been described [elsewhere (in russian)](https://habr.com/ru/company/itsoft/blog/556194/). Our colleagues analyzed systems such as Splunk, Graylog, and ELK. They revealed good and not so good (of course, for themselves) sides in the presented solutions. To summarize their experience: * ELK turned out to be too cumbersome, resource-intensive, and very difficult to configure on large infrastructures. * Splunk lost users’ trust by simply leaving the Russian market without giving any reason. Splunk partners and customers in Russia had to find new solutions after the expiration of their quite expensive Splunk licenses. * The authors opted for Graylog and in their article give examples of installing and configuring this product. The article has not described a number of systems such as Humio, Loki, and Monq. But since Humio is a paid SaaS service, and Loki is a very simple log aggregator without parsing, enrichment and other useful features, we will consider only Monq. Monq is a whole set of components for support of IT infrastructure, which includes AIOps, automation, hybrid and umbrella monitoring, but today we are only interested in Monq Collector which is a free tool for collecting and analyzing logs. ### Logs plus infrastructure monitoring One of the features of this tool for working with logs is the ability to process the incoming data stream on the preprocessor using scripts and parsers that you create yourself, in the built-in Lua IDE. Another distinguishing feature is the product architecture. The product is built upon microservices. And the storage of those very logs is implemented in the columnar analytical DBMS ClickHouse. The key advantage of ClickHouse is the high speed of execution of SQL read queries (OLAP script), which is provided due to the following distinctive features: * vector computing, * parallelization of operations, * column data storage, * support for approximate calculations, * physical sorting of data by primary key. But one of the most important features of ClickHouse is a very efficient saving of storage space. The average compression ratio is 1:20 which is a very good number.  If you paid attention to the title of this article you would notice that initially we wanted to tell you what is good about using monq in companies where Zabbix is used as an infrastructure monitoring system.  Firstly, the product for collecting and analyzing logs is provided absolutely free of charge and without any restrictions on traffic and time.  Secondly, the monq Collector includes a built-in connector with Zabbix, which allows you to receive all events from Zabbix triggers and then view them on the same screen with the logs. To receive events from Zabbix and other systems such as Nagios, Prometheus, SCOM, Ntopng, system handlers are provided with the ability to customize the code or write your own handlers.  ![](https://habrastorage.org/r/w1560/getpro/habr/upload_files/3e8/d92/75d/3e8d9275d3b19fc4974758162bd6ca48.png)For example, you can convert the date format to your liking: ``` function is_array(t) local i = 0 for _ in pairs(t) do i = i + 1 if t[i] == nil then return false end end return true end function convert_date_time(date_string) local pattern = "(%d+)%-(%d+)%-(%d+)(%a)(%d+)%:(%d+)%:([%d%.]+)([Z%p])(%d*)%:?(%d*)"; local xyear, xmonth, xday, xdelimit, xhour, xminute, xseconds, xoffset, xoffsethour, xoffsetmin xyear, xmonth, xday, xdelimit, xhour, xminute, xseconds, xoffset, xoffsethour, xoffsetmin = string.match(date_string,pattern) return string.format("%s-%s-%s%s%s:%s:%s%s", xyear, xmonth, xday, xdelimit, xhour, xminute, string.sub(xseconds, 1, 8), xoffset) end function alerts_parse(result_alerts, source_json) for key, alert in pairs(source_json.alerts) do alert["startsAt"]=convert_date_time(alert["startsAt"]) alert["endsAt"]=convert_date_time(alert["endsAt"]) result_alerts[#result_alerts+1]=alert end end local sources_json = json.decode(source) result_alerts = {}; if (is_array(sources_json)) then for key, source_json in pairs(sources_json) do alerts_parse(result_alerts, source_json) end else alerts_parse(result_alerts, sources_json) end next_step(json.encode(result_alerts)) ``` Thirdly, if you install the trial version or the paid add-on monq AIOps, then a number of native features connecting Zabbix and monq would appear that allow you to: * bind Zabbix nodes and triggers to resource-service model in monq, * automatically create in monq correlation rules (synthetic triggers) with preset filters and rules for events coming from Zabbix, * work with low level discovery (LLD) in Zabbix. *Synthetic triggers allow you to handle any event that goes into the monq monitoring system. It is possible to create triggers from templates or customize them from scratch using scripts written in the Lua language in the IDE editor built into the web interface.* Using the Zabbix connector in monq, you can import the entire infrastructure and build a resource-service model of your whole information system. This is convenient when you need to quickly localize the problem and find the cause of the incident. ![](https://habrastorage.org/r/w1560/getpro/habr/upload_files/979/a88/afd/979a88afd09eabe37a8b5bbc1c321c5b.png)The question is brewing, is it possible to differentiate access to logs to certain categories of persons? For example, so that information security logs are available only to sysadmins? The answer is yes, it’s possible. To do this monq Collector has a very flexible role model for workgroups built in. System users can simultaneously have different roles in different workgroups. When connecting a data stream one can allow read or write access to certain roles and assign those roles to users.  ![](https://habrastorage.org/r/w1560/getpro/habr/upload_files/07e/02a/cc3/07e02acc366ae9239dc231a9a70b2a39.png)How does this help in collecting logs? -------------------------------------- Let's try to describe a small fictional use case. We have a cluster running Kubernetes. There are several rented virtual machines. All nodes are monitored using Zabbix. And there is an authorization service which is very important for us since several corporate services work through it. Zabbix periodically reports that the authorization service is unavailable, then it is available again in a minute or two. We want to find the reason for the drop in service and not flinch from every alert sound from the phone. Zabbix is ​​an excellent monitoring tool but it is not enough for investigating incidents. Errors can often be found in the logs. And for this we will collect and analyze logs with the monq Collector. *So, in order:* 1. We configure collection of logs from the nginx-ingress controller which is located in the cluster running Kubernetes. In this case, we will transfer logs to the collector using the fluent-bit utility. Here are the listings of configuration files:  ``` [INPUT] Name tail Tag nginx-ingress.* Path /var/log/containers/nginx*.log Parser docker DB /var/log/flb_kube.db Mem_Buf_Limit 10MB Skip_Long_Lines On Refresh_Interval 10 ``` Let's prepare parsers for nginx to convert raw data to JSON: ``` [PARSER] Name nginx Format regex Regex ^(?[^ ]\*) (?[^ ]\*) (?[^ ]\*) \[(?[^\]]\*)\] "(?\S+)(?: +(?[^\"]\*?)(?: +\S\*)?)?" (?`[^ ]*) (?[^ ]\*)(?: "(?< Time\_Key time Time\_Format %d/%b/%Y:%H:%M:%S %z .... [PARSER] Name nginx-upstream Format regex Regex ^(?.\*) \- \[(?.\*)\] \- (?.\*) \[(?[^\]]\*)\] "(?:(?\S+[^\"])(?: +(?[^\"]\*?)(?: +(?\S\*))?)?)?" (?`[^ Time_Key time Time_Format %d/%b/%Y:%H:%M:%S %z [PARSER] Name nginx-error Format regex Regex ^(?\d{4}\/\d{2}\/\d{2} \d{2}:\d{2}:\d{2}) \[(?\w+)\] (?\d+).(?\d+): (?.\*), client: (?.\*), server: (?.\*) Time\_Key time Time\_Format %Y/%m/%d %H:%M:%S`` ``` 2. We also send logs to the collector using fluent-bit (monk collector will have its own extractor in August, I will write about it later)  ``` [OUTPUT] Name http Host monq.example.ru Match * URI /api/public/cl/v1/stream-data Header x-smon-stream-key 06697f2c-2d23-45eb-b067-aeb49ff7591d Header Content-Type application/x-ndjson Format json_lines Json_date_key @timestamp Json_date_format iso8601 allow_duplicated_headers false ``` 3. Infrastructure objects are monitored in Zabbix with parallel streaming of logs in monq.  4. Some time passes and an event from Zabbix comes to monq stating that the authentication service is not available.  5. Why? What happened? We open the monq primary events screen, and there is already a flurry of events from nginx-ingress-controller: ![](https://habrastorage.org/r/w1560/getpro/habr/upload_files/c2d/f5f/2de/c2df5f2de25de5d23498120466b74eb4.png)6. Using the analytical tools of the primary screen, we find out that we have a 503 error - in 30 minutes there are already 598 records with this errors, or 31.31% of the total number of logs from the nginx-ingress controller: ![](https://habrastorage.org/r/w1560/getpro/habr/upload_files/031/f8b/de9/031f8bde9673c7035235491cbbf596aa.png)7. From the same window we can filter events by clicking on the error value and selecting “add to filter”. As a result, “the source.code = 503” parameter will be added to the filter panel. Excessive log data will be removed from the screen. 8. We turn to the analysis of structured logs and see at which URL the error is returned, understand which microservice is failing and quickly solve the problem by recreating the container: ![](https://habrastorage.org/r/w1560/getpro/habr/upload_files/7e9/a96/ade/7e9a96adefaf7a17173b01f67c71897c.png)9. For the future, we start collecting logs from this container in monq, and next time we know much more information about the problem that can be passed on to developers to fix the code.  The use case that I gave above is more of a demonstration of general principles. In life, things are often not so obvious. Here’s an example from our practice. Year 2019. We were developing a system in which there are 50+ containers. In each release, our release master spent a lot of time figuring out why inter-microservice connections could break. Moving between microservices and logs, each time typing *kubectl get log*, in one window, the same thing in the second window, then in the third. And you also needed to tail the web server logs. Then you had to remember what was in the first window, etc. Also the team was growing, did you have to give everyone access via ssh to the server? No, you opted for an aggregator. We set up sending logs to ELK, collected everything that was needed, and then something happened. In ELK, each event source uses its own index and this is very inconvenient. The team kept growing, outsourcers appeared, we realized that we need to differentiate the rights of working groups. The number of logs began to grow strongly, some sort of clustering of the database by its own means began to be needed, without the use of a tambourine, a highload was needed. In ELK, for all these functions you have to pay. Instead we decided to switch to the monq Collector for collecting logs. Year 2021. A regular release, everything rolled out, Kuber launched containers, Zabbix sends alarms that authorization does not work. We go to the monq Collector screen, parse the fields, and see where the problem is. We see from the logs that the problem started at a certain time, after the rollout of a particular container. The problem was solved quickly and easily, and this is on a volume of more than 200 GB of logs per day with more than 30 people working with the storage system.
https://habr.com/ru/post/572356/
null
null
2,148
53.1
$ cnpm install @oerlikon/breadcrumbs Provides simple react components to control the breadcrumb functionality of the app frame. Disclaimer: This module depends on react-router-dom or @oerlikon/routing. If you indent to use a different routing system in your app, this module is not for you. Ensure the <BreadcrumbProvider> is placed high in your rendering tree, but insinde a routing provider ( <BrowserRouter> from react-router-dom or <Router /> from @oerlikon/routing). sort: function to sort the breadcrumbs (see gotcha). By default the BreadcrumbProvider sorts by path prefix and length, this behaviour can be changes by providing a different sort function or be deactivated by setting this prop to false Example with react-router-dom: import { BrowserRouter } from 'react-router-dom'; import { render } from 'react-dom'; import { BreadcrumbProvider } from '@oerlikon/breadcrumbs'; import App from './App'; render( <BrowserRouter> <BreadcrumbProvider> <App /> </BreadcrumbProvider> </BrowserRouter>, document.getElementById('root') ); deepLink: boolean to activate / deactive the automated deep link integration. Defaults to true. deepLinkPrefix: prefix for every generated deep link label. Defaults to an empty string. <Breadcrumb>component Use the <Breadcrumb> component by providing a path and optionally deepLink as props and a label as children. By setting deepLink to false, the deep link integration for this breadcrumb will be disabled. If you provide a string, that value will be used as the deep link label (defaults to children). import React from 'react'; import { Breadcrumb } from '@oerlikon/breadcrumbs'; export const MyOverviewPage = () => { return ( <> <Breadcrumb path="/overview">Overview</Breadcrumb> <h1>This is my overview</h1> </> ); }; useBreadcrumb()hook Use the useBreadcrumb() hook by providing a path, a label and optionally a deepLink flag as parameters. import React from 'react'; import { useBreadcrumb } from '@oerlikon/breadcrumbs'; export const MyOverviewPage = () => { useBreadcrumb({ path: '/overview', label: 'Overview', deepLink: false }); return <h1>This is my overview</h1>; }; Because react components can re-render a lot during the lifetime of your app, the breadcrumbs can end up in an unpredictable order when they get pushed to or popped from the "global" breadcrumbs array. Thats why the BreadcrumbProvider uses a sort algorithm that expect your paths to be created hierarchically. Lets assume you have a root page, a list page and a single view. The BreadcrumbProviders expect the following path hierarchy: /root /root/list /root/list/singleId Even when the list view re-renders and the list view paths gets pushed to the end of the breadcrumb array, the correct order is preserved. This should work for the most projects as it is a common schema. If you have a different approach on creating paths, you can provide your own sort function to the Provider.
https://developer.aliyun.com/mirror/npm/package/@oerlikon/breadcrumbs
CC-MAIN-2020-34
refinedweb
427
52.29
In this tutorial we will check how to use strings in our protobuf messages, using Nanopb and the Arduino core, running both on the ESP32 and on the ESP8266. Please check the previous tutorial for a detailed explanation on how to get started with Nanopb. As can be seen here, when defining the message type in the .proto file, string is one of the data types we can use. Nonetheless, when defining the message structure in the .proto file, we cannot specify the maximum length of the string. Although Nanopb can work with this scenario, the generated data structure after compiling the .proto file is more complex. So, we will check how we can specify a maximum length for a string field, so the generated struct attribute will be a simple char array.. We will define our message in a .proto file, like we did in the previous tutorial. After that, we will use the protoc compiler to generate the data structures that will be used by our code. Please refer to that tutorial for a detailed explanation on how to use protoc to generate the data structures needed, since in this section we will only cover the definition of the message type. So, we need to start out .proto file by specifying the syntax version we are using. We are going to use version 2. syntax = "proto2"; Then, we are going to define the message type. We will name it StringMessage. message StringMessage { //body of message } We are going to start by declaring an integer field called integer, which we will set as required. We will assign to it the unique number 1. required int32 integer = 1; Then, we will declare a string field called message, which will be also set to required. We will assign to it the unique number 2. required string message = 2; The final .proto file can be seen below. syntax = "proto2"; message StringMessage { required int32 integer = 1; required string message = 2; } We will call our file StringMessage.proto. Note that the name of the file will be the same name of the .h and the .c files that are generated after the protoc compilation. But before we proceed with the compilation, we still need to specify the maximum length of our string field. To do it, we simply need to create a .options file with the same name as our .proto file. So, the file should be called StringMessage.options. Inside that file, we simply need to specify the maximum size of our field, with the following syntax: #messageName#.#fieldName# max_size:#maximumSize# We will assume a maximum length of 128 characters. So, the file content should be: StringMessage.message max_size:128 Note that the .options file should be located in the same folder where the .proto file is. After finishing the content of both files, simply run the protoc compiler and you should obtain a StringMessage.pb.h file and a StringMessage.pb.c file. If you open the .h file, you should see the C struct representing the message we defined in our .proto. As shown in figure 1 below, the struct contains both the integer and the string fields and, in the case of the string, it corresponds to a char array with a length of 128 characters, like we defined in our .options file. Figure 1 – Generated header file after .protoc compilation. In order to be able to import the generated fields in the Arduino code, navigate to the folder where the Arduino sketch we are going to write is located, create there a folder called src and paste the two files inside the created folder. We will start our code by the includes. The first thing we need to include is the .h file that was generated from the compilation of the .proto file, since it has the definition of the struct that represents the message type we have created. As mentioned before, the .h and the .c files generated by the protoc should be in a src folder inside the Arduino sketch, so we need to consider that location in our include. #include "src/StringMessage.pb.h" Next, we need to Nanopb library includes, more precisely the pb_common.h, the pb.h and the pb_encode.h. #include "pb_common.h" #include "pb.h" #include "pb_encode.h" Moving on to the setup function, we will open a serial connection, so we can later output our serialized message. Serial.begin(115200); Then, we will need to declare a buffer that will hold the bytes of our serialized message. We will not worry about calculating the maximum length the message can have, so we will use a size of 200 bytes, which is more than enough to accommodate the message. uint8_t buffer[200]; Next, we will declare the message data structure and initialize it with the define that is generated automatically by Nanopb when compiling the .proto file. This will allow to set the members of the structure to their default values without needing to worry about setting each individual value. As covered in the previous tutorial, the syntax of the generated define is the following, where #MessageName# is the name we defined for our message type: #MessageName#_init_zero So, in our case, since our message type is named StringMessage, we have the following initialization: StringMessage message = StringMessage_init_zero; Next, we need to create the output stream for writing into our memory buffer. Recall from the previous tutorial that Nanopb uses the concept of streams to access data in encoded format. So, we simply need to call the pb_ostream_from_buffer function, which receives as first input our memory buffer and as second the size of the memory buffer. As output, this function returns a struct of type pb_ostream_t, which we will store in a variable. pb_ostream_t stream = pb_ostream_from_buffer(buffer, sizeof(buffer)); Now we will set the values of the fields of the message. Recall from the .proto definition that we had a field called integer, which is an int, and a field called message, which is a string. For the integer field, we will assign it the value 10. For the message field, we will set it to a “Hello World” string. We will use the strcpy function to copy the “Hello World” string to the message field, which is a char array. message.integer = 10; strcpy(message.message, "Hello World!"); Now that we have set all the fields of our message, we will encode it to the binary format using the pb_encode function. This function receives as first input the address of our pb_ostream_t structure, as second the auto-generated fields description array, and as third the address of our StringMessage struct. Since this method returns a Boolean value indicating if the serialization process occurred successfully, we will use it for error checking. bool status = pb_encode(&stream, StringMessage_fields, &message); if (!status) { Serial.println("Failed to encode"); return; } To finalize, we will print the total number of bytes written, which can be obtained from the bytes_written field of the pb_ostream_t structure. Followed by that, we will print the bytes of the serialized message in hexadecimal format. Serial.println(stream.bytes_written); for(int i = 0; i<stream.bytes_written; i++){ Serial.printf("%02X",buffer[i]); } The final code can be seen below. #include "src/StringMessage.pb.h" #include "pb_common.h" #include "pb.h" #include "pb_encode.h" void setup() { Serial.begin(115200); uint8_t buffer[200]; StringMessage message = StringMessage_init_zero; pb_ostream_t stream = pb_ostream_from_buffer(buffer, sizeof(buffer)); message.integer = 10; strcpy(message.message, "Hello World!"); bool status = pb_encode(&stream, StringMessage_fields, &message); if (!status) { Serial.println("Failed to encode"); return; } Serial.print("Message Length: "); Serial.println(stream.bytes_written); Serial.print("Message: "); for(int i = 0; i<stream.bytes_written; i++){ Serial.printf("%02X",buffer[i]); } } void loop() {} To test the code, simply compile it and upload it to your device. When the procedure finishes, open the Arduino IDE serial monitor. You should get an output similar to figure 2, which shows the message in hexadecimal format and the total number of bytes written. Figure 2 – Output of the program, showing the serialized message in hexadecimal format. Next, simply copy the hexadecimal representation of the message and paste it in this online tool. After decoding it, you should get an output similar to figure 3, which shows the values of the two fields, which match the ones we have assigned in the Arduino code. Figure 3 – Deconding the message using an online tool.
https://www.dfrobot.com/blog-1162.html
CC-MAIN-2019-26
refinedweb
1,399
66.74
On Thu, 2012-03-29 at 11:10 +0100, mark florisson wrote: Thanks for CCing me; various comments inline below throughout. >: [...snip example sources...] > >> We run gcc -fplugin=./python.so -fplugin-arg-python-script=walk.py test.c FWIW, the plugin has a helper script, so that you ought to be able to simply run: ./gcc-with-python walk.py test.c (paths permitting) My primary use-case for the plugin is my libcpychecker code which implements static analysis of refcount-handling, and for that I have another helper script "gcc-with-cpychecker" that invokes my code so that you can simply run: ./gcc-with-cpychecker -I/usr/include/python2.7 test.c So you might want to do something similar for the pxd generation. [...snip sample output...] > >. Very much so - thanks! (Hi everyone!) FWIW, I happened to see Dag's earlier email via a google search, and added the Cython idea to the list of "Ideas for using the GCC plugin" here: > I think the current GCC plugin support doesn't allow you to do much > with te preprocessor, it operates entirely after the C preprocessor > has run. So far, yes. I haven't explored GCC's C frontend as much as I have the stages that follow. The C preprocessor does run in-process; I don't know yet to what extent it's amenable to hacking via a GCC plugin. I believe that aspects of its integration may have been rewritten somewhat in GCC 4.7 (some of my colleagues tried to improve the line-numbering capture in the presence of macros). > So to support macros we have to consider that for this to > work the gcc plugin may have to be extended, which uses C to extend > GCC and Python, so it also requires knowledge of the CPython C API. Yes; I'd expect you to have to go digging into the guts of the GCC C preprocessor implementation, using GDB. I don't know yet how feasible it is to get at the data from a plugin: it might be anywhere from "easy" to "impossible". You might need to get a patch into GCC to expose the necessary information (if so, that would probably be worthy of a GSoC slot, I think). One issue is that although GCC has an API for plugins to use to register themselves, it doesn't yet have an official API for plugins to use for doing anything else, so we're somewhat at the mercy of future GCC developments (hopefully Python will make it easier to survive future internal interface changes though). BTW, the Python plugin's API isn't 100% frozen yet: I still reserve the right to tweak things if appropriate (I've only done this occasionally though, and I've gone through all the code I know of when I do to doublecheck if I'm about to break something). > David, would you mind elaborating why C was used for this project and > not (partially) Cython, and would it be possible to extend the plugin > with Cython? I did initial try using Cython: see early commits here:;a=commitdiff;h=4d62721d519008c325d7369f1330dc09080c0b51;a=commitdiff;h=9b5145955c823453404c49e4b295e8c739c5ff44 but GCC internals are just too, err, "baroque" (that's a euphemism): it makes very heavy use of the C preprocessor (e.g. *all* field accesses go through an access macro; there are garbage-collection annotations thoughout); many of the types are declared by repeatedly #include-ing .def files using macro definitions to expand the contents in a variety of ways. > >. [...snip...] > >.) I downloaded Philip's script from It's running immediately before "free_lang_data", which is the first interprocedural "whole-file" optimization pass, after some per-function passes have been run. You can see a map of the passes here: [See also for notes on how the sample code I showed Dag at PyCon works] So my guess is that this code can be run for *all* languages that GCC can handle: all of the language frontends feed in data near the top of that map: so in theory this ought to work for Fortran, C++, Go, etc. Having said that, I've been trying to get my libcpychecker code running on C++ and I keep running into subtle difference in the exact data they generate: e.g. the C++ frontend seems to add Nop statements for empty functions, whereas the C frontend doesn't; type declarations get hidden inside namespace objects in the C++ frontend; etc etc. BTW, some stylistic nits on Philip's script: * don't match types based on strings: c.f.: if T == "<type 'gcc.FunctionDecl'>": instead, use isinstance: if isinstance(decl.type, gcc.FunctionDecl) so that you're not relying on repr() or str(), and so you match subclasses, not just one class * "decl_location_get_file (decl)" jumps through lots of hoops to get at the filename of a decl.location by parsing the repr(). But you can simply look at the decl.location.file attribute: * similar considerations apply to decl_identifier_node_to_string(); have a look at the dir() of the object (and if something is not documented, file a bug, or a patch!). Hope this is helpful; good luck! Dave
https://mail.python.org/pipermail/cython-devel/2012-March/002160.html
CC-MAIN-2020-50
refinedweb
861
57.71
The. Archive and Compression Packages Go can read and write tarballs and .zip files. The relevant packages are archive/tar and archive/zip; and for compressed tarballs, compress/gzip and compress/bzip2. Other compression formats are also supported; for example, Lempel-Ziv-Welch ( compress/lzw), which is used for TIFF images and PDF files. Bytes and String-Related Packages The bytes and strings packages have many functions in common, but the former operates on []byte values and the latter on string values. The strings package provides all the most useful utilities to find substrings, replace substrings, split strings, trim strings, and change case. The strconv package provides conversions from numbers and Booleans to strings and vice versa. The fmt package provides a variety of extremely useful print and scan functions, which were shown in the first and second installments of this series. The unicode package provides functions for determining character properties, such as whether a character is printable, or whether it is a digit. The unicode/utf8 and unicode/utf16 packages provide functions for decoding and encoding runes (that is, Unicode code points/characters). The text/template and html/template packages can be used to create templates that can then be used to generate textual output (such as HTML), based on data that is fed into them. Here is a tiny and very simple example of the text/template package in use. type GiniIndex struct { Country string Index float64 } gini := []GiniIndex{{"Japan", 54.7}, {"China", 55.0}, {"U.S.A.", 80.1}} giniTable := template.New("giniTable") giniTable.Parse( '<TABLE>' + '{{range .}}' + '{{printf "<TR><TD>%s</TD><TD>%.1f%%</TD></TR>"' + '.Country .Index}}'+ '{{end}}' + '</TABLE>') err := giniTable.Execute(os.Stdout, gini) This outputs <TABLE> <TR><TD>Japan</TD><TD>54.7%</TD></TR> <TR><TD>China</TD><TD>55.0%</TD></TR> <TR><TD>U.S.A.</TD><TD>80.1%</TD></TR> </TABLE> The template.New() function creates a new *template.Template with the given name. Template names are useful to identify templates that are, in effect, nested inside other templates. The template.Template.Parse() function parses a template (typically from an .html file), which is then ready for use. The template.Template.Execute() function executes the template sending the resultant output to the given io.Writer, and reading the data that should be used to populate the template from its second argument. In this example, I have output to os.Stdout and passed the gini slice of GiniIndex structs as the data. (I've have split the output over several lines to make it clearer.) Inside a template, actions are enclosed in double braces ( {{ and }}). The {{range}} … {{end}} action can be used to iterate over every item in a slice. Here, I have set each GiniIndex in the slice to the dot ( .); that is, to be the current item. We can access a struct's exported fields using their names, preceded, of course, with the dot to signify the current item. The {{printf}} action works just like the fmt.Printf() function, but with spaces replacing the parentheses and argument-separating commas. The text/template and html/template packages support a sophisticated templating language in their own right, with many actions, including iteration and conditional branching, support for variables and method calls, and much else besides. In addition, the html/template package is safe against code injection. Collections Packages Slices are the most efficient collection type provided by Go, but sometimes it is useful or necessary to use a more specialized collection type. For many situations, the built-in map type is sufficient, but the Go standard library also provides the container package, which contains various collection packages. The container/heap package provides functions for manipulating a heap, where the heap must be a value of a custom type that satisfies the heap.Interface defined in the heap package. A heap (strictly speaking, a min-heap) maintains its values in an order such that the first element is always the smallest (or largest for a max-heap) this is known as the heap property. The heap.Interface embeds the sort.Interface and adds Push() and Pop() methods. It is easy to create a simple custom heap type that satisfies the heap.Interface. Here is an example of such a heap in use. ints := &IntHeap{5, 1, 6, 7, 9, 8, 2, 4} heap.Init(ints) // Heapify ints.Push(9) // IntHeap.Push() doesn't preserve the heap property ints.Push(7) ints.Push(3) heap.Init(ints) // Must reheapify after heap-breaking changes for ints.Len() > 0 { fmt.Printf("%v ", heap.Pop(ints)) } fmt.Println() // prints: 1 2 3 4 5 6 7 7 8 9 9 Here is the complete custom heap implementation. type IntHeap []int func (ints *IntHeap) Less(i, j int) bool { return (*ints)[i] < (*ints)[j] } func (ints *IntHeap) Swap(i, j int) { (*ints)[i], (*ints)[j] = (*ints)[j], (*ints)[i] } func (ints *IntHeap) Len() int { return len(*ints) } func (ints *IntHeap) Pop() interface{} { x := (*ints)[ints.Len()-1] *ints = (*ints)[:ints.Len()-1] return x } func (ints *IntHeap) Push(x interface{}) { *ints = append(*ints, x.(int)) } This implementation is sufficient for many situations. We could make the code slightly nicer to read by specifying the type as type IntHeap struct { ints []int }, since then we could refer to ints.ints rather than *ints inside the methods. The container/list package provides a doubly linked list. Items added to the list are added as interface{} values. Items retrieved from the list have type list.Element, with the original value accessible as list.Element.Value. items := list.New() for _, x := range strings.Split("ABCDEFGH", "") { items.PushFront(x) } items.PushBack(9) for element := items.Front(); element != nil; element = element.Next() { switch value := element.Value.(type) { case string: fmt.Printf("%s ", value) case int: fmt.Printf("%d ", value) } } fmt.Println() // prints: H G F E D B A 9 In this example, we push eight single-letter strings onto the front of a new list and then push an int onto the end. We then iterate over the list's elements and print each element's value. We don't really need the type switch because we could have printed using fmt.Printf("%v ", element.Value), but if we weren't merely printing, we'd need the type switch if the list contained elements of different types. Of course, if all the elements had the same type, we could use a type assertion for example, element.Value.(string) for string elements. In addition to the methods shown in the aforementioned snippet, the list.List type provides many other methods, including Back(), Init() (to clear the list), InsertAfter(), InsertBefore(), Len(), MoveToBack(), MoveToFront(), PushBackList() (to push one list onto the end of another), and Remove(). The standard library also provides the container/ring package, which implements a circular list. While all the collection types hold their data in memory, Go also has a database/sql package that provides a generic interface for SQL databases. To work with actual databases, separate database-specific driver packages must be installed. These, along with many other collection packages, are available from the Go Dashboard.
http://www.drdobbs.com/web-development/a-brief-tour-of-the-go-standard-library/240006639
CC-MAIN-2017-22
refinedweb
1,188
58.58
How can I round values like this: 1.1 => 1 1.5 => 2 1.9 => 2 Math.Ceiling() is not helping me. Any ideas? See the official documentation for more. For example: Basically you give the Math.Round method three parameters. - The value you want to round. - The number of decimals you want to keep after the value. - An optional parameter you can invoke to use AwayFromZero rounding. (ignored unless rounding is ambiguous, e.g. 1.5) Sample code: var roundedA = Math.Round(1.1, 0); // Output: 1 var roundedB = Math.Round(1.5, 0, MidpointRounding.AwayFromZero); // Output: 2 var roundedC = Math.Round(1.9, 0); // Output: 2 var roundedD = Math.Round(2.5, 0); // Output: 2 var roundedE = Math.Round(2.5, 0, MidpointRounding.AwayFromZero); // Output: 3 var roundedF = Math.Round(3.49, 0, MidpointRounding.AwayFromZero); // Output: 3 You need MidpointRounding.AwayFromZero if you want a .5 value to be rounded up. Unfortunately this isn’t the default behavior for Math.Round(). If using MidpointRounding.ToEven (the default) the value is rounded to the nearest even number ( 1.5 is rounded to 2, but 2.5 is also rounded to 2). Math.Ceiling always rounds up (towards the ceiling) Math.Floor always rounds down (towards to floor) what you are after is simply Math.Round which rounds as per this post there’s this manual, and kinda cute way too: double d1 = 1.1; double d2 = 1.5; double d3 = 1.9; int i1 = (int)(d1 + 0.5); int i2 = (int)(d2 + 0.5); int i3 = (int)(d3 + 0.5); simply add 0.5 to any number, and cast it to int (or floor it) and it will be mathematically correctly rounded 😀 You need Math.Round, not Math.Ceiling. Ceiling always “rounds” up, while Round rounds up or down depending on the value after the decimal point. Just a reminder. Beware for double. Math.Round(0.3 / 0.2 ) result in 1, because in double 0.3 / 0.2 = 1.49999999 Math.Round( 1.5 ) = 2 You can use Math.Round as others have suggested (recommended), or you could add 0.5 and cast to an int (which will drop the decimal part). double value = 1.1; int roundedValue = (int)(value + 0.5); // equals 1 double value2 = 1.5; int roundedValue2 = (int)(value2 + 0.5); // equals 2 You have the Math.Round function that does exactly what you want. Math.Round(1.1) results with 1 Math.Round(1.8) will result with 2.... and so one. Use Math.Round: double roundedValue = Math.Round(value, 0) this will round up to the nearest 5 or not change if it already is divisible by 5 public static double R(double x) { // markup to nearest 5 return (((int)(x / 5)) * 5) + ((x % 5) > 0 ? 5 : 0); } I was looking for this, but my example was to take a number, such as 4.2769 and drop it in a span as just 4.3. Not exactly the same, but if this helps: Model.Statistics.AverageReview <= it's just a double from the model Then: @Model.Statistics.AverageReview.ToString("n1") <=gives me 4.3 @Model.Statistics.AverageReview.ToString("n2") <=gives me 4.28 etc… If your working with integers rather than floating point numbers, here is the way. #define ROUNDED_FRACTION(numr,denr) ((numr/denr)+(((numr%denr)<(denr/2))?0:1)) Here both “numr” and “denr” are unsigned integers. var roundedVal = Math.Round(2.5, 0); It will give result: var roundedVal = 3 Using Math.Round(number) rounds to the nearest whole number. Write your own round method. Something like, function round(x) rx = Math.ceil(x) if (rx - x <= .000001) return int(rx) else return int(x) end decimal RoundTotal = Total - (int)Total; if ((double)RoundTotal <= .50) Total = (int)Total; else Total = (int)Total + 1; lblTotal.Text = Total.ToString();
https://exceptionshub.com/how-to-round-to-the-nearest-whole-number-in-c.html
CC-MAIN-2021-21
refinedweb
636
64.47
For example I have array x x = [10, 100, 1000, 10000] Assuming the input as sorted, we can use np.searchsorted to get the index where 600 could be placed in that sorted order and then simply use the index and one-off shifted index to get the lower, upper limits upon indexing, like so - idx = np.searchsorted(x,600) out = x[idx-1], x[idx] Sample run - In [41]: x = [10, 100, 1000, 10000] In [42]: idx = np.searchsorted(x,600) In [44]: x[idx-1], x[idx] Out[44]: (100, 1000) We can also use bisect module, which I believe could be a bit faster - import bisect idx = bisect.bisect_left(x,600)
https://codedump.io/share/faxnIrYxLJ8w/1/given-a-value-find-the-upper-and-lower-values-in-sorted-array
CC-MAIN-2018-47
refinedweb
114
64.04
very simple program crashes in SDL event managerPosted Sunday, 20 April, 2014 - 21:47 by zacaj in I installed OpenTK just yesterday, and was following-..., although the 'get started in 15 minutes' tutorial also had this problem. My code is just using System; using System.Collections.Generic; using System.Drawing; using System.Linq; using System.Text; using System.Threading.Tasks; using OpenTK; using OpenTK.Graphics; using OpenTK.Graphics.OpenGL; using OpenTK.Input; namespace Program { class Program : GameWindow { static void Main(string[] args) { using (Program game = new Program()) { game.Run(30.0); } } } } When I run it, it crashes in game.Run(). Following the debugger deeper, it crashes in SDL_events.c:313: "if (entry->event.type == SDL_SYSWMEVENT) {". Re: very simple program crashes in SDL event manager Which operating system are you using? Re: very simple program crashes in SDL event manager Windows 8 x64 Re: very simple program crashes in SDL event manager After some playing around with the project settings, I have discovered that, by default, Visual Studio 2013 was creating new projects with "Prefer 32bit" enabled in the Build settings page. Disabling this allows the example program to run properly.
http://www.opentk.com/node/3641
CC-MAIN-2015-18
refinedweb
190
52.97
- OSI-Approved Open Source (9) - GNU General Public License version 2.0 (4) - GNU Library or Lesser General Public License version 2.0 (4) - Academic Free License (1) - Apache License V2.0 (1) - Common Development and Distribution License (1) - Common Public License 1.0 (1) - GNU General Public License version 3.0 (1) - Mozilla Public License 1.1 (1) - Open Software License 3.0 (1) - PHP License (1) - Other Operating Systems (9) - Windows (9) - Grouping and Descriptive Categories (6) - Modern (5) - BSD (1) - Emulation and API Compatibility (1) Interpreters Software - Hot topics in Interpreters Softwarevbscript brainfuck html crawler web crawler .vbs crawler crawler software Brainfuck Center Brainfuck Center is an IDE and compiler for your Brainfuck scripts. It includes a full debugger with step-by-step debugging and much more. MDI window!1 weekly downloads PHP++ PHP++ is a new programming language with a syntax similar to PHP, but it's completly rewritten in C++ and comes with a lot of new features like namespaces and a own, easy extendable object oriented framework. Rexx service wrapper Wrapper to run Regina Rexx as a Windows service. Should also support other Rexx versions, especially ooRexx.1 weekly downloads Heat - a mini c++ script language. A small, simple script language. compatible witch c++, simulation c++ syntax. Microformats Parser Library for .NET A library for detecting and parsing microformats, implemented on Microsoft .NET platform.
https://sourceforge.net/directory/development/interpreters/os%3Amswin_server2003/?sort=update
CC-MAIN-2016-50
refinedweb
230
50.73
I am doing a C project in NB 6.1RC1 on WinXP with the MinGW tools. When I compile anything, the include directories are not found! For example, here is the start of some code (GPC_WIN and GPC_MinGW are defined): #include "comp.h" #include <stdio.h> #include <memory.h> #ifdef GPC_WIN #define JAVA_LOCATION "\"c:\\Program Files\\Java\\jre1.6.0_05\\bin\\java\"" #define TP_LOCATION "C:\\NetBeansProjects\\tekplot\\tekplot\\dist\\tekPlot.jar" #include <process.h> #ifdef GPC_CYGWIN #include <w32api/winsock2.h> #endif #ifdef GPC_MinGW #include <winsock2.h> #endif #include <time.h> #include <errno.h> #include <windows.h> #endif #ifdef GPC_UNX #define GPC_OSX #define JAVA_LOCATION "/usr/bin/java" #ifdef GPC_OSX #define TP_LOCATION "/Users/jar/Tekdraw2/tekPlot/dist/tekPlot.jar" #else // Note: unable to use ~ here #define TP_LOCATION "/home/jar/Tekdraw2/tekPlot/dist/tekPlot.jar" #endif // Note:Only include the main include directory in NetBeans C options #include <unistd.h> #include <errno.h> #include <netdb.h> #include <sys/socket.h> #include <netinet/in.h> #endif NetBeans "sees" the comp.h include file because the #ifdefs are properly colored. But when I try to compile the code, make cannot seem to find any of my include files. comp.h is in C:\nbprojects\graphic2dll\trunk\include mkdir -p build/Windows-Debug/MinGW-Windows/src gcc.exe -c -g -IC\:/nbprojects/graphic2dll/trunk/include -IC\:/MinGW/include -o build/Windows-Debug/MinGW-Windows/src/tkfPush.o src/tkfPush.c src/tkfPush.c:9:18: comp.h: No such file or directory src/tkfPush.c: In function `tkfpush': src/tkfPush.c:60: error: storage size of 'addr' isn't known src/tkfPush.c:61: error: storage size of 'inputaddress' isn't known GPC_WIN and GPC_MinGW are defined in comp.h The problem is the MinGW make. When I used cygwin's make, this worked. why? If the MinGW make is no good, you should warn people it that, or else modify the make files accordingly. Although compiler settings are changed in NetBeans 6.1, I recommend to read this tutorial: This issue suggests a usability issue in the product. Escalated to P2. I followed all of those setup instructions. But the old version does not let you specify make. The new one does. I used c:\MinGW\bin\mingw32-make.exe. That did not work. Using c:\cygwin\bin\make.exe did work. Why?? Yes it sure is a usability issue. It wasted 2 days for me too. And I have the newest version of the released MinGW tools installed too. Why? I don't know. It is old bug. (issue 79512 : "MinGW mingw32-make.exe not supported") Instead of 'mingw32-make.exe' need to use 'C:/msys/1.0/bin/make.exe' ('make' from MSYS). I think the issue is that MinGW's make tries to imitate Microsoft's nmake. So, if you allow users to specify the MinGW make, you need to create a different sort of Makefile. If you cannot do that, you need to warn users not to use the MinGW make in the GUI. I agree. Need to warn users not to use the MinGW make in the GUI. Also I think your problem should be resolved in the future. IDE generates strange string ('-IC\:/nbprojects/graphic2dll/trunk/include' instead of '-IC:/nbprojects/graphic2dll/trunk/include'). This from MinGW: > >Comment By: Keith Marshall (keithmarshall) Date: 2008-04-19 20:43 Logged In: YES user_id=823908 Originator: NO This has nothing to do with `make' -- the issue is with GCC's resolution of your include file paths. MinGW's GCC can't understand POSIX paths, even when you use MSYS. If you must specify absolute include file paths within your source code, then you *must* specify them in *native* Windows syntax, (although you *may* use normal slashes, in preference to backslashes, as the directory separators). Most projects avoid this issue, by using *relative* include file paths. Cygwin's GCC *can* understand POSIX paths, WRT it's own emulated POSIX mapping of the file system; this is why it does work in Cygwin. I'm closing this as `invalid' for now. If you feel I've misinterpreted your problem, please feel free to add a follow-up comment, and I'll reopen it. ---------------- So, you need to change the make file if the user uses MinGW or MSYS. Hence, I think this is a real bug. I am now thinking that when I use the cygwin make.exe, it also calls the cygwin gcc. That would agree with what the MinGW people said. In this case, the C/C++ settings page is very wrong for Windows. What I have concluded is that you MUST use the MinGW libraries and include files if you want to do anything that "plays with" Windows, e.g., a dll. If you look at the include files in cygwin vs MinGW, you will see that the latter have all the correct Windows declarations. But, unless NB changes the Makefile, you MUST use the cygwin tools. So neither the cygwin nor the MinGW tools setting will work. The GUI needs to explain this to users properly and prevent them from doing the wrong thing. I think this is a NB6.1 blocker. Yes. I cannot build project by 'mingw32-make.exe'. But I replaced this utility on 'c:\msys\1.0\bin\make.exe' and I have not problem with build. At least I cannot understand you problem. Can you attach your project? ------------------------------------------------------------------------------- Running "C:\msys\1.0\bin\make.exe -f Makefile CONF=Debug" in C:\tmp\mingw /usr/bin/make -f nbproject/Makefile-Debug.mk SUBPROJECTS= .build-conf make[1]: Entering directory `/c/tmp/mingw' mkdir -p build/Debug/MinGW-Windows gcc.exe -c -g -IC\:/nbprojects/graphic2dll/trunk/include -o build/Debug/MinGW-Windows/tkfPush.o tkfPush.c mkdir -p dist/Debug/MinGW-Windows gcc.exe -o dist/Debug/MinGW-Windows/mingw build/Debug/MinGW-Windows/tkfPush.o make[1]: Leaving directory `/c/tmp/mingw' Build successful. Exit value 0. Created attachment 60558 [details] My project Here is the issue as I see it: 1) If you use the cygwin include files and libraries, your executable will only run in the cygwin environment. This should be noted in the GUI. 2) If you want your executable to run in Windows directly, you must use the MinGW includes and libraries. 3) If what the MinGW people say is true, the problem is the gcc compiler, not the version of make. But it is actually the Makefile that is the problem. If the user opts to use the MinGW tools, the Makefile needs to use windows \ path separators. If the user chooses the cygwin tools, it needs to use the / separator, or maybe either. 4) I think that the cygwin make must launch the cygwin gcc. That would explain why it works using the cygwin make.exe. 5) To fix this, NetBeans should -> Make a different Makefile for each chosen toolset -> Document the issues raised above, preferably in the GUI itself. -> If NB does not want to create a separate Makefile for MinGW, the GUI needs to disallow the use of the MinGW mingw32-make.exe. I really think this needs fixing before the 6.1 release! We will disallow mingw32-make.exe in the tool set and provide an error dialog explaining why if the user attempts to select the MinGW make. Added a check to disallow mingw32-make.exe. comparing with:***@hg.netbeans.org/main/ searching for changes changeset: 78930:0d5b094f36cd user: Thomas Preisler <thp@netbeans.org> date: Tue Apr 22 20:03:36 2008 -0700 summary: 133260 Include files not found changeset: 78931:0374d6b5dc38 tag: tip user: Thomas Preisler <thp@netbeans.org> date: Tue Apr 22 20:04:52 2008 -0700 summary: 133260 Include files not found Fixed. Now disallowing mingw32-make.exe as a make in any compiler set. If the user selects this make, the validation fails and the following error message is displayed: mingw32-make.exe is not compatible and is not supported. Use make form MSYS. The plan is to also document this in the tool chain documentation with a more elaborate explanation why mingw32-make.exe doesn't work with our projects. Changes pushed to trunk. Waiting for QA to verify so the fix can go into 6.1 patch. changeset 0d5b094f36cd in main details:;node=0d5b094f36cd description: 133260 Include files not found changeset 0374d6b5dc38 in main details:;node=0374d6b5dc38 description: 133260 Include files not found I would say "use make from MSYS or Cygwin" I was unable to find an easy way to install MSYS. There is no Windows installer available for it any more. verified in dev build 20080423100942 jarome, latest MSYS you can find here: this link is derivable from mingw.org The link to MSYS installer is It is available from the MinGW download area on SourceForge. Please see Thanks for the link. But I was unable to find that link from the MinGW page. But, however, the cygwin make works properly, and many people have that installed already, so there is no reason not to suggest using either of them. I went to and then clicked on the sourceforge download link and see no MSYS executable file there. So I claim this is hard to find. There are onlt tar.bz2 files. jarome, please use Ctrl-F on your sourceforge page with 'msys' expression. You should look through not only Latest File Releases but also MSYS Base System below. It is hidden! You have to click + to find it. But I still think you should also suggest the cygwin make.exe. Many more people have already installed cygwin so that they can connect to Linux environments. Why not? Insofar as possible we'd like to remain toolchain neutral, hence the reluctance to make specific recommendations. The community (including yourself) is available to help future users who may run into this issue. Jesse I've trnsplanted the changesets;node=0d5b094f36cd and;node=0374d6b5dc38 into release61_fixes repository as resp. changeset: 77488:b7e78c87abdf user: Thomas Preisler <thp@netbeans.org> date: Tue Apr 22 20:03:36 2008 -0700 summary: 133260 Include files not found changeset: 77489:58982d4cd1db tag: tip user: Thomas Preisler <thp@netbeans.org> date: Tue Apr 22 20:04:52 2008 -0700 summary: 133260 Include files not found *** Issue 79512 has been marked as a duplicate of this issue. *** *** Bug 226706 has been marked as a duplicate of this bug. ***
https://netbeans.org/bugzilla/show_bug.cgi?id=133260
CC-MAIN-2015-11
refinedweb
1,734
68.97
I have a little shell script that simply starts up a Java program. It is installed somewhere deep in the hierarchy, so I don't want to add its containing folder too my path. So I put a symbolic link into /usr/bin. But when I try to run it, I get: -bash: /usr/bin/asadmin: cannot execute binary file I checked the permissions, and both the symbolic link and the shell script are executable. What can I do about this? asadminshell script. why does bash think it's a binary? – jdigital Dec 30 '13 at 2:17 #!/bin/sh– tbodt Dec 30 '13 at 2:19 file asadminreturn (using the actual path for asadmin)? – jdigital Dec 30 '13 at 2:22 POSIX shell script text executable– tbodt Dec 30 '13 at 2:23 #!/bin/bash^M. If it is a binary, check if it is a x64 binary and that you are not running an x32/x86 OS (Not sure if there still is a 32 bit OS X version, but that would cause the same error). – Hennes Dec 30 '13 at 4:09
https://superuser.com/questions/694684/what-can-i-do-about-cannot-execute-binary-file/694698
CC-MAIN-2019-30
refinedweb
185
81.12
All input and output (I/O) operations use the current file offset information stored in the system file structure (System File and File Descriptor Tables). The current I/O offset designates a byte offset that is constantly tracked for every open file. It is called the current I/O offset because it signals a read or write process where to begin operations in the file. The open subroutine resets it to 0. The pointer can be set or changed using the lseek subroutine. To learn more about file I/O, see: Read and write operations can access a file sequentially. This is because the current I/O offset of the file tracks the byte offset of each previous operation. The offset is stored in the system file table. You can adjust the offset on files that can be randomly accessed, such as regular and special-type files, using the lseek subroutine. The return value for the lseek subroutine is the current value of the pointer's position in the file. For example: cur_off= lseek(fd, 0, SEEK_CUR); The lseek subroutine is implemented in the file table. All following read and write operations use the new position of the offset as their starting location. Note: The offset cannot be changed on pipes or socket-type files. The read subroutine: The cycle completes when the file to be read is empty, the number of bytes requested is met, or a reading error is encountered during the process. Errors can occur while the file is being read from disk or in copying the data to the system file space. It is advantageous for read requests to start at the beginning of data block boundaries and to be multiples of the data block size. An extra iteration in the read loop can be avoided. If a process reads blocks sequentially, the operating system assumes all subsequent reads will be sequential too. During the read operation, the i-node is locked. No other processes are allowed to modify the contents of the file while a read is in progress. However the file is unlocked immediately on completion of the read operation. If another process changes the file between two read operations, the resulting data is different, but the integrity of the data structure is maintained. The following example illustrates how to use the read subroutine to count the number of null bytes in the foo file: #include <fcntl.h> #include <sys/param.h> main() { int fd; int nbytes; int nbytes; int nnulls; int i; char buf[PAGESIZE]; /*A convenient buffer size*/ nnulls=0; if ((fd = open("foo",O_RDONLY)) < 0) exit(); while ((nbytes = read(fd,buf,sizeof(buf))) > 0) for (i = 0; i < nbytes; i++) if (buf[i] == '\0'; nnulls++; printf("%d nulls found\n", nnulls); } Sometimes when you write to a file the file does not contain a block corresponding to the byte offset resulting from the write process. When this happens, the write subroutine allocates a new block. This new block is added to the i-node information that defines the file. If adding the new block produces an indirect block position (i_rindirect), the subroutine allocates more than one block when a file moves from direct to indirect geometry. During the write operation, the i-node is locked. No other processes are allowed to modify the contents of the file while a write is in progress. However the file is unlocked immediately on completion of the write operation. If another process changes the file between two write operations, the resulting data is different, but the integrity of the data structure is maintained. The write subroutine loops in a way similar to the read subroutine, logically writing one block to disk for each iteration. At each iteration, the process either writes an entire block or only a portion of one. If only a portion of a data block is required to accomplish an operation, the write subroutine reads the block from disk to avoid overwriting existing information. If an entire block is required, it does not read the block because the entire block is overwritten. The write operation proceeds block by block until the number of bytes designated in the NBytes parameter is written. You can designate a delayed write process with the O_DEFER flag. Then, the data is transferred to disk as a temporary file. The delayed write feature caches the data in case another process reads or writes the data sooner. Delayed write saves extra disk operations. Many programs, such as mail and editors create temporary files in the directory /tmp and quickly remove them. When a file is opened with the deferred update (O_DEFER) flag, the data is not written to permanent storage until a process issues an fsync subroutine call or a process issues a synchronous write to the file (opened with O_SYNC flag). The fsync subroutine saves all changes in an open file to disk. See the open subroutine for a description of the O_DEFER and O_SYNC flags. The truncate or ftruncate subroutines change the length of regular files. The truncating process must have write permission to the file. The Length variable value indicates the size of the file after the truncation operation is complete. All measures are relative to the first byte of the file, not the current offset. If the new length (designated in the Length variable) is less than the previous length, the data between the two is removed. If the new length is greater than the existing length, zeros are added to extend the file size. When truncation is complete, full blocks are returned to the file system, and the file size is updated. Beginning in AIX 4.3, an application will be able to use Direct I/O on JFS or JFS2 files. This article is intended to assist programmers in understanding the intricacies involved with writing programs to take advantage of this feature. Normally, the JFS or JFS2 caches file pages in kernel memory. When the application does a file read request, if the file page is not in memory, the JFS or JFS2 reads the data from the disk into the file cache, then copies the data from the file cache to the user's buffer. For application writes, the data is merely copied from the user's buffer into the cache. The actual writes to disk are done later. This type of caching policy can be extremely effective when the cache hit rate is high. It also enables read-ahead and write-behind policies. Lately, it makes file writes to the asynchronous, allowing the application to continue processing instead of waiting for I/O requests to complete. Direct I/O is an alternative caching policy which causes the file dta to be transferred from the disk to/from the user's buffer. Direct I/O for files is functionally equivalent to raw I/O for devices. The primary benefit of direct I/O is to reduce CPU utilization for file reads and writes by eliminating the copy from the cache to the user buffer. This can also be a benefit for file data which has a very poor cache hit rate. If the cache hit rate is low, then most read requests have to go to the disk. Direct I/O can also benefit applications which must use synchronous writes since these writes have to go to disk. In both of these cases, CPU usage is reduced since the data copy is eliminated. A second benefit if direct I/O is that it allows applications to avoid diluting the effectiveness of caching of other files. Any time a file is read or written, that file competes for space in the cache. This may cause other file data to be pushed out of the cache. If the newly cached data has very poor reuse characterisitics, the effectiveness of the cache can be reduced. Direct I/O gives applications the ability to identify files where the normal caching policies are ineffective, thus freeing up more cache space for files where the policies are effective. Although Direct I/O can reduce cpu usage, it typically results in longer wall clock times, especially for relatively small requests. This penalty is caused by the fundamental differences between normal cached I/O and Direct I/O. Every Direct I/O read causes a synchronous read from disk; unlike the normal cached I/O policy where read may be satisfied from the cache. This can result in very poor performance if the data was likely to be in memory under the normal caching policy. Direct I/O also bypasses the normal JFS or JFS2 read-ahead algorithms. These algorithms can be extremely effective for sequential access to files by issuing larger and larger read requests and by overlapping reads of future blocks with application processing. Applications can compensate for the loss of JFS or JFS2 read-ahead by issuing larger reads requests. At a minimum, Direct I/O readers should issue read requests of at least 128k to match the JFS or JFS2 read-ahead characteristics. Applications can also simulate JFS or JFS2 read-ahead by issuing asynchronous Direct I/O read-ahead either by use of multiple threads or by using aio_read. Every direct I/O write causes a synchronous write to disk; unlike the normal cached I/O policy where the data is merely copied and then written to disk later. This fundamental difference can cause a significant performance penalty for applications which are converted to use Direct I/O. In order to avoid consistency issues between programs that use Direct I/O and programs that use normal cached I/O, Direct I/O). Changing the file from normal mode to Direct I/O mode can be rather expensive since it requires writing all modified pages to disk and removing all the file's pages from memory. Applications enable Direct I/O access to a file by passing the O_DIRECT flag to the fcntl.h. This flag is defined in open. Applications must be compiled with _ALL_SOURCE enabled to see the definition of O_DIRECT. In order for Direct I/O to work efficiently, the request should be suitably conditioned. Applications can query the offset, length, and address alignment requirements by using the finfo and ffinfo subroutines. When the FI_DIOCAP command is used, finfo and ffinfo return information in the diocapbuf structure as described in sys/finfo.h. This structure contains the following fields: Failure to meet these requirements may cause file reads and writes to use the normal cached model. Different file systems may have different requirements. Direct I/O is not supported for files in a compressed file filesystem. Attempts to open these files with O_DIRECT will be ignored and the files will be accessed with the normal cached I/O methods. Although Direct I/O writes are done synchronously, they do not provide synchronized I/O data integrity completion, as defined by POSIX. Applications which need this feature should use O_DSYNC in addition O_DIRECT. O_DSYNC guarantees that all of the data and enough of the meta-data (eg. indirect blocks) have written to the stable store to be able to retrieve the data after a system crash. O_DIRECT only writes the data; it does not write the meta-data. Pipes are unnamed objects created to allow two processes to communicate. One process reads and the other process writes to the pipe file. This unique type of file is also called a first-in-first-out (FIFO) file. The data blocks of the FIFO are manipulated in a circular queue, maintaining read and write pointers internally to preserve the FIFO order of data. The PIPE_BUF system variable, defined in the limits.h file, designates the maximum number of bytes guaranteed to be atomic when written to a pipe. The shell uses unnamed pipes to implement command pipelining. Most unnamed pipes are created by the shell. The | (vertical) symbol represents a pipe between processes. For example: ls | pr the output of the ls command is printed to the screen. Pipes are treated as regular files as far is possible. Normally, the current offset information is stored in the system file table. However, because pipes are shared by processes, the read/write pointers must be specific to the file, not to the process. File table entries are created by the open subroutine and are unique to the open process, not to the file. Processes with access to pipes share the access through common system file table entries. The pipe subroutine creates an interprocess channel and returns two file descriptors. File descriptor 0 is opened for reading. File descriptor 1 is opened for writing. The read operation accesses the data on a FIFO basis. These two file descriptors are used with read, write, and close subroutines. In the following example, a child process is created and sends its process ID back through a pipe: #include <sys/types.h> main() { int p[2]; char buf[80]; pid_t pid; if (pipe(p)) { perror("pipe failed"); exit(1)' } if ((pid=fork()) == 0) { /* in child process */ close(p[0]); /*close unused read */ *side of the pipe */ sprintf(buf,"%d",getpid()); /*construct data */ /*to send */ write(p[1],buf,strlen(buf)+1); /*write it out, including /*null byte */ exit(0); } /*in parent process*/ close(p[1]); /*close unused write side /*side of pipe */ read(p[0],buf,sizeof(buf)); /*read the pipe*/ printf("Child process said: %s/n", buf); /*display the result */ exit(0); } If a process reads an empty pipe, the process waits until data arrives. If a process writes to a pipe that is too full (PIPE_BUF), the process waits until space is available. If the write side of the pipe is closed, a subsequent read operation to the pipe returns an end-of-file. Two other subroutines that control pipes are the popen and pclose subroutines. The parent closes the end of the pipe it did not use. These closes are necessary to make end-of-file tests work properly. For example, if a child process intended to read the pipe does not close the write end of the pipe, it will never see the end of file condition on the pipe, because there is one write process potentially active. The conventional way to associate the pipe descriptor with the standard input of a process is: close(p[1]); close(0); dup(p[0]); close(p[0]); The close subroutine disconnects file descriptor 0, the standard input. The dup subroutine returns a duplicate of an already open file descriptor. File descriptors are assigned in increasing order and the first available one is returned. The effect of the dup subroutine is to copy the file descriptor for the pipe (read side) to file descriptor 0, thus standard input becomes the read side of the pipe. Finally, the previous read side is closed. The process is similar for a child process to write from a parent. The pclose subroutine waits for the associated process to end, then closes and returns the exist status of the command. This subroutine is preferable to the close subroutine because pclose waits for child processes to finish before closing the pipe. Equally important, when a process creates several children, only a bounded number of unfinished child processes can exist, even if some of them have completed their tasks. Performing the wait allows child processes to complete their tasks. By default, writes to files in JFS or JFS2 file systems are asynchronous. However, JFS file systems support three types of synchronous I/O. One type is specified by the O_DSYNC open flag. When a file is opened using the O_DSYNC open mode, the write () system call will not return until the file data and all file system meta-data required to retrieve the file data are both written to their permanent storage locations. Another type of synchronous I/O is specified by the O_SYNC open flag. In addition to items specified by O_DSYNC, O_SYNC specifies that the write () system call will not return until all file attributes relative to the I/O are written to their permanent storage locations, even if the attributes are not required to retrieve the file data. Before the O_DSYNC open mode existed, AIX applied O_DSYNC semantics to O_SYNC. For binary compatibility reasons, this behavior can never change. If true O_SYNC behavior is required, then both O_DSYNC and O_SYNC open flags must be specified. Exporting the XPG_SUS_ENV=ON environment variable also enables true O_SYNC behavior. The last type of synchronous I/O is specified by the O_RSYNC open flag, and it simply applies the behaviors associated with O_SYNC or _DSYNC to reads. For files in JFS file systems, only the combination of O_RSYNC | O_SYNC has meaning. It means that the read system call will not return until the file's access time is written to its permanent storage location. Chapter 5, File Systems and Directories Working with JFS i-nodes JFS File Space Allocation Using File Descriptors ls command, pr command close subroutine, exec, execl, execv, execle, execve, execlp, execvp or exect subroutine, fclear subroutine, fsync subroutine, lseek subroutine, open, openx, or creat subroutine, read, readx, readv, or readvx subroutine, truncate or ftruncate subroutines, write, writex, writev, or writevx subroutine
http://ps-2.kev009.com/wisclibrary/aix51/usr/share/man/info/en_US/a_doc_lib/aixprggd/genprogc/fileio.htm
CC-MAIN-2022-27
refinedweb
2,865
61.87
From: Gregory Seidman (gseidman_at_[hidden]) Date: 2001-01-23 11:34:42 Dean Sturtevant sez: } --- In boost_at_[hidden], Gregory Seidman <gseidman_at_a...> wrote: } > Why not just use #ifndef NDEBUG? It keeps things consistent with } > cassert and, like cassert, allows automatic building of debugging and } > optimized versions. Of course, with any system we devise it is possible } > for the end programmer to put the #ifndef NDEBUG in his own code, but } > much of the point of boost is to make less work for the programmer. } > } > What is the win of a scheme other than NDEBUG? } > } As with anything, there are tradeoffs. If it was up to me, I'd do } what you suggested. But when I introduced the idea of putting asserts } in boost (last month?), some people objected on the basis that } the 'one definition rule' (ODR) would be violated. They were } concerned that modules compiled with different settings of NDEBUG } might be linked together in the same program. In order to accommodate } them, I proposed using a different namespace (boost_debug) to put } assert-laden code in. This has the drawback that one then needs a } macro to establish the boost namespace, but it does avoid ODR } violations. In fact, this is the technique the STLPort uses. Okay, try this on for size. We have two namespaces: boost_debug and boost_optimized (or whatever... bear with me). Each boost header looks something like this: #ifndef BOOST_THISFUNCTIONALITY_HPP #define BOOST_THISFUNCTIONALITY_HPP #ifndef NDEBUG //the code in this file is contained in namespace boost_debug; #include <boost/thisfunctionality_debug.hpp> #else //the code in this file is contained in namespace boost_optimized #include <boost/thisfunctionality_optimized.hpp> #endif namespace boost { #ifndef NDEBUG using namespace boost_debug; #else using namespace boost_optimized; #endif } #endif //BOOST_THISFUNCTIONALITY_HPP Now, the debug stuff is in a separate namespace from the ordinary or optimized version, so linking between debugging and non-debugging objects will fail outright. Nonetheless, the programmer simply uses boost:: (or using namespace boost) to access whichever version (which will be drawn into the boost namespace), but the version is chosen at compile-time with (or without) -DNDEBUG. Note that the includes in the middle could just be the appropriate definitions in the appropriate namespace; I showed it as includes for simplicity and readability. We preserve ODR while preserving ease of use. Any objections? } - Dean --Greg Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2001/01/8358.php
CC-MAIN-2019-30
refinedweb
401
56.55
In this modern web application era, Now es7 also got released but still, we need to give support to some legacy browsers like ie 11, ie 10. Because they don’t understand most of the things from es6 like below - Promise - browser fetch - spread operators - aync and await etc. Let’s understand how we can enable react application for IE browsers If you don’t want to go through this post then here is Github repository for react starter app with IE supported let’s go step by step Adding polyfills to react app There is already a react-app-polyfill package which takes cares of all the mandatory polyfills require for supporting native browsers You need to install that package by using below command npm i --save react-app-polyfill above command install the packages for react polyfill. Note: For people who don’t know what is polyfill. Polyfill is a term which means. It has rewritten the same functionality whatever written in an es6 and es7 standard, Which does not support on native browser. For IE support, we need to import browser version-specific module in root file means in index.js file for ie9 import 'react-app-polyfill/ie9'; for ie 11 import 'react-app-polyfill/ie11'; There are other browsers also who needs pollyfills for that we need to import react app stable module like below import 'react-app-polyfill/stable'; So, your code will look like import "react-app-polyfill/ie11"; import "react-app-polyfill/stable"; import React from 'react'; import ReactDOM from 'react-dom'; import './index(); Adding browser version in browserList We need to add ie11 in browserlist section under package.json. It’s mandatory because it tells the transpiler that outputted code should be compatible with the browser package.json will be look like { "name": "react-app-with-polyfill", "version": "0.1.0", "private": true, "dependencies": { "@testing-library/jest-dom": "^4.2.4", "@testing-library/react": "^9.5.0", "@testing-library/user-event": "^7.2.1", "react": "^16.13.1", "react-app-polyfill": "^1.0.6", "react-dom": "^16.13.1", "react-scripts": "3.4.1" }, "scripts": { "start": "react-scripts start", "build": "react-scripts build", "test": "react-scripts test", "eject": "react-scripts eject" }, "eslintConfig": { "extends": "react-app" }, "browserslist": { "production": [ ">0.2%", "not dead", "not op_mini all" ], "development": [ "ie 11", "last 1 chrome version", "last 1 firefox version", "last 1 safari version" ] } } As you can see above we have included the ie11 browser version in the development section.After adding you need to delete the .cache folder under node_modules to pick up the changes because the babel-loader don’t know regarding the changes. Still, if it is not working then just delete node_modules, install it again and restart the server. Conclusion: This is the configuration you need to do, after that it will start run on IE browser.
https://www.techboxweb.com/enabling-react-apps-for-ie11/
CC-MAIN-2021-21
refinedweb
475
54.32
Copyright © 1988, 1989, 1992, 1993, 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003 Free Software Foundation, Inc.Copyright ©. collect2 configure.: { int careful; &careful; ... } GCC.. The interface to front ends for languages in GCC, and in particular the tree structure (see Trees),. This chapter describes the structure of the GCC source tree, and how GCC is built. The user documentation for building and installing GCC is in a separate manual (), with which it is presumed that you are.. @gccoptlist @gol FIXME: describe the texi2pod.pl input language and magic comments in more detail. ( branch(end) or returns(end) marks the end of a range without starting a new one. For example: if (i > 10 && j > i && j < 20) /* branch(27 50 75) */ /* branch(end) */ foo (i, j);. GCC.. chapter chapter. will hold of the ASM_STMT. BREAK_STMT breakstatement. There are no additional fields. CASE_LABEL In what follows, some nodes that one might expect to always have type TREE_OPERAND macro. For example, to access the first operand to a binary plus expression expr, use: TREE_OPERAND (expr, 0) Bit-Fields. 3 IF_THEN_ELSE. i INSN, JUMP_INSN, and CALL_INSN. See Insns. Note that not all RTL objects linked onto an insn chain are of class i. o m. Some RTL nodes have special annotations associated with them. MEM MEM_ALIAS_SET (x ) MEMs (see'.. Floating Point). The integers represent a floating point number, but not precisely in the target machine's or host machine's floating point format. To convert them to the precise bit pattern used by the target machine, use the macro REAL_VALUE_TO_TARGET_DOUBLE and friends (see Data Output).str ) (symbol_ref:mode symbol ) The symbol_ref contains a mode, which is usually Pmode. Usually that is the only mode for which a symbol is directly valid. (label_reflabel ) code_labelor a noteof type NOTE_INSN_DELETED. (addressof:m reg ) Pmode. If there are any addressofexpressions left in the function after CSE, reg is forced into the stack and the addressofexpression is replaced with a plusexpression for the address of its stack slot. ) minus, but using unsigned saturation in case of an overflow. ,.. For unsigned widening multiplication, use the same idiom, but with zero_extend instead of sign ) . Special expression codes exist to represent bit-field Standard Names) and is usually a full-word integer mode, which is the default if none is specified. ) ) (ss_truncate:m x ) (uslval x ) reg(or subreg, strict_low_partor zero_extract), mem, pc, parallel, or or zero_extract.functionx ) reg, scratch, parallelor)) or (mem:BLK (scratch)), it means that all memory locations must be presumed clobbered. If x is a parallel, it has the same meaning as a parallel in a set expression. Modifiers). You can clobber either a specific hard register, a pseudo register, or a scratch expression; in the latter two cases, GCCx ) regexpression.: (parallel [(set (reg:SI 2) (unspec:SI [(reg:SI 3) (reg:SI 4)] 0)) (use (reg:SI 1))]) ]) Delay Slots. These expression codes appear in place of a side effect, as the body of an insn, though strictly speaking they do not always describe side effects as such: (asm_inputs ) ..... In an addr_vec, JUMP_LABEL is NULL_RTX. FUNCTION_ARG_PASS_BY_REFERENCE) are stored. If the argument is caller-copied (see CONST_CALL_P) aren't assumed to read and write all memory, so flow would consider the stores dead and remove them. Note that, since a libcall must never return values in memory (see RETURN_IN_MEMORY), there will never be a CLOBBER for a memory address holding a return value.expressions.. define_insn Here is an actual example of an instruction pattern, is an instruction that sets the condition codes based on the value of a general operand. It has no condition, so any insn whose RTL description has the form shown may be handled according to this pattern. The name `tstsi' means “test a. `"rm"' is an operand constraint. Its meaning is explained below.. (match_operand:m n predicate constraint ) C function that accepts two arguments, an expression and a machine mode. During matching, the function will be called with the putative operand as the expression and m as the mode argument (if m is not specified, VOIDmode will be used, which normally causes predicate to accept any mode). If it returns zero, this instruction pattern fails to match. predicate may be an empty string; then it means no test is to be done on the operand, so anything which occurs in this position is valid. Most of the time, predicate will reject modes other than m—but not always. For example, the predicate address_operand uses m as the mode of memory ref that the address should be valid for. Many predicates accept const_int nodes even though their mode is VOIDmode. constraint controls reloading and the choice of the best register class to use for a value, as explained later (see Constraints). People are often unclear on the difference between the constraint and the predicate. The predicate helps decide whether a given insn matches the pattern. The constraint plays no role in this decision; instead, it controls various decisions in the case of an insn which does match. On CISC machines, the most common predicate is "general_operand". This function checks that the putative operand is either a constant, a register or a memory reference, and that it is valid for mode m. For an operand that must be a register, predicate should be "register_operand". Using "general_operand" would be valid, since the reload pass would copy any non-register operands through registers, but this would make GCC do extra work, it would prevent invariant operands (such as constant) from being removed from loops, and it would prevent the register allocator from doing the best possible job. On RISC machines, it is usually most efficient to allow predicate to accept only objects that the constraints allow. For an operand that must be a constant, you must be sure to either use "immediate_operand" for predicate, or make the instruction pattern's extra condition require a constant, or both. You cannot expect the constraints to do this work! If the constraints allow only constants, but the predicate allows something else, the compiler will crash when that case arises. (match_scratch:m n constraint ) scratchor regexpression. When matching patterns, this is equivalent to (match_operand:m n "scratch_operand" pred)n ) In construction, match_dup acts just like match_operand: the operand is substituted into the insn being constructed. But in matching, match_dup behaves differently. It assumes that operand number n has already been determined by a match_operand appearing earlier in the recognition template, and it matches only an identical-looking expression. Note that match_dup should not be used to tell the compiler that a particular register is being used for two operands (example: add that adds one register to another; the second register is both an input operand and the output operand). Use a matching constraint (see Simple Constraints) for those.. (match_operator:m n predicate [operands ...]) When constructing an insn, it stands for an RTL expression whose expression code is taken from that of operand n, and whose operands are constructed from the patterns operands. When matching an expression, it matches an expression if the function predicate returns nonzero on that expression and the patterns operands match the operands of the expression. Suppose that the function commutative_operator is defined as follows, to match any expression whose operator is one of the commutative arithmetic operators of RTL and whose mode is mode: int commutative_operator (x, mode) rtx x; enum machine_mode mode; { enum rtx_code code = GET_CODE (x); if (GET_MODE (x) != mode) return 0; return (GET_RTX_CLASS (code) == 'c' || code == EQ || code == NE); } Then the following pattern will match any RTL expression consisting of a commutative operator applied to two general operands: (match_operator:SI 3 "commutative_operator" [(match_operand:SI 1 "general_operand" "g") (match_operand:SI 2 "general_operand" "g")]) Here the vector [operands ...] contains two patterns because the expressions to be matched all contain two operands. When this pattern does match, the two operands of the commutative operator are recorded as operands 1 and 2 of the insn. (This is done by the two instances of match_operand.) Operand 3 of the insn will be the entire commutative expression: use GET_CODE (operands[3]) to see which commutative operator was used. The machine mode m of match_operator works like that of match_operator is used in a pattern for matching an insn, it usually best if the operand number of the match_operator. The operand of the insn which corresponds to the match_operator never has any constraints because it is never reloaded as a whole. However, if parts of its operands are matched by match_operand patterns, those parts may have constraints of their own. (match_op_dup:m n [operands ...]) match_dup, except that it applies to operators instead of operands. When constructing an insn, operand number n will be substituted at this point. But in matching, match_op_dupbehaves differently. It assumes that operand number n has already been determined by a match_operatorappearing earlier in the recognition template, and it matches only an identical-looking expression. (match_paralleln predicate [subpat ...]) parallelexpression with a variable number of elements. This expression should only appear at the top level of an insn pattern. When constructing an insn, operand number n will be substituted at this point. When matching an insn, it matches if the body of the insn is a parallel expression with at least as many elements as the vector of subpat expressions in the match_parallel, if each subpat matches the corresponding element of the parallel, and the function predicate returns nonzero on the parallel that is the body of the insn. It is the responsibility of the predicate to validate elements of the parallel beyond those listed in the match_parallel. A typical use of match_parallel is to match load and store multiple expressions, which can contain a variable number of elements in a parallel. For example, (define_insn "" [(match_parallel 0 "load_multiple_operation" [(set (match_operand:SI 1 "gpc_reg_operand" "=r") (match_operand:SI 2 "memory_operand" "m")) (use (reg:SI 179)) (clobber (reg:SI 179))])] "" "loadm 0,0,%1,%2") This example comes from a29k.md. The function load_multiple_operation is defined in a29k.c and checks that subsequent elements in the parallel are the same as the set in the pattern, except that they are referencing subsequent registers and memory locations. An insn that matches this pattern might look like: (parallel [(set (reg:SI 20) (mem:SI (reg:SI 100))) (use (reg:SI 179)) (clobber (reg:SI 179)) (set (reg:SI 21) (mem:SI (plus:SI (reg:SI 100) (const_int 4)))) (set (reg:SI 22) (mem:SI (plus:SI (reg:SI 100) (const_int 8))))]) (match_par_dupn [subpat ...]) match_op_dup, but for match_parallelinstead of match_operator. (match_insnpredicate ) match_*recognizers, match_insndoes not take an operand number. The machine mode m of match_insn works like that of match_operand: it is passed as the second argument to the predicate function, and that function is solely responsible for deciding whether the expression to be matched “has” that mode. (match_insn2n predicate ) The machine mode m of match_insn2 works like that of match_operand: it is passed as the second argument to the predicate function, and that function is solely responsible for deciding whether the expression to be matched “has” that mode. The output template is a string which specifies how to output the assembler code for an instruction pattern. Most of the template is a fixed string which is output literally. The character `%' is used to specify where to substitute an operand; it can also be used to identify places where different variants of the assembler require different syntax. In the simplest case, a `%' followed by a digit n says to output operand n at that point in the string. `%' followed by a letter and a digit says to output an operand in an alternate fashion. Four letters have standard, built-in meanings described below. The machine description macro PRINT_OPERAND can define additional letters with nonstandard meanings. `%cdigit' can be used to substitute an operand that is a constant value without the syntax that normally indicates an immediate operand. `%ndigit' is like `%cdigit' except that the value of the constant is negated before printing. `. `%ldigit' is used to substitute a label_ref into a jump instruction. `%=' outputs a number which is unique to each instruction in the entire compilation. This is useful for making local labels to be referred to more than once in a single template that generates multiple assembler instructions. `%' followed by a punctuation character specifies a substitution that does not use an operand. Only one case is standard: `%%' outputs a `%' `%' is to distinguish between different assembler languages for the same machine; for example, Motorola syntax versus MIT syntax for the 68000. Motorola syntax requires periods in most opcode names, while MIT syntax does not. For example, the opcode `movel' in MIT syntax is `move.l' in Motorola syntax. The same file of patterns is used for both kinds of output syntax, but the character sequence `%.' an matching define_split already defined, then you can simply use # as the output template instead of writing an output template that emits the multiple assembler instructions. If the macro ASSEMBLER_DIALECT is defined, you can use construct of the form `{option0|option1|option2}' in the templates. These describe multiple variants of assembler language syntax. See Instruction `@', then it is actually a series of templates, each on a separate line. (Blank lines and leading spaces and tabs are ignored.) The templates correspond to the pattern's constraint alternatives (see Multi-Alternative). For example, if a target machine has a two-address add instruction `addr' to add into a register and another `add, `clrreg' for registers and `@': (define_insn "" [(set (match_operand:SI 0 "general_operand" "=r,m") (const_int 0))] "" "@ clrreg %0 clrmem %0"). general_operand. This is normally used in the constraint of a match_scratchwhen certain alternatives will not actually require a scratch register.' in the constraint must be accompanied by address_operand as the predicate in the match_operand. This predicate interprets the mode specified in the match_operand as the mode of the memory reference for which the address would be valid. The machine description macro REG_CLASS_FROM_LETTER has first cut at the otherwise unused letters. If it evaluates to NO_REGS, then EXTRA_CONSTRAINT is evaluated. A typical use for EXTRA_CONSTRAINT. Here is how it is done for fullword logical-or on the 68000: (define_insn "iorsi3" [(set (match_operand:SI 0 "general_operand" "=m,d") (ior:SI (match_operand:SI 1 "general_operand" "%0,0") (match_operand:SI 2 "general_operand" "dKs,dmKs")))] ...) The first alternative has `m' (memory) for operand 0, `0' for operand 1 (meaning it must match operand 0), and `dKs' for operand 2. The second alternative has `d' (data register) for operand 0, `0' for operand 1, and `dmKs' for operand 2. The `=' and `%' in the constraints apply to all the alternatives; their meaning is explained in the next section (see: ? ! When an insn pattern has multiple alternatives in its constraints, often the appearance of the assembler code is determined mostly by which alternative was matched. When this is so, the C code for writing the assembler code can use the variable which_alternative, which is the ordinal number of the alternative that was actually satisfied (0 for the first, 1 for the second alternative, etc.). See Output Statement., `*' is used so that the `d' constraint letter (for data register) is ignored when computing register preferences. (define_insn "extendhisi2" [(set (match_operand:SI 0 "general_operand" "=*d,a") (sign_extend:SI (match_operand:HI 1 "general_operand" "0,g")))] ...) Simple Constraints), and `I', usually the letter indicating the most common immediate-constant format. For each machine architecture, the config a d w e b q t x y z I J K L M N O P G b f v h q c l x y z I J SImodeconstants) K L M N O P G Q asmstatements) R S T U q b, c, or dregister C d D S x y I J K L M leainstruction) N outinstruction) Z 0xffffffffor symbolic reference known to fit specified range. (for using immediates in zero extending 32-bit to 64-bit x86-64 instructions) e G f fp0to fp3) l r0to r15) b g0to g15) d I J K G H a r0to r3for addlinstruction b c d e f m G I J K L M N O P depinstruction Q R shladdinstruction S a ACC_REGS( acc0to acc7). b EVEN_ACC_REGS( acc0to acc7). c CC_REGS( fcc0to fcc3and icc0to icc3). d GPR_REGS( gr0to gr63). e EVEN_REGS( gr0to gr63). Odd registers are excluded not in the class but through the use of a machine mode larger than 4 bytes. f FPR_REGS( fr0to fr63). h FEVEN_REGS( fr0to fr63). Odd registers are excluded not in the class but through the use of a machine mode larger than 4 bytes. l LR_REG(the lrregister). q QUAD_REGS( gr2to gr63). Register numbers not divisible by 4 are excluded not in the class but through the use of a machine mode larger than 8 bytes. t ICC_REGS( icc0to icc3). u FCC_REGS( fcc0to fcc3). v ICR_REGS( cc4to cc7). w FCR_REGS( cc0to cc3). x QUAD_FPR_REGS( fr0to fr63). Register numbers not divisible by 4 are excluded not in the class but through the use of a machine mode larger than 8 bytes. z SPR_REGS( lcrand lr). A QUAD_ACC_REGS( acc0to acc7). B ACCG_REGS( accg0to accg7). C CR_REGS( cc0to cc7). G I J L M N O P a f j k b y z q c d u R QImode, since we can't access extra bytes S T I J K L M N O P d f h l x y z I J K L lui) M N O P G Q asmstatements) R asmstatements) S asmstatements) a d f I J K L M G a b d q t u w x y z A B D L M N O P f e c d b h I J K sethiinstruction) L movccinstructions M movrccinstructions N SImode O G H Q R S T U W a b c f k q t u v x y z G H I J K L M N O Q R S T U a d f I J K L (0..4095) (-524288..524287) M N 0..9: H,Q: D,S,H: 0,F: Q R S T U W Y a b c d e t y z I J K L M N O P Q R S T U Z. Bits outside of m, but which are within the same target word as the subreg are undefined. Bits which are outside the target word are left unchanged. which need scratch registers during or after reload, you must define SECONDARY_INPUT_RELOAD_CLASS and/or SECONDARY_OUTPUT_RELOAD_CLASS to detect them, and provide patterns `reload_inm' or `reload_outm' to handle them. Register Classes. There are special restrictions on the form of the match_operands used in these patterns. First, only the predicate for the reload operand is examined, i.e., ALL_REGS register class. This may relieve ports of the burden of defining an ALL_REGS constraint letter just for these patterns. subregwith mode m of a register whose natural mode is wider, the `movstrictm' instruction is guaranteed not to alter any of the register except the part which belongs to mode m. define_expand (see Expander Definitions) and make the pattern fail if the restrictions are not met. Write the generated insn as a parallel with elements being a set of one register from the appropriate memory location (you may also need use or clobber elements). Use a match_parallel (see RTL Template) to recognize the insn. See rs6000.md for examples of the use of this insn pattern. PUSH_ROUNDINGis defined. For historical reason, this pattern may be missing and in such case an movexpander is used instead, with a MEMexpression forming the push operation. The movexpander method is deprecated. 3instructions. The sqrt built-in function of C always uses the mode which corresponds to the C data type double and the sqrtf built-in function uses the mode which corresponds to the C data type float. The cos built-in function of C always uses the mode which corresponds to the C data type double and the cosf built-in function uses the mode which corresponds to the C data type float. The sin built-in function of C always uses the mode which corresponds to the C data type double and the sinf built-in function uses the mode which corresponds to the C data type float. The exp built-in function of C always uses the mode which corresponds to the C data type double and the expf built-in function uses the mode which corresponds to the C data type float. The log built-in function of C always uses the mode which corresponds to the C data type double and the logf built-in function uses the mode which corresponds to the C data type float. The pow built-in function of C always uses the mode which corresponds to the C data type double and the powf built-in function uses the mode which corresponds to the C data type float. The atan2 built-in function of C always uses the mode which corresponds to the C data type double and the atan2f built-in function uses the mode which corresponds to the C data type float. The floor built-in function of C always uses the mode which corresponds to the C data type double and the floorf built-in function uses the mode which corresponds to the C data type float. The trunc built-in function of C always uses the mode which corresponds to the C data type double and the truncf built-in function uses the mode which corresponds to the C data type float. The round built-in function of C always uses the mode which corresponds to the C data type double and the roundf built-in function uses the mode which corresponds to the C data type float. The ceil built-in function of C always uses the mode which corresponds to the C data type double and the ceilf built-in function uses the mode which corresponds to the C data type float. The nearbyint built-in function of C always uses the mode which corresponds to the C data type double and the nearbyintf built-in function uses the mode which corresponds to the C data type float. The ffs built-in function of C always uses the mode which corresponds to the C data type int. (set (cc0) (compare (match_operand:m 0 ...) (match_operand:m 1 ...))) (set (cc0) (match_operand:m 0 ...)) Misc). Expander Definitions) Jump Expander Definitions) (define_insn "" [(set (pc) (if_then_else (match_operator 0 "comparison_operator" [(cc0) (const_int 0)]) (return) (pc)))] "condition" "...") Looping Patterns. This optional instruction pattern should be defined for machines with low-overhead looping instructions as the loop optimizer will try to modify suitable loops to utilize it. If nested low-overhead looping is not supported, use a define_expand (see Expander Definitions) and make the pattern fail if operand 3 is not const1_rtx. Similarly, if the actual or estimated maximum number of iterations is too large for this instruction, make it fail. doloop_endrequired for machines that need to perform some initialization, such as loading special registers used by a low-overhead looping instruction. If initialization insns do not always need to be emitted, use a define_expand(see Expander Definitions) Expander Definitions). _ EH_RETURN_STACKADJ_RTX, if defined; it will have already been assigned. If this pattern is not defined, the default action will be to simply copy the return address to EH_RETURN_HANDLER_RTX. Either that macro or this pattern needs to be defined if call frame exception handling is to be used. Using a prologue pattern is generally preferred over defining TARGET_ASM_FUNCTION_PROLOGUE to emit assembly code for the prologue. The prologue pattern is particularly useful for targets which perform instruction scheduling. Using an epilogue pattern is generally preferred over defining TARGET_ASM_FUNCTION_EPILOGUE to emit assembly code for the epil (define_insn "conditional_trap" [(trap_if (match_operator 0 "trap_operator" [(cc0) (const_int 0)]) (match_operand 1 "const_int_operand" "i"))] "" "...") Targets that do not support write prefetches or locality hints can ignore the values of operands 1 and 2. the macro EXTRA_CC_MODES to list the additional modes required ., −1., `decrement_and_branch_until_zero' and the : For these operators, if only one operand is a neg, not, mult, plus, or minus expression, it will be the first operx (const_intn ))is converted to (plusx (define_insn "" [(set (match_operand:m 0 ...) (and:m (not:m (match_operand:m 1 ...)) (match_operand:m 2 ...)))] "..." "...") Similarly, a pattern for a “NAND” instruction should be written (define_insn "" [(set (match_operand:m 0 ...) (ior:m (not:m (match_operand:m 1 ...)) (not:m (match_operand:m 2 ...))))] "..." "...") In both cases, it is not necessary to include patterns for the many logically equivalent RTL expressions. (xor:m x y )and (not:m (xor:m x y )). (plus:m (plus:m x y) constant) cc0, (comparex (const_int 0))will be converted to x. zero_extractrather than the equivalent andor sign_extractoperations. hold the probability of original branch in case it was an simple conditional jump, −1 otherwise. To simplify recomputing of edge frequencies, `TARGET.._active)))]) ...] ...)) except that the pattern to match is not a single instruction, but a sequence of instructions. It is possible to request additional scratch registers for use in the output template. If appropriate registers are not free, the pattern will simply not match. Scratch registers are requested with a match_scratch pattern at the top level of the input pattern. The allocated register (initially) will be dead at the point requested within the original sequence. If the scratch is used at more than a single point, a match_dup pattern at the top level of the input pattern marks the last position in the input sequence at which the register must be available. Here is an example from the IA-32 machine description: ))])] "") This pattern tries to split a load from its use in the hopes that we'll be able to schedule around the memory load latency. It allocates a single SImode register of class GENERAL_REGS ( "r") that needs to be live only at the point just before the arithmetic. A real example requiring extended scratch lifetimes is harder to come by, so here's a silly made-up.. The define_attr expression is used to define each attribute required by the target machine. It looks like: (define_attr name list-of-values default) name is a string specifying the name of the attribute being defined. list-of-values is either a string that specifies a comma-separated list of values that can be assigned to the attribute, or a null string to indicate that the attribute takes numeric values. default is an attribute expression that gives the value of this attribute for insns that match patterns whose definition does not include an explicit value for this attribute. See Attr Example, for more information on the handling of defaults. See Constant Attributes, for information on attributes that do not depend on any particular insn. For each defined attribute, a number of definitions are written to the insn-attr.h file. For cases where an explicit set of values is specified for an attribute, the following are defined: For example, if the following is present in the md file: i ), (eq_attr "type" "load,store") is equivalent to (ior (eq_attr "type" "load") (eq_attr "type" "store")) If name specifies an attribute of `alternative', it refers to the value of the compiler variable which_alternative (see Output Statement) and the values must be small integers. For example, (eq_attr "alternative" "2,3") is equivalent to (ior (eq (symbol_ref "which_alternative") (const_int 2)) (eq (symbol_ref "which_alternative") (const_int 3))) Note that, for most attributes, an eq_attr test is simplified in cases where the value of the attribute being tested is known for all insns matching a particular pattern. This is by far the most common case. (attr_flagname )). (define_delay (eq_attr "type" "cbranch") [(eq_attr "in_branch_delay" "true") (and (eq_attr "in_branch_delay" "true") (attr_flag "forward")) (and (eq_attr "in_branch_delay" "true") (attr_flag "backward"))]) )value ) setmust be the special RTL expression attr, whose sole operand is a string giving the name of the attribute being set. value is the value of the attribute. The following shows three different ways of representing the same attribute value specification: (set_attr "type" "load,store,arith") (set_attr_alternative "type" [(const_string "load") (const_string "store") (const_string "arith")]) (set (attr "type") (cond [(eq_attr "alternative" "1") (const_string "load") (eq_attr "alternative" "2") (const_string "store")] (const_string "arith"))) The define_asm_attributes expression provides a mechanism to specify the attributes assigned to insns produced from an asm statement. It has the form: (define_asm_attributes [attr-sets]): (define_delay test [delay-1 annul-true-1 annul-false-1 delay-2 annul-true-2 annul-false-2 ...]): (define_delay (eq_attr "type" "branch,call") [(eq_attr "type" "!branch,call") (nil) (nil)]) Multiple define_delay expressions may be specified. In this case, each such expression specifies different delay slot requirements and there must be no insn for which tests in two: (define_delay (eq_attr "type" "branch") [(eq_attr "type" "!branch,call") (eq_attr "type" "!branch,call") (nil)]) (define_delay (eq_attr "type" "call") [(eq_attr "type" "!branch,call") (nil) (nil) (eq_attr "type" "!branch,call") (nil) (nil)])).: (define_function_unit name multiplicity simultaneity test ready-delay issue-delay [conflict-list]),: (define_function_unit "memory" 1 1 (eq_attr "type" "load") 2 0) For the case of a floating point function unit that can pipeline either single or double precision, but not both, the following could be specified: (define_function_unit "fp" 1 0 (eq_attr "type" "sp_fp") 4 4 [(eq_attr "type" "dp_fp")]) (define_function_unit "fp" 1 0 (eq_attr "type" "dp_fp") 4 4 [(eq_attr "type" "sp_fp")]) Note: The scheduler attempts to avoid function unit conflicts and uses all the specifications in the define_function_unit expression. It has recently: `*' and `@'. When define_cond_exec is used, an implicit reference to the predicable instruction attribute is made. See Insn Attributes. This attribute must be boolean (i.e. have exactly two elements in its list-of-values). Further, it must not be used with complex expressions. That is, the default and all uses in the insns must be a simple constant, not dependent on the alternative or anything else... Registers have various characteristics. Number of hardware registers known to the compiler. They receive numbers 0 through FIRST_PSEUDO_REGISTER-1; thus, the first pseudo register's number really is assigned the number FIRST_PSEUDO. Like. Zero or more C statements that. You need not define this macro if it has no work to do.. If the program counter has a register number, define this as that register number. Otherwise, do not define it..) Define this macro if the natural size of registers that hold values of mode... Define this macro if the compiler should avoid copies to/from. The number of the last stack-like register. This one is the bottom of the stack.. An enumerated type that must be defined with all the register class names as enumerated values. NO_REGSmust be first. ALL_REGSmust be the last register class, followed by one more enumerated value, LIM_REG_CLASSES, which is not a register class but rather tells how many classes there are. Each register class has a number, which is the value of casting the class name to type int. The number serves as an index in many of the tables described below. The number of distinct register classes, defined as follows:#define N_REG_CLASSES (int) LIM_REG_CLASSES An initializer containing the names of the register classes as C string constants. These names are used in writing some of the debugging dumps. An initializer containing the contents of the register classes, as integers which are bit masks. The nth integer specifies the contents of class n. The way the integer mask is interpreted is that register r is in the class ifwhich is defined in hard-reg-set.h. In this situation, the first integer in each sub-initializer corresponds to registers 0 through 31, the second integer to registers 32 through 63, and so on. A C expression whose value is a register class containing hard register regno. In general there is more than one such class; choose a class which is minimal, meaning that no smaller class also contains the register. A macro whose definition is the name of the class to which a valid base register must belong. A base register is one used in an address which is the register value plus a displacement. This is a variation of the BASE_REG_CLASSmacro which allows the selection of a base register in a mode dependent manner. If mode is VOIDmode then it should return the same value as BASE). For the constraint at the start of str, which starts with the letter.. A C expression that is just like. A C expression which is nonzero if register number. A C expression that places additional restrictions on the register class to use when it is necessary to copy value x into a register in class class. The value is a register class; perhaps class, or perhaps another, smaller class. On many machines, the following definition is safe:#define PREFERRED_RELOAD_CLASS(X,CLASS) CLASS Sometimes returning a more restrictive class makes better code. For example, on the 68000, when x is an integer constant that is in range for a `moveq' instruction, the value of this macro is always DATA_REGSas long as class includes the data registers. Requiring a data register guarantees that a `moveq' will be used. One case where PREFERRED_RELOAD_CLASSmust not return class is if x is a legitimate constant which cannot be loaded into some register class. By returning NO_REGSyou can force x into a memory location. For example, rs6000 can load immediate values into general-purpose registers, but does not have an instruction for loading an immediate value into a floating-point register, so PREFERRED_RELOAD_CLASSreturns NO_REGSwhen x is a floating-point constant. If the constant can't be loaded into any kind of register, code generation will be better if LEGITIMATE_CONSTANT_Pmakes the constant illegitimate instead of using PREFERRED_RELOAD_CLASS. Like PREFERRED_RELOAD_CLASS, but for output reloads instead of input reloads. If you don't define this macro, the default is to use class, unchanged. A C expression that places additional restrictions on the register class to use when it is necessary to be able to hold a value of mode mode in a reload register for which class class would ordinarily be used._CLASSto return the largest register class all of whose registers can be used as intermediate registers or scratch registers. If copying a register class in mode to x requires an intermediate or scratch register, SECONDARY_OUTPUT_RELOAD_CLASSshould be defined to return the largest register class required. If the requirements for input and output reloads are the same, the macro SECONDARY_RELOAD_CLASSshould be used instead of defining both macros identically. The values returned by these macros are often GENERAL_REGS. Return NO_REGSif register.of a pseudo-register, which could either be in a hard register or in memory. Use true_regnumto find out; it will return −1. Certain machines have the property that some registers cannot be copied to some other registers without using memory. Define this macro on those machines to be a C expression that is nonzero if objects of mode m in registers of class1 can only be copied to registers of class class2 by storing a register of class1 into memory and loading that memory location into a register of class2. Do not define this macro if its value would always be zero. Normally when SECONDARY_MEMORY_NEEDEDis defined, the compiler allocates a stack slot for a memory location needed for register copies. If this macro is defined, the compiler instead uses the memory location defined by this macro. Do not define this macro if you do not define SECONDARY_MEMORY_NEEDED. When the compiler needs a secondary memory location to copy between two registers of mode mode, it normally allocates sufficient memory to hold a quantity ofor if widening mode to a mode that is BITS_PER_WORDbits wide is correct for your machine. SMALL_REGISTER_CLASSESto. A C expression whose value is nonzero if pseudos that have been assigned to registers of class class would likely be spilled because registers of class are needed for spill registers.. A C expression for the maximum number of consecutive registers of class class needed to hold a value of mode. If defined, a C expression that returns nonzero for a class for which a change from mode from to mode CANNOT_CHANGE_MODE_CLASSas below:#define CANNOT_CHANGE_MODE_CLASS(FROM, TO, CLASS) \ (GET_MODE_SIZE (FROM) != GET_MODE_SIZE (TO) \ ? reg_classes_intersect_p (FLOAT_REGS, (CLASS)) : 0) Three other special macros describe which operands fit which constraint letters. A C expression that defines the machine-dependent operand constraint letters (. Like CONST_OK_FOR_LETTER_P, but you also get the constraint string passed in str, so that you can use suffixes to distinguish between different variants. A C expression that defines the machine-dependent operand constraint letters that specify particular rangesis used for all floating-point constants and for DImodefixed-point constants. A given letter can accept either or both kinds of values. It can use GET_MODEto distinguish between these kinds.. Here is the basic stack layout. Define this macro if pushing a word onto the stack moves the stack pointer to a smaller address. When we say, “define this macro if ...,” it means that the compiler checks this macro only with #ifdefso the precise definition used does not matter. This macro defines the operation used when something is pushed on the stack. In RTL, a push operation will be (set (mem (STACK_PUSH_CODE (reg sp))) ...) The choices are PRE_DEC, POST_DEC, PRE_INC, and POST_INC. Which of these is correct depends on the stack direction and on whether the stack pointer points to the last item on the stack or whether it points to the space for the next item on the stack. The default is PRE_DECwhen STACK_GROWS_DOWNWARDis defined, which is almost always right, and PRE_INCotherwise, which is often wrong. Define this macro if the addresses of local variable slots are at negative offsets from the frame pointer. Define this macro if successive arguments to a function occupy decreasing addresses on the stack. Offset from the frame pointer to the first local variable slot to be allocated. If FRAME_GROWS_DOWNWARD, find the next slot's offset by subtracting the first slot's length from STARTING_FRAME_OFFSET. Otherwise, it is found by adding the length of the first slot to the value STARTING_FRAME_OFFSET. Define to zero to disable final alignment of the stack during reload. The nonzero default for this macro is suitable for most ports. On ports where STARTING_FRAME_OFFSETis nonzero or where there is a register save block following the local block that doesn't require alignment to STACK_BOUNDARY, it may be beneficial to disable stack alignment and do it in the backend. Offset from the stack pointer register to the first location at which outgoing arguments are placed. If not specified, the default value of zero is used. This is the proper value for most machines. If ARGS_GROW_DOWNWARD, this is the offset to the location above the first location at which outgoing arguments are placed. Offset from the argument pointer register to the first argument's address. On some machines it may depend on the data type of the function. If ARGS_GROW_DOWNWARD, this is the offset to the location above the first argument's address. Offset from the stack pointer register to an item dynamically allocated on the stack, e.g., by alloca. The default value for this macro is STACK_POINTER_OFFSETplus the length of the outgoing arguments. The default is correct for most machines. See function.c for details. A C expression whose value is RTL representing the address in a stack frame where the pointer to the caller's frame is stored. Assume that frameaddr is an RTL expression for the address of the stack frame itself. If you don't define this macro, the default is to return the value of frameaddr—that is, the stack frame address is also the address of the stack word that points to the previous frame... A C expression whose value is RTL representing the value of the return address for the frame count steps up from the current frame, after the prologue. frameaddr is the frame pointer of the count frame, or the frame pointer of the count − 1 frame if RETURN_ADDR_IN_PREVIOUS_FRAMEis defined. The value of the expression must always be the correct address when count is zero, but may be NULL_RTXif there is not way to determine the return address of other frames. Define this if the return address of a particular stack frame is accessed from the frame pointer of the previous stack frame. A C expression whose value is RTL representing the location of the incoming return address at the beginning of any function, before the prologue. This RTL is either a_COLUMNto DWARF_FRAME_REGNUM (REGNO). A C expression whose value is an integer giving a DWARF 2 column number that may be used as an alternate return column. This should be defined only if DWARF_FRAME_RETURN_COLUMNis set to a general register, but an alternate column needs to be used for signal frames.. A C expression whose value is an integer giving the offset, in bytes, from the argument pointer to the canonical frame address (cfa). The final value should coincide with that calculated by branch to success. If the frame cannot be decoded, the macro should do nothing. TARGET_ASM. maximum size, in bytes, of an object that GCC will place in the fixed area of the stack frame when the user specifies -fstack-check. GCC computed the default from the values of the above macros and you will normally not need to override that default. )..says. You don't need to worry about them. In a function that does not require a frame pointer, the frame pointer register can be allocated for ordinary usage, unless you mark it as a fixed register. See FIXED_REGISTERSfor more information. FRAME_POINTER_REQUIREDis defined to always be.. macro MUST_PASS_IN_STACK (mode ,type . Define as a C expression that evaluates to nonzero if we do not know how..
http://www.mirbsd.org/htman/i386/manINFO/gccint.html
CC-MAIN-2015-48
refinedweb
6,920
53.31
: However, with the following, where variables are lined up with commas between them, the first variable is set and the second ends up undefined: var nValue1,nValue2 = 3; // nValue2 is undefined But, nValue2 will be equal to 3 and nValue1 will be undefined. So it should read: However, with the following, where variables are lined up with commas between them, the first variable is set and the first ends up undefined: var nValue1,nValue2 = 3; // nValue1 is undefined This: var nValue1,nValue2 = 3; // nValue2 is undefined document.writeln("<p>" + nValue1 + "</p>"); document.writeln("<p>" + nValue2 + "</p>"); Prints out this in Firefox and IE: undefined 3 Note from the Author or Editor:Should read: However, with the following, where variables are lined up with commas between them, the second variable is set and the first ends up undefined: var nValue1,nValue2 = 3; // nValue1 is undefined Code snippet used "If" that should be "if": var nValue = 3.0; var sValue = "3.0"; If (nValue == sValue) ... Should be: var nValue = 3.0; var sValue = "3.0"; if (nValue == sValue) ... Note from the Author or Editor:If (nValue == sValue) ... should be if (nValue == sValue) ... After Example 4-7 there is the following paragraph: The regular expression pattern used in Example 4-7 is a very handy expression to keep in mind. If you want to replace all occurrences of spaces in a string with dashes, regardless of what's following the spaces, use the pattern /\s+\g in the replace method, passing in the hyphen as the replacement character. /\s+\g should be /\s+/g This line returns a null object in Firefox 3.0.5: document.getElementById("text4").value=strResults; It works in IE7. This appears work in both browsers: document.getElementById("someForm").text4.value=strResults; Note from the Author or Editor:The textarea in example 8-5 should have an id attribute assigned "text4", as well as name to use getElementById. The example in the examples file for the book have the id attribute and work correctly. The following code: Var tmOut = setInterval("functionName", 5000); Should be: var tmOut = setInterval("functionName", 5000); Also, this is a minor note. Above this code is the syntax for the setTimeout function: var tmOut = setTimeout(func, 5000,"param1",param2,...,paramn); The double quotes around the 1st param1 appear to me to be unnecessary and confusing. I think it would read better as: var tmOut = setTimeout(func, 5000, param1, param2, ..., paramn); Thanks :) Steve Note from the Author or Editor:The double quotes around param1 in first code snippet are not necessary first word of third code snippet should read var, not Var I'm using Firefox version 3.0.5 and the text states: "The mimeTypes collection consists of mimeType objects, which have properties of description, type, and plugin." The words "description", "type", and "plugin" are bolded to denote the names of the properties. I checked using Firebug and the mimeType objects don't have a "plugin" property. There is a property named "enabledPlugin". Thanks :) Steve Note from the Author or Editor:The mimeType object has an enabledPlugin property, not a plugin property The follow style generates an error in Firefox 3.0.6: #div2 { background-color: #ff0; color: #; The error: Warning: Expected color but found '#'. Error in parsing value for property 'color'. Declaration dropped. Source File: Line: 30 The Safari version has an extra 's' after the div3. This isn't on the on-line example: <div id="div3"> <p>Red block that has fixed positioning.</p>s Note from the Author or Editor:Example 12-4, page 272, CSS setting for #div2 should be: color: #000; Not color: #; If you select California and them Missouri, California's cities remain in the options for Missouri. You can solve this by clearing the options in getCities() like so: // access city selection var citySelection = document.getElementById("citySelection"); citySelection.options.length = 0; Note from the Author or Editor:In example 15-3, add the following: citySelection.options.length = 0; After the line of code that reads: var citySelection = document.getElementById("citySelection"); "When you're visiting web pages and curious as to how..." s/b "When you're visiting web pages and ARE curious as to how..." EMCA-262 should be ECMA-262 6) 2nd sentence under "First Look at..." heading; "All you need to do, at a minimum, is include HTML..." s/b [bc of parallel construction] "All you need to do, at a minimum, is TO include HTML..." <body onload="hello();"> is an error as the function hello is undefined. The example runs quite well without the onload="hello(); as the popup is generated as soon as the script is executed. It's a bit discouraging to find such a fundamental error in the first example! "If it doesn't, chances are that JavaScript is disabled in the browser, or,..., Javascript isn't supported." s/b [bc of parallel construction] "If it doesn't, chances are that JavaScript is disabled in the browser, or,..., THAT Javascript isn't supported." Body tag in Example 1-1 should read: <body> The line document..writeln(msg); should be: document.writeln(msg); header: Quirks versus Standard Mode and DOCTYPEs In the examples, the DOCTYPE used is XHTML 1.0 Transitional, even though all of the examples in the book are served as HTML, and with an .html extension. Which DOCTYPE used can influence how the page markup, CSS, and even JavaScript are interpreted. It's all based on a concept called 'quirks' mode, as compared to standards or strict mode. As browsers improve their support for markup and CSS standards, they also have to maintain backwards compatibility with pages created for older browsers. One way to do this is to render a page in such a way that it supports the browser's earlier 'quirky' behavior, in a mode called 'quirks mode'. Depending on the DOCTYPE, either a browser will render the page using an earlier interpretation of a specification, such as Internet Explorer's infamous CSS box model bug; or it will render it according to a standards-based viewpoint. The XHTML 1.0 Transitional DOCTYPE triggers standards mode for most browsers, even if the page itself isn't meant to be interpreted as an XHTML document. To actually be served up as XHTML, it needs to be passed to the browser with the XHTML MIME type, typically triggered if the extension is .xhtml. However, other modifications need to be made to the page, such as ensuring it is valid XHTML, and also modifying the opening html tag to include the tag namespace, the vocabulary where all the elements are defined: <html xmlns=""> The HTML 4.01 strict DOCTYPE can also trigger standards mode for most browsers, but I don't use it because the page won't validate if proper XML markup is used. For example, the meta tag in most of the examples uses a closed tag format: <meta http- This is invalid with the HTML 4.01 strict DOCTYPE. For that DOCTYPE, we need to use: <meta http- Though HTML 4.01 is standard and still supported, we want to get into the habit of using proper XHTML markup as much as possible, even if the pages are still served up as HTML. This means less work when we are ready to make the step to serving pure XHTML documents, and eliminating all HTML. I mentioned earlier about how the DOCTYPE can impact on JavaScript. Well, in actuality it's how the document is served that can impact. Several of the examples use document.write or document.writeln. The reason I use these is that the example modifies the page. Later in the book, I introduce the Document Object Model (DOM) and demonstrate 'proper' methods of modifying the document. Until we reach that point, though, and in those cases where an alert box doesn't work well, the example uses document.write. Unfortunately, one of the drawbacks with using document.writeln is that it's not valid with a document that's served as XHTML. In fact, it won't work. The reason that documents rendered as XHTML don't allow document.write is that XHTML documents are considered to be valid XHTML, and won't display unless they are. If we can modify the documents with document.write, the browser has no way of knowing if the content being written to the page is valid or not. We could introduce 'bad' markup, which would render the guarantee of validity with XHTML. This same concern also exists with the use of innerHTML (which we'll look at later in the book). However, most browser do support this rather important property, and some browsers even validate the content being used to set innerHTML to ensure it's valid. For the rest of the chapter, the XHTML 1.0 Transitional DOCTYPE is used to ensure standards mode. Remember though that the namespace setting (), the extension (.xhtml), and the presence or not of document.write can impact on the document actually being served as XHTML. In the examples that can be downloaded for the book, multiple versions with different document types and extensions are provided for a couple of the examples, to demonstrate the variations discussed in this section. Each will have comments to discuss what changes were made so the document would serve correctly, as well as validate. a CDATA section, as demonstrated in Example 1-2. This chapter covers the the three basic... should be This chapter covers the three basic... The comment that reads, // becomes 71 should be, // becomes "71" to indicate that the result is a string data type, rather than a number. (N.B. This syntax was used correctly on the preceding line.) Note from the Author or Editor:Change third example on page 28, second line, to: var strValueTwo = 4 + 3 + "1"; // becomes "71" underscore to begin a variable, your best bet is to start with a letter." should read "Though you can use the $, letters, or underscore to begin a variable, your best bet is to start with a letter." "embedded in the page" does not match code, which uses the string "embedded in page" Two problems here: (1) The notation "2e31", which appears twice in the first sentence, is incorrect. This notation as written means "2 times 10 to the 31st power", clearly not the author's intent. Rather, this should read "2", followed by "31" written as a superscript in each of its two instances here. (2) The upper range for the functions you reference is actually "2 to the 31st power minus one", 2**31-1, the integer representation of which is 2,147,483,647. Note from the Author or Editor:Page 31 remove notation in second to last paragraph: Though larger numbers are supported, some functions can only work with numbers in range of -2,147,483,648 to 2,147,483,648. "The first, global.js, concatenates it's own string, globally in globalPrint, to the message." "globally in globalPrint" should be in fixed font to demonstrate it is a literal string message += " globally in globalPrint"; Since the variable is undefined, this generates an error in Firefox 2.x (though not necessarily other browsers, or earlier versions of Firefox). To prevent, use: if (typeof(message) != 'undefined') message += " globally in globalPrint"; "Hi, you were here" does not match the string in the code, which is "also accessed in global2Print" burningbird,net should be burningbird.net In this set of examples that begins on page 34, sValue is set to the empty string. This evaluates to false in JavaScript, not true. See the 2nd paragraph of the author's own quote of Simon Willison later in the page. Note from the Author or Editor:Commenter is correct. Third example on page 35 change to: if (sValue) // true if variable is both defined and given a value "...block in Example 2-1 is replaced..." should be "...block in Example 2-2 is replaced..." Examples of floating point numbers in middle of page should read: (to O'Reilly, note about superscript): 0.3555 144.006 -2.3 44.1(2) // note that the 2 should be typographed as superscript 19.5(e-2) (which is equivalent to 19.5(-2)) // note that the e-2 and -2 should be superscript Sentence that reads: "You can convert strings or booleans to numbers..." should read "You can convert strings to numbers..." more so then should be more so than sValue has not not been delcared should be sValue has not been declared hexidecimal should be hexadecimal var bValue = true; var sValue = "this is also true" should end with a semi-colon, like - var bValue = true; var sValue = "this is also true"; "Usually, I should add,..." should read "Most complex JS libraries are 'usually' not more than a few hundred lines, because some of the newer Ajax libraries can be quite large." "Usually, I should add,..." should read "Most complex JS libraries are 'usually' not more than a few hundred lines, because some of the newer Ajax libraries can be quite large." Sentence that reads: " - Represents a negative value" Should read " - Changes sign of value we're typed out should be we've typed out there is excellent Boolean algebra reference should be there is an excellent Boolean algebra reference nValue += 3.0; should be nValue += 30; The code snipped mid-page should read: var flag_A = 0x1; var flag_B = 0x2; var flag_C = 0x4; var flag_D = 0x8; switch (stateCode) { case 'OR' : case 'MA' : case 'WI' : statePercentage = 0.5; taxPercentage = 3.5; break; case 'MO' : taxPercentage = 1.0; statePercentage = 1.5; break; case 'CA' : case 'NY' : case 'VT' : statePercentage = 2.6; taxPercentage = 4.5; break; case 'TX' : taxPercentage = 3.0; break; default : taxPercentage = 2.0; statePercentage = 2.3; } "In this instance, the case values associated with the block are separated from the others by commas, which means any of the three can match." should be "In this instance, the case values all resolve to the same block, which means any one of the three can match." It's identical in behavior to listing out the options, separated by commas. In particular, the equality operator is implicitly used in the switch statement, which means that both of the following cases are applicable if the switch expression evaluates to "3.0": to In particular, the switch statement does not implicitly use an equality operator, which means that an expression of 3.0 evaluates true for the first case statement, false for the second: Code snippet should read: if (sValue <= 2.0) // true condition ? value if true : value if false; eval ("document.writeln(document.." + docprop + ")"); should be eval ("document.writeln(document." + docprop + ")"); holdAnwer should be holdAnswer given a starting location and length of string. (example) (following example) The substring and slice methods return a substring given a starting and ending index. The following using the explicit option: should be The following uses the explicit option: replaced by 's+' and /s+/ respectively. Example that reads: var newDt = new Date(1977,12,23); should be var newDt = new Date(1977,11,23); example of Date.UTC near the bottom that reads: var numMs = Date.UTC(1977,16,24,30,30,30) should read: var numMs = Date.UTC(1977,11,24,19,30,30,30) Last line of example code in page should read: <body> not </body> Also, the element directly after "</head>" should be "<body"> (not "</body>"). var removed = fruitArray.splice(2,2,'melon,banana'); should be var removed = fruitArray.splice(2,2,'melon','banana'); the letters in legnth are transposed, and should be "length". "The pop method removes the last element of the array, while the shift returns the first element." should read: "The pop method removes the last element of the array, while the shift removes the first element." second and third comments in Example 4-10 should read: // use shift to shift items off the array and then // now, same with unshift The result of running this JavaScript is: Should be The result of running this JavaScript in our target browsers (all but IE 6.x and IE7, which don't return a length when using unshift) is: document.writeln("new length is " + fifoNewArray.length );i should be document.writeln("new length is " + fifoNewArray.length ); (delete the 'i' at end of line) "If a condition isn't met..." Should be "If a condition is met..." Code snippet that reads: var func = (params) { Should read var fun = function(params) { Sentence that reads: Returning to the Array methods, the filter method ensures that elements are not added to any element... Should read Returning to the Array methods, the filter method ensures that elements are not added to an array... The sentence that starts "When the input field is clicked..." should read "When the second input field is clicked (the first loses focus)..." The sentence that starts "This makes sense, when you consider that cascade means that the lowest..." should read "This makes sense, when you consider that cascade means the highest..." add the following line as the first line in the function clickMe: evnt = evnt ? evnt : window.event; The sentence In fact, this is probably one of these most common reasons for triggering an event directly: s/b In fact, this is probably one of the most common reasons for triggering an event directly: "For the newer event systems, which use either the attachMethod or addEventListener..." Use "For newer event systems, which use either the attachEvent or addEventListener..." document.formname.addEventListener("submit", formFunction. false); The period should be a comma The reference to Example 4-9 should be Example 7-1 opts[opts.length] = new Option("Option Four", "Opt 4"); Should be opts[opts.length] = new Option("Option Four", "Opt4"); window.addEventListener("load",setupEvents1,false); should be window.addEventListener("load",setupEvents,false); } else if (document.someForm..attachEvent) { should be } else if (document.someForm.attachEvent) { "Also in the example, DOM Level event handling..." Should be "Also in the example, DOM Level 2 event handling..." Line: var theEvent = evnt ? evnt : window.event; Should be: evnt ? evnt : window.event You have this: function validateField(evnt) { var theEvent = evnt ? evnt : window.event; var target = evnt.target ? evnt.target : evnt.srcElement; var rgEx = /^\d{3}[-]?\d{2}[-]?\d{4}$/g; var OK = rgEx.exec(target.value); if (!OK) { alert("not a ssn"); } } Which does not work in Internet Explorer because you are referencing to the wrong object on var target = evnt.target ? evnt.target : evnt.srcElement; where evnt should be theEvent. See below function validateField(evnt) { var theEvent = evnt ? evnt : window.event; var theSrc = theEvent.target ? theEvent.target : theEvent.srcElement; var rgEx = /^\d{3}[-]?\d{2}[-]?\d{4}$/g; var OK = rgEx.exec(theSrc.value); if (!OK) { alert("not a ssn"); } } Can you please update the examples as well. Item 1 is written as "Select element event". This should be "second element event". Neither "First" (in item 2) nor "Document" (in item 3) should be capitalized, in order to correspond with the example 7-4 code. Note from the Author or Editor:Numbered list following code and first paragraph on page 153 change to 1. second element event 2. first document event 3. document event <p><a href="" onclick="parent.frameb.location.replace" should be <p><a href="" onclick="parent.frameb.location.replace No quote at the end of the line and the italic font should be removed. "The frametwo page has a link to another page called noway.htm..." should be: "The frametwo page has a link to another page called frame2.htm..." "color depths greater than the older eight pixels" should be "color depths greater than the older eight bits" wrong method name the text reads: The returnValue property is equivalent to returning false explicitly in the function, and preventDefault prevents the default behavior based on the object and the event. For instance, with a click event on a submit button, calling ---->stopPropagate<--- and setting returnValue to false prevents the default form submission behavior. I think instead, you need to chance name stopPropagate to stopPropagation Note from the Author or Editor:Page 198 (print) para 1, change stopPropagate to stopPropagation "To add text, the createTextNode factory object..." should read "To add text, the createTextNode factory method..." BODY tag "In the application, when the nodeValue property is not null..." should read "In the application, when the nodeValue property is null..." Code should be used in randomColor to check if the digit returned is single digit, and if it is, alter the string to add a preceding zero, as in if (r.length == 1) r = "0" + r; opacity = opacity * 100; img.style.filter = "alpha(opacity:"+opacity+")"; To var opac = opacity * 100; img.style.filter = "alpha(opacity:"+opac+")"; Move the following line, included in the setup function, outside the setup function: document.onclick=changeOpacity; And change the getOpacity function to: if (this.obj.style.filter) { var filterString = this.obj.style.filter; var derivedVal = filterString.substring(filterString.indexOf(':')+1,filterString.indexOf(')')); return derivedVal / 100; } else { return this.obj.style.opacity; } "valueb". should read: "valueb", Replace window.getComputedStyle with currentStyle and getComputedStyle with window.getComputedStyle. Instead of: "It tests, first, whether window.getComputedStyle is supported, and if not, tests for getComputedStyle." It should read: "It tests, first, whether currentStyle is supported, and if not, tests for window.getComputedStyle" return document.defaultView.getComputedStyle(obj,null).getPropertyValue(cssprop); to return window.getComputedStyle(obj,null).getPropertyValue(cssprop); "...the top value, is greater than a value (200 + a value * the number of elements..." should read "...the top value, is greater than a value (100 + a value * the number of elements..." Example 13-1: The order for the test of XMLHttpRequest needs to be reversed for IE 7 to work, or the overrideMimeType method removed, and PHP script adjusted accordingly, as IE7 does support XMLHttpRequest, but not overrideMimeType. In Example 14-6, I have a copy of Example 13-3 should be In Example 14-6, I have a copy of Example 13-4 The text says: "I've highlighted the lines of code where I've made changes based on adding in the logging functionality." But lines with changes are not highlighted. The variable ".someVariable" is also incorrect. The dashes should be underscores for CURRENT-MONTH var octNumber = parseInt(intNumber, 2); should be var octNumber = parseInt(intNumber, 8); switch(val) { case 'one' : case 'two' : result = 'OK'; break; case 'three' : result = 'OK2'; break; default : result = 'NONE'; } Change first line of solution to: var regexp = /fun/g; var newWindow = window.open("", "", "width=200,height=200,toolbar=no,status=no"); © 2015, O’Reilly Media, Inc. (707) 827-7019 (800) 889-8969 All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners.
http://www.oreilly.com/catalog/errata.csp?isbn=9780596527464
CC-MAIN-2015-27
refinedweb
3,771
65.22
A couple of weeks ago you asked for an example of the deviating y-mousepos.The thread is closed now.Running Win 8, MSVC2010Express and Allegro-508 Here is the adapted code to show the difference in y-pos. First a pixel is drawn on 1000, 500.Pointing and pressing LMB gives 1000, 490. Hello, I'm probably wrong, but I've compiled and ran your code and the only thing I can think of is that your mouse pointer doesn't click at (0,0). Can you add code that draws a pixel using mouse.x/mouse.y, and prints out mouse.x/mouse.y at the same time? Don't use the mouse cursor, just draw a pixel at the mouse.x/mouse.y instead and then read what it says when you hover over your white pixel at 1000/500. ----------------------------------------------------Please check out my songs: Protip: You can embed code directly in the forum with <code></code> tags. That is preferred to an attachment because it allows people to view the code without downloading, saving, and opening a file (OK, if the code is 30 files, or 10 MB in size, then attaching is probably better, but you get) Your screen is being rescaled. You ask for a 1920x1080 pixel window, presumably on a 1920x1080 monitor, but the Windows taskbar takes up some space so your window is scaled down. You can see this more clearly with this version of the example, that I hacked up. Some of the horizontal lines are blurry due to the scaling. You can use ALLEGRO_FULLSCREEN or ALLEGRO_FULLSCREEN_WINDOW, or translate from the unscaled mouse coordinates to match the rescaled screen coordinates. Bamccaig,I noticed of course the fact that my code was not looking the way other code-entries are showing in this forum and I was wondering how to fix that.Maybe you mean with your entry " tags." the same but I do NOT understand it.Some more details would be helpfull. Simple, for example: <code> #include <allegro.h> int main (int argc, char *argv[]) { return 0; } </code> would look like: #include <allegro.h> int main (int argc, char *argv[]) { return 0; } If I paste your code between <code></code> tags it would look like below. In capitalist America bank robs you. So here is the entry of the formatted text. #include <allegro5\allegro.h> #include <allegro5\allegro_image.h> #include <allegro5\allegro_native_dialog.h> #include <allegro5\allegro_ttf.h> #include <allegro5\allegro_primitives.h> Edit: OK, got it now. Do NOT use the dropbox. Did you read Peter's reply where he told you what's wrong with your code? Yes, I read his replay and did some trials.I used the line he commented out, but no result. I understand however what Peter means. Whatever I try, the bottom-bar stays visible, and the y-deviation is still there. BTW: Is editing of own post not possible in this forum? Is replying 2 times also not possible? Replying two times in a row automatically glues posts together. Edit button is to the right from post date label. Also, now that this is mentioned, Allegro shouldn't force window size so that it's smaller than requested, resulting in distortions. User should have no problem with requesting 100000x20000 px window. One doesn't with plain WinAPI and GTK+. Being serious is stupid, I'm done with it. Thanks Raidho,Due to your answer I could edit my "format"-entry and show that even I can do it. The y-deviation answer is puzzling me. Does it mean that my code is not wrong?Is using a smaller screenheight the answer for the moment. ( I need for my future program at least a height of 960 pix). I will try if using a height of 960 solvesthe problem, so I can go on) That's simply an oddities in Allegro design. You can find them every now and then. If you want to run a windowed application, then you shouldn't make window so big that it won't fit most of the desktops, make it 640x480 and the users will then adjust the sizes on their own.
https://www.allegro.cc/forums/thread/612547/982598
CC-MAIN-2018-34
refinedweb
693
76.22
: - Post on the Peter’s blog - An abbreviated HTML version of the proposal from Neal - Some thoughts from Gilad The use of closures and first class functions looks neat in Python and other languages, but the syntax proposed here is bearly legible - even in the simplest example. Remember the confusion on your first attempt at implementing a class with a generic interface? Yikes. Why do I get the impression closures will be forced upon the Java language for all the wrong reasons? Please don't sacrifice Java's easy-to-read syntax for the sake of looking 'cool' to the vocal minority. While closures make for scripts with some neat tricks, they can also lead to some horrible spaghetti code in larger code bases. Lets just keep Java simple. Or least legible. I've long wanted closures for dealing with anonymous classes for AWT/Swing action handlers and one-off threads that I have to use to get things off of or back onto the event-dispatch thread, both of which are tasks that sometimes require the incredible ugliness of declaring a final variable just to make a local variable visible to the thread. When I saw closures in Ruby, I immediately realized that this was a valid Java use for the concept. Java 1.7 should actually be Java SE 7.0 (Dolphin). I don't know. Closures and lambdas are great -- and I agree with Chris that it would clean up the syntax in some of the tossing of Runnables around, however I am going to go out on a limb here and say we just don't need it so much. Much like a lot of the talk about invokedynamic, I don't think we need this so much as just cleaning up what we already have. About 90% of what people seem to want invokedynamic for looks to me like something that could be solved by letting people create DynamicProxies of Objects. Here it seems that just having a more reasonable Reflection API and some more sane definitions for taking care of this stuff. It is not that closures are inherently bad, I just really tend to feel they make code less maintainable as you get to end up hiding "real" code in hard to find places. In terms of the proposed syntax, it really doesn't seem like a big step down from Pythons closures at all, just with the return types cast. Certainly not as bad as the "XML in the language" proposals I have seen. The syntax is horrible. "Thou shalt not add reserved words to Java" is getting annoying. Ok, seriously, someone tell me why: int plus2(int x) { return x+2; } is SOOO much worse than: def plud3 (x): return x+2 Closures are alright. There are definately times when they are needed to keep a clean interface. But I'm sure they will be misused more often. By all means put it in there, but how about making Java SE leaner first. And while you are at it lets have a homogenous single point of configuration for Java apps, rather than the gazillion of configuration files. A simple pluggable set of _interfaces_ for loading and accessing the configuration. You can play with closures in java already; groovy provides this: /Lars "If you evolve a language far enough you destroy it." - Josh Bloch Closures are already 90% solved by interfaces and anonymous classes, and the latter are actually more strongly typed and cleaner. I am personally against adding more features to the language that decrease readability with very little benefit. Generics has already made a mess of things. Closures are already 90% solved by interfaces and anonymous classes, and the latter are actually more strongly typed and cleaner. I am personally against adding more features to the language that decrease readability with very little benefit. Generics has already made a mess of things. We are creeping ever-closer to the cliff that upon falling from, will see us tumble into the syntactic abyss/nightmare that typifies the living Hell of the C++ programmer. Weren't we Java-gnomes the ones who didn't want this kind of complexity and thus the two main selling points of Java -- readability and ease-of-use -- are now at risk of being lost so some folks can have their wee programmatic nifties? Ugh. Go code in C++ if you want to be a full-bore weenie, but for the love of God, leave the rest of us out of it. Interestingly how much venom this idea has attracted from some people. I don't like the horrid syntax, but that's mainly the ridiculous amount of static type stuff that's needed. And yes def plus3(x): return x+3; is easier on the eyes than int(int) plus2b = (int x) {return x+2;}; I'll be tracking this development closely - and hope that a nicer version is actually implemented. Less is probably more in this case(plus it looks pretty ugly) int plus2(int x) { return 2 + x; } // legacy style int(int) plus3 = (x) { return 3 + x; } // closure style int(int) plus2b = plus2; int(int) plus3b = plus3; plus3 = plus2; // OK, compatible type and plus3 // is a closure plus2 = plus3; // Illegal, plus2 = legacy Please!!..NO. Don't complicate the lang sytnax. Java's beauty is the simplicity in lang. Why do they spend most of the hour in doing the syntax thing but not spending enough time to fix those bugs which have been stucked for years, especially the problems on desktop deployment. Why we need to ask clients to download 20M of JRE to run our 400k applet? Yes, it is downloaded once and for all but no customers would love to install something as big as 20M for their occational usage. Why does it just do what the flash plugin do? Just have a client vm with graphics mostly and do the show. Take out unneeded stuff and chop it small enough (1 - 2M). What the flash platform does is to gain the battle field in controlling the desktop market first and seed your BIG VM (apollo) there afterward. Then you are all set. NO!! This will cause nothing but issues and as other people have pointed out is not an absoluate necessity. KISS So Java is going to run a full circle and end up as complex and as unmaintainable, and as easy to shoot yourself in the foot as C++. And then a small community (or perhaps another "visionary" company) will come a long and simplify Java to something else nice, small, easy to learn and productive. Such is life. Is Java 1.7 trying to compete with Linq/C# 3.0 I've done both Java and C# development, and closures/anonymous methods have been really nice in my C# endeavors. Sure they can be abused and misused, but so can a lot of things. You can't force bad developers to write clean code no matter how simple the language is. Closures do server a purpose, but like anything they can be abused. But I guess I'm a little biased in my opinion, I also LOVE generics... :) Seems like a new keyword is needed: public static void main(String[] args) { int plus2 (int x) { return x+2; } function(int, returns int) plus2b = plus2; System.out.println(plus2b(2)); } closure: function(int x, returns int) plus2b = function(int x, returns int){return x+2}; A little more verbose, a lot more readable. BTW, I thought for(Ojbect o: objs) {} syntax wasn't very readable at first, but for loops are used all the time, so it's easily readable now. Closures in Java would typically follow this ... ( as per my understanding ) { int , int => int } ( takes two ints and yields an int But invariably closures can give java most powerful features ... like for example i could have a complex closure like { { int x, int y => x + y} ,{int z, int a => z + a} => int zbc }, wherein we have two closures yielding an int and both taken as args and yields a final int zbc. or something to that sort ... very intuitive and powerful. the closure result will be an object type ... in response to anonymous code about spaghetti code, closures are being added because : we have anonymous classes which are inherently weak. For example you cannot use the variable in the same way inside a lexical scoping ( the scoping of the anonymous class that is ). Example i declare a variable x = 100; and then in anonymous class i gotta access that variable do some calculation, then i have to make the modifier of x final ... why all of this required, when an elegant solution will give me all i need ... thats the thought behind closures ... Vyas, Anirudh I agree with Venant. Fix the bugs and the current issues with the platform rather than adding new features. Keep Java Simple. If you want closures use Groovy.
http://www.oreillynet.com/onjava/blog/2006/08/will_we_have_closures_in_java.html
crawl-002
refinedweb
1,495
70.02
Today first person shooter games like Counter Strike, Quake and Unreal Tournament are played online mostly. Each game ships with a built-in server-browser to show active servers on the net. A lot of people would like to have such a server-browser or similar tools as standalone applications. Maybe they want to look for more than one game at the same time, maybe standalone tools have additional features (like enhanced filtering) or are lightweight applications that do their specific job much faster than in the in-game-browsers, whatever. With Unreal Tournament 2003 being available soon, I thought it would be a good idea to share my code. Provided with this backend-class, you are able to develop your own server-browser, server-watching-tool or whatever you like. There are several sources of information on how to get the latest status of an Unreal Tournament server (see the Links-section below). (I personally play none of the other games, so I don't know about their query-protocols, but i believe, they are similar to the Unreal-ones presented here.) When speaking about Unreal Tournament 2003, there are two protocols available. The GameSpy-protocol that already worked with Unreal Tournament (the old version of the game) and a brand-new protocol, that was first seen with the Unreal Tournament 2003 Demo available in mid-September 2002 (when this article was written). The 'old' GameSpy-protocol is well documented by Epic (the company that does all the Unreal-series games). On Epic's Unreal Technology Site there is a page on server-communication that gives us a lot of information. In short: send '\status\' via UDP to the server, but add 1 to the official gameport (e.g: server-address = 62.93.201.43:8980 - send query to 62.93.201.43:8981). What you get back is a bunch of packets, each in the following format: "\property\value\property\value\...\queryid\X.Y". Each query is assigned a unique ID to - all the packets you receive from one query should have the same X-value. The second number Y is the packet-number (1-indexed). This is essential, since UDP is a connectionless protocol, thus packages may take different routes through the internet and may arrive out of order. Additionally from time to time packets get lost. Checking the packet-numbers you can asure, that you received all packets in the correct ordering and that none is missing. The last package ends with the string '\final\'. To give you an idea, of how a typical answer looks like, here is a very simple one: \gamename\ut2d\gamever\1080\minnetver\1000\location\0 \hostname\ UT2k3 TDM2 @Clanserver4u\hostport\8980 \maptitle\Citadel\mapname\CTF-Citadel\gametype\xCTFGame \numplayers\0\maxplayers\12\gamemode\openplaying\gamever\1080 \minnetver\1000\AdminName\hEx\AdminEMail\hex@inunreal.de\queryid\66.1\final\ Currently there's no player active on the server and we received all the information in one single packet. Querying an 'old' UT-server, you would use Ip:gameport+1 with that GameSpy-protocol (as written in that official document I mentioned above). Now in times of UT2003, that protocol was moved over to gameport+10. On gameport+1 there's this new, let's call it Unreal-protocol, which is byte-oriented. There's no official documentation on this protocol, but since it is far more robust (no backslash-bug, see below) and faster to parse, it will be used a lot in the future. See the Links-section for pages of people that looked at the packets (sent by the original built-in UT2003 server-browser) and have written down, what they found out. Update: After observing the two protocols for a few weeks now, there are certain important things to mention. See the Protocol Discussion Section below. You guess what the two 'in-depth' parts of this article (and the main problems writing the code for our purpose) are: connect to the server and parse the answer. But before getting into detail, let's have a look at how to use this class: First thing to do is to add the provided C#-file UT2003Server.cs into your project. Next you construct your server-object and call at least once the Refresh()method. This method connects to the server, parses the answer and assigns the current values to all the server-objects properties. Well, and then you just use them for your displaying pleasure. Refresh() server = new UT2003Server("62.93.201.43",8980); // ... try { server.Refresh(); textBoxServerName.Text = server.HostName; textBoxMapTitle.Text = server.MapTitle; textBoxPlayerCount.Text = server.NumPlayers.ToString(); for(int i=0; i<server.Players.Length;++i) { listBox.Add(server.Players[i].Name); } // other properties are Version, MinClientVersion, MapFilename, // Gametype (DM, CTF, ...) // MaxPlayers, AdminName, AdminEmail, Player-array containing // name,ping,frags for each player // and more... } catch (GameserverQueryException ex) { MessageBox.Show(ex.Message,"Error"); } Pretty simple. I would strongly recommend to put the Refresh() into a try-block and check for GameserverQueryExceptions and handle these cases accordingly in your application. UDP really isn't that reliable and from time to time the server doesn't respond or responds too slow or the user just isn't online. These exceptions do really occur. try GameserverQueryException Refinement I: Using server.Timeout you can adjust the maximum number of milliseconds, the query-class waits for a response before raising a ServerNotReachableException (which is a GameserverQueryException) when executing a refresh. server.Timeout ServerNotReachableException Refinement II: Using server.Protocol you can decide what query-protocol is used, the next time you call Refresh(). What protocol you use is up to your needs. The GameSpy-protocol delivers less information on the game settings but current player-statistics. The Unreal-protocol gives outdated player-information but more general settings. I've introduced a third protocol, that is a mixture of the two. First it uses the GameSpy-protocol to get the current player-statistics and after that a part of the Unreal-protocol enlarges the information on the game-settings. server.Protocol Attention: Be aware that some properties of the server-object may equal to null after refreshing. Some servers just don't deliver all possible information. Additionally future patches of the game may change or add certain properties. Hopefully I have the time to update this article each time something changes dramatically. null Update: In version 2.1 there was a new property introduced to the class, named ServerVars. This is a collection of all the server variables available, making the usage of the class far nicer and easier. The 108 lines of code for displaying all these values in the example-application dropped down to 7 lines. ServerVars for(int i=0; i<server.ServerVars.Count; ++i) { ListViewItem prop = new ListViewItem(new String[] { UT2003Server.Prop2En(server.ServerVars.GetKey(i)), server.ServerVars[i]}); listViewServer.Items.Add(prop); } The new static function Prop2En translates a property-name given by the server into a human readable English phrase (e.g. "gamestats"→"Stats logging" or "maptitle"→"Map Title"). Prop2En There are a few things left to mention about ServerVars. As, for ease of use (i.e. without casting), all the items in the collection need to be of the same type (which is string), the properties, that are not strings by nature are converted internally. So you don't get Port, NumPlayers, MaxPlayers etc back as ints, but as strings, when using server.ServerVars["numplayers"] instead of server.NumPlayers. But as both are still available, you can choose the way you like it. The same with mutators: server.Mutators returns a string array containing all mutators, server.ServerVars["mutator"] one string containing all mutators as a comma-separated list. Another advantage is the fact, that the ServerVars-collection is cleared, when you assign a new IP to the server-class. string int server.ServerVars["numplayers"] server.NumPlayers server.Mutators server.ServerVars["mutator"] The rule is: When you want to display all server-properties in a list or similar, loop through ServerVars. When you need access to a certain server-variable use the corresponding property. There is no real querying of the masterserver, at least if you don't know the authentication-algorithm, that UT2003 uses to connect to the masterserver. EPIC doesn't want everyone to query the server via TCP (as in good old UT times), but rather updates a textfile every 20 seconds (or something in that dimension) which is free for download. There's one file for the UT2003 demo-version and another for the retail: The Format is pretty simple: one line for each server and each line containing three tab-separated values, which are IP → Gameport → GameSpy-Queryport static UT2003Server ... UT2003Server [] serverList; serverList = UT2003Server.QueryMaster(); ... That's it. The UT2003Server.QueryProtocol enumeration contains three entries. And it's up to you to choose, which protocol is used when querying a server. Some thoughts may help you to decide, what fits your needs best. But there's nothing, you could damage, so just try, if you're unsure. UT2003Server.QueryProtocol None of the two protocols is better in each and every case. It seems, that the output of the new Unreal-protocol is cached on the server and far from being up-to-date. If you query a server every 2 seconds, you see that the output of the GameSpy-protocol changes everytime something happened (especially the ping and frags of the players) while the Unreal-protocol still delivers things from the past. On the other hand, the GameSpy-protocol gives you less information on the game and the server in general. Thus getting information about mutators, friendly fire, gamespeed, etc., you have to use the new Unreal-protocol. Summary: only using both protocols gives you complete and up-to-date information. That's why I introduced the Mixed-Protocol, that does the job for you. Mixed The following table outlines the properties, the protocols deliver exclusively (i.e. the other one doesn't): Some Servers don't have their GameSpy-Queryport on Gameport+10, since this is freely configurable by the server-admin. When I wrote this, there were 1196 servers online, 951 (79.5%) of them using Gameport+10 (default), 118 (9.8%) using Gameport+12, 67 (5.6%) using Gameport+1 and 60 (5.0%) using other Offsets (even negative ones, i.e. Gameport > Queryport). That means: there's no 100% reliable way to get the Queryport out of the Gameport, but with +10 or +12 you are right with a probability of 90%. Using Gameport+1 will work with half of the remaining 10%, but I have no idea, where these servers put the Unreal-protocol Queryport (since Gameport+1 is the default value here). But it turns out, that the entire story isn't that frustrating, because EPICs masterserver just tells you the GameSpy-Queryport - so if you get your servers from the masterserver, everything is alright. If you want to observe a certain server given by IP and Gameport, you have to ask the administrator for the GameSpy-Queryport or try yourself. The message is: it is not always Gameport+10! If you just want to use the class and don't care much about details and the problems I had to face, you can stop reading here and do a little C# instead. Well, now lets dive into the Details. First some preparatory work: The UdpClient-class in the System.Net.Sockets-namespace offers a blocking Receive() method, that doesn't return until a UDP-packet arrives at your computer. What I wanted is something that does return null or raises an exception, if a timer expires, such that you don't have to wait infinitely if something goes wrong at the network communication level. Daniel Turini sent me with this post here on the codeproject forums in the direction to try UdpClient System.Net.Sockets Receive() UdpClient::Client.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReceiveTimeout, 10000); but that just didn't work - would have been too nice. Notice the Client-property is protected, you have to build your own child-class to be able to access it. And that's what I did in the end to solve the problem: I created a new TimedUdpClient class which I present shortly here: Client TimedUdpClient public class TimedUdpClient : UdpClient { private int timeout = 1000; // 1 sec default private Thread timeoutWatchdog; public int Timeout { get { return timeout; } set { timeout = value; } } public new byte[] Receive(ref IPEndPoint remote) { byte [] ret; timeoutWatchdog = new Thread(new ThreadStart(StartWatchdog)); timeoutWatchdog.Start(); try { ret = base.Receive(ref remote); } catch (SocketException) { ret=null; } finally { timeoutWatchdog.Abort(); } return ret; } private void StartWatchdog() { Thread.Sleep(timeout); this.Send(new byte[] {0x00},1,"",8000); // port is arbitrary } } It makes a new Receive()-function available, that starts a "watchdog"-thread before calling the original blocking Receive()-function from its base class UdpClient. The watchdog just waits the specified amount of time and then just sends a single byte on the same socket the client is listening to. Unfortunately this byte is not received normally (so that one could distinguish between packets sent from outside and packets sent by myself), instead an exception is raised somewhere inside UdpClient::Receive() and the function returns. Hey, actually that's the only thing we wanted. Maybe there are nicer ways, maybe not. But it works. UdpClient::Receive() Having that and the base functionality of UdpClient due to inheritance, we can start querying the server. The first version of this article had a lot of code here, which was kind of straightforward and only lengthened the page needlessly. I will give you only the core parts now, if you're interested you will look at the code anyway First lets have a look at the GameSpy-Query part: // set the server remote = new IPEndPoint(ip,port+gameSpyQueryOffset); // form the query byte[] query = Encoding.ASCII.GetBytes("\\status\\"); // send the query udp.Send(query,query.Length,remote); // receive result answer = ""; byte[] receive = null; DateTime start = DateTime.Now; TimeSpan diff; do { receive=udp.Receive(ref remote); // turn received byte array into a string if (receive!=null) answer += Encoding.ASCII.GetString(receive); DateTime now = DateTime.Now; diff = now-start; } while (!answer.EndsWith("\\final\\") && receive!=null && (diff.Seconds*1000+diff.Milliseconds < timeout)); if (receive==null || (diff.Seconds*1000+diff.Milliseconds >= timeout)) throw new ServerNotReachableException("timed out"); // evaluate AnalyseGameSpyAnswer(); Sending the query via UDP needs the query itself to be in a byte-array, which is easily constructed with the help of the Encoding-namespace. As you know from the previous discussions gameSpyQueryOffset will be equal to 10 in most cases. The AnalyseGameSpyAnswer-function is described later on in the parsing-part of this article. byte Encoding gameSpyQueryOffset 10 AnalyseGameSpyAnswer Update: I found one server on the masterlist, that returned nonsense, when asked with '\status\' on the correct port. Since I want that class to work with every server on the masterlist, I updated the piece of code above with an additional timeout-condition, when there is something received, but no '\final\' could be found within the given timeout. The ominous server sent an increasing stream of zero-bytes... The Unreal-Query is devided into three parts (see the documentation for details), where all three queries are sent and evaluated, when the Unreal-Protocol is used, and only the second query is used, when working with the mixed protocol (since basic and player information is already there from the GameSpy-Answer in that case). // server props query remote = new IPEndPoint(ip,port+unrealQueryOffset); byte [] queryServer = {0x78,0x00,0x00,0x00,0x01}; udp.Send(queryServer,queryServer.Length,remote); receiveServer=udp.Receive(ref remote); This is the code for the second query, which receives the server properties packet. The two other ones are analog. Afterwards the AnalyseUnrealAnswer-function is called, which extracts the correct information from the byte-arrays. AnalyseUnrealAnswer First, I'll give you a real world example: \gamename\ut2d\gamever\1077\minnetver\1000\location\0 \hostname\The Drunk Snipers UT2003 CTF Instagib (DEMO) #2\hostport\7777 \maptitle\Citadel\mapname\CTF-Citadel\gametype\xCTFGame\ numplayers\11\maxplayers\16 \gamemode\openplaying\gamever\1077\minnetver\1000\AdminName\ Sirius\AdminEMail\sirius@drunksnipers.com \player_0\Player\frags_0\1\ping_0\ 63\team_0\0 \player_1\Perdition\frags_1\0\ping_1\ 221\team_1\1 \player_2\Porter\frags_2\10\ping_2\ 143\team_2\1 \player_3\JoeDozer\frags_3\20\ping_3\ 289\team_3\0 \player_4\Kee\/n\frags_4\38\ping_4\ 117\team_4\0 \player_5\Rekklih\frags_5\42\ping_5\ 166\team_5\1 \player_6\Hanover_Fist\frags_6\4\ping_6\ 196\team_6\0 \player_7\HELLAS-4774CK\frags_7\101\ping_7\ 107\team_7\1 \player_8\Psycho\frags_8\49\ping_8\ 105\team_8\1\queryid\29.1 \player_9\Levodopa\frags_9\45\ping_9\ 36\team_9\0 \player_10\Psy[Duck]\frags_10\49\ping_10\ 73\team_10\0\queryid\29.2\final\ This answer comes in two packets (29.1 and 29.2) in correct ordering and 29.2 is definitely the last packet, because it ends with '\final\'. There are 11 players currently on the server - the demo-version had a bug, that caused the server to send a weird team-value. The class takes care of that and simply sets the team-value to an empty string, when querying a UT2003-Demo server. There's another problem in that answer, that is well known to GameSpy-Style-Query-Programmers. The 'very cool' Player named 'Kee\/n' confuses our nice little answer.Split(new char[] {'\\'});-parser, because he uses the splitter-character (the backslash) in his name. If you don't take care of that, you will get his name has 'Kee' and after that a property named '/n' with the value 'frags_4' - which is nonsense. *grmpf* answer.Split(new char[] {'\\'}); But hey, you're a programmer and programmers are there to solve problems. Well, at least I solved the problem with the following hack: if the next property immediately after 'player_x' is not 'frags_x' or 'queryid' a playername with a backslash is detected and handled appropriately. In fact a backslash ('\') and the nonsense-property-name ('/n') (which is the second part of the players name) are appended to the name and the nonsense-value ('frags_4') becomes the new property-name. Put that into a while loop (to handle even cooler players using more than one backslash in their name) and everything goes fine. Since there's nothing technically new and it should be an easy read, just go ahead and look at the code, if you're interested in the details. When you stick to the specification, parsing the byte-arrays is pretty straightforward. I needed a function that extracts a string from the byte-array, another reading an int and the rest is ++pos. ++pos Notice, that this protocol may change with each patch version released for UT2003. Currently it is working with the UT2003 Demo and Retail version! Maybe some patched servers deliver additional information, maybe something is altered - please contact me, if you think you've found a server, that sends things that are not taken care of by my class. What about the player-properties? I included a very basic Player-class and wanted the user from outside to use a simple Player-array. The problem with arrays is, that you have to know how big they are, when you create them. The other possibility, an ArrayList, is more dynamic by nature, but it only handles objects. So I made a tradeoff and went the middle way: When parsing the answer, the players are put together in a growing ArrayList and at the very end it is kind of converted into an array, which is fast accessible and doesn't require the user to cast anything. The same principle is used for the returned string-array containing the mutators and the list of servers returned by the masterserver query function. Player ArrayList object Btw: I won't rely on the 'numplayers'-property to be there all the time. Using that (and assuming it always is delivered before the first player-property) might be a solution to know the size of the player-array before the first player occurs. A little request: if you're using that class when debugging your application or for private use, I would be glad if you set RaiseExceptionOnUnhandledProperty to true. This raises an exception, when the class receives a property that is not known so far. When this member is false (default) unknown properties are just ignored. I've tested the class a lot, but you never know. If you've found an exotic server delivering unknown properties, please contact me and I will extend the class with the new property. RaiseExceptionOnUnhandledProperty true false This version is compatible with 2.0 - so replacing UT2003Server.cs with that new version in an existing project, should work fine. ExtractString() AnalyseUnrealAnswer() Sorry for my bad English. I'm trying to improve it, but here and there, you see I'm natively German-speaking. Prospective: When future patches are released, I will make sure, the class runs with all versions of UT2003. This is my first article. Feel free to make suggestions, both on the code and on the article itself. Email: ruepel@gmx.
http://www.codeproject.com/Articles/2942/UT2003-Gameserver-Status?fid=9026&df=90&mpp=10&sort=Position&spc=None&select=1232267&tid=393519
CC-MAIN-2014-41
refinedweb
3,513
55.54
Evan Prodromou <evan@debian.org> writes: > In other words: I don't think anything here is technically defective > nor out of order with Debian policy. I don't think there's anything > misleading or conflicting. So have you completely missed the previous threads on this list bitching about your package names? The package names _are_ misleading: they imply (even if they don't explicitly state) that these packages are somehow more generic than the other 3,478 terminal-emulators/pdf-viewers/etc present in debian. A reasonable solution is to rename the packages gnustep-FOO. The binary names will then not be the same as the package names, but that's not a particular problem. The binary names will _still_ be misleading, but it will be less harmful, because it only affects those people who installed the packages. Such generic names may have been fine on NeXTStep, where a single company controlled exactly what apps users would see, but they are simply not a very good idea in heterogeneous environment like debian, and this applies especially to the packaging namespace. -Miles -- `Life is a boundless sea of bitterness'
https://lists.debian.org/debian-devel/2004/04/msg00143.html
CC-MAIN-2017-13
refinedweb
188
53.71
Minecraft Region Fixer. Poll: What do you think Region-Fixer is lacking of? What do you think Region-Fixer is lacking of? - Single Choice - Be able to fix worlds that have non-vanilla block ids in the world (created by mods, etc). 31.2% - A graphic user interface (GUI). Because the command line is so 80's 64.6% - More features. 4.2% Ended Apr 23, 2015 Posts Quoted: New version out! (0.2.0) It has a flashy GUI for all you and some new cool features as: It took mucho more than expected, but life is busy! Hi, I have a Feed The Beast Infinity server running 1.7.10 which now is having a wrong located chunk. When I start the server I get this after a while: I've tried Region Fixer v0.1.3 and v0.2.0 without luck. When scanning the world with v0.1.3 I get this: And when running v0.2.0 I get this: I've tried deleting the region file "-1.0.mca" and then the server runs fine, but everything we had built is gone. Is there any possible solution to the problem I'm experiencing? Hello! It looks like the problem is that python ends up out of memory while scanning that region file. This is a known problem with python and 32bits when there is a "too many entities problem" somewhere (note: I'm pretty sure your system is 64 bits, but the executable you downloaded is not). If I am correct, you can scan the region file using regionfixer source code and the python interpreter (for 64bits systems). You can give a look to this: You can also send me that region file, I'll be glad to scan it myself and send it to you fixed. You can send me a PM. Good luck! Thats right, my system is 64bit. I'll take a look at the python interpreter. Thanks! Btw PM sent. Hello! I'm sorry to say that I have no idea what multicraft is and I don't know how to set regionfixer to auto remove them within it. And I can't investigate in the internets right now how to do it. I can only point you in how to use regionfixer from CLI or GUI, which I guess you can use if you stop your server. In order to remove corrupted chunks and other stuff you have to use the '--delete-*' options. For example, from console you can do: python regionfixer.py --delete-corrupted (this will remove corrupted chunks, you can also use '--dc') Please refer to the help for more info! I won't be able to answer in a week or so but I will try to look this when I can. Good luck! Hi Fenixin =) I'm very glad to have found your program, because I'm running a server and have just had a crash with that "relocating chunks" error. I can't login without it crashing immediately. I'm hoping Region Fixer will help (About to try the CMD). Just letting you know that I tried running the GUI first and the program crashes immediately (I have Win7 64bit). Here are the details of the crash. Sorry, my Windows is in Portuguese =P Nome do Evento de Problema: APPCRASH Nome do Aplicativo: regionfixer_gui.exe Versão do Aplicativo: 0.0.1.0 Carimbo de Data/Hora do Aplicativo: 49180193: 03039602 Versão do sistema operacional: 6.1.7601.2.1 I'll update you as to whether I'm able to fix my world or not with the CMD. o/ EDIT: Update =) So I ran everything through CMD. First I ran the -dc argument only, then reuploaded the world to my server but it got stuck while starting the server (because of a few client mods which are not used server side). I simply put in the command "/fml confirm", and restarted the server, and it worked! I logged in fine. There are a few chunks (which I assume were the corrupted ones) which simply WILL NOT load, hehe. So they were permanently deleted. (Which is fine because they are far away from our spawn area). I then went further by trying to replace the chunks instead of deleting them... So I loaded up the same seed in a local world and ran the --backups and --replace-corrupted arguments. (I had "backed up" the broken world on my PC). Sadly, region fixer crashed. Here's the report: Do you want to send an anonymous bug report to the region fixer ftp? (Answering no will print the bug report) [Y/n] n Bug report: <type 'exceptions.indexerror'=""> Traceback (most recent call last): File "regionfixer.py", line 506, in File "regionfixer.py", line 433, in main File "regionfixer_core\world.pyo", line 906, in replace_problematic_chunks IndexError: list index out of range I think this might be because I prrobably didn't explore far enough in the new world I created with the orig. world's seed so the chunks weren't ever loaded there for RF to read.(?) Anyway, at least the server is up and those few faraway chunks won't really be a problem =) Another EDIT: Sadly, it seems, the problem has not been fixed. After a while, it crashed in the same way as before. I forgot to mention that in doing everything as I did above, the scan said there were 11 corrupted chunks and 11 chunks in the wrong location (or something like that). After the -dc, I scanned again and it said there were 0 corrupted chunks but still 11 "wrong location" chunks. The crash log had something about chunks being in the wrong location so I'm assuming they're the culprits... Sadness! Looks like I'll be starting again... Ah, well =) ", cursive">==== Modded Minecraft Let's Plays | Fractals in Minecraft ==== Hey! It looks like I'm late! First, thanks for reporting back! It's very useful to have more info in bugs and crashes. With all the info you gave me it's clear that the GUI have some kind of problem and needs some fixing (you are not the first person to report this but you have been the most helpful). Second, the crash while replacing chunkgs... I'm afraid is a known bug! I have a few reports in the bug autoreporter. As soon as I get some free time I will look to this and other issues (probably in a month, I'm afraid). It would be helpful to be able to replicate the bug. Could you send me to region files (broken and replacement) to reproduce the problem? I don't really know if I need them but it will be better when I try to fix to have a way to replicate the problem. Third, although you probably know this already, you can also remove wrong-located chunks with '--dw'. Just in case you didn't know. Well, now about your chunks. You say there are chunk that don't load... what do you mean? they give you error while loading? Or they simply don't show in game? If it's the second that is really strange and should not happen. When region-fixer removes a chunk minecraft should replace it as soon as it loads the region file. If it shows in game as an empty chunk there is another problem around. Maybe a server mod? Only empty chunks (chunks that exists in the region file but contain only air blocks) show in game as empty chunks. About replacing chunks from a local world with the same seed... you shouldn't need that, as soon as you load the world in minecraft it should regenerate those removed chunks. Also, if the world is not too big and you feel like so, you can send me the world and I'll try to fix it. I'm always searching for new worlds to test new stuff in region-fixer. I hope that helps! Hello =) Great to hear from you. I'm happy my information was useful to you ^^ It's the least I can do for the free use of your little program, which, as it seems, might still save my world. Because I didn't know about the --dw argument! =D It's busy scanning now. If the wrong-location chunks are fixed, I'll reupload and let you know how it goes. I didn't know about --dw because I was using this page to get my info: I guess I should have used your in-program help command ;P About the crash when trying to replace with a back-up world: I just realised that the world I was trying to fix had already been fixed with --dc. So there were no corrupt chunks to replace! (I thought doing that would replace the wrong location chunks..(?) .. ) So yes, that's probably why it crashed? Okay, so, the --dw scan just finished and the wrong location chunks have been removed! Woot ^^ But there is something I didn't notice before: There are 75 "shared o." chunks. What are those and do you think they'll be a problem? o.O Will upload anyway and see what happens. Good luck with that GUI! EDIT qucik update. I googled the shared o. and then also actually used your --help command, lol. So I have now also run the --ds argument, and will try uploading. Hold thumbs! =) One last EDIT: Big sigh. Despite our finest efforts, the logger of crashes has won the day. I don't believe your program can fix this... It seems like it has something to do with Tile entities and the PermGen size... Here's the part of the error that matters: "The state engine was in incorrect state SERVER_STOPPING and forced into state SERVER_STOPPED. Errors may have been discarded." ... This came up only when I flew around that problematicregion with the permanently empty chunks. So anyway.. Thank you for the help. I've learnt a lot. I'll start over now, with all mob and animal spawning disabled. ", cursive">==== Modded Minecraft Let's Plays | Fractals in Minecraft ==== I'm sorry to hear that. Best luck in your new world! Hi guys, I'm having issues with my server. I just ran your most updated region fixer and I got some errors. 'level.dat' is corrupted with the following error/s: Not a gzipped file Unreadable data files: villages_nether.dat villages_end.dat villages.dat Any ideas how I can repair this? Thanks! Hello! If level.dat is corrupted there is nothing that you can do to fix it. You can use a backup to restore it or, if you know the seed of the map, create a new map in minecraft with that seed and copy the level.dat to your broken world (note, this method will lose a lot of info). And the other data files are more or less the same, once corrupted there is not much to do, if you don't have a backup the information in the file is lost. So the only why to "fix" this is to replace the corrupted files fro ma backup by hand. (Yes, backups are important) I hope that helps. Don't hestitate to ask if you need anything else. Hi Fenixin, Thanks for your reply! Unfortunately, my backup contains the corrupted world However, I am still able to play the world. Everything seems fine, except for the fact that my friend can't load her items/levels. If I keep playing this world, will it overwrite the corrupted file after I shut down the server? In other words, if I keep playing this world, will it corrupt future saves? Thanks for your response! I'm afraid I have no idea. But don't ever use region-fixer in a world that is currently being used by a server. Good luck! If you have any news about your world, share them here! Great! Thanks for your responses Fenixin! This needs to be easier to install, it's a fantastic program but people are getting put off by the difficulty of setting it up. Edit: IT FOUND NOTHING WHHHHHHHHHHHHHHHYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY IT TOOK 2 HOURS TO GET IT WORKING PERFECTLY Hey! I don't think its that hard to install.. you can download the windows executable and it should work out of the box. And maybe your world is not broken at all! If you have any sugestions regarding to install and usage, please, I want to hear them. I really want to improve it. Do I have to stop the server to remove corrupted chunks? Thanks. Yes yo do! If you don't stop the server before fixing your world, really bad things can happen! Hi Fenixin, i have searched and searched around this topic and doesn't seem that I'm getting anywhere. 1. I have a Mac. 2. I downloaded Python 2.7 3. I downloaded the gui program. My world/server works untill a player (including me) gets near the corrupted block. Please help me! I looked and looked around but forgive me if i have missed anyone reporting the same problem. Thank you so much!!!! Edit 1: Just to make things clearer, what i do is: open "regionfixer_gui.py" with python launcher, and then it takes me to terminal. I don't know what to do next. Forgive if my post sounds stupid. The terminal window that appears shows this: Last login: Tue Aug 18 18:06:57 on ttys000 Salehs-MacBook-Pro:~ bu9loo7$ cd '/Users/bu9loo7/Downloads/Minecraft-Region-Fixer-master/' && '/usr/local/bin/pythonw' '/Users/bu9loo7/Downloads/Minecraft-Region-Fixer-master/regionfixer_gui.py' && echo Exit status: $? && exit 1 Traceback (most recent call last): File "/Users/bu9loo7/Downloads/Minecraft-Region-Fixer-master/regionfixer_gui.py", line 11, in <module> from gui import Starter File "/Users/bu9loo7/Downloads/Minecraft-Region-Fixer-master/gui/__init__.py", line 4, in <module> from main import MainWindow File "/Users/bu9loo7/Downloads/Minecraft-Region-Fixer-master/gui/main.py", line 4, in <module> import wx ImportError: No module named wx Salehs-MacBook-Pro:Minecraft-Region-Fixer-master bu9loo7$
https://www.minecraftforum.net/forums/mapping-and-modding-java-edition/minecraft-tools/1261480-minecraft-region-fixer?page=15
CC-MAIN-2019-13
refinedweb
2,365
75.71
On Tue, 28 Mar 2000, Jack Jansen wrote: > > On Sat, 25 Mar 2000, David Ascher wrote: > > > This made me think of one issue which is worth considering -- is there a > > > mechanism for third-party packages to hook into the standard naming > > > hierarchy? It'd be weird not to have the oracle and sybase modules within > > > the db toplevel package, for example. > > > > My position is that any 3rd party module decides for itself where it wants > > to live -- once we formalized the framework. Consider PyGTK/PyGnome, > > PyQT/PyKDE -- they should live in the UI package too... > > For separate modules, yes. For packages this is different. As a point in case > think of MacPython: it could stuff all mac-specific packages under the > toplevel "mac", but it would probably be nicer if it could extend the existing > namespace. It is a bit silly if mac users have to do "from mac.text.encoding > import macbinary" but "from text.encoding import binhex", just because BinHex > support happens to live in the core (purely for historical reasons). > > But maybe this holds only for the platform distributions, then it shouldn't be > as much of a problem as there aren't that many. Assuming that you use an archive like those found in my "small" distro or Gordon's distro, then this is no problem. The archive simply recognizes and maps "text.encoding.macbinary" to its own module. Another way to say it: stop thinking in terms of the filesystem as the sole mechanism for determining placement in the package hierarchy. Cheers, -g -- Greg Stein,
https://mail.python.org/pipermail/python-dev/2000-March/002884.html
CC-MAIN-2017-30
refinedweb
260
64.3
Definition of a class ◆ In C++, separated .h and .cpp files are used to define one class. ◆ Class declaration and prototypes in that class are in the header file (.h). ◆ All the bodies of these functions are in the source file (.cpp). The header files ◆ If a function is declared in a header file, you must include the header file everywhere the function is used and where the function is defined. ◆ If a class is declared in a header file, you must include the header file everywhere the class is used and where class member functions are defined. Header = interface ◆ The header is a contract between you and the user of your code. ◆ The compile enforces the contract by requiring you to declare all structures and functions before they are used. Declarations vs. Definitions ◆ A .cpp file is a compile unit ◆ Only declarations are allowed to be in .h ★ extern variables ★ function prototypes ★ class/struct declaration #include ◆ #include is to insert the included file into the .cpp file at where the #include statement is. ★ #include"xx.h": first search in the current directory, then the directories declared somewhere ★ #include<xx.h>: search in the specified directories ★ #include<xx>: same as #include<xx.h> Standard header file structure #ifndef HEADER_FLAG #define HEADER_FLAG //Type declaration here... #endif //HEADER_FLAG Tips for header 1. One class declaration per header file 2. Associated with one source file in the same prefix of file name. 3. The contents of a header file is surrounded with #ifndef #define #endif
https://blog.csdn.net/lilejin322/article/details/108695600
CC-MAIN-2020-45
refinedweb
252
66.13
Chapter 16. Input and Output Operations in C ALL READING AND WRITING OF DATA up to this point has been done through your terminal.[1] When you wanted to input some information, you either used the scanf or getchar functions. All program results were displayed in your window with a call to the printf function. The C language itself does not have any special statements for performing input/output (I/O) operations; all I/O operations in C must be carried out through function calls. These functions are contained in the standard C library. Recall the use of the following include statement from previous programs that used the printf function: #include <stdio.h> This include file contains function declarations and macro definitions associated with the I/O ... Get Programming in C, Third Edition now with O’Reilly online learning. O’Reilly members experience live online training, plus books, videos, and digital content from 200+ publishers.
https://www.oreilly.com/library/view/programming-in-c/9780768689068/ch16.html
CC-MAIN-2021-04
refinedweb
155
54.93
Deep. class Employeee implements Cloneable { private String name; private String designation; public Employeee() { this.setDesignation("Programmer"); } public String getDesignation() { return designation; } public void setDesignation(String designation) { this.designation = designation; } public String getName() { return name; } public void setName(String name) { this.name = name; } public Object clone() throws CloneNotSupportedException { /* * CloneExample2222 copyObj = new CloneExample2222(); * copyObj.setDesignation(this.designation); copyObj.setName(this.name); * return copyObj; */ return super.clone(); } } public class CloneExample2222 { public static void main(String arg[]) { Employeee jwz = new Employeee(); jwz.setName("Jamie Zawinski"); try { Employeee joel = (Employeee) jwz.clone(); System.out.println(joel.getName()); System.out.println(joel.getDesignation()); } catch (CloneNotSupportedException cnse) { System.out.println("Cloneable should be implemented. " + cnse); } } } From novice to tech pro — start learning today. Open in new window The output you should get is: // See how list1, list2 and list3 are all copies of each other: List1 (before change) = List=[Alan, Bill, Clare] List2 (shallow copied) = List=[Alan, Bill, Clare] List3 (deep copied) = List=[Alan, Bill, Clare] // But then we modify the internal list "names" to add a new name to it. // The shallow copy of MyList is sharing the list of names, so it sees the change. // While the deep copy of MyList creates a new internal list, so it does not see the change. List1 (after change) = List=[Alan, Bill, Clare, Doug] List2 (shallow copied) = List=[Alan, Bill, Clare, Doug] List3 (deep copied) = List=[Alan, Bill, Clare] The shallow copy and the deep copy behavior are different. Either may be correct in a given situation - it depends on what you mean by "making a copy of the original". Does that finally make sense? Doug This only matters when the object being copied, contains other objects. E.g. Open in new window Take a look at the differences between createShallowCopy() and createDeepCopy(). Imagine we have: List<String> names = new ArrayList<String>() ; names.add("Sam") ; names.add("Tony") ; names.add("Lisa") ; MyList start = new MyList(names) ; Now if we create a shallow copy: MyList shallowCopy = start.createShallowCopy() ; MyList deepCopy = start.createDeepCopy() ; the list *inside* shallowCopy and start are both the same list. So if we now do: start.toString() returns "Sam, Tony, Lisa" shallowCopy.toString() returns "Sam, Tony, Lisa" deepCopy.toString() returns "Sam, Tony, Lisa" start.remove("Sam") ; start.toString() returns "Tony, Lisa". But: shallowCopy.toString() also returns "Tony, Lisa" because the list inside shallowCopy is the same list as the one inside start. That may be OK, or it may not. But it's different from deep copy: deepCopy.toString() returns "Sam, Tony, Lisa" because it contains a different internal list than start does. So when we removed "Sam" it didn't affect deep copy. Does that all make sense? The rest of the article just talks about specifically what clone does (a shallow copy) and some of the reasons why it's usually best avoided. Also if you implement serializable you can potentially use that to make a copy since you serialize the data into some form and then deserialize that to get back to the original. If you do that without throwing away the original then it produces a copy. So it's one way to copy an object. Doug How and where do i see to find it contains different internal list. What is the name of internal list deepCopy, shallowCopy has. For me it seems both has same list as below Please advise With monday.com’s project management tool, you can see what everyone on your team is working in a single glance. Its intuitive dashboards are customizable, so you can create systems that work for you. Should be: Open in new window Can you see now how the deep copy is creating a newInternalList while the shallow copy doesn't do that? Doug I wonder why newInternalList again contains m_Names similar to Shallow Copy. I did not get the gist of removing list member etc and how it is different in both cases? I am still not very clear. Can you please post complete code which i can run and see. Please advise I do not understand clearly. I modified program as below Open in new window I see compilation errors like List cannot be resolved to a type at line 2 and following lines where m_names is used. can you please post complete code. please advise import java.util.*; Doug Open in new window It fixed the compilation error once i imported list and array list. But there is no main method or anything like that. How to run. Can you please provide me complete code to understand this example So the link author program is only clone example right which internally uses shallow copy not deep copy. >>Employee joel = (Employee) jwz.clone(); author in this example using shallow copy(clone) cloned jamie(jwz) into joe1 object right. so joe1 name , designation also became same as jamie. So output is Jamie Zawinski Programmer Is my understanding is correct? (There are lots of problems with clone(). The usual advice is to avoid it because unless you're very careful with it, it will do something different from what you want - like this example of a shallow copy). Doug
https://www.experts-exchange.com/questions/28362444/Deep-Clone-Example.html
CC-MAIN-2018-09
refinedweb
857
58.28
Mattias, we have less than 10 string properties. This is how our model is structured. We have one collection type property which is getting stored in [tblBigTableReference] table internal class Message : IDynamicData { public Message() { Teachers= new List<Teachers>(); } public Identity Id { get; set; } public List<Teachers> Teachers{ get; set; } public string Subject { get; set; } public string Body { get; set; } public DateTime DateCreated { get; set; } public Guid Sender { get; set; } public string SenderImageUrl { get; set; } public string SenderInitials { get; set; } public string SenderName { get; set; } public int SenderContentId { get; set; } } } public class Teachers { public string Name { get; set; } public string code{ get; set; } } However, seems like tblBigTable is spliting rows because String01 column is unable to hold complete data which is a HTML string. I am trying to get a List of Messages based on Teachers code. Any Suggestion? Sorry, no more suggestions other than migrating to Entity framework, especially since you have a collection type property. DDS is simply not designed for these purposes. I wish that Episerver never told anyone about it from the beginning. :) However, Quan is right that you should not have to use .ToList here. What was the reason for that again??
https://world.optimizely.com/forum/developer-forum/CMS/Thread-Container/2020/2/optimize-dds-linq-query/
CC-MAIN-2021-39
refinedweb
196
58.01
im using a Nucleo32l432kc flashed in Micropython and I would like to execute a python script whenever the device boots up. Problem is, there's no file system so I can't save any code in "main.py" or "boot.py" like you would do on other Nucleo boards. The only way i found to put python code permanently on my Nucleo is to rebuild my firmware with a frozen module ("testModule.py") which worked, but its not very convenient since I have to type everytime I reboot the device. Code: Select all import testModule testModule.test() What I want to do is have my Nucleo execute a specific frozen module everytime it boots up. The frozen module would essentially act like "boot.py" but it wouldn't be editable without rebuilding the firmware. I already tryed modifying the main.c file like in this thread viewtopic.php?f=2&t=3067 but it didn't work. If anyone can help me achieve this I would really appreciate it. Thank you Gbergero
https://forum.micropython.org/viewtopic.php?f=12&t=6303&p=35915
CC-MAIN-2021-10
refinedweb
172
77.74
Earlier this month, I posted about how the ASP.NET membership providers are creating the required database schema for me automagically when I first hit the site. Here is a quick update to that statement now that I more thoroughly understand what’s going on. Scott Hanselman does a great job introducing us to the ASP.NET Universal Providers for Session, Membership, Roles and User Profile so I won’t repeat it here. What I DON’T get from his article is that the Universal Providers are the default for a new ASP.NET MVC3 project (and that they’re not yet supported on Azure). I hadn’t touched ASP.NET or MVC for multiple years, so I just went along quietly and created a new project, pointed my connection string at a SQL Server and things were all working. It wasn’t until I published the site to Azure that things fell apart. The Universal Providers (“DefaultMembershipProvider”) are referenced throughout the web.config for all the different pieces of membership, and here you see it set as the default provider (the one that membership code in the site will look for and use). <membership defaultProvider="DefaultMembershipProvider"> > This works fine as long as we were developing on our local machines (where the Universal Providers are installed), and even when we’re hitting the SQL Azure database from our local machine (the provider is local, the database is remote – so the code can create the SQL Azure compatible schema on the fly regardless of the DB location). When the site is deployed to Azure, where the Universal Providers are not installed, you get an error: Unable to find the requested .Net Framework Data Provider. It may not be installed. It took forever to figure out that this was the problem, and 1 minute to fix it – just switch web.config to use the SQL Providers that are already configured, just not as the default. <membership defaultProvider="AspNetSqlMembershipProvider"> The bottom line: the Universal Providers are not yet available on Azure servers, so you have to go with the legacy SQL Providers. These providers, as always, require you to run the ASPNET_REGSQL tool to create the required database schema before you hit the site. - Universal Providers: create their required DB schema on first use, do not have the aspnet_xxx prefix on the tables, and do not use any views or stored procedures. - SQL Providers: require you to run ASPNET_REGSQL before first use, the tables are namespaced with “aspnet_” in front, and there are views and stored procedures that go along with the tables (all created by the ASPNET_REGSQL tool)
http://blogs.interknowlogy.com/tag/sql/
CC-MAIN-2021-10
refinedweb
436
69.01
Here’s a common question I get asked on the PyImageSearch blog: How do I make a Python + OpenCV script start as soon as my system boots up? There are many ways to accomplish. By my favorite is to use crontab and the @reboot option. The main reason I like this method so much is because crontab exists on nearly every Unix machine (plus, crontab is a really neat utility that I think everyone should have at least some experience with). It doesn’t matter if you’re on Raspbian, Linux, or OSX — crontab is likely to be installed on your system. In the remainder of this blog post, I’ll demonstrate how to utilize crontab to start a Python + OpenCV script when your system boots up. Looking for the source code to this post? Jump right to the downloads section. Running a Python + OpenCV script on reboot As I mentioned in the introduction to this blog post, we’ll be using crontab to launch a script on system reboot. I’ll be using my Raspberry Pi to accomplish, but the same general instructions apply for other Linux distributions and OSX as well — all you need to do is change the paths to your scripts. An example application In last week’s post, I demonstrated how to create an “alarm” program that detects this green ball in a video stream: Figure 1: The green ball we will be detecting in our video stream. If this green ball is detected, an alarm is raised by activating a buzzer and lighting up an LED on the TrafficHAT module (which is connected to a Raspberry Pi): Figure 2: The TrafficHAT module for the Raspberry Pi, which includes 3 LED lights, a buzzer, and push button, all of which are programmable via GPIO. An example of the “activated alarm” can be seen below: Figure 3: Notice how when the green ball is detected in the video stream, the LED on the TrafficHAT lights up. Here we can see the green ball is in view of the camera. Our program is able to detect the presence of the ball, light up an LED on the board, and if there was sound, you could hear the buzzer going off as well. Today, we are going to take this example alarm program and modify it so that it can be started automatically when the Raspberry Pi boots up — we will not have to manually execute any command to start our alarm program. Creating the launcher Before we can execute our Python script on reboot, we first need to create a shell script that performs two important tasks: - (Optional) Accesses our Python virtual environment. I’ve marked this step as optional only because in some cases, you may not be using a Python virtual environment. But if you’ve followed any of the OpenCV install tutorials on this blog, then this step is not optional since your OpenCV bindings are stored in a virtual environment. - Executes our Python script. This is where all the action happens. We need to (1) change directory to where our Python script lives and (2) execute it. Accomplishing both these tasks is actually quite simple. Below I have included the contents of my on_reboot.sh shell script which I have placed in /home/pi/pi-reboot : When we reboot our Pi, the on_reboot.sh script will be running as the root user (provided that you edit the root crontab, of course; which we’ll cover in the next section). However, we first need to access the cv virtual environment (or whatever Python virtual environment you are using), so we’ll call source /home/pi/.profile to setup the virtual environment scripts, followed by workon cv to drop us into the cv environment. Note: To learn about Python virtual environments, please refer to this post. After we have setup our environment, we change directory to /home/pi/pi-reboot , which is where I have stored the pi_reboot_alarm.py script. Finally, we are ready to execute pi_reboot_alarm.py — executing this script will be done within the cv virtual environment (thanks to the source and workon commands). Note: Again, make sure you have read both the accessing RPi.GPIO and GPIO Zero with OpenCV post and the OpenCV, RPi.GPIO, and GPIO Zero on the Raspberry Pi post before continuing with this tutorial. Both of these posts contain important information on configuring your development environment and installing required Python packages. After adding these lines to to your on_reboot.sh , save the file and then. Then, to make it executable, you’ll need to chmod it: After changing the permissions of the file to executable, you’re ready to move on to the next step! Updating crontab Now that we have defined the on_reboot.sh shell script, let’s update the crontab to call it on system reboot. Simply start by executing the following command to edit the root user’s crontab: This command should bring up the crontab file, which should look something like this: Figure 4: An example of crontab file. You should then enter the following lines at the bottom of the file: This command instructs the system to execute the on_reboot.sh script whenever our system is rebooted. Note: You can obviously replace the path to on_reboot.sh with your own shell script. Once you have finished editing the crontab, save the file and exit the editor — the changes to crontab will be automatically applied. Then at next reboot, the on_reboot.sh script will be automatically executed. Creating our Python script The contents of pi_reboot_alarm.py are near identical to last week’s blog post on OpenCV, RPi.GPIO, and GPIO Zero on the Raspberry Pi, but I’ve included the contents of the script as a matter of completeness: Lines 2-9 handle importing our required Python packages. We’ll be using VideoStream to seamlessly access either the Raspberry Pi camera module or USB camera module. The TrafficHat class from gpiozero will allow us to easily manipulate the TrafficHAT board. And the imutils library will be used for some OpenCV convenience functions. If you don’t already have imutils installed, let pip install it for you: Lines 12-17 parse our command line arguments. The first argument, --picamera is used to indicate whether or not the Raspberry Pi camera module should be used. By default, a USB webcam is assumed to be connected to the Pi. But if you want to use the Raspberry Pi camera module instead, simply supply --picamera 1 as a command line argument. The second switch, --log , is used to control the path to the output log file which can be used for debugging. Our next code block handles performing a series of initializations, including accessing the VideoStream class and setting up the TrafficHat module: We can now move on to the main video processing pipeline of our script, where we read frames from the VideoStream and process each of them, looking for a green ball: If we find the green ball, then we’ll buzz the buzzer and light up the green LED on the TrafficHAT: Finally, if the green ball is not found, we’ll turn off the LED on the TrafficHAT: Again, for a more comprehensive review of this code, please refer to last week’s blog post. Assuming you’ve read through last week’s post, you might notice an interesting modification — I’ve removed the call to cv2.imshow , which is used to display output frames to our screen. Why would I do this? Mainly because our pi_reboot_alarm.py script is meant to run in the background when our Pi is rebooted — the output is never meant to be displayed to our screen. All we care about is the alarm being properly raised if the green ball enters our video stream. Furthermore, removing calls to cv2.imshow reduces I/O latency, thereby allowing our Python script to run faster and process frames quicker (you can read more about I/O latency related to video streams in this post). Executing a Python script at reboot All that’s left to do now is test our crontab installation by rebooting our system. To restart my Raspberry Pi, I execute the following command: And as the following video demonstrates, as soon as my Pi boots up, the on_reboot.sh shell script is called, thereby executing the pi_reboot_alarm.py Python program and arming the alarm: You can see from the following screenshot that once the green ball enters the view of the camera, the green LED of the TrafficHAT is illuminated: Figure 5: An example of the alarm program running on Raspberry Pi after reboot, detecting the presence of the green ball, and then lighting up the green LED on the TrafficHAT board. And if you watch the video above, you can also hear the buzzer going off at the same time. Summary In this blog post, I demonstrated how to use crontab to launch a Python + OpenCV script on reboot. To accomplish this task, I utilized my Raspberry Pi; however, crontab is installed on nearly all Unix machines, so no matter if you’re on Linux or OSX, crontab is likely available for you to use. Anyway, I hope you enjoyed this series of blog posts on utilizing the Raspberry Pi, OpenCV, and GPIO libraries. If you would like to see more blog posts about these topics, please leave a comment in the comments section at the bottom of this post. And before you go, don’t forget to enter your email address in the form below to be notified when new blog posts are published! That’s a useful tutorial, exactly what I needed right now. Great job! More articles on interfacing between OpenCV and GPIO would be welcome, e.g. working with a DC+Stepper Motor HAT which moves the camera to follow an object. Thanks! Thanks Atomek! I don’t have a stepper motor/servo yet, that’s something I need to invest in. Do you have one that you’re already using that you can recommend? I’m using a 2 axis motor from car mirror. It’s supposed to required 12V but I’m running it straight from Raspberry’s USB 5V. It’s slower but that allows for more precision. Thanks for sharing 🙂 Just a heads up, @reboot isn’t terribly reliable. Setting up an init script/systemd unit file (depending on OS version). Would be a much more reliable way of accomplishing what you’re looking for. With systemd you’d also get the free benefit of process monitoring and sane logging. I (personally) don’t like systemd for this type of thing as this GIF jokes about. Upstart for Debian-based systems is also a good option. Hi, I wanted to share also a good option to start something on boot, called “supervisor” (install via “apt-get supervisor”). Very handy since you can start, stop, restart and get status of your running apps. I am using it for my RPI projects 😉 Thanks for sharing Lukas! Hi Adrian, for some reason I’m getting the following error when I manually call the on-reboot.sh to test it : /home/pi/pi-reboot/on_reboot.sh: line 3: workon: command not found my script is still running but it is not executing inside the virtual environment. When I type workon cv at the command prompt it runs without a problem and I can get into the virtual env. Any ideas what I might be missing here? Thanks much for all your guidance. It sounds like your .profilefile is not being source‘d before executing workon. Without running source /home/pi/.profilefirst, your system paths will not be setup correctly to find the workoncommand. Hi Adrian, thanks for your kind reply. This is why I’m all confused. On the command prompt I use the following commands pi@Rpi3:~ $ source ~/.profile pi@Rpi3:~ $ workon cv (cv) pi@Rpi3:~ $ deactivate and as you will note I am able to get into the virtualenv without any trouble and then deactivate to come back out. My on_reboot.sh contents look like the following : #!/bin/bash source ~/.profile workon cv but when I execute this I get the workon: command not found error. So if I understand this correctly what you are suggesting is that the call to source ~/.profile in my script is not doing its job and as a result my system paths are not setup for the script to find the workon command. Funny thing is, as soon as I terminate the script and type workon at command prompt I am able to get workon to run. Hope this provides some hints as to what might be wrong in my environment. Many thanks for your amazing support. On reboot, your script is being executed as the root user, not the pi user. Therefore, you need to supply the full path to the .profilefile: source /home/pi/.profile You can add your script executable command to the bottom of .bashrc that will run your script every time you log in. Make sure you are in the pi folder: $ cd ~ Create a file and write a script to run in the file: $ sudo nano superscript Save and exit: Ctrl+X, Y, Enter Open up .bashrc for configuration: $ sudo nano .bashrc Scroll down to the bottom and add the line: ./superscript Save and exit: Ctrl+X, Y, Enter If you are looking for a solution that works on bootup to the console, take a look at this link. Basic rundown: If you want a script to run when you boot into the LXDE environment, you could take a look at this Raspberry Pi forum post: Navigate to etc/xdg/lxsession/LXDE-pi Open the autostart file in that folder: $ sudo nano autostart Add @midori on a new line. If you want to run something like a python script, put something like @python mypython.py on a new line. Running a script file would be @./superscript, but for some reason the script runs in an infinite loop (perhaps this will stop that). Save and exit: Ctrl+X, Y, Enter Restart your Raspberry Pi into the LXDE environment. Awesome, thanks so much for sharing Hai! I have the same prob as kerem. I followed your installation tutorial for pi 3 + opencv + python. I ran the on_reboot.sh in root in order to emulate what happens during boot. The results were the same as kerem said. It seems we are only able to go to our opencv virtual env after logging in (not in root). How are you able to go to your virtual environment while still in root? PS. I also made a log directory so i can see errors during boot. The resulting error messages were the same as the one generated by the one above You can still access your Python virtual environment while in the root account. For example, if I wanted to launch a root shell and access my virtual environment from the “pi” account, all I would need to do is: And from there, you can access the cvvirtual environment as the root user. You can apply the same technique to your shell scripts that are launched on boot as well. thanks for the reply. but unfortunately it didn’t work. once I typed workon cv. an error has occured. ERROR: Environment ‘cv’ does not exist. Create it with ‘mkvirtualenv cv’. root@raspberrypi:/home/pi# If the cvvirtual environment doesn’t exist, then you may have installed OpenCV without Python virtual environments. That’s totally okay, but you’ll need to manually debug the issue. Please note that I can only support OpenCV install questions if you followed one of my OpenCV install tutorials. Run it In terminal : bash yourscript.sh If the script is already executable, you could also do: $ ./yourscript.sh Hey Adrian how to open tkinter gui on startup?? . I ‘m using vncserver to execute the script So your goal is to boot up your Pi, have it automatically launch the GUI interface, and then at the same time open your TKinter GUI script? On boot up, Pi launches with GUI interface, only how to open Tkinter GUI script? I personally haven’t tried this before, but I imagine you could accomplish this by editing your .xinitrc file. Dear Adrian I had started learning OpenCV with your book about 2 years ago. Now, It is quite amazing to see how you constantly enhanced your courses/blog posts according to industry and our needs over the past years. And again, you sensed our needs quite accurately by publishing this crontab tutorial. Thank you for the kind words Emrah! 🙂 hello adrian i followed your steps, i checked it after bash filename.sh but after reboot its not running Hey Ishant — it’s hard to tell what the exact issue may be on your machine without having access to it. Take a look at my reply to “Luis Lopez” above for suggestions on how to debug the script. hey adrian, first of all thank you for all the help you provide us, i have learned a lot from you and i just want you to know that you are a great teacher and person. Now, I have a problem with one script, I did everything you said in this lesson and it works just fine when I reboot except with one script that does nothing after the reboot reboot, it’s the only one that does this and the only difference between this script and the others is that I use the skfuzzy library, do you think this has something to do? or how can i check whats wrong?. Again thanks for everything. I would suggest inserting as many logging statements into your script as possible. Have the script write to a file you can monitor (like using print statements to debug). You may have forgotten to install skfuzzy into your virtual environment and the script could be throwing an error when trying to import it (hence the script not running). That’s just a best guess though. Hello, and thank you a lot for this lesson. I have done exactly what you have instructed in this tutorial, and when I tested the on_reboot.sh file from the terminal it worked out perfectly. But unfortunately when I reboot the Pi nothing happens. I added the line on crontab file and everything, but still nothing is happening when I reboot. Do you have any idea why is this happening? thank you, It’s hard to say without having physical access to your machine. I would suggest inserting some echocalls that log to file inside on_reboot.shto help determine if on_reboot.shis actually being called. There might be an error in your crontab entry. Gtk-WARNING **; cannot open display : on log while running above It sounds like you’re trying to use cv2.imshowin your Python reboot script. This will not work since the script is running in the background. Remove the call to cv2.imshowand your script should work. hi adrian , i made a cronlog to know the what is happening and i found out tha the error says (opencv:523):gtk -warning **:can not open display: what do you think happen? tnx for your tutorial Are you calling cv2.imshowfrom your cronjob? That won’t work. The cronjob runs in the background and does not have access to the display. is there any other method we can use for display it on window, after i boot the system will this work for home surveillance system shown in other tutorial? do i need to remove cv2.imshow part ? and how to wait before pi is connected to internet after rebooting for dropbox access? plz reply.. Yes, make sure you remove the cv2.imshowcalls so that script simply runs in the background. will it be connected to internet before execution of code or show error? That is hard to say as you cannot guarantee an internet connection (invalid WiFi password, network down, etc.) You would want to add extra logic to ensure your Raspberry Pi is connected to the internet if that is possible. The easiest way to do this would be via a shell script rather than trying to code it in Python. thank you sir Adrian, Thanks for the good advice. I was using the cv2.imshow in the python script that was called on reboot and I was getting the GTK Error. I need to have the image display to work on reboot. Please advise where I can find the procedure to display the cv2.imshow image after reboot. A Python script executed in the background does not have access to the window manager, hence the GTK error. I would suggest updating your Pi such that the piuser is automatically logged into the Raspberry Pi and the window manager started. From there you should be able to update your ~/.profilefile to automatically launch the script. can you please explain this i really want to start the video streaming after the boot process. Hi Adrian, my script works/environment works normally when manually run(im using your preinstalled py3cv3 environment). But when i try and run it with crontab with @reboot it throws the error of Traceback (most recent call last): File “reboot.py”, line 2, in from imutils.video import VideoStream ImportError: No module named imutils.video the environment i’m using has imultils installed and i can run it manually but am not sure why it’s not running with @reboot?? Any hints you have would be really good, thanks! Thanks Adrian for the tutorials, really detailed and useful! Just to add an alternative method, i had trouble starting the virtual environment with the workon command, with many imports like cv2 or imutils not being found. I managed to get around this using this startup script instead: #!/bin/bash . /home/pi/.virtualenvs/py3cv3/bin/activate cd /home/pi/Scripts/ python videoSave.py >> /home/pi/on-reboot/feedback.log 2>&1 deactivate cd /home/pi Hope this is helpful Thank you for sharing Albert! my crontab is empty. I have been trying to run @reboot python /home/pi/reboot8.py. This script looks for an input on GPIO to reboot the Rpi in case of lost remote control ie., other side of the planet. I placed that statement in my crontab anyway but doesn’t take. Your instruction go from optional cv to running a cv script anyway. A little guidance please, old brains aren’t as quick the youngins. Thanks Hi William — thanks for the comment, although I’m not sure what your exact question is? It sounds like you are trying to edit your crontab file but it is always empty? Are you making sure to save and exit the editor? I have exactly the same issue. I added traces and yes, on_reboot is fired at boot. The problem is that .profile is not OK when running it as root. For example executing sudo /bin/bash && source ~/.profile and then workon cv There is an error telling: ERROR: Environment ‘cv’ does not exist. Create it with ‘mkvirtualenv cv’ I followed completely your user guide for installing python and opencv3. I tried to copy all .profile content in on_reboot.sh and that’s the same. I think the virtualenv is not OK when running as root but I don’t know why. Hi Valica, instead of trying to use the “workon” command on boot try specifying the full path to the Python virtual environment binary instead: $ ./home/pi/.virtualenvs/cv/bin/python your_script.py That should resolve the issue. it says permission denied Sorry , My bad !! It’s still not activating virtual environment You do not have to activate the Python virtual environment provided you supply the full path to the Python interpreter inside the virtual environment. Hi Adrian, Indeed, this is the solution I’ve used. However I’m wondering why virtualenv is not OK when running as root… Another potential solution that I will check is to use cron as a different user than root. Normally this should run OK when setting the virtualenv. I want to display use cv2.imshow. How to do? Please take a look at my reply to “ghanendra”. Hi Adrian I tried everything to make this work on pi3 with stretch but no luck at all. In the end I did this: sudo nano /etc/systemd/system/yourservice.service Put this inside [Install] WantedBy=multi-user.target [Unit] Description=Example service Wants=network-online.target After=network-online.target [Service] User=youruser Group=yourgroup WorkingDirectory=/home/pi/yourdirectory ExecStart=/path/to/your/script.py ExecStartPre=/bin/sleep 10 Type=simple [Timer] OnStartupSec=25 Then ran these commands sudo systemctl enable yourname sudo systemctl start yourname sudo systemctl status yourname I forgot to mention that you use the full path to your virtual env python bin when calling the script. Just as you mentioned a couple of posts up. I ended up with a similar solution to @Marius but used the following script inside the new service to initialize the virtual environment: #!/bin/bash HOME=/home/pi VENVDIR=$HOME/.virtualenvs/py3cv3 BINDIR=$HOME/SmartSecurityCamera cd $BINDIR source $VENVDIR/bin/activate python $BINDIR/main2.py Credits to @dbsahu who described the auto boot solution for a nice SmartSecurityCamera solution using CV by HackerHouse. Thank you for sharing, Jack! Hi Adrian, I tried so many different approaches using such your tutorial, Jack Boyd’s comment, your reply to Valica and so many others such like myfile.service in /lib/systemd/system/myfile.service and rc.local in /etc/rc.local but still I am not succeed 🙁 and when I run sh on_reboot.sh I get the following error: source: not found workon cv : not found and when I replace it with the path ./home/pi/.virtualenvs/cv/bin/activate : not found import error: No module named cv 2. I have installed python 2.7+openCV using your tutorial on my Pi 3 model B with Stretch OS. I am looking forward to get a solution. I am thankful to you and your support to us. Are you calling just a single Python script on reboot? If so you can skip the “source” and “workon” commands and supply the full path to the Python interpreter: /home/pi/.virtualenvs/cv/bin/python your_script.py Thank you for your response! No. There are multiple files connected to my script.py. I have a system that recognise the images and stores the unknown images. This is the source code I am using: However I have modified it somehow and I am running only the recognize-people on bootup. I have followed your instruction and removed cv2.imshow and the code is fully functional using terminal but not bootup. I would suggest create an entirely separate test Python script that logs a line to file (such as the timestamp). Focus on getting the script to run on boot first, then worry about running the Python + OpenCV script on boot. Right now there are too many variables. Try to eliminate some. Instead of saying “sh script”, use “bash script”. Sh and bash are not the same program. Thank you very much for your comment. I worked for me but I had to add another command line in the crontab as follow: @reboot sudo sh -x /home/pi/myFolder/on_reboot.sh Hi Adrian , I have done everything that you have told here. Whenever I run the on_reboot.sh on terminal it runs perfectly but on reboot its not working. I can’t seem to find the problem !! Everything is alright with shell script , it works perfectly. I have checked the path that have been given to the crontab file, it’s also right, but the script won’t start at startup. I am working on raspberry pi 3 with raspbian stretch os. How are you validating that it’s not starting? Keep in mind that crontab will run the script in the background, not the foreground. In case anyone is interested; I too had problems, executing the script at startup, due to a permission problem. In crontab, this solved it: @reboot sudo su – pi bash -c ‘/home/pi/start.sh’ I use it, as root-user, to run the startup-script as user “pi” I tried to apply this on drowsiness detection on raspberry pi i follow everything and works fine when i run it on command line. However when I reboot the pi the script it is not working. Are you using a Python virtual environment? If so, you may be forgetting to access it via the “workon” command before executing your script. Thank you very much! You are welcome, Greg! adrian you should make a video for that beacuse i try like 1000 times but on every reboot nothing happen plz help i did every single step To help you debug the process I suggest: 1. Trying to run your command from your terminal to ensure it works as you think it does 2. Create a simple Python script that writes some text to a file (so you can debug that the script is actually running on reboot) 3. From there, actually try to insert your Python + OpenCV script. On a raspberry pi with raspbian stretch, this will autostart a python 3.6 opencv 3 program that uses cv2.imshow. This is with a raspbian and Pixel. Edit /home/pi/.config/lxsession/LXDE-pi/autostart Add a line @/home/pi/run_stuff.sh Create a file: /home/pi/run_stuff.sh Add your commands to run, for example: echo “starting run_stuff.sh” > ~/out.txt export DISPLAY=:0.0 cd /home/pi/myprograms py myopencvprog.py -d 4 & echo “finished run_stuff.sh” >> ~/out.txt then change permissions: chmod 755 run_stuff.sh then reboot. This will autostart your opencv python program when you reboot. However, I don’t think it will restart if it crashes. Need cron or something. You can omit the echo statements – just for tracing You can change to whatever directory you need and invoke your own programs and arguments. WOW! Thank you so much for sharing this, Peter!! THANK-YOU PETER! I made the following (minor) change to line 4 in the .sh: python3 myopencvprog.py -d 4 & and needed “sudo” before chmod: sudo chmod 755 run_stuff.sh I was also successfully able to call this .sh from crontab I have fulfilled the procedure described in this chapter, however without success. Only after I have accomplished what was written in the comment by Peter Bahrs dated December 1, 2018, the scripts on reboot started running automatically. Thank you Peter and Adrian too ! hi, all. I don’t automatically run an Opencv application on boot Please, help me. Are you trying to run the app in the background (via a cronjob) or launch it via the GUI? Hello, thank you for sharing. But I have a problem. I made the shell file, the virtual environment and python script run with no problem when I try it. But when I reboot, the indicator light of camera was activated for 2 seconds, then crash out. The code was not run and cpu utilization was not changed. I tried systemd and it had the same problem. Any idea what happened? It sounds like the script is erroring out for some reason. Try redirecting the stddout and stderr output (when running the Python script) to a file so you can debug. I was also facing “workon” command not found issue and below code worked for me #!/bin/bash source /home/pi/.profile workon cv cd /home/pi/pi-reboot python pi_reboot_alarm.py
https://www.pyimagesearch.com/2016/05/16/running-a-python-opencv-script-on-reboot/
CC-MAIN-2019-43
refinedweb
5,285
74.08
I need to be able to construct and destruct socket.io namespaces on-the-fly. It is easily to find information how to create a namespace, but I find nothing about how I remove/disconnect the namespace to release its memory. Say I got the following code already running: var nsp = io.of('/my-namespace'); nsp.on('connection', function(socket){ console.log('someone connected'): }); nsp.emit('hi', 'everyone!'); The io.of method just creates an array element: Server.prototype.of = function(name, fn){ if (String(name)[0] !== '/') name = '/' + name; if (!this.nsps[name]) { debug('initializing namespace %s', name); var nsp = new Namespace(this, name); this.nsps[name] = nsp; } if (fn) this.nsps[name].on('connect', fn); return this.nsps[name]; }; So I assume you could just delete it from the array in socket io. I tested it pretty quick and it seems to work. Sockets that are already connected, keep connected. delete io.nsps['/my-namespace']; Connecting to /my-namespace then falls back to the default namespace. I don't know if this is a good solution, but maybe you can play with this a little..
https://codedump.io/share/9N8nbYt4uUpq/1/socketio-how-do-i-remove-a-namespace
CC-MAIN-2017-39
refinedweb
186
62.54
SQLite.Net.Cipher: Secure Your Mobile Data Seamlessly and Effortlessly SQLite.Net.Cipher: Secure Your Mobile Data Seamlessly and Effortlessly SQLite database is used for storing data on mobile devices. It's important to secure your data to avoid rooting and jailbreaking. Join the DZone community and get the full member experience.Join For Free SQLite database. In a previous blog post, I talked broadly about how you could secure your data on mobile apps from an architectural point of view. In this post, I will show you how you can use SQLite.Net.Cipher to encrypt/decrypt data when stored/accessed in/from your database. This library helps you secure the data and do all the work for you seamlessly. All you need to do it annotate the columns that you want to encrypt with one attribute. The library will do the rest for you. The Model public class SampleUser : IModel { public string Id { get; set; } public string Name { get; set; } [Secure] public string Password { get; set; } } Notice above that we have decorated our password property with a [Secure] attribute. This will tell the SQLite.Net.Cipher to encrypt the password property whenever storing data in the database, and it will decrypt it when reading out of the database. The model needs to implement IModel, which enforces the contract of having a property with the name Id as a primary key. This is a common standard, and you could use other columns for PrimaryKey if you want and use backing properties to satisfy this requirement if you like. The Connection Your database connection entity needs to extend the SecureDatabase, which is provided to you by the SQLite.Net.Cipher as below: public class MyDatabase : SecureDatabase { public MyDatabase(ISQLitePlatform platform, string dbfile) : base(platform, dbfile) { } protected override void CreateTables() { CreateTable<SampleUser>(); } } You can use the CreateTable() method to create whatever tables you need. There is also another constructor that allows you to pass your own implementation of the ICryptoService if you like. This is the entity that is responsible for all encryption and decryption tasks. See It in Action To see the library in action, you could establish a connection to the database, insert some data, and retrieve it: var dbFilePath = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments), "mysequredb.db3"); var platform = new SQLite.Net.Platform.XamarinIOS.SQLitePlatformIOS(); ISecureDatabase database = new MyDatabase(platform, dbFilePath); var keySeed = "my very very secure key seed. You should use PCLCrypt strong random generator"; var user = new SampleUser() { Name = "Has AlTaiar", Password = "very secure password :)", Id = Guid.NewGuid().ToString() }; var inserted = database.SecureInsert<SampleUser>(user, keySeed); // you could use any desktop to inspect the database and you will find the Password column encrypted (and converted base64) var userFromDb = database.SecureGet<SampleUser>(user.Id, keySeed); And that’s all, assuming that you have installed the Nuget Package. Dependencies Please note that this library relies on the following great projects: SQLite.Net-PCL and PCLCrypto. Both of these projects are really great and they support all major platforms, including builds for PCL libraries, so I would highly encourage your to look into them if you have not seen them before. You can find the library on Nuget here, and the source code is on GitHub here; feel free to fork, change, and do whatever you like. I hope you find the library useful and I would love to hear any comments, questions, or feedback. Published at DZone with permission of Has Altaiar . See the original article here. Opinions expressed by DZone contributors are their own. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
https://dzone.com/articles/sqlitenetcipher-secure-your-mobile-data-seamlessly
CC-MAIN-2018-39
refinedweb
600
55.54
[Solved] Qt application run on Qt5.3.0 is slower than run on Qt4.8.4 Hello everybody I have tried to port Qt5.3.0 on my platform ARM freescale iMx6 recently. There was no any problem when building. However, when I run a simple Qt application with this command: ./qmlviewer -platform eglfs I found that the application was so slow to show, it took about 10 sec to show. But I recompiled the same Qt application with Qt4.8.4 and run with the same command, it only took about 2 sec. Not only qmlviewer application but also other my own gui applications have the same problem. I searched at here on the internet to see if there was the same problem and found the similar discussion: But he used the TI board, I'm not sure whether the platform problem or there is another root cause. Therefore I start this discussion and ask for your help. Thank you very much and appreciate! - sierdzio Moderators last edited by Have you tried updating your code to QtQuick 2.3 (just change the import statements), and then run it with qml exectuable? The old QtQuick1 code is not actively developed in Qt 5, it's only kept for compatibility reasons. Thanks for sierdzio's reply, but it seems to be useless. I tried to write a simple qt app with qml, the code shows as below: @ //main.cpp #include <QGuiApplication> #include <QtQuick/QQuickView> #include <QTextCodec> int main(int argc, char **argv) { QGuiApplication app(argc, argv); QTextCodec *codec = QTextCodec::codecForName("Utf-8"); QTextCodec::setCodecForLocale(codec); QQuickView view; view.setSource(QUrl("qml/main.qml")); view.show(); return app.exec(); } //main.qml import QtQuick 2.3 import Qt.labs.folderlistmodel 2.0 ListView { width: 200; height: 400 FolderListModel { id: folderModel folder: "/mnt/usb" } Component { id: fileDelegate Text { text: fileName; font.pixelSize: 20} } model: folderModel delegate: fileDelegate } @ This app can simply show the content of usb inserts on my board. Even I changed to QtQuick 2.0, QtQuick 2.2, or QtQuick 2.3 in the qml, it still took about 8~10 sec to show. Of course, other apps with qml also need to take this time. Is there another way to think about how to solve this problem? Anyway, thanks for your reply. I note I'm in the habit of, in constructors of my QQuickView subclasses, doing: @ setSurfaceType(QSurface::OpenGLSurface); QSurfaceFormat format; #ifdef MOBILE // This is something I set for OpenGLES platforms format.setRenderableType(QSurfaceFormat::OpenGLES); #else format.setRenderableType(QSurfaceFormat::OpenGL); #endif format.setAlphaBufferSize(8); format.setStencilBufferSize(8); setFormat(format); @ Although I'm not sure how I got into the habit. Maybe a relic from QDeclarative days (is QQuickView OpenGL by default? ... I'm not actually sure). I just mention it because in that SO post you link, the fix is to move from drawing on a QWidget to a QGLWidget. So maybe forcing QQuickView to use OpenGL will help. Thanks for timday's reply. Yesterday I tried to solve this problem with some tools, and fortunately found some good news. :) I used debug tool - strace to analyze my app. Here is the command : strace ./my_qt_app -platform eglfs and there was something strange... I saw the app would open each event located under the /dev/input/eventN, just like below: open("/dev/input/event0", O_RDONLY ...) open("/dev/input/event1", O_RDONLY ...) open("/dev/input/event2", O_RDONLY ...) .... On my target board, there were about 20 event (event0 ~ event20) for specially use. I tried to delete all the events except event0 and ran again. It was surprised that it only took about 4 sec to show. Besides, I found that the path of fonts would affect the execute time too, that is, the app load fonts from default path at /usr/share/fonts that set in the file /etc/fonts/fonts.conf on my board. If I changed the path to the Qt5.3 libary path, for example, /usr/local/Qt5.3/lib/fonts, it would reduce about 1 sec. Now my app needed to take about 3 sec to show, but still lower than Qt4.8.4. (it only needed to take about 1 sec) Therefore, now there is one more question: Why Qt5.3.0 would poll each event under /dev/input? (my config set error?) Does someone know where is the source code of this setting or can give me some direction to chase this problem? Anyway, thanks for your help again. - sierdzio Moderators last edited by donbychang - great investigation, thanks for sharing! timday QtQuick2 not only uses OpenGL by default - it cannot be run without it (OpenGL 2.0 is the minimal version). sierdzio - ah thanks for the clarification. I think my lines of code came as part of a naive port from a QDeclarative application then (where it did used to be necessary to select OpenGL rendering explicitly). That does make me wonder what the other QSurface::RasterSurface option to QQuickView::setSurfaceType (ok actually inherited QWindow::setSurfaceType) would do though. Docs seem to imply some sort of less OpenGL-ey stuff being used... - JKSH Moderators last edited by Hi donbychang, [quote]Therefore, now there is one more question: Why Qt5.3.0 would poll each event under /dev/input? (my config set error?) Does someone know where is the source code of this setting or can give me some direction to chase this problem?[/quote]I don't know the answer, but I'm sure the Qt engineers do. You can find them at the "Interest mailing list": Thanks for everybody's reply. I solved this problem! Like I said before, I found that the application would search all the events under the /dev/input, even you don't use any input event plugin (tslib, evdevtouch, evdevkeyboard...etc). This would cause the app delay to show. Therefore, I traced the source code to find where the setting is, and finally I found that in the following code: //Qt5.3/qtbase/src/platformsupport/devicediscovery/qdevicediscovery_static.cpp in this file, the function QDeviceDiscovery::scanConnectedDevices() would search /dev/input/ This code finally will be compiled and package as libQt5PlatformSupport.a and the platform eglfs will use this. In the file Qt5.3/qtbase/src/plugins/platforms/eglfs/qeglfsintegration.cpp, if there is no export variable set, that is, QT_QPA_EGLFS_DISABLE_INPUT, the plugin will set the variable mDisableInputHandlers to be false. This flag finally will start to search all event under /dev/input, and hence slow down the app to show. In fact, I tried that before I ran the app, I set the export: export QT_QPA_EGLFS_DISABLE_INPUT=1 and the app would show within 4,5 sec. This proved the setting indeed affect the speed. Of course, this time still did not satisfy me, so I still used the tool strace to find another possibility. Finally I found that the font would cause the delay too. In my platform, the original font is DroidSansFallback.ttf in the /usr/local/Qt5.3/lib/fonts. When I selected another font, msyh.ttf. It was apparently that the app will show within about 1 sec. That is what I want. :) By the way, if I choose this font, there is no difference that set the path in the file /etc/fonts/font.conf, that is, even I set the path to be /usr/share/fonts. The speed seems no much difference. Finally, let me make 2 short conclusions: - set the flag QT_QPA_EGLFS_DISABLE_INPUT to be true, that is, export QT_QPA_EGLFS_DISABLE_INPUT=1 before start to run the app. 2.check what the font you used at least, use msyh.ttf is faster than DroidSansFallback.ttf in my platform. I am not sure that the above comments are totally right or not, so if someone finds any wrong, please also let me know. Thanks for everybody's opinion!
https://forum.qt.io/topic/48498/solved-qt-application-run-on-qt5-3-0-is-slower-than-run-on-qt4-8-4/6
CC-MAIN-2019-43
refinedweb
1,298
68.06
Release v0.9.8 is loaded with skill creation tools. Bettering the lives of Developers and Users alike. You can find the source code, as always, in our Github. Get response #1278 get_response() method added to MycroftSkill. The method allows skill creators to synchronously wait for a users response inside an intent handler. The method takes a dialog reference as an argument. The referenced dialog will be spoken and listening will be triggered. The user response will be returned from the method. def echo(self, message): response =self.get_response('what.should.I.repeat') self.speak(response) Global skill settings #1290 Skill settings are now defaulted per user, not per device. This means instead of having multiple entries for a skill under the “Skills” tab, a single entry will be visible and used by all devices. Skiller #1282 skiller.sh script updated. The skiller.sh script creates a standard template for building a new skill, including skill class, readme, and license. Translate text #1288 New translate methods to help the internationalization of skills. The new methods are, translate() translate_template() and translate_list(). body = self.translate_template('email.template', {'from': data['name']}) self.send_email(self.translate('You have a message!'), body) The above example will get the localized version of the email.template.dialog file, and render one of the entries as a string. The string will then be sent as the body in an e-mail message. Unnamed intents #1280 IntentBuilder can now be used with empty names: IntentBuilder('').require('intent')... If left empty, the handler method name will be used. Misc changes - Corrected test coverage #1272 - Speech related fixes, correct isSpeaking#1283, Fix protection from intermixing utterances. #1291 - Update CLI, minor fixes and changes to the keyboard interface. #1284 mycroft.awokenis now sent when speech client is awoken from sleep. #1279 - fix status messages sent from msm to mycroft messagebus. #1275 Steve has been building cutting edge yet still highly usable technology for over 25 years, previously leading teams at Autodesk and the Rhythm Engineering. He now leads the development team at Mycroft as a partner and the CTO.
https://mycroft.ai/blog/release-notes-v0-9-8-whats-new/
CC-MAIN-2019-47
refinedweb
347
60.31
Guido van Rossum schrieb: >> With the filenames decoded by UTF-8, your files named têste, ô, dossié will >> be displayed and handled correctly. The others are *invalid* in the filesystem >> encoding UTF-8 and therefore would be represented by something like >> >> u'dir\uXXffname' where XX is some private use Unicode namespace. It won't look >> pretty when printed, but then, what do other applications do? They e.g. display >> a question mark as you show above, which is not better in terms of readability. >> >> But it will work when given to a filename-handling function. Valid filenames >> can be compared to Unicode strings. >> >> A real-world example: OpenOffice can't open files with invalid bytes in their >> name. They are displayed in the "Open file" dialog, but trying to open fails. >> This regularly drives me crazy. Let's not make Python not work this way too, >> or, even worse, not even display those filenames. > > How can it *regularly* drive you crazy when "the majority of fie names > [...] encoded correctly" (as you assert above)? Because Office files are a) often named with long, seemingly descriptive filenames, which invariably means umlauts in German, and b) often sent around between systems, creating encoding problems. Having seen how much controversy returning an invalid Unicode string sparks, and given that it really isn't obvious to the newbie either, I think I now agree that dropping filenames when calling a listdir() that returns Unicode filenames is the best solution. I'm a little uneasy with having one function for both bytes and Unicode return, because that kind of str/unicode mixing I thought we had left behind in 2.x, but of course can live with.
https://mail.python.org/pipermail/python-dev/2008-September/082675.html
CC-MAIN-2017-04
refinedweb
282
62.88
You can click on the Google or Yahoo buttons to sign-in with these identity providers, or you just type your identity uri and click on the little login button. explains how epydoc supports docstrings such as: class Frobble def __init__(self): self.important_attribute = None """doctring for important_attribute""" however this code generates W0105 (string statement has no effect). I can certainly disable that warning, but I think it could be nice to have pylint (maybe with an option) not complain for such strings if they happen after an assignment. FTR, this is what PEP 257 defines as “additional docstring”. —merwok Ticket #9848 - latest update on 2011/08/01, created on 2009/07/28 add comment - 2011/08/01 13:53, written by anon FTR, this is what PEP 257 defines as “additional docstring”. —merwok
https://www.logilab.org/ticket/9848
CC-MAIN-2019-13
refinedweb
134
58.92
Table Of Contents Popup¶ New in version 1.0.7. The Popup widget is used to create modal popups. By default, the popup will cover the whole “parent” window. When you are creating a popup, you must at least set a Popup.title and Popup.content. Remember that the default size of a Widget is size_hint=(1, 1). If you don’t want your popup to be fullscreen, either use size hints with values less than 1 (for instance size_hint=(.8, .8)) or deactivate the size_hint and use fixed size attributes. Changed in version 1.4.0: The Popup class now inherits from ModalView. The Popup offers a default layout with a title and a separation bar. Examples¶ Example of a simple 400x400 Hello world popup: popup = Popup(title='Test popup', content=Label(text='Hello world'), size_hint=(None, None), size=(400, 400)) By default, any click outside the popup will dismiss/close it. If you don’t want that, you can set auto_dismiss to False: popup = Popup(title='Test popup', content=Label(text='Hello world'), auto_dismiss=False) popup.open() To manually dismiss/close the popup, use dismiss: popup.dismiss() Both open() and dismiss() are bindable. That means you can directly bind the function to an action, e.g. to a button’s on_press: # create content and add to the popup content = Button(text='Close me!') popup = Popup(content=content, auto_dismiss=False) # bind the on_press event of the button to the dismiss function content.bind(on_press=popup.dismiss) # open the popup popup.open() Same returning True from your callback: def my_callback(instance): print('Popup', instance, 'is being dismissed but is prevented!') return True popup = Popup(content=Label(text='Hello world')) popup.bind(on_dismiss=my_callback) popup.open() - class kivy.uix.popup.Popup(**kwargs)[source]¶ Bases: kivy.uix.modalview.ModalView Popup class. See module documentation for more information. -’.
https://kivy.org/doc/master/api-kivy.uix.popup.html
CC-MAIN-2021-39
refinedweb
305
52.97
Welcome to the first lesson of the ‘Introduction to Big Data and Hadoop’ tutorial (part of the Introduction to Big data and Hadoop course). This lesson provides an introduction to Big Data. Further, it gives an introduction to Hadoop as a Big Data technology. Let us explore the objectives of this lesson in the next section. By the end of this lesson, you will be able to: Explain the characteristics of Big Data Describe the basics of Hadoop and HDFS architecture List the features and processes of MapReduce Describe the basics of Pig In the next section of introduction to big data tutorial, we will focus on the need for Big Data. Following are the reasons why Big Data is needed. By an estimate, around 90% of the world’s data has been created in the last two years alone. Moreover, 80% of the data is unstructured or available in widely varying structures, which are difficult to analyze. As IT systems are being developed, it has been observed that structured formats like databases have some limitations with respect to handling large quantities of data. It has also been observed that it is difficult to integrate information distributed across multiple systems. Further, most business users do not know what should be analyzed and discover requirements only during the development of IT systems. As data has grown, so have ‘data lakes’ within enterprises. Potentially valuable data for varied systems such as Enterprise Resource Planning or ERP (and Supply Chain Management or SCM (read as S-C-M) are either dormant or discarded. It is often too expensive to integrate large volumes of unstructured data. Information, such as natural resources, has a short, useful lifespan and is best used in a limited time span. Further, information is best exploited for business value if a context is added to it. In the next section of introduction to big data tutorial, we will focus on the characteristics of Big Data. Big Data has three characteristics, namely, variety, velocity, and volume. Variety Variety encompasses managing the complexity of data in many different structures, ranging from relational data to logs and raw text. Velocity Velocity accounts for the streaming of data and movement of large volume of data at a high speed. Volume Volume denotes the huge scaling of data ranging from terabytes to zettabytes and more. In this section, we will discuss the characteristics of Big Data technology. Big Data technology helps to respond to the characteristics discussed in the previous section. It helps to process the growing volumes of data in a cost-efficient way. For example, as per IBM, Big Data technology has helped to turn the 12 terabytes of Tweets created daily into improved product sentiment analysis. It has converted 350 billion annual meter readings to better predict power consumption. Big Data technology also helps to respond to the increasing velocity of data. For example, it has scrutinized 5 million trade events created daily to identify potential frauds. It has helped to analyze 500 million daily call detail records in real time to predict customer churn faster. Big Data technology can collectively analyze the wide variety of data. For example, it has helped to monitor hundreds of live video feeds from surveillance cameras to target points of interest for security agencies. It has also been able to exploit the 80% data growth in images, videos, and documents to improve customer satisfaction. According to Gartner.com, Big Data is high-volume, high-velocity and high-variety information assets that demand cost-effective, innovative forms of information processing for enhanced insight and decision making. In the next section of introduction to big data tutorial, we will focus on the appeal of Big Data technology. Following are the reasons for the popularity of Big Data technology: Big Data technology helps to manage and process a large amount of data in a cost-efficient manner. It analyzes all available data in their native forms, which can be unstructured, structured, or streaming. It captures data from fast-happening events in real time. Big Data technology is able to handle the failure of isolated nodes and tasks assigned to such nodes. It can turn data into actionable insights. In the next section of introduction to big data tutorial, we will focus on handling limitations of Big Data. There are two key challenges that need to be addressed by Big Data technology. These are handling the system uptime and downtime, and combining data accumulated from all systems. To overcome the first challenge, Big Data technology uses commodity hardware for data storage and analysis. Further, it helps to maintain a copy of the same data across clusters. To overcome the second challenge, Big Data technology analyzes data across different machines and subsequently, merges the data. In the next section of introduction to big data tutorial, we will introduce the concept of Hadoop that helps to overcome these challenges. Trying to make a career in Big data? Click here to know more! Hadoop helps to leverage the opportunities provided by Big Data and overcome the challenges it encounters. What is Hadoop? Hadoop is an open source, a Java-based programming framework that supports the processing of large data sets in a distributed computing environment. It is based on the Google File System or GFS (read as G-F-S). Why Hadoop? Hadoop runs a number of applications on distributed systems with thousands of nodes involving petabytes of data. It has a distributed file system, called Hadoop Distributed File System or HDFS, which enables fast data transfer among the nodes. Further, it leverages a distributed computation framework called MapReduce. In the next section of introduction to big data tutorial, we will focus on Hadoop configuration. Hadoop supports three configuration modes when it is implemented on commodity hardware: Standalone mode Pseudo-distributed mode Fully distributed mode In standalone mode, all Hadoop services run in a single JVM, that is, Java Virtual Machine on a single machine. In pseudo-distributed mode, each Hadoop service runs in its JVM but on a single machine. In a fully distributed mode, the Hadoop services run in individual JVMs, but these JVMs reside on different commodity hardware in a single cluster. In the next section of introduction to big data tutorial, we will discuss the core components of Apache Hadoop. There are two major components of Apache Hadoop. They are Hadoop Distributed File System, abbreviated as HDFS, and Hadoop MapReduce. HDFS is used to manage the storage aspects of Big Data, whereas MapReduce is responsible for processing jobs in a distributed environment. In the next two sections, we will discuss the core components in detail. We will start with HDFS in the next section. HDFS is used for storing and retrieving unstructured data. Some of the key features of Hadoop HDFS are as follows. HDFS provides high-throughput access to data blocks. When unstructured data is uploaded on HDFS, it is converted into data blocks of fixed size. The data is chunked into blocks so that it is compatible with the commodity hardware's storage. HDFS provides a limited interface for managing the file system. It ensures that one can perform a scale up or scale down of resources in the Hadoop cluster. HDFS creates multiple replicas of each data block and stores them in multiple systems throughout the cluster to enable reliable and rapid data access. In the next section of introduction to big data tutorial, we will focus on MapReduce as a core component of Hadoop. The MapReduce component of Hadoop is responsible for processing jobs in distributed mode. Some of the key features of the Hadoop MapReduce component are as follows: It performs distributed data processing using the MapReduce programming paradigm. It allows you to possess a user-defined map phase, which is a parallel, share-nothing processing of input. It aggregates the output of the map phase, which is a user-defined, reduce phase after a mapping process. In the next section of introduction to big data tutorial, we will focus on the HDFS architecture. A typical HDFS setup is shown in a diagram below. This setup shows the three essential services of Hadoop: NameNode DataNode Secondary NameNode services The NameNode and the Secondary NameNode services constitute the master service, whereas the DataNode service falls under the slave service. The master server is responsible for accepting a job from clients and ensuring that the data required for the operation will be loaded and segregated into chunks of data blocks. HDFS exposes a file system namespace and allows user data to be stored in files. A file is split into one or more blocks that are stored and replicated in DataNodes. The data blocks are then distributed to the DataNode systems within the cluster. This ensures that replicas of the data are maintained. In the next section of introduction to big data tutorial, we will focus on an introduction to the Ubuntu Server. Ubuntu is a leading open-source platform for scale-out. Ubuntu helps in the optimum utilization of infrastructure, irrespective of whether you want to deploy a cloud, a web farm, or a Hadoop cluster. Following are the benefits of Ubuntu Server: It has the required versatility and performance to help you get the most out of your infrastructure. Ubuntu services ensure efficient system administration with Landscape. These services provide access to Ubuntu experts as and when required, and enable fast resolution of a problem. In the next section, we will discuss Hadoop installation. To install Hadoop, you need to have a VM installed with the Ubuntu Server 12.04 LTS operating system. You also need high-speed internet access to update the Operating System and download the Hadoop files to the machine. In the next section, we will discuss Hadoop multi-node installation. For Hadoop multi-node installation, you require an Ubuntu Server 12.04 VM preconfigured in Hadoop pseudo-distributed mode. You will also need to ensure that the VM has Internet access so that it can update the file if required. In the next section, we will differentiate between single-node and multi-node clusters. The table below shows the differences between a single-node cluster and a multi-node cluster. In the next section, we will focus on MapReduce in detail. MapReduce is a programming model. It is also an associated implementation for processing and generating large data sets with parallel and distributed algorithms on a cluster. MapReduce operation includes specifying the computation in terms of a map and a reduce function. It makes parallel computation across large-scale clusters of machines possible. MapReduce handles machine failures and performance issues. It also ensures efficient communication between nodes performing the jobs. Computational processing can occur on data stored either in a filesystem, namely, unstructured data, or in a database, namely structured data. MapReduce can be applied to significantly larger datasets when compared to "commodity" servers. In the next section, we will discuss the characteristics of MapReduce. Some characteristics of MapReduce are listed below in the section. MapReduce is designed to handle very large scale data in the range of petabytes, exabytes and so on. It works well on Write once and read many data, also known as WORM data. MapReduce allows parallelism without mutexes. The Map and reduce operations are typically performed by the same physical processor. Operations are provisioned near the data that is, data locality is preferred. Commodity hardware and storage is leveraged in MapReduce. The runtime takes care of splitting and moving data for operations. In the next section, we will list some of the real-time uses of MapReduce. Some of the real-time uses of MapReduce are as follows: Simple algorithms such as grep, text indexing, and reverse indexing. Data-intensive computing, for example, sorting also uses MapReduce. Data mining operations like Bayesian classification use this technique. Search engine operations like keyword indexing, ad rendering, and PageRank have been commonly regularly using MapReduce, and so is enterprise analytics. Gaussian analysis for locating extra-terrestrial objects in astronomy have found MapReduce as a good technique. There seems to be a good potential for MapReduce in semantic web and web 3.0. In the next section, we will discuss the prerequisites for Hadoop installation in Ubuntu Desktop 12.04 (read as twelve point oh four). Ubuntu Desktop 12.04 VM installed with Eclipse, and a high-speed internet connection is required to install Hadoop in Ubuntu Desktop 12.04. In the next section, we will list the key features of Hadoop MapReduce. MapReduce functions use key/value (read as ‘key value’) pairs. Some of the key features of Hadoop MapReduce function are as follows: The framework converts each record of input into a key/value pair, which is a one-time input to the map function. The map output is also a set of key/value pairs which are grouped and sorted by keys. The reduce function is called once for each key, in sort sequence, with the key and set of values that share that key. The reduce method may output an arbitrary number of key/value pairs, which are written to the output files in the job output directory. In the next section, we will explore the processes related to Hadoop MapReduce. The framework provides two processes that handle the management of MapReduce jobs. They are the TaskTracker and JobTracker services. TaskTracker Services The TaskTracker service resides in the Data Node. The TaskTracker manages the execution of individual map and reduces tasks on a compute node in the cluster. JobTracker Services The JobTracker service resides in the system where the NameNode service resides. The JobTracker accepts job submissions, provides job monitoring and control and manages the distribution of tasks to the TaskTracker nodes. In the next section, we will focus on advanced HDFS. The Hadoop Distributed File System is a block-structured, distributed file system. It is designed to run on small commodity machines in a way that the performance of the running jobs will be better when compared to single standalone dedicated servers. HDFS provides the storage solution to store Big Data and make the data accessible to Hadoop services. Some of the settings in advanced HDFS are HDFS benchmarking, setting up HDFS block size, and decommissioning or removing a DataNode. In the next section, we will focus on advanced MapReduce. Hadoop MapReduce uses data types when it works with user-given mappers and reducers. The data is read from files into mappers and emitted by mappers to reducers. Processed data is sent back by the reducers. Data emitted by reducers go into output files. At every step, data is stored in Java objects. In the Hadoop environment, objects that can be put to or received from files and across the network must obey a particular interface called Writable. This interface allows Hadoop to read and write data in a serialized form for transmission. In the next section, we will focus on the data types in Hadoop and their functions. The table below shows a list of data types. In the next section, we will introduce the concept of distributed cache. Distributed Cache is a Hadoop feature to cache files needed by applications. Following are the functions of distributed cache: It helps to boost efficiency when a map or a reduce task needs access to common data. It lets a cluster node read the imported files from its local file system, instead of retrieving the files from other cluster nodes. It allows both single files and archives such as zip and tar.gz. It copies files only to slave nodes. If there are no slave nodes in the cluster, distributed cache copies the files to the master node. It allows access to the cached files from mapper or reducer applications to make sure that the current working directory is added into the application path. It allows referencing the cached files as though they are present in the current working directory. In the next section, we will understand joins in MapReduce. Joins are relational constructs you can use to combine relations. In MapReduce, joins are applicable in situations where two or more datasets need to be combined. A join is performed either in the Map phase or the Reduce phase by taking advantage of the MapReduce Sort-Merge architecture. The various join patterns that are available in MapReduce are: Reduce side join Replicated join Composite join Cartesian product In the next section, we will focus on an introduction to Pig. Pig is one of the components of the Hadoop eco-system. It is a high-level data flow scripting language. Pig runs on Hadoop clusters. It was initially developed by Facebook for their project, as they did not want to use java for performing Hadoop operations. Later Pig became an Apache open-source project. Pig uses HDFS for storing and retrieving data and Hadoop MapReduce for processing Big Data. In the next section, we will discuss the major components of Pig. The two major components of Pig are the Pig Latin (pig-latin) script language and a runtime engine. The Pig Latin script language is a procedural data flow language. It contains syntax and commands that can be applied to implement business logic. Examples of Pig Latin are LOAD, STORE, etc. The runtime engine is a compiler that produces sequences of Map-Reduce programs. It uses HDFS for storing and retrieving data. It is also used to interact with the Hadoop system, that is, HDFS and MapReduce. It parses, validates, and compiles the script operations into a sequence of MapReduce jobs. In the next section, we will understand the data model associated with Pig. As part of its data model, Pig supports four basic types. Atom Atom is a simple atomic value like int, long, double, or string. Tuple A tuple is a sequence of fields that can be of any data type. Bag The bag is a collection of tuples of potentially varying structures and can contain duplicates. Map A map is an associative array. The key must be a chararray but the value can be of any type. In the next section, we will differentiate between Pig and SQL (read as S-Q-L). The table below shows the differences between Pig and SQL. The first difference between Pig and SQL is that Pig is a scripting language used to interact with HDFS while SQL is a query language used to interact with databases residing in the database engine. In terms of query style, Pig offers a step by step execution style compared to the single block execution style of SQL. Pig does a lazy evaluation, which means that data is processed only when the STORE or DUMP command is encountered. However, SQL offers an immediate evaluation of a query. Pipeline Splits are supported in Pig, but in SQL, you may require the join to be run twice or materialized as an intermediate result. In the next section, we will discuss the prerequisites for setting the environment for Pig Latin. Want to know more about Big data? Check out our course preview here! Ensure the following parameters while setting the environment for Pig Latin. Ensure all Hadoop services are running Ensure Pig is installed and configured Ensure all datasets are uploaded to the NameNode, that is, HDFS. Let us summarize the topics covered in this lesson: Big Data has three characteristics, namely, variety, velocity, and volume. Hadoop HDFS and Hadoop MapReduce are the core components of Hadoop. One of the key features of MapReduce is that the map output is a set of key/value pairs which are grouped and sorted by key. TaskTracker manages the execution of individual map and reduces tasks on a compute node in the cluster. Pig is a high-level data flow scripting language. It uses HDFS for storing and retrieving data. In the next lesson, we will focus on Hive, HBase, and components of the Hadoop ecosystem. A Simplilearn representative will get back to you in one business day.
https://www.simplilearn.com/introduction-to-big-data-and-hadoop-tutorial
CC-MAIN-2019-22
refinedweb
3,331
56.15
This is the 10th round-up already and React has come quite far since it was open sourced. Almost all new web projects at Khan Academy, Facebook, and Instagram are being developed using React. React has been deployed in a variety of contexts: a Chrome extension, a Windows 8 application, mobile websites, and desktop websites supporting Internet Explorer 8! Language-wise, React is not only being used within JavaScript but also CoffeeScript and ClojureScript. The best part is that no drastic changes have been required to support all those use cases. Most of the efforts were targeted at polishing edge cases, performance improvements, and documentation. Khan Academy - Officially moving to React # Joel Burget announced at Hack Reactor that new front-end code at Khan Academy should be written in React!. React: Rethinking best practices # Pete Hunt's talk at JSConf EU 2013 is now available in video. Server-side React with PHP # Stoyan Stefanov's series of articles on React has two new entries on how to execute React on the server to generate the initial page load. This post is an initial hack to have React components render server-side in PHP. - Problem:! Rendered markup on the server: TodoMVC Benchmarks # Webkit has a TodoMVC Benchmark that compares different frameworks. They recently included React and here are the results (average of 10 runs in Chrome 30): - AngularJS: 4043ms - AngularJSPerf: 3227ms - BackboneJS: 1874ms - EmberJS: 6822ms - jQuery: 14628ms - React: 2864ms - VanillaJS: 5567ms Please don't take those numbers too seriously, they only reflect one very specific use case and are testing code that wasn't written with performance in mind. Even though React scores as one of the fastest frameworks in the benchmark, the React code is simple and idiomatic. The only performance tweak used is the following function: /** * This is a completely optional performance enhancement that you can implement * on any React component. If you were to delete this method the app would still * work correctly (and still be very performant!), we just use it as an example * of how little code it takes to get an order of magnitude performance improvement. */ shouldComponentUpdate: function (nextProps, nextState) { return ( nextProps.todo.id !== this.props.todo.id || nextProps.todo !== this.props.todo || nextProps.editing !== this.props.editing || nextState.editText !== this.state.editText ); }, By default, React "re-renders" all the components when anything changes. This is usually fast enough that you don't need to care. However, you can provide a function that can tell whether there will be any change based on the previous and next states and props. If it is faster than re-rendering the component, then you get a performance improvement. The fact that you can control when components are rendered is a very important characteristic of React as it gives you control over its performance. We are going to talk more about performance in the future, stay tuned. Guess the filter # Connor McSheffrey implemented a small game using React. The goal is to guess which filter has been used to create the Instagram photo. React vs FruitMachine # Andrew Betts, director of the Financial Times Labs, posted an article comparing FruitMachine and React.. Even though we weren't inspired by FruitMachine (React has been used in production since before FruitMachine was open sourced), it's great to see similar technologies emerging and becoming popular. React Brunch # Matthew McCray implemented react-brunch, a JSX compilation step for Brunch. Adds React support to brunch by automatically compiling *.jsxfiles. You can configure react-brunch to automatically insert a react header ( /** @jsx React.DOM */) into all *.jsxfiles. Disabled by default. Install the plugin via npm with npm install --save react-brunch. Random Tweet # I'm going to start adding a tweet at the end of each round-up. We'll start with this one: This weekend #angular died for me. Meet new king #reactjs— Eldar Djafarov ッ (@edjafarov) November 3, 2013
http://facebook.github.io/react/blog/2013/11/06/community-roundup-10.html
CC-MAIN-2015-48
refinedweb
646
54.22
Hello, i need some help with C++...I'm attempting to make an 4x4 array. The user inputs the array "Line 1: " // the the user inputs his 4 numbers for line 1 and so on.... the program then adds the array across on all 4 lines, and twice diagonally. I can only use #include using namespace std; int main (){ //using only types int and double I want it to be stupid simple, using for loops, and make it as simple as possible. I realize that you must array[4][4]; int i, j; for(i=0; i<4; i++){ cout << "enter line 1" << i << endl; cin >> i; // not sure if this is for(j=0; j<4; j++) // right but I know // the for's are right // or at least on trak so i know the fors are getting there, but i have no clue how a user can input integers in an array, and i think i know how to add them [0][0]+[0][1]+[0][2]+[0][3]= result am i even close? can you give me some code please!!!
https://www.daniweb.com/programming/software-development/threads/109048/adding-multidemensional-arrays
CC-MAIN-2018-51
refinedweb
183
81.36
Closed Bug 454831 Opened 13 years ago Closed 13 years ago Create a policy for hosting content at namespace URIs Categories (mozilla.org :: Governance, task) Tracking (Not tracked) People (Reporter: davidwboswell, Assigned: zak) References Details I've been looking through the open bugs and there are several requests to create pages on the site to host a page for various namespace URIs. For instance: XBL bug 334634 microsummaries bug 354964 MozSearch bug 452351 XForms bug 419708 From reading the bugs it seems that having namespace URIs resolve to a real page isn't necessary, but it is desirable. It sounds like not having a page causes confusion and setting up a redirect isn't desirable because people aren't sure what URI to use for the namespace. The various bugs have more description about the issues. There's a page up at the XUL namespace URI so this is something we've done before. Should we just go ahead and add pages for these other namespace URIs or is this something that needs more discussion? This is not an area I'm familiar with so I wanted some guidance. This sounds like a good idea to me, though we probably should have some guideline for any namespaces created in the future, so we get them into a common style, i.e. directory. The already existing ones probably need to stay what they are anyway - including the XUL one that still makes a great joke for those who understand it (including the actual content on the page!) The only possible objection is that there could be software which fetches the URIs. I suggest you talk to someone who knows about this stuff such as dbaron and if he says it's OK, go ahead and do it. Gerv There is no generic software that fetches arbitrary namespace URIs - RDF-related namespaces quite often serve RDF, but no specification requires them to do so, nor have they agreed on what RDF and how it should be serialized and whether it should be embedded in another document and how and... The "only possible objection" is the desire to use each and every instance of someone encountering their first namespace URI which does not serve a namespace document (something they won't hit until they leave W3C specs, since anything on the W3C recommendation track has been required to return a document from any namespace URI it defines, since ten months after Namespaces was first published) as a teachable moment, a chance to explain to the hundredth or thousandth person that namespace URIs are opaque strings and even if they look like http URIs, they are not required to return anything. I would cc the people that I know are in that camp, but curiously enough, despite their love for such teachable moments, they seem to uncc themselves from bugs concerning namespace documents with great vigor, frequency, and rapidity. We could continue this desultory discussion for another three years or so (or, knowing us as I do, six or nine or twelve years), or someone with access and a web tree checked out could follow the W3C's example as seen in the SVG and XLink specs, and for the namespaces where someone has cared enough to file a bug just bloody well check in [[ This is an XML namespace defined in the <a href="link-to-spec">Name of spec</a>. For more information about Thing, please refer to <a href="link-to-devmo">The Devmo Introduction to Thing</a>. ]]] adjusting as need be for things which lack a separate spec and intro, and we could be done with all this. (Mildly amusing: (the "much lengthy discussion of the philosophy behind this [...] let us all get down to the rest of our business" part).) David: I suggest you or one of your team just go ahead and do this. Gerv Sounds good. The owners of the various projects can decide if they want to do this or not, but knowing there are no governance objections to this is helpful. Closing as fixed. Status: NEW → RESOLVED Closed: 13 years ago Resolution: --- → FIXED
https://bugzilla.mozilla.org/show_bug.cgi?id=454831
CC-MAIN-2021-49
refinedweb
688
61.4
A microframework for Bottle+React projects. Project description image Bottle-React *NOW SUPPORTS FLASK!* See examples/hello_world/run_flask.py. Description This library allows you to return react components from either Bottle or Flask. Originally created for. Example (Hello World) Assume you have a normal JSX file hello_world.jsx: var HelloWorld = React.createClass({ render: function() { return ( <div className='hello_world'> <h1>Hello {this.props.name}!</h1> <div> Thanks for trying bottle-react! </div> </div> ); } }) bottlereact._register('HelloWorld', HelloWorld) And some python code: app = bottle.Bottle() br = BottleReact(app) @app.get('/') def root(): return br.render_html( br.HelloWorld({'name':'World'}) ) When your route is called the react component will be rendered. See examples/hello_world for details. Principles Why did we develop this? We had several goals: - [x] Don’t cross-compile javascript during development. Compiling with webpack is too slow for non-trivial applications. (One of the niceties about web developement it alt-Tab/ctrl-R to see your changes.) And it causes too many subtle bugs between dev and prod that waste developer resources. - [x] Don’t merge all javascript into one ginormous bundle. Making your user download a 1.5Mb kitchensink.min.js every deployment is horrible. And 99% of it isn’t used on most pages. Loading 40kb total from multiple resources with HTTP keep-alive takes just a few ms per file and is much faster in practice. - [x] React components should be composable from Python. A lot of our routes look like this: @app.get('/something') def something(): user = bottle.request.current_user return br.render_html( br.HvstApp({'user':user.to_dict()}, [ br.HelloWorld({'name':user.name}), ]) ) The React component HvstApp (which renders the title bar and left nav) is taking two parameters. The first is a dict that will be passed as the JSON props to the React component. The second is a list that will become the children. This list can (and usually does) contain other React components. Install sudo pip install bottle-react NGINX Integration By default (in production mode) bottle-react writes to /tmp/bottlereact/hashed-assets/. To make NGINX serve these files directly, use the following: location ^~ /__br_assets__/ { alias /tmp/bottlereact/hashed-assets/; expires max; } Server Side Rendering To use server side rendering, please install the npm package `node-jsdom <>`__ with: $ sudo npm install -g node-jsdom Then pass either True or a callable into the render_server parameter. For example: def render_server(): ua = bottle.request.environ.get('HTTP_USER_AGENT') return util.is_bot(ua) BTW… Before enabling it for everyone, run some benchmarks. We find that it has very little impact on total page load time, at a considerable CPU expense and double the downloaded HTML size. So we only do it for search bots (as you can see in the example above). You will also likely have to shim some missing browser features. At minimum, React likes to put itself under window when run inside nodejs, so we have: // react in nodejs will put itself under window if(typeof React == 'undefined') { React = window.React; } In our application.js, since all our code expects it to be a global. Likewise, for things node-jsdom hasn’t yet implemented, you’ll likely find a few checks are needed, like: if (typeof DOMParser=='undefined') { // i guess we're not using DOMParser inside nodejs... } Documentation See the full documentation. Project details Release history Release notifications Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/bottle-react/
CC-MAIN-2018-47
refinedweb
577
52.15
2D line segment - Double More... #include <line_segment.h> 2D line segment - Double A line segment has a start point and an end point Clip this line to a rectangle. If clipping was not successful, this object is undefined Return true if two line segments are collinear. (All points are on the same line.). Return the intersection point of two lines. Get the midpoint of this line. Return true if two line segments intersect. Return the normal vector of the line from point A to point B. When using CL_Vec2i, the vector is an 8 bit fraction (multiplied by 256) != operator. == operator. Return the distance from a point to a line. Return [<0, 0, >0] if the Point P is right, on or left of the line trough A,B. Start point on the line.
http://gyan.fragnel.ac.in/docs/clanlib/classCL__LineSegment2d.html
CC-MAIN-2019-09
refinedweb
134
85.69
This document assumes the knowledge of the W3C recommendation or working drafts used in Cocoon (mainly XML, XSL in both its transformation and formatting capabilities). This document is not intended to be an XML or XSL tutorial but just shows how these technologies may be used inside the Cocoon framework to create web content. Cocoon is a publishing system that allows you to separate web development into three different layers: content, style and logic. In a way, Cocoon does not aim to simplify the creation of web content: in fact, it is harder to create XML/XSL content than it is to use HTML from the beginning. So, if you are happy with the web technology you are using today, don't waste your time and stick with what you already have. Otherwise, if your troubles are site management, if your graphics people are always in the way, if your HTML authors always mess up your page logic, if your managers see no results in hiring new people to work on the site - go on and make your life easier! This comment posted on the Cocoon mailing list shows you what we mean: I've got a site up and running that uses Cocoon. It rocks, the management loves me (they now treat me like I walk on water), and a couple of summer interns that I had helping me on the project are suddenly getting massively head-hunted by companies like AT&T now that they can put XML and XSL on their resumes. In a word: Cocoon simply rocks! Every good user guide starts with a Hello World example, and since we hope to write good documentation (even if it's as hard as hell!), we'll start from there too. Here is a well-formed XML file that uses a custom and simple set of tags: Hello World <?xml version="1.0"?> <page> <title>Hello World!</title> <content> <paragraph>This is my first Cocoon page!</paragraph> </content> </page> Even if this page mimics HTML (in a sense, HTML was born as a simple DTD for homepages), it is helpful to note that there is no style information and all the styling and graphic part is missing. Where do I put the title? How do I format the paragraph? How do I separated the content from the other elements? All these questions do not have answers because in this context they don't need one: this file should be created and maintained by people that don't need to be aware of how this content if further processed to become a served web document. On the other hand, we need to indicate how the presentation questions will be answered. To do this, we must indicate a stylesheet that is able to indicate how to interpret the elements found in this document. Thus, we follow a W3C recommendation and add the XML processing instruction to map a stylesheet to a document: <?xml-stylesheet href="hello.xsl" type="text/xsl"?> Now that our content layer is done, we need to create a stylesheet to convert it to a format readable by our web clients. Since most available web clients use HTML as their lingua franca, we'll write a stylesheet to convert our XML in HTML (More precisely, we convert to XHTML which is the XML form of HTML, but we don't need to be that precise at this point). Every valid stylesheet must start with the root element stylesheet and define its own namespace according to the W3C directions. So the skeleton of your stylesheet is: <?xml version="1.0"?> <xsl:stylesheet xmlns: </xsl:stylesheet> Once the skeleton is done, you must include your template elements, which are the basic units of operation for the XSLT language. Each template is matched against the occurence of some elements in the original document and the element is replaced with the child elements of the template, if they belong to other namespaces, or, if they belong to the XSLT namespace, they are further processed in a recursive way. template Let's see an example: in our HelloWorld.xml document page is the root element. This must be transformed into all those tags that identify a good HTML page. Your template becomes: HelloWorld.xml page <xsl:template <html> <head> <title><xsl:value-of</title> </head> <body bgcolor="#ffffff"> <xsl:apply-templates/> </body> </html> </xsl:template> where some elements belong to the standard namespace (which we mentally associate with HTML) and some others to the xsl: namespace. Here we find two of those XSLT elements: value-of and apply-templates. While the first searches the page element's direct children for the title element and replaces it with the content of the retrieved element, the second indicates to the processor that it should continue the processing of the other templates described in the stylesheet from that point on in the input document (known as the context node). value-of apply-templates title Some other possible templates are: <xsl:template <h1 align="center"> <xsl:apply-templates/> </h1> </xsl:template> <xsl:template <p align="center"> <i><xsl:apply-templates/></i> </p> </xsl:template> After the XSLT processing, the original document is transformed to <html> <head> <title>Hello</title> </head> <body bgcolor="#ffffff"> <h1 align="center">Hello</h1> <p align="center"> <i>This is my first Cocoon page!</i> </p> </body> </html> When a document is processed by an XSLT processor, its output is exactly the same for every browser that requested the page. Sometimes it's very helpful to be able to discover the client's capabilities and transform content layer into different views/formats. This is extremely useful when we want to serve content do very different types of clients (fat clients like desktop workstations and thin clients like wireless PDAs), but we want to use the same information source and create the smallest possible impact on the site management costs. Cocoon is able to discriminate between browsers, allowing the different stylesheets to be applied. This is done by indicating in the stylesheet linking PI the media type. For example, continuing with the HelloWorld.xml document, these PIs <?xml version="1.0"?> <?xml-stylesheet href="hello.xsl" type="text/xsl"?> <?xml-stylesheet href="hello-text.xsl" type="text/xsl" media="lynx"?> ... would tell Cocoon to apply the hello-text.xsl stylesheet if the Lynx browser is requesting the page. This powerful feature allows you to design your content independently and to choose its presentation depending on the capabilities of the browser agent. hello-text.xsl The media type of each browser is evaluated by Cocoon at request time, based on their User-Agent http header information. Cocoon is preconfigured to handle these browsers: User-Agent explorer - any Microsoft Internet Explorer, searches for MSIE (before searching for Mozilla, since IE pretends to be Mozilla too) opera - the Opera browser (before searching for Mozilla, since Opera pretends to be Mozilla too) lynx - the text-only Lynx browser java - any Java code using standard URL classes wap - the Nokia WAP Toolkit browser netscape - any Netscape Navigator or Mozilla, searches for Mozilla but you can add your own by personalizing the cocoon.properties file, modifying the browser properties. For example: cocoon.properties browser browser.0=explorer=MSIE browser.1=opera=Opera browser.2=lynx=Lynx browser.3=java=Java browser.4=wap=Nokia-WAP-Toolkit browser.5=netscape=Mozilla indicates that Cocoon should look for the token MSIE inside the User-Agent HTTP request header first, then Opera and so on, until Mozilla. If you want to recognize different generations of the same browser you should find the specific string you should look for, and - this is very important - indicate the order of matching, since other browsers' User Agent strings may contain the same string (see examples above). Quite often you want to create pages that depend on some user-supplied data. One way to do this is using HTML forms. Cocoon provides you with a simple way to use this data. Let's assume you've got the following list and you want the user to choose a country code and be shown the name of the corresponding country. <?xml version="1.0"?> <?cocoon-process type="xslt"?> <?xml-stylesheet href="page.xsl" type="text/xsl"?> <page> <country code="ca">Canada</country> <country code="de">Germany</country> <country code="fr">France</country> <country code="uk">United Kingdom</country> <country code="us">United States</country> <country code="es">Spain</country> </page> You now use the following XSL stylesheet with it <?xml version="1.0"?> <xsl:stylesheet xmlns: <xsl:param <xsl:template <html> <body> <xsl:choose> <xsl:when <p>Choose a country:</p> <form action="countries.xml" method="get"> <select name="countrycode" size="1"> <xsl:apply-templates <input type="submit"/> </form> </xsl:when> <xsl:when <xsl:apply-templates </xsl:when> <xsl:otherwise> <p>Unknown country code <em><xsl:value-of</em>. </p> </xsl:otherwise> </xsl:choose> </body> </html> </xsl:template> <xsl:template <option><xsl:value-of</option> </xsl:template> <xsl:template <p><em><xsl:value-of</em> stands for <xsl:value-of</p> </xsl:template> </xsl:stylesheet> Viewing countries.xml now will yield different results. When no parameter is given (i.e. using the URL countries.xml) the browser will receive the following page: countries.xml <html> <body> <p>Choose a country:</p> <form action="countries.xml" method="get"> <select name="countrycode" size="1"> <option>ca</option> <option>de</option> <option>fr</option> <option>uk</option> <option>us</option> <option>es</option> <input type="submit"> </form> </body> </html> Choosing one of the options in the list will result in a request for a URL like countries.xml?countrycode=fr and this page will look like: countries.xml?countrycode=fr <html> <body> <p><em>fr</em> stands for France</p> </body> </html> If for some reason no country element matching the countrycode parameter is found (e.g. countries.xml?countrycode=foo, you will get the following page: countrycode countries.xml?countrycode=foo <html> <body> <p>Unknown country code <em>foo</em>.</p> </body> </html> The Cocoon publishing system has an engine based on the reactor design pattern which is described in the picture below: Let's describe the components that appear on the schema: Request - Wraps around the client's request and contains all the information needed by the processing engine. The request must indicate which client generated the request, which URI is being requested and which producer should handle the request. Producer - Handles the requested URI and produces an XML document. Since producers are pluggable, they work like subservlets for this framework, allowing users to define and implement their own producers. A producer is responsible for creating the XML document which is fed into the processing reactor. It's up to the producer implementation to define the function that produces the document from the request object. Reactor - Responsible for evaluating which processor should work on the document by reacting on XML processing instructions. The reactor pattern is different from a processing pipeline since it allows the processing path to be dynamically configurable and it increases performance since only those required processors are called to handle the document. The reactor is also responsible for forwarding the document to the appropriate formatter. Formatter - Transforms the memory representation of the XML document into a stream that may be interpreted by the requesting client. Depending on other processing instructions, the document leaves the reactor and gets formatted for its consumer. The output MIME type of the generated document depends on the formatter implementation. Response - Encapsulates the formatted document along with its properties (such as length, MIME type, etc..) Loader - Responsible for loading the formatted document when this is executable code. This part is used for compiled server pages (principally XSP) where the separation of content and logic is merged and compiled into a Producer. When the formatter output is executable code, it is not sent back to the client directly, but it gets loaded and executed as a document producer. This guarantees both performance improvement (since the producers are cached) as well as easier producer development, following the common compiled server pages model. The Cocoon reactor uses XML processing instructions to forward the document to the right processor or formatter. These processing instructions are: <?cocoon-process type="xxx"?> for processing and <?cocoon-format type="yyy"?> for formatting These PIs are used to indicate the processing and formatting path that the document should follow to be served. In the example above, we didn't use them and Cocoon wouldn't know (despite the presence of the XSL PIs) that the document should be processed by the XSLT processor. To do this, the HelloWorld.xml document should be modified like this: <?xml version="1.0"?> <?cocoon-process type="xslt"?> <?xml-stylesheet href="hello.xsl" type="text/xsl"?> <page> <title>Hello World!</title> <content> <paragraph>This is my first Cocoon page!</paragraph> </content> </page> The other processing instruction is used to indicate what formatter should be used to transform the document tree into a suitable form for the requesting client. For example, in the document below that uses the XSL formatting object namespace, the Cocoon PI indicates that this document should be formatted using the formatter associated to the text/xslfo document type. text/xslfo <?xml version="1.0"?> <?cocoon-format type="text/xslfo"?> <fo:root xmlns: <fo:layout-master-set> <fo:simple-page-master <fo:region-body </fo:simple-page-master> </fo:layout-master-set> <fo:page-sequence> <fo:sequence-specification> <fo:sequence-specifier-repeating </fo:sequence-specification> <fo:flow <fo:block>Welcome to Cocoon</fo:block> </fo:flow> </fo:page-sequence> </fo:root> In a complex server environment like Cocoon, performance and memory usage are critical issues. Moreover, the processing requirement for both XML parsing, XSLT transformations, document processing and formatting are too heavy even for the lightest serving environment based on the fastest virtual machine. For this reason, a special cache system was designed underneath the Cocoon engine and its able to cache both static and dynamically created pages. Its operation is simple but rather powerful: This special cache system is required since the page is processed with the help of many components which, independently, may change over time. For example, a stylesheet or a file template may be updated on disk. Every processing logic that may change its behavior over time it's considered changeable and checked at request time for change. Each changeable point is queried at request time and it's up to the implementation to provide a fast method to check if the stored page is still valid. This allows even dynamically generated pages (for example, an XML template page created by querying a database) to be cached and, assuming that request frequency is higher than the resource changes, it greatly reduces the total server load. Moreover, the cache system includes a persistent object storage system which is able to save stored objects in a persistent state that outlives the JVM execution. This is mainly used for pages that are very expensive to generate and last very long without changes, such as compiled server pages. The store system is responsible for handling the cached pages as well as the pre-parsed XML documents. This is mostly used by XSLT processors which store their stylesheets in a pre-parsed form to speed up execution in those cases where the original file has changed, but the stylesheet has not (which is a rather frequent case).
http://cocoon.apache.org/1.x/guide.html
CC-MAIN-2016-07
refinedweb
2,573
50.87
POP3 Transport Reference Introduction The POP3 transport can be used for receiving messages from POP3 inboxes. The POP3S tranport connects to POP3 mailboxes using the javax.mail API. 2 POP3 flow-based version: Here is the secure version of the same configuration: logs into the bob mailbox on pop.gmail.com using password foo (using the default port 995 for the POP3S endpoint). xslt: Read error because of: java.io.IOException: Server returned HTTP response code: 401 for URL: Here is how you define transformers in your Mule configuration file: Each transformer supports all the common transformer attributes and properties: xslt: Read error because of: java.io.IOException: Server returned HTTP response code: 401 for URL: The object-to-mime-transformer has the following attributes: To use these transformers, make sure you include the 'email' namespace in your mule configuration. Filters Filters. Schema Reference POP3 Schema: POP3 Structure: POP3S Schema: POP3S Structure:: Mule-Maven Dependencies. To add the 'mule-deps' repository to your Maven project, add the following to your pom.xml: Limitations The following known limitations affect email transports:
https://docs.mulesoft.com/mule-user-guide/v/3.2/pop3-transport-reference
CC-MAIN-2017-51
refinedweb
180
54.63
SYNOPSIS #include <dtk_video.h> dtk_htex dtk_load_video_gst(int flags, const char *desc); DESCRIPTION This function loads as a dynamic texture the video stream arriving into the sink of a gstreamer pipeline. The pipeline description desc should follows the syntax used in the gst-launch(1) utility and it must contains an video sink of type appsink named dtksink that will be used to get the data for the dynamic texture. The texture will then been tracked by an internal texture manager so that the next call using the same desc argument will return the same texture handle, thus sparing the resources resources_gst() is thread-safe.
http://manpages.org/dtk_load_video_gst/3
CC-MAIN-2020-10
refinedweb
104
54.76
Posted in Cho Hisabi. I was so surprised and irritated. Discovered while working on migrating a Java-built system from Amazon Linux to Amazon Linux 2. In Amazon Linux 2, execute the following command to set the time zone of the system. #timedatectl set-timezone "Asia/Tokyo" Normally, this should be all OK, and even if you search the net, only Amazon Linux 2 and CentOS 7 series articles only mention Cocomade. However, in the Java program, TimeZone cannot be acquired correctly and becomes UTC. sample: TimeZoneInfo.java import java.util.Calendar; class TimeZoneInfo { public static void main(String[] args) { Calendar cal = Calendar.getInstance(); System.out.println(cal.getTimeZone()); System.out.println(System.getProperty("user.timezone")); } } Execution result: sun.util.calendar.ZoneInfo[id="UTC",offset=0,dstSavings=0,useDaylight=false,transitions=0,lastRule=null] UTC There are several ways to deal with it. (It took a long time to find it) etc. I think there are some problems. --timedatectl will rewrite / etc / localtime, but not `` `/ etc / sysconfig / clock . (It seems that there is no file itself in CentOS 7 in the first place ...) --Java doesn't look at / etc / localtime```. It seems that it depends on the version. It may be a change in the behavior of glibc, but the detailed cause is unknown. --Since it became systemd, / etc / sysconfig is not often referred to, but the legacy of the past remains, so please consider compatibility a little more. .. .. .. --The old system runs on Java 1.6, and the new system runs on Java 7. I don't know what happens with the latest Java 8 or later. Which is the most correct (or should be) response method? .. .. .. Or rather, standardize the Time Zone settings. Recommended Posts
https://linuxtut.com/timedatectl-and-java-timezone-b023e/
CC-MAIN-2020-50
refinedweb
287
51.95
IE.au3 Alert stops executing script By ADIN, in AutoIt General Help and Support Recommended Posts Similar Content - By Jemboy Hi, At work we have some proprietary website, users have to login to. I have "made" an autoit executable to start IE, go the website, login, so the user do not have input their credentials every time. By NDA I am not allowed disclosed the URL of the website nor the login credentials So I made a fake website and an autoitscript to illustrate my question. #include <ie.au3> $oIE = _IECreate ("about:blank", 0, 1, 1, 1) $HWND = _IEPropertyGet($oIE, "hwnd") WinActivate ($HWND,"") WinSetState($HWND, "", @SW_MAXIMIZE) _IENavigate ($oIE, "",1) The above start my demo website. The actual website has some links in the footer that I do not want most people click on. I contacted the developers of the website and they are thinking of making an option to configure what links to show in the footer, but they said it's not a high priority for them. I discovered, that by click F12 and deleting the <footer> element the footer is delete from the live page view (until the page is reloaded off course) I want to automate the removal of the footer, without using things like send(). I tried getting the footer with _IEGetObjById and deleting it with _IEAction, but that didn't work. Does any one has an idea how I could delete the footer directly from view with an autoit script? TIA, Jem. - Dequality I don't really have any code cuz whatever i try it doesnt work -.- Can anyone please make me a sample i can study? <,< i need to make a simple script to open explorer click some objects on the site , rinse repeat.. a example with autoit' website is just fine i just need to study it.. the i can't seem to find any yt video on this.. '-_- And yes i tried looking at the manual .. but i dont get anything from it <.<' Any help highly appreciatet <3 - By XinYoung Hello again I need to click this Submit button, but it's in a lightbox and has no ID (i think) and I can't figure out a way to target it. _IEFormSubmit doesn't work. Thank you - By lenclstr746 <span class="fr66n"><button class="dCJp8 afkep coreSpriteHeartOpen _0mzm-"><span class="glyphsSpriteHeart__outline__24__grey_9 u-__7" aria-</span></button></span> I want to click this button ?
https://www.autoitscript.com/forum/topic/186909-ieau3-alert-stops-executing-script/
CC-MAIN-2019-04
refinedweb
403
67.99
Hi - good afternoon Hi good afternoon write a java program that Implement an array ADT with following operations: - a. Insert b. Delete c. Number of elements d. Display all elements e. Is Empty... developer : First of all the JDK (Java Development Kit) must be available in your system and its path must be located in the Environment variable. For detail hi all - Java Beginners /Good_java_j2ee_interview_questions.html?s=1 Regards, Prasanth HI...hi all hi, i need interview questions of the java asap can u please sendme to my mail Hi, Hope you didnt have this eBook. You Which is the good website for struts 2 tutorials? Which is the good website for struts 2 tutorials? Hi, After... for learning Struts 2. Suggest met the struts 2 tutorials good websites. Thanks Hi, Rose India website is the good Ple help me its very urgent Ple help me its very urgent Hi.. I have one string 1)'2,3,4' i want do like this '2','3','4' ple help me very urgent Help Very Very Urgent - JSP-Servlet requirements.. Please please Its Very very very very very urgent... Thanks...Help Very Very Urgent Respected Sir/Madam, I am sorry..Actually... contains all the ID's present in the database.. When I click or select any , please reply my posted question its very urgent Thanks...struts we are using Struts framework for mobile applications,but we are not using jsps for views instead of jsps we planning to use xhtmls.In struts - Struts Struts hi can anyone tell me how can i implement session tracking in struts? please it,s urgent........... session tracking? you mean... that session and delete its value like this... session.removeAttribute hi! hi! public NewJFrame() { initComponents(); try { Class.forName("java.sql.Driver"); con=DriverManager.getConnection("jdbc... hi the total cost of the two items;otherwise,display the total for all three Very Very Urgent -Image - JSP-Servlet with some coding, its better.. PLEASE SEND ME THE CODING ASAP BECAUSE ITS VERY VERY URGENT.. Thanks/Regards, R.Ragavendran... Hi friend, Code...Very Very Urgent -Image Respected Sir/Madam, I am which data structure is good in java..? - Java Beginners which data structure is good in java..? Hi frends, Actually i... and reasons why it is good.... Hi Friend, To learn Data Structures... and vector ...etc........ i wanted to know, which technique is good to store Programming help Very Urgent - JSP-Servlet Please please its very urgent.. Thanks/Regards, R.Ragavendran..  ...Programming help Very Urgent Respected Sir/Madam, Actually my code shows the following output: There is a combo box which contains all the ID Struts Alternative Struts Alternative Struts is very robust and widely used framework, but there exists the alternative to the struts framework... of struts. stxx sits on top of Struts, extending its existing functionality to allow Struts Struts How Struts is useful in application development? Where to learn Struts? Thanks Hi, Struts is very useful in writing web applications easily and quickly. Developer can write very extensible and high...: Please help me... its very urgent Please help me... its very urgent Please send me a java code to check whether INNODB is installed in mysql... If it is there, then we need to calculate the number of disks used by mysql Hi.. - Java Beginners Hi.. Hi, I got some error please let me know what is the error integrity constraint (HPCLUSER.FAFORM24GJ2_FK) violated - parent key its very urgent Hi Ragini can u please send your complete source JSP and AJAX- very urgent - Ajax Plz its very very Urgent.. Regards, Ragavendran...JSP and AJAX- very urgent Respected Sir/Madam, I am..." button, I have to get a table from the database where all the names starting Very Big Problem - Java Beginners Very Big Problem Write a 'for' loop to input the current meter... with the following conditions(all must be fulfilled in one class only): (i)if unit(current.... Please give my answer in a for loop only and as soon as you can. Hi Hello - Struts Hello Hi Friends, Thakns for continue reply I want to going with connect database using oracle10g in struts please write the code and send me its very urgent only connect to the database code Hi hi roseindia - Java Beginners are direct interacted with its class that mean almost all the properties of the object...hi roseindia what is class? Hi Deepthi, Whatever we can see in this world all the things are a object. And all the objects HI Jsp check box..! HI Jsp check box..! Hi all.. I want to update the multiple values of database table using checkbox..after clicking submit the edited field has to update and rest has to enable to update...please help me..its urgent hi - Development process hi hi sir. i was used one openbiblio site it has all projects screens shots only it is very useful to me for developing my projects.now that site... screenshots for all modules... wait for ur reply the code immediately. Please its urgent. Regards, Valarmathi Hi Friend...struts Hi, I am new to struts.Please send the sample code for login....shtml http Top 10 Tips for Good Website Design Designing a good website as to come up with all round appreciation, traffic... presence. Good website design tips reflect on the art of mastering the traffic... to content structure to device friendliness and all of these factors addressed Struts - Framework Struts Good day to you Sir/madam, How can i start struts application ? Before that what kind of things necessary to learn and can u tell me clearly sir/madam? Hi friend Articles it. Struts is a very popular framework for Java Web applications... these gaps that involves an extension to the Struts framework. All Web applications...? Remember that form populate happens very early in the Struts request processor2 - Struts list. The drop down works very well in Mozilla, but in IE7 it behaves very... in Mozilla displays very well. Im trying to figure out from a long time, not succesful. Please can i get some help? Hi friend, please, Can you send The second Question is very tuff for me - Java Beginners The second Question is very tuff for me You are to choose between.... One procedure returns the smallest integer if its array argument is empty...; Hi Friend, Try the following: public class Small{ public static Struts Book - Popular Struts Books Struts and its supporting technologies, including JSPs, servlets, Web applications.... The book begins with a discussion of Struts and its Model-View-Controller... the development of a non-trivial sample application - covering all the Struts components implementing DAO - Struts to care. Yet, for me its very important. Any kind response will be very much...implementing DAO Hi Java gurus I am pure beginner in java, and have to catch up with all java's complicated theories in a month for exam. Now (very urgent) - Design concepts & design patterns (very urgent) hi friends, This is my code in html... replace all those " " with a single sign or character. thanks in advance hi srikala if u want to display submit button in center of web (very urgent) - Java Server Faces Questions (very urgent) hi friends, This is my code in JSF  ...; In the above code is there any way i can replace all those " " with a single sign Hello - Struts of project....open pop-up menu and access all page store in the database and operate all thing....its urgent please let me know...i know in VB it is possible...Hello Hi friends, I ask some question please Hi..Again Doubt .. - Java Beginners Hi..Again Doubt .. Thank u for ur Very Good Response...Really great.. i have completed that.. If i click the RadioButton,,ActionListenr should get... WORRY WE'ILL ROCK "); } } } Hi Can u send your MainMenu.java Diff between Struts1 and struts 2? - Struts Diff between Struts1 and struts 2? What are the difference in between Struts 1 and Struts 2? Hi Gaurav There is a big difference.../viewanswers/246.html But the best part is, struts 2 has more features and its Reply Me - Struts Reply Me Hi Friends, I am new in struts please help me... file,connection file....etc please let me know its very urgent Hi Soniya, I am sending you a link. This link will help you. Please Simple Program Very Urgent.. - JSP-Servlet Simple Program Very Urgent.. Respected Sir/Madam, I am... coding asap because its most urgent.. Thanks/Regards, R.Ragavendran.. Hi friend, Step To solve the problem: "winopenradio.jsp" 1 Calling Action on form load - Struts Calling Action on form load Hi all, is it possible to call... Hi friends, When the /editRegistration action is invoked, a registrationForm is created and added to the request, but its validate method Struts Struts Tell me good struts manual
http://roseindia.net/tutorialhelp/comment/2675
CC-MAIN-2013-48
refinedweb
1,471
67.35
Difference in local vs exported naming conventions for JMSDaniel Abramovich May 9, 2012 5:56 PM Hi, I'm knee deep in a migration to JBoss 7.1.x (from 4.04) and have question about the naming conventions of JMS destinations in local java: namespace versus the java:jboss/exported context. In the default standalone-full.xml the example destinations are defined as: {code:xml}> {code} I'm curious about why the the root java: context entry does is missing the jms portion, or conversely why the exported context has it. Unless this is for backwards compatibility or some such, it seems like naming it two different ways is confusing. Code executing inside JBoss creates an new InitialContext gets a Context which is rooted at "java:". Code that looks up the Context using remoting gets a Context rooted at java:jboss/exported. It seems if there is some code which is given a Context, it should (as much as possible) be agnostic about whether is running in or out of the JBoss instance. Adding the extra "jms" namespace to the destination name requires that the same object be looked up using difference names. I realize I can name my destination however I like, but I'd like to understand the reason for the convention. Also, I noticed that in the JNDI view (either from the CLI or web console), the exported portion of the JNDI tree just resolves to the implementation object. It seems like it should be possible to expand that to see a view of what is exported. Anyone know if there is already an issue for this? Thanks -Dan 1. Re: Difference in local vs exported naming conventions for JMSJeff Mesnil May 10, 2012 9:28 AM (in response to Daniel Abramovich)1 of 1 people found this helpful I would not read too much in the examples provided in AS7. In latest AS7 release, we have removed the destination examples from the standalone-full.xml file. You would have to use your own JNDI naming conventions when you add the destinations. FWIW, I agree with you that it's better to have the same tree hierarchy between local and exported names. 2. Re: Difference in local vs exported naming conventions for JMSDaniel Abramovich May 10, 2012 11:24 AM (in response to Jeff Mesnil) Jeff - thanks for the perspective.
https://developer.jboss.org/thread/199464?tstart=0
CC-MAIN-2018-17
refinedweb
394
60.95
Code. Collaborate. Organize. No Limits. Try it Today. I: "Creating a mathematical expression evaluator is one of the most interesting exercises in computer science, whatever the language used. This is the first step towards really understanding what sort of magic is hidden behind compilers and interpreters....". I agree completely, and hope that you do too. advantage of the first method is that it allows you to store the parsed expressions and re-evaluate them without re-parsing the same string several times. However, the second method is more convenient, and because the CalcEngine has a built-in expression cache, the parsing overhead is very small.]); } Function names are case-insensitive (as in Excel), and the parameters are themselves expressions. This allows the engine to calculate expressions such as "=ATAN(2+2, 4+4*SIN(4))". The CalcEngine class also provides a Functions property that returns a dictionary containing all the functions currently defined. This can be useful if you ever need to enumerate remove functions from the engine. Functions Notice how the method implementation listed above casts the expression parameters to the expected type (double). This works because the Expression class implements implicit converters to several types (string, double, bool, and DateTime). I find that the implicit converters allow me to write code that is concise and clear. This approach is similar to the binding mechanism used in WPF and Silverlight, and is substantially more powerful than the simple value approach described in the previous section. However, it is also slower than using simple values as variables. For example, if you wanted to perform calculations on an object of type Customer, you could do it like this:> CalcEngine supports binding to sub-properties and collections. The object assigned to the DataContext property can represent complex business objects and entire data models. This approach makes it easier to integrate the calculation engine into the application, because the variables it uses are just plain old CLR objects. You don't have to learn anything new in order to apply validation, notifications, serialization, etc. The original usage scenario for the calculation engine was an Excel-like application, so it had to be able to support cell range objects such as "A1" or "A1:B10". This requires a different approach, since the cell ranges have to be parsed dynamically (it would not be practical to define a DataContext object with properties A1, A2, A3, etc). To support this scenario, the CalcEngine implements a virtual method called GetExternalObject. Derived classes can override this method to parse identifiers and dynamically build objects that can be evaluated. For example, the sample application included with this article defines a DataGridCalcEngine class that derives from CalcEngine and overrides GetExternalObject to support Excel-style ranges. This is described in detail in a later section ("Adding Formula Support to the DataGridView Control"). DataGridCalcEngine I mentioned earlier that the CalcEngine class performs two main functions: parsing and evaluating. If you look at the CalcEngine code, you will notice that the parsing methods are written for speed, sometimes even at the expense of clarity. The GetToken method is especially critical, and has been through several rounds of profiling and tweaking.: The parsing process typically consumes more time than the actual evaluation, so it makes sense to keep track of parsed expressions and avoid parsing them again, especially if the same expressions are likely to be used over and over again (as in spreadsheet cells or report fields, for example). The CalcEngine class implements an expression cache that handles this automatically. The CalcEngine.Evaluate method looks up the expression in the cache before trying to parse it. The cache is based on WeakReference objects, so unused expressions eventually get removed from the cache by the .NET garbage collector. (This technique is also used in the NCalc library.); } ... The method calls the Optimize method on each of the two operand expressions. If the resulting optimized expressions are both literal values, then the method calculates the result (which is a constant) and returns a literal expression that represents the result. To illustrate further, function call expressions are optimized as follows:; } ... First, all parameters are optimized. Next, if all optimized parameters are literals, the function call itself is replaced with a literal expression that represents the result. Expression optimization reduces evaluation time at the expense of a slight increase in parse time. It can be turned off by setting the CalcEngine.OptimizeExpressions property to false. CalcEngine.OptimizeExpressions The CalcEngine class has a CultureInfo property that allows you to define how the engine should parse numbers and dates in expressions. CultureInfo By default, the CalcEngine.CultureInfo property is set to CultureInfo.CurrentCulture, which causes it to use the settings selected by the user for parsing numbers and dates. In English systems, numbers and dates look like "123.456" and "12/31/2011". In German or Spanish systems, numbers and dates look like "123,456" and "31/12/2011". This is the behavior used by Microsoft Excel. The sample included with this article shows how the CalcEngine class can be used to extend the standard Microsoft DataGridView control to support Excel-style formulas. The image at the start of the article shows the sample in action. Note that the formula support described here is restricted to typing formulas into cells and evaluating them. The sample does not implement Excel's more advanced features like automatic reference adjustment for clipboard operations, selection-style formula editing, reference coloring, and so on. The sample defines a DataGridCalcEngine class that extends CalcEngine with a reference to the grid that owns the engine. The grid is responsible for storing the cell values which are used in the calculations. The DataGridCalcEngine class adds cell range support by overriding the CalcEngine.GetExternalObject method as follows:; } The method analyzes the identifier passed in as a parameter. If the identifier can be parsed as a cell reference (e.g., "A1" or "AZ123:XC23"), then the method builds and returns a CellRangeReference object. If the identifier cannot be parsed as an expression, the method returns The CellRangeReference also implements the IEnumerable interface to return the value of all cells in the range. This allows the calculation engine to evaluate expressions such as "Sum(A1:B10)". Notice that the GetValue method listed above uses an _evaluating flag to keep track of ranges that are currently being evaluated. This allows the class to detect circular references, where cells contain formulas that reference the cell itself or other cells that depend on the original cell.); } The method starts by retrieving the value stored in the cell. If the cell is not in edit mode, and the value is a string that starts with an equals sign, the method uses CalcEngine to evaluate the formula and assigns the result to the cell. If the cell is in edit mode, then the editor displays the formula rather than the value. This allows users to edit the formulas by typing into in the cells, just like they do in Excel. If the expression evaluation causes any errors, the error message is displayed in the cell. At this point, the grid will evaluate expressions and show their results. But it does not track dependencies, so if you type a new value into cell "A1" for example, any formulas that use the value in "A1" will not be updated. To address this, the DataGridCalc class overrides the OnCellEditEnded method to invalidate the control. This causes all visible cells to be repainted and automatically recalculated after any edits. OnCellEditEnded // invalidate cells with formulas after editing protected override void OnCellEndEdit(DataGridViewCellEventArgs e) { this.Invalidate(); base.OnCellEndEdit(e); } Let's not forget that implementation of the Evaluate method used by the CellRangeReference class listed earlier. The method starts by retrieving the cell content. If the content is a string that starts with an equals sign, the method evaluates it and returns the result; otherwise it returns the content itself: // gets the value in a cell public object Evaluate(int rowIndex, int colIndex) { // get the value var val = this.Rows[rowIndex].Cells[colIndex].Value; var text = val as string; return !string.IsNullOrEmpty(text) && text[0] == '=' ? _ce.Evaluate(text) : val; } That is all there is to the DataGridCalc class. Notice that calculated values are never stored anywhere . All formulas are parsed and evaluated on demand. The sample application creates a DataTable with 50 columns and 50 rows, and binds that table to the grid. The table stores the values and formulas typed by users. DataTable The sample also implements an Excel-style formula bar across the top of the form that shows the current cell address, content, and has a context menu that shows the functions available and their parameters. Finally, the sample has a status bar along the bottom that shows summary statistics for the current selection (Sum, Count, and Average, as in Excel 2010). The summary statistics are calculated using the grid's CalcEngine as well. } This ensures that tests are performed whenever the class is used (in debug mode), and that derived classes do not break any core functionality when they override the base class methods. The Test method is implemented in a Tester.cs file that extends the CalcEngine using partial classes. All test methods are enclosed in an #if DEBUG/#endif block, so they are not included in release builds. Test #if DEBUG/#endif This mechanism worked well during development. It helped detect many subtle bugs that might have gone unnoticed if I had forgotten to run my unit tests when working on separate projects. While implementing the CalcEngine class, I used benchmarks to compare its size and performance with alternate libraries and make sure CalcEngine was doing a good job. A lot of the optimizations that went into the CalcEngine class came from these benchmarks. I compared CalcEngine with two other similar libraries which seem to be among the best available. Both of these started as CodeProject articles and later moved to CodePlex: The benchmarking method was similar to the one described by Gary Beene in his 2007 Equation Parsers article. Each engine was tested for parsing and evaluating performance using three expressions. The total time spent was used to calculate a "Meps" (million expressions parsed or evaluated per second) index that represents the engine speed. The expressions used were the following:
http://www.codeproject.com/Articles/246374/A-Calculation-Engine-for-NET?fid=1648564&df=90&mpp=25&sort=Position&spc=Relaxed&tid=4306378
CC-MAIN-2014-15
refinedweb
1,716
54.42
Pretty much as it says. I tried through every little Form1.(names).blah.blah But I could not find how to access a listbox on Form2 from form1. Please help. Tim Pretty much as it says. I tried through every little Form1.(names).blah.blah But I could not find how to access a listbox on Form2 from form1. Please help. Tim Tim, this question has come up a number of times, and it requires a decent understanding of object oriented programming to answer. All you need to know should be here: I haven't seen anything on the internet about it, so I'm going to write an article on doing this soon. If the link above doesn't answer your question, be patient and I'll let you know when it's done. Remember - each form is just a class. Its controls are its member variables. Use normal OO to interact with the forms just like you would any other class. Aye, sorry. I have a good understanding of OOP in C++. Just a little confused with the way it's set up in C#. I tried changing the members to protected instead of private on the class form. But that didn't work either. I'll just wait until you have finished the article. Thanks! Tim Well, I finally figured out how to access them. I just turned them to public controls. But, even though I can access them and give them a command, such as form2.txtOutput.Text = "Hi";, it doesn't do it. I dunno, very very confusing... Tim Don't expose ANY variables as public. Thats about as contra-OOP as it gets. If you need to set something in Form2 from an instance of Form1, write a method or property in Form2 that takes the text as parameter and then sets it's own textbox. Call this method on your instance of Form2. hth -nv She was so Blonde, she spent 20 minutes looking at the orange juice can because it said "Concentrate." When in doubt, read the FAQ. Then ask a smart question. Correct. This really is the best simple way of doing things, and its the method I use 90% of the time.Correct. This really is the best simple way of doing things, and its the method I use 90% of the time. Originally Posted by nvoigtOriginally Posted by nvoigt The parent (creating) form should set properties on the child form, and the child form should fire events that the parent is interested in knowing if something happens. So for instance, as nvoigt said, if you are in MainForm, and you wish to set a textbox on Form2, Form2 really should have a property like this: If Form2 has a button and MainForm wants to know when that button is clicked, Form2 should have an event:If Form2 has a button and MainForm wants to know when that button is clicked, Form2 should have an event:Code:public class Form2 : Form { [...Form2's Code...] public string TheText { get { return textBox1.Text; } set { textBox1.Text = value; } } [...The rest of Form2's Code...] } This has the added advantage of allowing multiple forms know when the button is clicked.This has the added advantage of allowing multiple forms know when the button is clicked.Code:public class Form2 : Form { [...Form2's Code...] public event EventHandler ButtonClicked; // called in Form2 when the button is clicked void button1_Click(object sender, EventArgs e) { // Tell our listeners, in this case MainForm, the button was clicked if (ButtonClicked != null) { ButtonClicked(this, EventArgs.Empty); } } [...The rest of Form2's Code...] } In MainForm, this is how you could use these new properties of Form2: Code:public class MainForm() { [...MainForm's Code...] void CreateNewForm2() { Form2 form2 = new Form2(); form2.TheText = "G'day world!"; // add a new event handler form2.ButtonClicked += new EventHandler(Form2_ButtonClicked); } // We then have the event handler: void Form2_ButtonClicked(object sender, EventArgs e) { MessageBox.Show("OMG! The button in Form2 was clicked, but I'm showing you this from MainForm!!!!1"); } [...The rest of MainForm's Code...] } Last edited by nickname_changed; 06-09-2005 at 01:27 AM. Right, I switcheds back to private, and I learned how to make the methods and things. But now, of course there's another problem, I ALWAYS have form2 and form1 showing at the same time, and the only way it works is when I do the change, and then hit form2.show. THen it upgrades it to the new change(ie, text = "Hello"), However, is there a way to refresh the form, without having to .show it everytime? I tried the refresh and update command, but that didn't work either. Hope this is clear of what i'm doing. Thanks! Tim
https://cboard.cprogramming.com/csharp-programming/66393-accessing-controls-different-form-one-form.html
CC-MAIN-2017-13
refinedweb
791
75.1
.naming.core;18 19 import javax.naming.directory.BasicAttributes ;20 21 /**22 * Extended version for Attribute. All our dirContexts should return objects23 * of this type. Methods that take attribute param should use this type24 * of objects for performance.25 *26 * This is an extension of the 'note' in tomcat 3.3. Each attribute will have an27 * 'ID' ( small int ) and an associated namespace. The attributes are recyclable.28 *29 * The attribute is designed for use in server env, where performance is important.30 *31 * @author Costin Manolache32 */33 public class ServerAttributes extends BasicAttributes 34 {35 36 // no extra methods yet. 37 }38 Java API By Example, From Geeks To Geeks. | Our Blog | Conditions of Use | About Us_ |
http://kickjava.com/src/org/apache/naming/core/ServerAttributes.java.htm
CC-MAIN-2013-20
refinedweb
117
61.53
----- Original Message ----- From: "D.J. Barrow" <barrow_dj@yahoo.com> To: "Kernel Mailing List" <linux-kernel@vger.kernel.org>; "Netfilter" <netfilter-devel@lists.samba.org> Sent: Monday, May 13, 2002 6:12 AM Subject: Mips scalibility problems & softirq.c improvments > Hi, > While testing the SMP performance of iptables with a lot of rules on a mips based cpu, > I found that the SMP performance was 40% lower on 2 cpus than 1 cpu. > > There is a number of reasons for this the primary being that the rules were bigger > than the shared L2 cache, little enough can be done about this. > > The second is that interrupts are on every mips port I bothered checking > are only delivered on cpu 0 ( this is really pathetic ). > > > See the code that prints /proc/interrupts in arch/mips/kernel/irq.c > int get_irq_list(char *buf) > { > struct irqaction * action; > char *p = buf; > int i; > > p += sprintf(p, " "); > for (i=0; i < 1 /*smp_num_cpus*/; i++) > > Need I say more..... > > As softirqs are usually bound to the same > cpu that start the softirqs softirqs performs really really badly, > also the fact that the softirq.c code checks in_interrupt on > entry means that it frequently does a quick exit. > > > I also will be providing a patch I developed to the developers of a mips based > system on chip which distributes the irqs over all cpus using 2 polices > even interrupts to cpu 0 odd interrupts to cpu 1 or leaving the interrupts > enter in all cpus & only call do_IRQ on the cpu with the lowest local_irq_count > & local_bh_count this should cause softirqs to perform will on this > system anyway. > > > I've provided a small patch to irq.c which fixes /proc/interrupts in 2.4.18 mips32 > hopefully somebody will be kind enough to fix up the 64 bit & > the latest stuff in mips64 & the latest oss.sgi.com cvs trees. > > --- linux.orig/arch/mips/kernel/irq.c Sun Sep 9 18:43:01 2001 > +++ linux/arch/mips/kernel/irq.c Mon May 13 10:34:15 2002 > @@ -71,13 +71,13 @@ > > int get_irq_list(char *buf) > { > + int i, j; > struct irqaction * action; > char *p = buf; > - int i; > > p += sprintf(p, " "); > - for (i=0; i < 1 /*smp_num_cpus*/; i++) > - p += sprintf(p, "CPU%d ", i); > + for (j=0; j<smp_num_cpus; j++) > + p += sprintf(p, "CPU%d ",j); > *p++ = '\n'; > > for (i = 0 ; i < NR_IRQS ; i++) { > @@ -85,7 +85,13 @@ > if (!action) > continue; > p += sprintf(p, "%3d: ",i); > +#ifndef CONFIG_SMP > p += sprintf(p, "%10u ", kstat_irqs(i)); > +#else > + for (j = 0; j < smp_num_cpus; j++) > + p += sprintf(p, "%10u ", > + kstat.irqs[cpu_logical_map(j)][i]); > +#endif > p += sprintf(p, " %14s", irq_desc[i].handler->typename); > p += sprintf(p, " %s", action->name); > > @@ -93,7 +99,7 @@ > p += sprintf(p, ", %s", action->name); > *p++ = '\n'; > } > - p += sprintf(p, "ERR: %10lu\n", irq_err_count); > + p += sprintf(p, "ERR: %10u\n", atomic_read(&irq_err_count)); > return p - buf; > } > > > > I also provide a small patch for softirq.c which makes sure the > the softirqs stay running if in cpu_idle & no reschedule is pending. > This improves softirq.c performance a small bit as it usually exits > after calling each softirq once rather than staying in the loop > if it has nothing better to do. > > --- linux.old/kernel/softirq.c Tue Jan 15 04:13:43 2002 > +++ linux.new/kernel/softirq.c Thu May 9 12:36:46 2002 > @@ -95,7 +95,8 @@ > local_irq_disable(); > > pending = softirq_pending(cpu); > - if (pending & mask) { > + if ((pending && current==idle_task(cpu) && !current->need_resched ) > + || (pending & mask) ) { > mask &= ~pending; > goto restart; > } > diff -u -r linux.old/include/linux/sched.h linux.new/include/linux/sched.h > --- linux.old/include/linux/sched.h Thu May 9 18:08:42 2002 > +++ linux.new/include/linux/sched.h Thu May 9 10:30:34 2002 > @@ -936,6 +936,19 @@ > return res; > } > > +#ifdef CONFIG_SMP > + > +#define idle_task(cpu) (init_tasks[cpu_number_map(cpu)]) > +#define can_schedule(p,cpu) \ > + ((p)->cpus_runnable & (p)->cpus_allowed & (1 << cpu)) > + > +#else > + > +#define idle_task(cpu) (&init_task) > +#define can_schedule(p,cpu) (1) > + > +#endif > + > #endif /* __KERNEL__ */ > > #endif > > diff -u -r linux.old/kernel/sched.c linux.new/kernel/sched.c > --- linux.old/kernel/sched.c Wed May 1 10:40:26 2002 > +++ linux.new/kernel/sched.c Thu May 9 10:30:26 2002 > @@ -112,18 +112,7 @@ > struct kernel_stat kstat; > extern struct task_struct *child_reaper; > > -#ifdef CONFIG_SMP > > -#define idle_task(cpu) (init_tasks[cpu_number_map(cpu)]) > -#define can_schedule(p,cpu) \ > - ((p)->cpus_runnable & (p)->cpus_allowed & (1 << cpu)) > - > -#else > - > -#define idle_task(cpu) (&init_task) > -#define can_schedule(p,cpu) (1) > - > -#endif > > void scheduling_functions_start_here(void) { } > > > Also find the patches sent as attachments. > > > ===== > D.J. Barrow Linux kernel developer > Home: +353-22-47196. > Work: +353-91-758353 > > __________________________________________________ > Do You Yahoo!? > LAUNCH - Your Yahoo! Music Experience > softirq_fix.diff Description: Binary data mips32_irq.c_fix.diff Description: Binary data
http://www.linux-mips.org/archives/linux-mips/2002-05/msg00060.html
CC-MAIN-2015-06
refinedweb
776
63.49
As we all know, that java compiler ignores the comments written in the java code file. But using a trick we can execute the code present in a comment section. Consider the following program − public class Tester { public static void main(String[] args) { // The comment below is magic.. // \u000d System.out.println("Hello World"); } } This will produce the following result − Hello World The reason behind this behaviour is the use of \u000d character in comment which is a new line character. As Java compiler parses the new line character, the put the println command to next line resulting in the following program. public class Tester { public static void main(String[] args) { // The comment below is magic.. // System.out.println("Hello World"); } } Reasoning behind this unicode parsing before source code processing is as follows − To keep java source code to be written using any unicode character. To make java code processing easier by ASCII based editors. Helps in writing documentation in unicode supporting languages.
https://www.tutorialspoint.com/Executable-Comments-in-Java
CC-MAIN-2021-49
refinedweb
163
54.02
Some of your Python scripts may wish to be informed when various events in VMD occur. The mechanism for expressing this interest is to register a callback function with a special module supplied by VMD. When the event of interest occurs, all registered will functions will be called; VMD will pass the functions information specific to the event. The set of callbacks events is listed in Table 10.4. All callback functions must take two arguments. The first argument will be an object given at the time the function is registered; VMD makes no use of this object, it simply saves it and passes it to the callback when the callback function is called. The second argument to the callback function will be a tuple containing 0 or more elements, depending on the type of callback. The type of information for each callback is listed in the third column of Table 10.4. Callbacks are registered/deregistered using the add_callback/del_callback methods of the VMD.vmdcallbacks module. The syntax for these methods is: def add_callback(name, func, userdata = None): def del_callback(name, func, userdata = None):name should be one of the callback names in Table 10.4. func is the function object. userdata is any object; if no object is supplied, None will be passed as the first argument to the callback function. To unregister a callback, use the same name, func, and userdata as were used when the callback was registered. The same function may be registered multiple times with different userdata objects.
http://www.ks.uiuc.edu/Research/vmd/vmd-1.9.1/ug/node167.html
CC-MAIN-2015-32
refinedweb
254
62.17
Introduction to NLTK – Natural Language Toolkit. Let’s being with importing NLTK. Then, we will download required packages of NLTK which include prepossessed data as well as data of many renowned books. import nltk # Downloading required packages nltk.download('all') True When we import book from NLTK, it always shows us examples as shown above and if we want to call any one of those sentences, then we can simply type its key in python and it will provide us its data. In this case keys correspond to “text1”, “text2” and so on. Lets try calling text1 text2 <Text: Sense and Sensibility by Jane Austen 1811> It simply provided me text corresponding to key text2. Searching data in book/text provided by NLTK To be specific text2 in it self contains complete book, that is, it contains complete book Sense and Sensibility by Jane Austen 1811 And we can traverse that book and search for words and sentences related to those words. So, we will be using a function concordance() over text2 and will search a word in it let’s say love. We can do it as shown below. text2.concordance("love") Displaying 25 of 77 matches: priety of going , and her own tender love for all her three children determine es ." " I believe you are right , my love ; it will be better that there shoul . It implies everything amiable . I love him already ." " I think you will li sentiment of approbation inferior to love ." " You may esteem him ." " I have n what it was to separate esteem and love ." Mrs . Dashwood now took pains to oner did she perceive any symptom of love in his behaviour to Elinor , than sh how shall we do without her ?" " My love , it will be scarcely a separation . ise . Edward is very amiable , and I love him tenderly . But yet -- he is not ll never see a man whom I can really love . I require so much ! He must have a ry possible charm ." " Remember , my love , that you are not seventeen . It is f I do not now . When you tell me to love him as a brother , I shall no more s hat Colonel Brandon was very much in love with Marianne Dashwood . She rather e were ever animated enough to be in love , must have long outlived every sens hirty - five anything near enough to love , to make him a desirable companion roach would have been spared ." " My love ," said her mother , " you must not pect that the misery of disappointed love had already been known to him . This most melancholy order of disastrous love . CHAPTER 12 As Elinor and Marianne hen she considered what Marianne ' s love for him was , a quarrel seemed almos ctory way ;-- but you , Elinor , who love to doubt where you can -- it will no man whom we have all such reason to love , and no reason in the world to thin ded as he must be of your sister ' s love , should leave her , and leave her p cannot think that . He must and does love her I am sure ." " But with a strang I believe not ," cried Elinor . " I love Willoughby , sincerely love him ; an or . " I love Willoughby , sincerely love him ; and suspicion of his integrity deed a man could not very well be in love with either of her daughters , witho It not only provided us search results for word love, it also provided us some text around our searched word. Finding similar words in book using NLTK We can also find similar words from book, like what other words may mean love in whole book or some other word of your choice. For doing so we will use function similar() over text2 and will provide word “love” just like earlier to fetch similar words. text2.similar('love') affection sister heart mother time see town life it dear elinor marianne me word family her him do regard head From above result we can see that we are getting words as well as parts of sentences which may represent word love in one way or another. For example “affection”, “dear”, “family”, “regard” etc. Similarly we can try it for other words too, like “danger”, “laugh” etc. text2.similar('danger') house time letter living world furniture visit change family alteration person it children heart half first whole hope day moment text2.similar('laugh') be look manner house person mother day moment concern subject ruin smile week letter tears and by of in at So, basically similar() function provides us other words from book which may have been used in context of searched word. Common words similar to 2 or more words By using common_contexts([list_of_words]) we can check which words are used as common for 2 or words in same context. text2.common_contexts(['joy', 'love']) her_and text2.common_contexts(['very', 'too']) was_young was_well was_much know_well is_well it_much him_well are_good be_much been_much had_much the_great you_far From above examples we can see that by using function common_contexts(), it is providing us all those words which have been used in similar context of 2 given words. Set of punctuation & words used We can also get set of all words & punctuation that are used in a book (‘text’ in this case). To get that you can use set(text) function. import pandas as pd temp_df = pd.DataFrame(set(text2)) temp_df.head() temp_df = pd.DataFrame(sorted(set(text2))) temp_df.head() We can see that set() function is providing us complete set of words. You can try running just set(text2), it will print complete list. To get the number of different words or vocabulary including punctuation you can use len() function. number_of_distinct_elements = len(set(text2)) number_of_distinct_elements 6833 total_words_in_book = len(text2) total_words_in_book 141576 Now, to check the % of distinct words we can simply divide them. print((number_of_distinct_elements /total_words_in_book) * 100 ) 4.826383002768831 It shows a very interesting result to us, that is, complete book comprises only 4.83% distinct words & punctuation. Whole book is just an amalgamation of of such few words. Checking dispersion of specific words We can also check dispersion of specific words throughout book on a plot, using dispersion_plot([list_of_words]) function import matplotlib.pyplot as plt plt.figure(figsize=(20, 8)) # to increase default size of figure text2.dispersion_plot(["love", "very", "too", "hate"]) Every vertical line in above graph, shows us the location of of each word in actual book. Let’s apply above techniques on our own." tokens = nltk.word_tokenize(smaple_words) tokens[:10] ['It', 'is', 'good', 'to', 'be', 'someone', "'s", 'charioteer', 'but', 'in'] print("Number of words and punctuations in above text = " + str(len(tokens))) Number of words and punctuations in above text = 94 In above steps, we have converted our sentence into 94 tokens, that is, basically we have divided complete text into parts. Why we did this? – NLTK requires RAW text to be converted to NLTK TEXT class type text. For doing so we will be using following function Text(tokens) nltk_text = nltk.Text(tokens) print(nltk_text) print("Type of nltk_text = " + str(type(nltk_text))) <Text: It is good to be someone 's charioteer...> Type of nltk_text = <class 'nltk.text.Text'> Now, since we have obtained NLTK “TEXT” class type text, we can proceed with using all of its library functions on it. # Findiang a word nltk_text.concordance('NLTK') Displaying 2 of 2 matches: Natural Language Toolkit - 'NLTK ' . NLTK is one of the key libraries which is tural Language Processing in Python . NLTK can be used in a variety of ways imp # Number of distinct words number_of_distinct_words = len(set(nltk_text)) number_of_distinct_words 70 # Total number of words total_words=len(nltk_text) total_words 94 # % Richness of words richness_of_words = ((number_of_distinct_words/ total_words) * 100) print("% Richness of words = " + str(richness_of_words)) % Richness of words = 74.46808510638297 # Dispersion plot for our text plt.figure(figsize=(12, 4)) # to increase default size of figure nltk_text.dispersion_plot(["NLTK", "in", "someone"]) In our next tutorial we will dive further in understanding use of NLTK & Natural Language Processing. So, stay tuned and keep learning. You can also checkout our interesting video tutorials on YouTube ML For Analytics
https://mlforanalytics.com/2020/05/17/introduction-to-nltk/
CC-MAIN-2021-21
refinedweb
1,340
63.19
Templating the Rich Text & Title fields The Rich Text fieldHtml() method to transform the field into HTML code. The following is an example that would display the title of a blog post. ${document.getStructuredText("blog_post.title").asHtml(linkResolver)} In the previous example when calling the asHtml() method, you need to pass in a Link Resolver function. A Link Resolver is a function that determines what the url of a document link will be. The following example shows how to display the rich text body content of a blog post. ${document.getStructuredText("blog_post.body").asHtml(linkResolver)} You can customize the HTML output by passing an HTML serializer to the method as shown below. This example will edit how an image in a Rich Text field is displayed while leaving all the other elements in their default output. // In your controller HtmlSerializer serializer = new HtmlSerializer() { public String serialize(Fragment.StructuredText.Element element, String content) { if (element instanceof Fragment.StructuredText.Block.Image) { Fragment.StructuredText.Block.Image image = (Fragment.StructuredText.Block.Image)element; return (image.getView().asHtml(linkResolver)); } return null; } }; // In JSP ${doc.getStructuredText("blog_post.body").asHtml(linkResolver, serializer)} The getText() method will convert and output the text in the Rich Text / Title field as a string. You need to specify which text block you wish to output. In the example below we use the getBlocks() method to get all the blocks then the get() select the first block. Note you will need to do this even if there is only one block. <h3 class="author"> ${document.getStructuredText("page.author").getBlocks().get(0).getText()} </h3> The getTitle() method will find the first heading block in the Rich Text field. Here's an example of how to integrate this. <h2>${document.getStructuredText("page.body").getTitle().getText()}</h2> The getFirstParagraph() method will find the first paragraph block in the Rich Text field. Here's an example of how to integrate this. <p>${document.getStructuredText("page.body").getFirstParagraph().getText()}</p> The getFirstPreformatted() method will find the first preformatted block in the Rich Text field. Here's an example of how to integrate this. <pre>${document.getStructuredText("page.body").getFirstPreformatted().getText()}</pre> The getFirstImage() method will find the first image block in the Rich Text field. You can then use the getUrl() method to display the image as shown here. <img src="${document.getStructuredText('page.body').getFirstImage().getUrl()}" /> Was this article helpful? Can't find what you're looking for? Get in touch with us on our Community Forum.
https://prismic.io/docs/technologies/templating-rich-text-and-title-fields-java
CC-MAIN-2022-05
refinedweb
411
50.73
From the Source LANGUAGES: VB.NET ASP.NET VERSIONS: ALL Vision Quest Understanding ViewState By Dave Reed Most ASP.NET developers are familiar with ViewState. It is the mechanism used by ASP.NET to persist changes to the controls on a page across postbacks. It allows Web forms to behave more like Windows forms, where modifications to the form remain active despite the stateless nature of Web requests. For the most part, developers take the mechanism for granted. ViewState is a great tool but like most tools, it can be misused. It is important to have an understanding of precisely how it works when developing highly optimized sites and for avoiding issues you may encounter when working with dynamically created controls or other advanced operations. Without this understanding, you may be misusing ViewState, bloating it with unnecessary data. The ViewState mechanism can be described as having the following four functions: - Storing data on a per-control basis by key name, like a Hashtable. - Tracking changes from the initial state of the control s data. - Serializing dirty data into a hidden form field on the client. - Automatically deserializing data on postbacks and restoring it to each control. While ViewState does have one overall purpose in the ASP.NET Framework, these four main roles in the page lifecycle are quite distinct from each other. We can separate them and try to understand them individually. Storing Data into ViewState ViewState is a protected property defined on the base class for all controls, System.Web.UI.Control. The type of this property is System.Web.UI.StateBag. Each control has its own instance of a StateBag. Strictly speaking, the StateBag has little to do with ASP.NET. It happens to be defined in the System.Web assembly, but there s no reason why the StateBag class couldn t live alongside ArrayList in the System.Collections namespace. The StateBag has an indexer that accepts a string as the key and any object as the value: ViewState("Key1") = 123.45; ' store a number ViewState("Key2") = "abc"; ' store a string ViewState("Key3") = DateTime.Now ' store a DateTime In practice, controls use ViewState as the backing store for most, if not all their properties. This is true of almost all properties of all controls, built-in or otherwise. To use ViewState as the backing store for a property means the value of the property is stored in the StateBag rather than a field, as you typically would have (see Figure 1). ' using a private field Private field As String = "default" Public Property Text() As String Get Return field End Get Set(ByVal value As String) field = value End Set End Property ' using viewstate Public Property Text() As String Get Dim o As Object = ViewState("Text") If (o Is Nothing) Then Return "default" Else Return o End If End Get Set(ByVal value As String) ViewState("Text") = value End Set End Property Figure 1: Note that in both cases the default value is the same, because accessing a value in a StateBag with a non-existent key will return null. Tracking Changes from the Initial State The StateBag has a tracking ability. Tracking is either on, or off. Tracking can be turned on by calling TrackViewState on the StateBag, but once on, it cannot be turned off. When tracking is on, any writes to the StateBag will cause that item to be marked as dirty. The StateBag even has a method you can use to detect if an item is dirty, IsItemDirty (see Figure 2). You can also manually cause an item to be considered dirty by calling SetItemDirty. ' before tracking begins... stateBag.IsItemDirty("key") 'false stateBag("key") = "foo" stateBag.IsItemDirty("key") 'false ' after tracking begins... stateBag.IsItemDirty("key") 'false stateBag("key") = "bar" stateBag.IsItemDirty("key") 'true stateBag.SetItemDirty("key", False) stateBag.IsItemDirty("key") 'false Figure 2: How items in a StateBag are marked. Tracking allows the StateBag to keep track of what values have been changed since tracking started. Tracking begins after the internal method TrackViewState has been called. It is important to know that any assignment will mark the item as dirty even if the value given matches the value it already has. ASP.NET could simply serialize the entire collection of items, but it actually only serializes the items marked as dirty. This is why tracking is important. To appreciate the benefit this provides, you must understand a little bit about how ASP.NET parses markup: <asp:Label ASP.NET parses the above markup, generating code that creates an instance of the specified control, a Label. The Text attribute declared within the tag prompts ASP.NET to use reflection to detect whether the control has a property by that name. It does, so the generated code sets its value to the declared value. Because most properties of controls use their StateBag as their backing store, the value of the Text property, Hello World , is saved as an entry in the StateBag. However, the StateBag during this stage in the page lifecycle is not yet tracking. ASP.NET recursively calls TrackViewState on every control s StateBag during the Init phase, and markup is processed before that. This is what allows ASP.NET to detect the difference between a declaratively set value and a dynamically set value. Entries in the StateBag that correspond to declared attributes will not be marked as dirty. The Init and Load events are interesting in that the order in which they occur is reversed. Init begins at the bottom of the tree and works its way up. Load begins at the top and works its way down. The consequence of this is that when the Init event occurs in a control or page, the Init phase has already occurred for all of its child controls. It also means while the control firing the event is still not tracking changes to its state, its child controls are! Keep that in mind when manipulating the state of child controls. Any properties you modify are going to be marked as dirty and will therefore be serialized into ViewState. Serializing Dirty Data into a Hidden Form Field on the Client If you ve ever viewed the HTML source of an ASP.NET page, you ve likely noticed the hidden form field named __VIEWSTATE. That data is the result of base-64 encoding the serialized StateBag for all the controls on the page. How this process occurs is interesting. Controls on a page are structured as a tree, where the Page is the root of that tree. Each control has child controls, which form branches of the tree. Recall that each control in this tree has its own instance of a StateBag. During the SaveViewState phase of the page lifecycle, which is after PreRender, ASP.NET recursively calls the SaveViewState method on every control s StateBag in the control tree. What results from this process is a tree that is structured not unlike the control tree itself, except instead of a tree of controls, it is a tree of data. The data at this point is not yet serialized into the string you see in the hidden form field. The key here is in understanding how it is determined which data winds up in the tree and which data doesn t. When the StateBag is asked to save its state, it only saves items in it that are marked as dirty. That means the serialized state is a representation of only the data on the page, which is different than how it was declared. Take, for example, a Label control. Whether you declare the Text of the label to be abc or the full text of War and Peace, the size of the serialized ViewState is going to be the same because it won t contain the declared value at all. Only if the Label s text is changed from the declared value dynamically will the value be serialized into ViewState. Only serializing dirty data makes sense. It would only be a waste of resources to serialize the Label s Text: put it in the hidden form field, allow the client to post it back to the server, then deserialize it. The value is declared in the markup, so it s going to be repopulated whether it exists in ViewState or not! Automatically Restoring State on Postbacks When a postback occurs, the serialized, base-64 encoded, and optionally encrypted string is sent to the server, along with the rest of the form. ASP.NET retrieves the value during the LoadViewState phase in the page lifecycle, which is after Init and before Load. It s also before postback data is loaded for any controls that are postback data handlers. The string is unencoded, unencrypted, and deserialized back into the original tree of data from which it was serialized. The StateBag has a method, LoadViewState, which simply iterates over the items in the given state object and adds them back into its collection. Remember that each control has its own instance of the StateBag, so this process is recursive. ASP.NET begins tracking during the Init phase by recursively calling TrackViewState on each StateBag in the control tree. So all StateBags are tracking changes when ViewState is deserialized. When the page first begins to load during a postback, all properties are set to their natural declared values, prior to when tracking begins. ASP.NET then enables tracking during the Init phase. The LoadViewState phase then reassigns any values from the deserialized data tree, which are only those values that were marked dirty from the previous request. Because tracking is enabled, this causes those items to be marked as dirty once again. The fact they are marked as dirty means they ll be persisted in the next postback once again, even if the value isn t changed again during this request. Misusing ViewState Without understanding the tracking mechanism of ViewState, it is easy to accidentally cause data to become serialized that doesn t need to be. Figure 3 illustrates a common example of misusing ViewState. It is a custom control that specifies a default value for its Text property by setting it from the Load phase. Because ViewState begins tracking in the Init phase, this is going to have the unfortunate side effect of marking the Text entry in the StateBag as dirty. Hello World will be serialized into ViewState, despite the fact it is the control s default value and does not need to be serialized. In fact, doing nothing more than dropping this control on a page will increase the size of the page s ViewState (unless the page developer disables ViewState for the control). Public Class MyControl Inherits Control Public Property Text() As String Get Return ViewState("Text") End Get Set(ByVal value As String) ViewState("Text") = value End Set End Property Protected Overrides Sub OnLoad(ByVal args As EventArgs) Me.Text = "Hello World" End Sub End Class Figure 3: A common example of misusing ViewState. The correct way to specify a default value is as described in Figure 1, by returning the default in the entry as null. This avoids setting the value into the StateBag while it is tracking. Interestingly, the correct way is also more compact than the incorrect way. It is not necessary to override OnLoad. You might also be tempted to do the assignment from the Init phase, as ViewState isn t yet being tracked at that point. That is true, but recall child controls are tracking state by that time. So if you create a composite control, even the Init phase is too late to modify the properties of child controls with default values without unnecessarily bloating ViewState! Instead, specify defaults in child controls as they are created, before they are added to the control tree. Be ViewState-friendly ViewState optimization is easy when you understand what s going on. Now that you have a complete understanding of how ViewState works, and how it interacts with the page lifecycle, it should be easy to be ViewState-friendly! Besides, ViewState-friendly solutions often are simpler and more compact. Dave Reed is a member of the ASP.NET team at Microsoft. He has a degree in Computer Science from California State University, Northridge in southern California. Dave maintains a blog on .NET and ASP.NET topics at. The ASP.NET dev team welcomes suggestions for topics you d like to see covered in this column. You can reached Matt Gibbs, Development Manager for the ASP.NET team at Microsoft, at mailto:[email protected].
http://www.itprotoday.com/web-development/vision-quest
CC-MAIN-2018-17
refinedweb
2,089
63.8
New lower price for Axon II ($78) and Axon Mote ($58). 0 Members and 1 Guest are viewing this topic. #include <avr/io.h>#include <util/delay.h>void main(void){ DDRB = 0xFF; while (1) { PORTB = 0xFF; _delay_us(1500); //Also tried _delay_ms(1.5); PORTB = 0x00; _delay_ms(20); // Tried from 10 ms to 20 ms, but still results are the same }} Anybody help...I have also tried the code below. After uploading the hex file, it slows down at a particular position, but just keeps vibrating. Admin's .hex file stops exactly without any movement and the very reason I need to use that code. I need to do this for a college project and hence need to document the code I use and so cannot just use the .hex file from admin. You might have to adjust the potentiometer. There should be a little screw showing somewhere on the servo. Adjust that until it stops. I suspect he's using a timer to produce the correct pulses. That's a better way to do it than a manual timing loop. Take a look at for how to do this kind of thing. I think it will have the answers you're looking for. It has an example of driving a servo. I am not sure if he uses timer. Admin has used PD0 and PD1 whereas the timer pins are PB0 and PB1. Not sure if we can use timer for other pins. Take a look at the tutorial I mentioned last time QuoteTake a look at the tutorial I mentioned last timeI did go through that tutorial. It uses timer and the pins used are PB1 and PB2 pins. That is the reason I do not want it Maybe if you could explain more fully what you actually want to do it would be easier to help. Thanks for your patience. Also, to make it clear, I am not looking for other options as I am aware of other options. I just need to use the same specific or equivalent code which admin has used, and understand how it works. I may be stubborn, but I love learning like this. //AVR includes#include <avr/io.h> // include I/O definitions (port names, pin names, etc)#include <avr/interrupt.h> // include interrupt support//AVRlib includes#include "global.h" // include global settings//define port functions; example: PORT_ON( PORTD, 6 );#define PORT_ON( port_letter, number ) port_letter |= (1<<number)#define PORT_OFF( port_letter, number ) port_letter &= ~(1<<number)//************DELAY FUNCTIONS************//wait for X amount of cycles (23 cycles is about .992 milliseconds)//to calculate: 23/.992*(time in milliseconds) = number of cyclesvoid delay_cycles(unsigned long int cycles) { while(cycles > 0) cycles--; }//***************************************//*********hold servo********************void hold_servo(signed long int speed) { PORT_ON(PORTD, 0); delay_cycles(speed); PORT_OFF(PORTD, 0); delay_cycles(200); }//***************************************int main(void) { DDRD = 0xFF; //configure all D ports for output while(1) { //hold servo hold_servo(34.7782); } return 0; } hold_servo(34.7782);
http://www.societyofrobots.com/robotforum/index.php?topic=14198.0
CC-MAIN-2014-52
refinedweb
487
75.3
Silverlight 5.0 RC: Implicit Templates and Effective Data Representation Posted by: Mahesh Sabnis , on 9/18/2011, in Category Silverlight 2, 3, 4 and 5 Views: 81692 Tweet Abstract: Silverlight 5 RC Implicit Templates are DataTemplates and are associated with a specific DataType. They are a powerful addition to the Silverlight 5 templating abilities and in this article, we will explore the same A couple of months ago, I was working on a requirement using Silverlight 4, which contained a ListBox with Employees information in it. This ListBox contained Employee’s category information like Manager, Operator etc and the data was displayed in the ListBox using DataTemplates. A part of the requirement was to use a style for every Employee Category, which I found difficult to implement in SL 4, once the DataTemplate for the ListBox was set. I somehow implemented the styling programmatically. A couple of days ago, while exploring Silverlight 5 RC and looking at the new Implicit Templates feature, I remembered the requirement I just described. Implicit Templates are DataTemplates and are associated with a specific DataType. They are applied on the control which has a DataTemplate containing this DataType. I decided to use Implicit Templates for the above requirement. For this article, you must have Silverlight 5 RC and Silverlight 5 RC Tools for Visual Studio 2010 SP1 . By the way, you can develop Silverlight 3, 4 and 5 applications side-by-side. On a side note, make sure you check our latest article Microsoft Silverlight 4 Tutorials You Must Read Step 1: Open VS2010 and create a new Silverlight Application, name it as ‘SL5_Implicit_Template’. Make sure that you select Silverlight 5 version as shown below: Step 2: In this project add a new class file, name it as ‘SourceClass.cs’. This file contains classes used for the DataBinding: The above code defines an ‘Employee’ base class and Manager and Operator classes inherit from it. Step 3: Open MainPage.xaml.cs and write the following code: The above code defines a collection of Employees with various categories. Step 4: Open MainPage.Xaml and declare an Application namespace in the UserControl XAML tag as below: xmlns:source="clr-namespace:SL5_Implicit_Template" Step 5: Add the TextBlock and ListBox on the Page and Set the ItemsSource property of the ListBox to the ‘Employees’ collection. The XAML code is as shown below: Step 6: Run the application and the following result will be displayed: The above result is logically correct, since the Employees collection is defined for an Employee base class and Operator and Manager Classes are inherited from it. Step 7: Now to apply different look and feel for every Employee based upon the category, we need to define separate DataTemplate for each category Data type e.g. Employee, Manager and Operator. To do this in the UserControl Resources, add the DataTemplates given below. Make sure that these templates need to be automatically applied to the specific DataType, so you should not define ‘x:key’ for any DataTemplate. Step 8: Now run the application again and the result will be as shown below: The above image proves two things – Implicit templates in Silverlight is cool and that I am not a good UI designer :) Conclusion: Using the Implicit Template feature, we can provide a rich UI display to the DataTemplates. The entire source code of this article can be downloaded over here Follow @dotnetcurry Recommended Articles Silverlight 5: Some New Features For Developers Encoding Video using Expression Encoder 4.0 and using it in an ASP.NET Website Silverlight: Basic Integration with SharePoint 2010 WCF Data Services Silverlight 4: Consuming Workflow Service Silverlight 4: Applying Style to the Selected DataGrid Cell during Edit Operation Silverlight 4.0: Applying Data Templates on ListBox at Runtime Author Mahesh Sabnis is a Microsoft MVP having 14 years of experience in IT education and development. He is a Microsoft Certified Trainer (MCT) since 2005 and has conducted various Corporate Training programs for .NET Technologies (all versions). Follow him on twitter @ maheshdotnet Leave us some feedback Post your Comments Old Comments (Read Only) Please enable JavaScript to view the Disqus Comment posted by Bhavesh Rana on Wednesday, September 21, 2011 1:43 AM useful...This features is missing..and because of that we need to deal wiht Conveter..DataTemplateSelected etc... This is easy to use and quick to code.. Comment posted by Ashutosh nigam on Thursday, September 22, 2011 7:40 AM very nice post.....
http://www.dotnetcurry.com/ShowArticle.aspx?ID=762
CC-MAIN-2015-27
refinedweb
742
50.67
Morning! I've run into a snag when running the peach fuzzer. The python script runs fine when you don't evoke any command line options, but crashes when you attempt to test any of the pits. For example, when I attempt to test any of the samples that come with peach, it cannot find the module 'psyco'. I've searched my whole BT4 R2 install, and it doesn't exist. Here's the output: I am running a pretty vanilla HDD install of BT2 R2 and I did attempt to update the system via apt-get update ; apt-get upgradeI am running a pretty vanilla HDD install of BT2 R2 and I did attempt to update the system via apt-get update ; apt-get upgradeCode: root@bt:/pentest/fuzzers/peach# ./peach.py -t samples/HTTP.xml ] Peach 2.2.1 Runtime Traceback (most recent call last): File "./peach.py", line 251, in <module> from Peach.Engine import * File "/pentest/fuzzers/peach/Peach/__init__.py", line 41, in <module> import Generators, Publishers, Transformers File "/pentest/fuzzers/peach/Peach/Publishers/__init__.py", line 37, in <module> import file, sql, stdout, tcp, udp, com, process, http, icmp, raw, remote File "/pentest/fuzzers/peach/Peach/Publishers/remote.py", line 36, in <module> from Peach.Engine.engine import Engine File "/pentest/fuzzers/peach/Peach/Engine/engine.py", line 755, in <module> from Peach.Engine.parser import * File "/pentest/fuzzers/peach/Peach/Engine/parser.py", line 53, in <module> from Peach.Engine.dom import * File "/pentest/fuzzers/peach/Peach/Engine/dom.py", line 3511, in <module> import psyco ImportError: No module named psyco I did see another user having this issue in what looked to be an older version of BT4 but the thread seems to just die. Anyone else seeing this behavior? Mods: My apologies if this is in the wrong forum. Wasn't sure where to put it.
http://www.backtrack-linux.org/forums/printthread.php?t=38423&pp=10&page=1
CC-MAIN-2017-17
refinedweb
315
59.8
Splinter makes acceptance testing of Web applications simpler by automating interactions with the browser. In Splinter, actions such as visiting a link, clicking on a particular component or links on the page can be automated with very few lines of code. This article provides insights into the world of automatic Web application testing using Splinter. Automating the sequence of actions to be carried out when interacting with a browser is important in testing Web applications. The sequence of actions ranges from simply opening the Web page to see whether it is loading properly in a browser, to advanced activities such as iterating through all possible visual states of the Web page. There are many options when it comes to carrying out Web application testing, some of which are illustrated in Figure 1. - Selenium enables the automation of tasks across many platforms with various browsers. Its key feature is the integration with many programming languages and testing frameworks (). - Mechanize facilitates programmatic Web browsing using Python. It is the equivalent of the Perl Module WWW::Mechanize. Convenient link parsing and automatic observance of robots.txt are some of the key highlights of Mechanize (). - Zope.testbrowser is a programmable browser, which carries out functional testing of Web applications. Along with many other automation features, Zope.testbrowser provides performance related details such as how much time each request takes (). - Windmill: The major features of the Windmill test framework are the cross-browser record, edit and playback, its interactive service shell, customisable proxy handling, and JavaScript test integration (). - PhantomJS provides methods to perform headless website testing, capturing of screens, page automation, monitoring of page loading, etc (). Splinter Splinter is a Web application testing tool with easy-to-use functions for most of the frequently performed tasks. What makes Splinter stand out is the ease with which it allows the automation of tasks. Splinter was developed as an easily usable abstraction layer on top of the available automation tools such as Selenium, PhantomJS, etc. The following code sequence, to automatically fill a value into a text box, will provide a glimpse of how simple it is to perform tasks using Splinter: browser.fill(search, open source for you) To perform the same thing in Selenium, the code is shown below: my_node = browser.find_element.by_name(search) my_node.send_keys(open source for you) The features of Splinter are illustrated in Figure 2. Splinter installation As Splinter is Python based, using it requires Python to be installed in your system. Splinter is supported by Python version 2.7+. If your system has Python installed in it, then Splinter installation can be carried out with the help of pip as shown below: $ pip install splinter If you are behind a proxy server, then the corresponding proxy settings need to be made before executing the aforementioned pip command. Supported Web drivers Splinter has support for all the major Web drivers listed below: - Chrome Web driver - Firefox Web driver - PhantomJS Web driver - Remote Web driver - Zoptest browser As the support spans a spectrum of drivers, Splinter can be utilised in various scenarios that require any of the above-mentioned drivers. Splinter demo with the OSFY page To get familiar with Splinter, lets write a few lines of Python code which automate the following tasks: 1. Open the Firefox browser 2. Navigate to 3. Enter the search query in the provided search input box on the Web page 4. Click on the Submit button (the magnifier icon, in this case) 5. Check whether the resultant page has a specific keyword As you will infer from the following code, first an instance of the browser is built using Splinter, and then from splinter import Browser with Browser() as browser: # Step 1 & 2 url = browser.visit(url) # Step 3 browser.fill(s, zotero) # Step 4 button = browser.find_by_tag(button).first button.click() # Step 5 if browser.is_text_present(Research): print(Yes, the Article Found) else: print(No, it wasnt found) raw_input(); the specified OSFY page is loaded, as follows: from splinter import Browser with Browser() as browser: url = browser.visit(url) Filling the search query in this case is carried out very quickly by calling the fill method of the browser object with the name of the textbox ( s in this case) and query as parameters: browser.fill(s, zotero) Then, finding the Submit button would have been very simple, if a name for it was provided in the page code. But, as the Submit buttons name is missing, it can be found by selecting the items of type button and choosing the first among the resultant collection. This is clicked by calling the click method! button = browser.find_by_tag(button).first button.click() Next, in the resultant page, the keyword research is searched for and if it is found, then the message Yes is provided; otherwise, the message you get is No. When you execute the following code in a proxy environment if browser.is_text_present(Research): print(Yes, the Article Found) else: print(No, it wasnt found) then the corresponding settings need to be made in the code, as shown below: proxyIP = specify your proxy IP here proxyPort = specify your proxy port here proxy_settings = {network.proxy.type: 1, network.proxy.http: proxyIP, network.proxy.http_port: proxyPort, network.proxy.ssl: proxyIP, network.proxy.ssl_port:proxyPort, network.proxy.socks: proxyIP, network.proxy.socks_port:proxyPort, network.proxy.ftp: proxyIP, network.proxy.ftp_port:proxyPort } Finding elements Splinter provides various ways of finding the elements in a Web page as listed in Table 1. Finding hyperlinks To find hyperlinks, Splinter provides the functions listed in Table 2 that can be chosen based on the nature of the data such as a link text, , which is used as the input to find a link. Executing JavaScript Splinter allows the execution of JavaScript with supported browser objects. A simple code to execute JavaScript is shown below: browser.execute_script($(body).empty()) Similarly, evaluation of the script can also be carried out using the following code segment: browser.evaluate_script(10+70) == 80 Headless mode When you wish to perform the operations without explicitly opening the browsers, then drivers such as PhantomJS and Zope.testbrowser can be used. To use these drivers, the corresponding dependencies need to be satisfied. Handling cookies Splinter allows you to handle cookies using simple functions. To do this, the cookies attribute from the browser instance is utilised. For creating a cookie, use the following code: browser.cookies.add({country: India}) To fetch all the cookies, use the code below: browser.cookies.all() For deletion of one or more cookies, use the following code: browser.cookies.delete(test) # deletes the cookie test browser.cookies.delete(test1, test2) # deletes two cookies test1, test2 browser.cookies.delete() # deletes all cookies Handling iframes and alerts To handle iframes, Splinter provides a method for the browser instance called get_iframe. The argument in this method is the name or ID or index: with browser.get_iframe(testframe) as iframe: iframe.do_stuff() For handling alerts and prompts, the following fragments of code C can be employed: # For alerts alert = browser.get_alert() alert.text alert.accept() alert.dismiss() # For prompts prompt = browser.get_alert() prompt.text prompt.fill_with(this is a sample text) prompt.accept() prompt.dismiss() Handling HTTP exceptions Splinter allows you to handle the exceptions that might arise while trying to perform operations: try: browser.visit() except HttpResponseError, e: print The Opeartion Failed with status code %s and reason %s % (e.status_code, e.reason) The complete API documentation for Splinter is available at. To summarise, Splinter makes the automation of Web application acceptance testing simple. Connect With Us
http://opensourceforu.com/2016/06/splinter-makes-web-application-testing-simple/
CC-MAIN-2017-04
refinedweb
1,259
55.95