Richard Conway, Co-founder, Elastacloud and the UK Microsoft Azure Users Group. This article is about how you can meet scale-out Big Data challenges using a Linux backbone on Azure. Using Apache Spark This is the first in a series of articles on building open source application patterns for scale-out challenges. It will challenge the orthodoxy and offer alternatives to the Microsoft SQL stack, and hopefully show you, the reader, how mature Linux on Azure is. All examples are built on top of our customisable Big Data platform, Brisk. Brisk supports out-of-the-box clusters of Apache Spark, Apache Kafka and InfluxDB that can be wired together in VNETs using complex topologies, as well as custom access to Azure Storage, Service Bus and the Event Hub. In order to “spin up” an Apache Spark cluster, set yourself up an account on Brisk. For this you’ll need a Live ID and a valid Azure subscription. We’ll manage all the rest for you, and your first cluster is free. Apache Spark allows you to query large in-memory datasets using combinations of SQL, Scala and Python. For the purposes of this article we’ll focus on using Python to visualise our data, but later in the series we’ll consider Scala, Java and SQL. Once you’re set up with Brisk, which will entail uploading a “publishsettings” file that allows us to deploy Apache Spark in your subscription, you can follow the wizard steps to create a cluster. To set up your publish settings see here: And then you can go through the Apache Spark cluster creation process here: When the cluster is running you can download the .pem file which will give you SSH access to the cluster. We’ll need this to proceed, so click on the download button as per the image. Once you’ve done this you should be able to SSH into the cluster using your username, password and certificate. Depending on whether you’re using a Mac or Windows machine, you’ll use SSH in different ways. 
Mac users can add the downloaded file to their keychain and then use ssh from the terminal prompt. Windows users can download a great graphical tool called Bitvise and load the key into its key table. By default the Spark master can be accessed through <clustername>.cloudapp.net on port 22, where clustername is the cluster name you selected during the cluster setup steps earlier. When we’ve SSH’d into the cluster we can enter the following command to start iPython notebook. startipython.sh This will then start the iPython notebook server and bind it to Apache Spark. In this instance we’re going to use iPython notebook to visualise data that we’ll be processing across several Spark cluster nodes. 2015-02-08 14:36:24.839 [NotebookApp] Using existing profile dir: u'/home/azurecoder/.ipython/profile_default' 2015-02-08 14:36:24.842 [NotebookApp] WARNING | base_project_url is deprecated, use base_url 2015-02-08 14:36:24.851 [NotebookApp] Using MathJax from CDN: x.org/mathjax/latest/MathJax.js 2015-02-08 14:36:24.872 [NotebookApp] Serving notebooks from local directory: /home/azurecoder 2015-02-08 14:36:24.872 [NotebookApp] 0 active kernels 2015-02-08 14:36:24.872 [NotebookApp] The IPython Notebook is running at: http://sparktest123.cloudapp.net:8888/ipython/ 2015-02-08 14:36:24.873 [NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation). The iPython notebook application is now available on the web at http://<clustername>.cloudapp.net:8888/ipython. If you haven’t used Python before, don’t worry, you’ll love it. It’s great for slicing and dicing data and doesn’t have the steep learning curve that other programming languages have. After navigating to the web page we should see the following: At the prompt you should enter the password you used earlier when you created the cluster. On the right-hand side we’ll click on the new notebook button to create a new iPython notebook. This will open a new webpage. 
By default the notebook is called Untitled0, but if we click on the notebook name it will pop up a dialogue. In this instance I’ve changed the name to MSDN notebook. One of the key features of Python is how easily data can be transformed into something visible, thanks to the sheer number of available libraries. We’ll be using the matplotlib library to do this, as well as numpy to access some maths functions. import matplotlib.pyplot as plt import numpy as np To execute these commands from now on we’ll just hit the triangle (run) button. For this article we’ll consider a simple dataset called AirPassengers.csv containing air passenger numbers between 1949 and 1960. There is more than one entry for each year, so we’ll have to combine and average the numbers for each year. Copy this data and upload it to the default container you selected through the cluster wizard steps. Enter the following: import re lcsbasic = sc.textFile("/AirPassengers.csv") withheader = lcsbasic.map(lambda line: re.split(',', line)) withheader.count() The output is arrays of Unicode strings including a header, and should give us a count of 145 lines. Let’s get rid of the header now. lcs = withheader.zipWithIndex().filter(lambda line: line[1] >= 1).map(lambda line: line[0]).persist() The above is fairly interesting. We use the zipWithIndex function to pair every line with an index value, and then discard index 0, which is the header row. We map back everything other than the index and then call the persist() function, which tells Apache Spark to store this filtered version of the dataset in memory as an RDD (Resilient Distributed Dataset). Running lcs.count() should now reveal a count value of 144. We can look at the first row with the following: lcs.first() Which should return us something like this: [u'1', u'1949', u'112'] For now we can simply take the second and third values in the row and discard the first. 
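The zipWithIndex/filter/map chain above can be mimicked in plain local Python, which may make the logic clearer. This is an illustrative stand-in with made-up sample rows, not Spark itself:

```python
import re

# Hypothetical sample of the raw file: a header line plus two data rows.
lines = [",time,AirPassengers", "1,1949,112", "2,1949,118"]

# zipWithIndex: pair each line with its index (Spark yields (element, index)).
indexed = [(line, i) for i, line in enumerate(lines)]
# filter out index 0 (the header) and map back just the line.
lcs = [line for line, i in indexed if i >= 1]
# split each remaining line on commas, like the map(re.split) step.
rows = [re.split(",", line) for line in lcs]

print(len(rows))   # 2
print(rows[0])     # ['1', '1949', '112']
```

The same shape carries over to the cluster: the only difference is that Spark distributes the list across nodes and evaluates the chain lazily.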
import math mappedvalues = lcs.map(lambda line: (math.trunc(float(line[1])), math.trunc(float(line[2])))) Now that we have our values parsed as floats and truncated to integers, we can build a MapReduce function which will enable us to average all of the values over the years we’re looking at. sumCount = mappedvalues.combineByKey((lambda x: (x, 1)), (lambda x, y: (x[0] + y, x[1] + 1)), (lambda x, y: (x[0] + y[0], x[1] + y[1]))) averageByKey = sumCount.map(lambda (label, (value_sum, count)): (label, value_sum / count)) sorted = averageByKey.sortByKey() This is quite a lot to take in, but what we’re doing is quite simple. The mapped values are being combined by key to give us a sum and a count for each year. We can then use both of these values in a second map function to calculate the mean value by year. Sometimes shuffling between nodes can skew the order, so we then perform a sort on the keys. yearseries = sorted.map(lambda (key, value): int(key)) passengerseries = sorted.map(lambda (key, value): int(value)) The return values are now ready to feed into a plot so that we can see the moving average over the years. plt.plot(yearseries.collect(), passengerseries.collect()) plt.axis([1949, 1960, 0, 500]) plt.title('passenger per plane moving average') plt.ylabel('avg passenger number per plane') plt.xlabel('year') plt.show() If we look at the graph it produces, you can see a nice pattern of climbing passenger numbers as the years advance. In this post we’ve looked at how we can use an Apache Spark cluster through Brisk and easily visualise data by leveraging the power of scale out over Linux nodes on Azure. Brisk has also meant we haven’t needed to know about the internals of Spark. In the next post we’ll look at how we can leverage large datasets with Apache Spark and how to use it in combination with InfluxDB, a new open-source time-series database. 
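The combineByKey / map / sortByKey chain used above is just a grouped average. A plain-Python sketch of the same computation (the sample (year, value) pairs are made up for illustration) may help make each step concrete:

```python
# Sample (year, passengers) pairs, as produced by the map step above.
mappedvalues = [(1949, 112), (1949, 118), (1950, 115), (1950, 125)]

# combineByKey: accumulate a (sum, count) pair per key.
sum_count = {}
for year, value in mappedvalues:
    s, c = sum_count.get(year, (0, 0))
    sum_count[year] = (s + value, c + 1)

# map: divide sum by count; sortByKey: order the results by year.
average_by_key = sorted((year, s / c) for year, (s, c) in sum_count.items())
print(average_by_key)  # [(1949, 115.0), (1950, 120.0)]
```

In Spark the same three steps run in parallel across partitions, with combineByKey merging the per-partition (sum, count) pairs during the shuffle.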
Encountered the error "TclError: no display name and no $DISPLAY environment variable" in the last step when drawing the graph. Was the graph shown above in the browser or in an X environment? Chao – was this using Brisk, iPython or a custom deployment? The graph is in the browser; no desktop is installed on the Linux nodes for a Brisk deployment. Please confirm.
https://blogs.technet.microsoft.com/uktechnet/2015/02/18/visualising-data-at-scale-with-ubuntu-14-04-apache-spark-and-ipython-notebook/
Q: How can I return multiple values from a function? A: There are several ways of doing this. (These examples show hypothetical polar-to-rectangular coordinate conversion functions, which must return both an x and a y coordinate.) #include <math.h> polar_to_rectangular(double rho, double theta, double *xp, double *yp) { *xp = rho * cos(theta); *yp = rho * sin(theta); } ... double x, y; polar_to_rectangular(1., 3.14, &x, &y); struct xycoord { double x, y; }; struct xycoord polar_to_rectangular(double rho, double theta) { struct xycoord ret; ret.x = rho * cos(theta); ret.y = rho * sin(theta); return ret; } ... struct xycoord c = polar_to_rectangular(1., 3.14); polar_to_rectangular(double rho, double theta, struct xycoord *cp) { cp->x = rho * cos(theta); cp->y = rho * sin(theta); } ... struct xycoord c; polar_to_rectangular(1., 3.14, &c); (Another example of this technique is the Unix system call stat.) See also questions 2.7, 4.8, and 7.5a. Q: What's a good data structure to use for storing lines of text? I started to use fixed-size arrays of arrays of char, but they're just too restrictive. A: A dynamically allocated array of pointers to dynamically allocated strings, grown as needed, works well. This code is written in terms of the agetline function from question 7.30. #include <stdio.h> #include <stdlib.h> extern char *agetline(FILE *); FILE *ifp; /* assume ifp is open on input file */ char **lines = NULL; size_t nalloc = 0; size_t nlines = 0; char *p; while((p = agetline(ifp)) != NULL) { if(nlines >= nalloc) { nalloc += 50; #ifdef SAFEREALLOC lines = realloc(lines, nalloc * sizeof(char *)); #else if(lines == NULL) /* in case pre-ANSI realloc */ lines = malloc(nalloc * sizeof(char *)); else lines = realloc(lines, nalloc * sizeof(char *)); #endif if(lines == NULL) { fprintf(stderr, "out of memory"); exit(1); } } lines[nlines++] = p; } (See the comments on reallocation strategy in question 7.30.) See also question 6.16. Q: How can I open files mentioned on the command line, and parse option flags? 
A: Here is a skeleton which implements a traditional Unix-style argv parse, handling option flags beginning with -, and optional filenames. (The two flags accepted by this example are -a and -b; -b takes an argument.) #include <stdio.h> #include <string.h> #include <errno.h> main(int argc, char *argv[]) { int argi; int aflag = 0; char *bval = NULL; for(argi = 1; argi < argc && argv[argi][0] == '-'; argi++) { char *p; for(p = &argv[argi][1]; *p != '\0'; p++) { switch(*p) { case 'a': aflag = 1; printf("-a seen\n"); break; case 'b': bval = argv[++argi]; printf("-b seen (\"%s\")\n", bval); break; default: fprintf(stderr, "unknown option -%c\n", *p); } } } if(argi >= argc) { /* no filename arguments; process stdin */ printf("processing standard input\n"); } else { /* process filename arguments */ for(; argi < argc; argi++) { FILE *ifp = fopen(argv[argi], "r"); if(ifp == NULL) { fprintf(stderr, "can't open %s: %s\n", argv[argi], strerror(errno)); continue; } printf("processing %s\n", argv[argi]); fclose(ifp); } } return 0; }(This code assumes that fopen sets errno when it fails, which is not guaranteed, but usually works, and makes error messages much more useful. See also question 20.4.) There are several canned functions available for doing command line parsing in a standard way; the most popular one is getopt (see also question 18.16). 
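As a cross-language aside (not part of the FAQ), this kind of -a / -b option handling is exactly what Python's standard argparse module automates; a minimal sketch for the same two flags and optional filenames:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("-a", action="store_true", help="flag taking no argument")
parser.add_argument("-b", help="flag taking an argument")
parser.add_argument("files", nargs="*", help="optional filename arguments")

# Parse an example command line instead of sys.argv.
args = parser.parse_args(["-a", "-b", "val", "input.txt"])
print(args.a, args.b, args.files)  # True val ['input.txt']
```

argparse also generates the usage message and handles the `--` end-of-options marker, two of the nuances the hand-rolled C parse must deal with itself.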
Here is the above example, rewritten to use getopt: extern char *optarg; extern int optind; main(int argc, char *argv[]) { int aflag = 0; char *bval = NULL; int c; while((c = getopt(argc, argv, "ab:")) != -1) switch(c) { case 'a': aflag = 1; printf("-a seen\n"); break; case 'b': bval = optarg; printf("-b seen (\"%s\")\n", bval); break; } if(optind >= argc) { /* no filename arguments; process stdin */ printf("processing standard input\n"); } else { /* process filename arguments */ for(; optind < argc; optind++) { FILE *ifp = fopen(argv[optind], "r"); if(ifp == NULL) { fprintf(stderr, "can't open %s: %s\n", argv[optind], strerror(errno)); continue; } printf("processing %s\n", argv[optind]); fclose(ifp); } } return 0; } The examples above overlook a number of nuances: a lone ``-'' is often taken to mean ``read standard input''; the marker ``--'' often signifies the end of the options (proper versions of getopt do handle this); it's traditional to print a usage message when a command is invoked with improper or missing arguments. If you're wondering how argv is laid out in memory, it's actually a ``ragged array''; see the picture in question 20.2. A: By the time a program is running, information about the names of its functions and variables (the ``symbol table'') is no longer needed, and may therefore not be available. The most straightforward thing to do, therefore, is to maintain that information yourself. Q: How can I implement sets or arrays of bits? A: Use arrays of char or int, with a few macros to access the desired bit in the proper cell of the array. 
Here are some simple macros to use with arrays of char: #include <limits.h> /* for CHAR_BIT */ #define BITMASK(b) (1 << ((b) % CHAR_BIT)) #define BITSLOT(b) ((b) / CHAR_BIT) #define BITSET(a, b) ((a)[BITSLOT(b)] |= BITMASK(b)) #define BITCLEAR(a, b) ((a)[BITSLOT(b)] &= ~BITMASK(b)) #define BITTEST(a, b) ((a)[BITSLOT(b)] & BITMASK(b)) #define BITNSLOTS(nb) ((nb + CHAR_BIT - 1) / CHAR_BIT) (If you don't have <limits.h>, try using 8 for CHAR_BIT.) Here are some usage examples. To declare an ``array'' of 47 bits: char bitarray[BITNSLOTS(47)]; To set the 23rd bit: BITSET(bitarray, 23); To test the 35th bit: if(BITTEST(bitarray, 35)) ... To compute the union of two bit arrays and place it in a third array (with all three arrays declared as above): for(i = 0; i < BITNSLOTS(47); i++) array3[i] = array1[i] | array2[i]; To compute the intersection, use & instead of |. As a more realistic example, here is a quick implementation of the Sieve of Eratosthenes, for computing prime numbers: #include <stdio.h> #include <string.h> #define MAX 10000 int main() { char bitarray[BITNSLOTS(MAX)]; int i, j; memset(bitarray, 0, BITNSLOTS(MAX)); for(i = 2; i < MAX; i++) { if(!BITTEST(bitarray, i)) { printf("%d\n", i); for(j = i + i; j < MAX; j += i) BITSET(bitarray, j); } } return 0; } See also question 20.7. Additional links: further explanation References: H&S Sec. 7.6.7 pp. 211-216 Q: How do I swap bytes? A: Question 20.9 shows how, but it's a nuisance. A better solution is to define functions which convert between the known byte order of the data and the (unknown) byte order of the machine in use, and to arrange for these functions to be no-ops on those machines which already match the desired byte order. A set of such functions, introduced with the BSD networking code but now in wide use, is ntohs, htons, ntohl, and htonl. 
These are intended to convert between ``network'' and ``host'' byte orders, for ``short'' or ``long'' integers, where ``network'' order is always big-endian, and where ``short'' integers are always 16 bits and ``long'' integers are 32 bits. (This is not the C definition, of course, but it's compatible with the C definition; see question 1.1.) So if you know that the data you want to convert from or to is big-endian, you can use these functions. (The point is that you always call the functions, making your code much cleaner. Each function either swaps bytes if it has to, or does nothing. The decision to swap or not to swap gets made once, when the functions are implemented for a particular machine, rather than being made many times in many different calling programs.) If you do have to write your own byte-swapping code, the two obvious approaches are again to use pointers or unions, as in question 20.9. Here is an example using pointers: void byteswap(char *ptr, int nwords) { char *p = ptr; while(nwords-- > 0) { char tmp = *p; *p = *(p + 1); *(p + 1) = tmp; p += 2; } } And here is one using unions: union word { short int word; char halves[2]; }; void byteswap(char *ptr, int nwords) { register union word *wp = (union word *)ptr; while(nwords-- > 0) { char tmp = wp->halves[0]; wp->halves[0] = wp->halves[1]; wp->halves[1] = tmp; wp++; } } These functions swap two-byte quantities; the extension to four or more bytes should be obvious. The union-using code is imperfect in that it assumes that the passed-in pointer is word-aligned. It would also be possible to write functions accepting separate source and destination pointers, or accepting single words and returning the swapped values. References: PCS Sec. 11 p. 179 Q: What is the most efficient way to count the number of bits which are set in an integer? A: A simple loop that repeatedly clears the lowest set bit (n &= n - 1) iterates once per set bit; a table lookup, one byte at a time, is faster still. 
Here are some things not to worry about: (These are examples of optimizations which compilers regularly perform for you; see questions 20.14 and 20.15.) For more discussion of efficiency tradeoffs, as well as good advice on how to improve efficiency when it is important, see chapter 7 of Kernighan and Plauger's The Elements of Programming Style, and Jon Bentley's Writing Efficient Programs. See also question 17.11. Q: I've been replacing multiplications and divisions with shift operators, because shifting is more efficient. A: This is an excellent example of a potentially risky and usually unnecessary optimization. Any compiler worthy of the name can replace a constant, power-of-two multiplication with a left shift, or a similar division of an unsigned quantity with a right shift. (Ritchie's original PDP-11 compiler, though it ran in less than 64K of memory and omitted several features now considered mandatory, performed both of these optimizations, without even turning on its optional optimization pass.) Furthermore, a compiler will make these optimizations only when they're correct; many programmers overlook the fact that shifting a negative value to the right is not equivalent to division. (Therefore, when you need to make sure that these optimizations are performed, you may have to declare relevant variables as unsigned.) Q: Which is more efficient, a switch statement or an if/else chain? A: There is probably little difference; a good compiler generates much the same code for both. See also questions 20.17 and 20.18. Q: Is there a way to switch on strings? A: Not directly. Sometimes, it's appropriate to use a separate function to map strings to integer codes, and then switch on those: #define CODE_APPLE 1 #define CODE_ORANGE 2 #define CODE_NONE 0 switch(classifyfunc(string)) { case CODE_APPLE: ... case CODE_ORANGE: ... case CODE_NONE: ... 
}where classifyfunc looks something like static struct lookuptab { char *string; int code; } tab[] = { {"apple", CODE_APPLE}, {"orange", CODE_ORANGE}, }; classifyfunc(char *string) { int i; for(i = 0; i < sizeof(tab) / sizeof(tab[0]); i++) if(strcmp(tab[i].string, string) == 0) return tab[i].code; return CODE_NONE; } Otherwise, of course, you can fall back on a conventional if/else chain: if(strcmp(string, "apple") == 0) { ... } else if(strcmp(string, "orange") == 0) { ... } (A macro like Streq() from question 17.3 can make these comparisons a bit more convenient.) See also questions 10.12, 20.16, 20.18, and 20.29. References: K&R1 Sec. 3.4 p. 55 K&R2 Sec. 3.4 p. 58 ISO Sec. 6.6.4.2 H&S Sec. 8.7 p. 248 See also questions 20.16 and 20.17. References: K&R1 Sec. 3.4 p. 55 K&R2 Sec. 3.4 p. 58 ISO Sec. 6.6.4.2 Rationale Sec. 3.6.4.2 H&S Sec. 8.7 p. 248 Q: Are comments legal inside quoted strings? A: Comment delimiters within string literals do not begin comments. (It is hard to imagine why anyone would want or need to place a comment inside a quoted string. It is easy to imagine a program needing to print "/*".) Q: Why isn't there a numbered, multi-level break statement to break out of several loops at once? What am I supposed to use instead, a goto? Q: There seem to be a few missing operators, like ^^, &&=, and ->=. A: A logical exclusive-or operator (hypothetically ``^^'') would be nice, but it couldn't possibly have short-circuiting behavior analogous to && and || (see question 3.6). The first is straight from the definition, but is poor because it may evaluate its arguments multiple times (see question 10.1). The second and third ``normalize'' their operands [footnote] to strict 0/1 by negating them twice--the second then applies bitwise exclusive or (to the single remaining bit); the third one implements exclusive-or as !=. The fourth and fifth are based on an elementary identity in Boolean algebra relating exclusive-or to and, or, and negation. 
Finally, the sixth one, suggested by Lawrence Kirby and Dan Pop, uses the ?: operator to guarantee a sequence point between the two operands, as for && and ||. (There is still no ``short circuiting'' behavior, though, nor can there be.) Additional links: A definitive answer from Dennis Ritchie about ^^ Q: What does a+++++b mean? A: It is tokenized as a ++ ++ + b and cannot be parsed as a valid expression. References: K&R1 Sec. A2 p. 179 K&R2 Sec. A2.1 p. 192 ISO Sec. 6.1 H&S Sec. 2.3 pp. 19-20 Q: If the assignment operator were :=, wouldn't it then be harder to accidentally write things like if(a = b) ? A: Yes, but it would also be just a little bit more cumbersome to type all of the assignment statements which a typical program contains. In any case, it's really too late to be worrying about this sort of thing now. The choices of = for assignment and == for comparison were made, rightly or wrongly, over two decades ago, and are not likely to be changed. (With respect to the question, many compilers and versions of lint will warn about if(a = b) and similar expressions; see also question 17.4.) As a point of historical interest, the choices were made based on the observation that assignment is more frequent than comparison, and so deserves fewer keystrokes. In fact, using = for assignment in C and its predecessor B represented a change from B's own predecessor BCPL, which did use := as its assignment operator. (See also question 20.38). Besides arranging calling sequences correctly, you may also have to conspire between the various languages to get aggregate data structures declared compatibly. In Ada, you can use the Export and Convention pragmas, and types from the package Interfaces.C, to arrange for C-compatible calls, parameters, and data structures. References: H&S Sec. 4.9.8 pp. 106-7 Q: Does anyone know of a program for converting Pascal or FORTRAN (or LISP, Ada, awk, ``Old'' C, ...) to C? 
A: Several freely distributable programs are available. See also questions 11.31 and 18.16. Q: Is C++ a superset of C? What are the differences between C and C++? An extremely simple hash function for strings is simply to add up the values of all the characters: unsigned hash(char *str) { unsigned int h = 0; while(*str != '\0') h += *str++; return h % NBUCKETS; } A somewhat better hash function is unsigned hash(char *str) { unsigned int h = 0; while(*str != '\0') h = (256 * h + *str++) % NBUCKETS; return h; } which actually treats the input string as a large binary number (8 * strlen(str) bits long, assuming characters are 8 bits) and computes that number modulo NBUCKETS, by Horner's rule. (Here it is important that NBUCKETS be prime, among other things. To remove the assumption that characters are 8 bits, use UCHAR_MAX+1 instead of 256; the ``large binary number'' will then be CHAR_BIT * strlen(str) bits long. UCHAR_MAX and CHAR_BIT are defined in <limits.h>.) When the set of strings is known in advance, it is also possible to devise ``perfect'' hashing functions which guarantee a collisionless, dense mapping. References: K&R2 Sec. 6.6 Knuth Vol. 3 Sec. 6.4 pp. 506-549 Sedgewick Sec. 16 pp. 231-244 Q: How can I generate random numbers with a normal or Gaussian distribution? A: See question 13.20. Q: How can I find the day of the week given the date? A: One method is Zeller's congruence. J is the number of the century [i.e. the year / 100], K the year within the century [i.e. the year % 100], m the month, q the day of the month, h the day of the week [where 1 is Sunday]: h = (q + 26 * (m + 1) / 10 + K + K/4 + J/4 + 5*J) % 7 (where we use +5*J instead of -2*J to make sure that both operands of the modulus operator % are positive; this bias totalling 7*J will obviously not change the final value of h, modulo 7). Reference: Chr. Zeller, ``Kalender-Formeln''. 
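The Horner's-rule hash shown earlier translates almost line-for-line into other languages; here is an equivalent sketch in Python (NBUCKETS is an arbitrarily chosen prime for illustration, not a value from the FAQ):

```python
NBUCKETS = 101  # a prime, as the FAQ recommends

def hash_string(s):
    h = 0
    for ch in s:
        # Treat the string as a base-256 number, reduced modulo
        # NBUCKETS at each step (Horner's rule) to avoid overflow.
        h = (256 * h + ord(ch)) % NBUCKETS
    return h

print(hash_string("apple"))  # a bucket index in range(0, NBUCKETS)
```

As in the C version, reducing at every step keeps the intermediate value small while producing the same result as computing the full base-256 number and reducing once at the end.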
Q: Why can tm_sec in the tm structure range from 0 to 61, suggesting that there can be 62 seconds in a minute? A: That's actually a buglet in the Standard. There can be 61 seconds in a minute during a leap second. It's possible for there to be two leap seconds in a year, but it turns out that it's guaranteed that they'll never both occur in the same day (let alone the same minute). Q: When will the next International Obfuscated C Code Contest (IOCCC) be held? How do I submit contest entries? Who won this year's IOCCC? References: Don Libes, Obfuscated C and Other Mysteries Q: Where can I get extra copies of this list? A: An up-to-date copy may be obtained in directory home/scs/ftp/C-faq/book/Errata.
http://c-faq.com/~scs/cgi-bin/faqcat.cgi?sec=misc
Subject: [ublas] bounded_vector behaviour From: Nasos Iliopoulos (nasos_i_at_[hidden]) Date: 2009-09-07 12:44:56 Hello all, I am playing a bit with bounded_vector's under-the-hood structure and found a bit of strange behaviour. #define NDEBUG #include <boost/numeric/ublas/vector.hpp> #include <boost/numeric/ublas/io.hpp> namespace ublas = boost::numeric::ublas; int main() { ublas::bounded_vector<double,2> a; ublas::vector<double> b(4); b(0)=1.0; b(1)=2.0; b(2)=0.0; b(3)=0.1; a(0)=33.0; a(1)=44.0; std::cout << a.size() << std::endl; std::cout << a << std::endl; a=b; std::cout << a.size() << std::endl; std::cout << a << std::endl; return 0; } output: 2 [2](33,44) 4 [4](1,2,0,0.1) If I don't use NDEBUG I get an assertion about the size of the vectors in the assignment. On the other hand, when using NDEBUG, I understand that not checking the bounded_vector size at run time may be a design choice, so that assignment from another type is a bit faster, but correctness here may be a bit important (for example, by throwing an exception at run time). I am developing fixed_storage and fixed_vector types, and I would like your opinion on how the same assignment could be implemented. Should resize(..) check for compatible sizes in release mode or not (in which case it can easily cause a segmentation fault, or even write at memory locations not in the container of fixed_storage)? Maybe the debug-time assertion is enough (as in the case of bounded_vector) for most people. Thanks in advance, Nasos Iliopoulos
https://lists.boost.org/ublas/2009/09/3673.php
#include <StelMainGraphicsView.hpp> Reimplement a QGraphicsView for Stellarium. It is the class creating the singleton GL widget, the main StelApp instance as well as the main GUI. Update the mouse pointer state and schedule the next redraw. This method is called automatically by Qt. Save a screen shot. The format of the file, and hence the filename extension, depends on the architecture and build type. Emitted when a screen shot is requested with saveScreenShot(). doScreenshot() does the actual work (it has to do it in the main thread, whereas saveScreenShot() might get called from another one). Set the maximum frames per second. Set the minimum frames per second. Usually this minimum will be switched to after there are no user events for some seconds, to save power. However, it can be useful to set this to a high value to improve playing smoothness in scripts.
http://www.stellarium.org/doc/0.11.1/classStelMainGraphicsView.html
i would like to extract the manifest and upload it into my mysql database, and from there extract some values, such as the resolution (if present) and touch-screen support.

Viewing contents of a JAR File: you can easily view the contents of a jar file without extracting it. The jar command takes the name of the jar file whose contents are to be viewed.

To create a JAR file: jar cf jar-file input-file(s)
To view the contents of a JAR file: jar tf jar-file
To extract the contents of a JAR file: jar xf jar-file

What is a JAR file in Java? A JAR file uses a compressed file format in which you can store many files. The Java Archive (JAR) file format enables you to bundle multiple files into a single archive for applets and applications. Creating a jar file requires a manifest; to change its contents (for example, to set the entry point) you add a line to the manifest file, i.e. Main-Class: mypackage.MyClass. In the commands above, "c" creates an archive, "t" lists its table of contents, "x" extracts it, and the "f" parameter names the jar file to operate on.

How to create a JAR file using Java: the JarInputStream class is used to read the contents of a JAR file, and the Manifest class is used when writing the contents of a JAR file to an output stream.

PHP file_get_contents() function: reads the content of a file into a string in a PHP program.
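Since a JAR file is a ZIP archive with a manifest, its table of contents can also be listed without the jar tool. Here is an illustrative sketch using Python's standard zipfile module (the entry names are made up for demonstration):

```python
import io
import zipfile

# Build a tiny in-memory "jar" for demonstration purposes.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as jar:
    jar.writestr("META-INF/MANIFEST.MF", "Manifest-Version: 1.0\n")
    jar.writestr("mypackage/MyClass.class", b"\xca\xfe\xba\xbe")

# Equivalent of `jar tf file.jar`: list the archive entries.
with zipfile.ZipFile(buf) as jar:
    print(jar.namelist())  # ['META-INF/MANIFEST.MF', 'mypackage/MyClass.class']
```

Replacing the in-memory buffer with a real .jar path gives the same listing the jar tf command prints.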
http://www.roseindia.net/discussion/18290-Viewing-contents-of-a-JAR-File.html
Opened 7 years ago Closed 7 years ago Last modified 7 years ago #4435 closed (fixed) [unicode] get_FIELD_display() does not work for None values Description If you have a model field with choices and null=True, the model function get_FIELD_display() returns u'None' if the attribute is None. It should return None django.db.models.base, lines ~ 322 ff: def _get_FIELD_display(self, field): value = getattr(self, field.attname) return force_unicode(dict(field.choices).get(value, value)) Either this function, or force_unicode, needs a special check for None. Attachments (0) Change History (5) comment:1 Changed 7 years ago by mir@… - Needs documentation unset - Needs tests unset - Patch needs improvement unset - Version changed from SVN to other branch comment:2 Changed 7 years ago by mtredinnick - Owner changed from adrian to mtredinnick - Triage Stage changed from Unreviewed to Accepted comment:3 Changed 7 years ago by mtredinnick Change of mind to the previous comment. I cannot see anywhere at all that we are relying on mapping None -> 'None' via force_unicode. So special-casing None seems reasonable. comment:4 Changed 7 years ago by mtredinnick - Resolution set to fixed - Status changed from new to closed comment:5 Changed 7 years ago by mtredinnick Note: See TracTickets for help on using tickets. We can't make force_unicode() do this by default, because it has to behave like str() in a bunch of places. I'll add a parameter to ignore such non-strings, though, just like smart_str() currently has. Then we can nail them one-by-one (there shouldn't be too many).
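A simplified sketch of the idea behind the fix (not the actual Django patch, which added an ignore parameter to force_unicode): special-case None before coercing the value to a string.

```python
def get_field_display(value, choices):
    # Hypothetical simplification of Model._get_FIELD_display():
    # return None untouched instead of coercing it to the string 'None'.
    if value is None:
        return None
    return str(dict(choices).get(value, value))

CHOICES = [(1, "draft"), (2, "published")]
print(get_field_display(1, CHOICES))     # draft
print(get_field_display(None, CHOICES))  # None
```

With the special case in place, a nullable choices field whose attribute is None reports None rather than the misleading string u'None'.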
https://code.djangoproject.com/ticket/4435
Exceptions in Java – how and when to use the “throws” and “throw” keywords

By understanding Java exception handling, we can make our programs terminate more gracefully when errors occur, whether due to programming mistakes or unforeseen conditions. The exceptions range from a simple NullPointerException, IndexOutOfBoundsException, IOException or FileNotFoundException to an OutOfMemoryError. Let's start with the basics of the try-catch block.

import java.io.File;
import java.util.Scanner;

public class JavaException {
    /**
     * @param args
     */
    public static void main(String[] args) {
        String filename = "/Users/test.txt";
        File textFile = new File(filename);
        Scanner in = new Scanner(textFile);
        while (in.hasNext()) {
            String line = in.nextLine();
            System.out.println(line);
        }
        in.close();
    }
}

This program won't compile in Java without proper exception handling, but for now, assume that it does. If we look closely, a question comes to mind: what if the file doesn't exist at all? The program would crash and print an exception trace. An end user who doesn't know the technical details of Java doesn't want to see a raw stack trace, but would rather be informed of errors in a more elegant way. So let's see whether adding a try-catch block achieves that.

String filename = "/Users/test.txt";
try {
    File textFile = new File(filename);
    Scanner in = new Scanner(textFile);
    while (in.hasNext()) {
        String line = in.nextLine();
        System.out.println(line);
    }
    in.close();
} catch (Exception e) {
    System.out.println("Sorry! File Not Found");
}

We surround the code where exceptions are likely to occur with a try-catch block like the one shown above. This program is better than the previous one, as the user only sees the message “Sorry! File Not Found” instead of a stack trace.
We can internally print a stack trace to a file or server for the Java developers to debug the issue, rather than displaying the stack trace to the user. Are we done? No. There are some flaws in the above program. Remember that if the file is not found, the program doesn't continue execution beyond the Scanner line. Why? Because Java couldn't locate the file and so threw an exception, and execution jumps to the catch block, which prints the error message. If we look carefully, we never executed the line in.close(), which closes the input stream; that is not desirable. Of course, we could close it in the catch block, but Java has a separate block called “finally”, which is generally used for resource-cleanup code.

String filename = "/Users/test.txt";
File textFile = null;
Scanner in = null;
try {
    textFile = new File(filename);
    in = new Scanner(textFile);
    while (in.hasNext()) {
        String line = in.nextLine();
        System.out.println(line);
    }
} catch (Exception e) {
    System.out.println("Sorry! File Not Found");
} finally {
    in.close();
}

I just added a finally block to make sure the input stream is closed whether or not an exception occurs. So remember that code in the finally block is always executed, irrespective of whether an exception occurs in the try block. If an exception does occur, further statements in the try block are not executed, and both the catch and finally blocks are executed. Let's go a step further.

Java Exception Hierarchy
Just like every object in Java extends the Object class, all exception classes extend Throwable by default, which lives in the java.lang package. There is a hierarchy to the Java exceptions. Here is a simple chart to explain the exception hierarchy. The hierarchy is pretty simple.

- When errors occur, programmers needn't handle them; rather, the JVM deals with them. Why? Because they can occur for reasons beyond the control of programmers. Errors like OutOfMemoryError fall under that category.
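As an aside not covered in the original article: since Java 7, the try-with-resources statement closes the stream automatically, removing the need for the explicit finally block above. A minimal sketch along the same lines (the readFile helper and its return convention are invented for illustration):

```java
import java.io.File;
import java.io.FileNotFoundException;
import java.util.Scanner;

public class TryWithResources {

    // Reads the file and returns its contents, or a friendly message if missing.
    static String readFile(String filename) {
        StringBuilder sb = new StringBuilder();
        // The Scanner declared in the parentheses is closed automatically when
        // the block exits, whether normally or via an exception.
        try (Scanner in = new Scanner(new File(filename))) {
            while (in.hasNext()) {
                sb.append(in.nextLine()).append('\n');
            }
        } catch (FileNotFoundException e) {
            return "Sorry! File Not Found";
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(readFile("/Users/test.txt"));
    }
}
```

This avoids the subtle bug in the finally-based version above: if the Scanner constructor itself throws, `in` is still null and `in.close()` would raise a NullPointerException.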
Developers needn't handle errors explicitly, whereas some types of exceptions do need to be explicitly handled by developers.

- Unchecked or runtime exceptions generally occur because of bad programming: for example, accessing null values, divide-by-zero errors, or accessing array values beyond the array length. It is up to Java programmers to handle them, or the JVM deals with them. Hence Java doesn't enforce handling of runtime exceptions at compile time; it is left to the developer's choice.
- Checked exceptions are those that Java developers are expected to handle in their code, because they occur due to invalid inputs or conditions generally beyond the program's control: a file not found at the path provided by the user, or the user providing wrong parameters to a SQL query, are such instances. Java checks for these exceptions at compile time, so Java developers should handle them appropriately in their code.

We will see some examples below to illustrate the differences.

Throws Keyword
Checked exceptions are verified at compile time: Java checks whether they have been handled properly. So Java developers need to either handle checked exceptions with a try-catch block or use the “throws” keyword to pass the checked exception back to the calling method.

public static void main(String[] args) {
    String filename = "/Users/test.txt";
    try {
        handleFile(filename);
    } catch (FileNotFoundException e) {
        System.out.println("Error in file");
    }
}

public static void handleFile(String fileName) throws FileNotFoundException {
    File textFile = null;
    Scanner in = null;
    textFile = new File(fileName);
    in = new Scanner(textFile);
    while (in.hasNext()) {
        String line = in.nextLine();
        System.out.println(line);
    }
}

I have modified my earlier program to illustrate the “throws” keyword.
Since FileNotFoundException is a checked exception, we need to handle it either with try-catch or by throwing the exception back to the main() method which called handleFile(). I have decided to throw the exception back to the caller, so I have used the “throws” keyword to send this checked exception back to main(). The calling method needs to know that handleFile() throws a checked exception, so this has to be declared explicitly as part of the method definition. Java forces only checked exceptions to be handled, hence I have used public static void handleFile(String fileName) throws FileNotFoundException. main() must handle that FileNotFoundException, and hence a try-catch block surrounds the call; main() is the starting point of my program in this case, and so it handles all exceptions.

Throw Keyword

public static void main(String[] args) {
    String filename = "/Users/test.txt";
    try {
        handleFile(filename);
    } catch (IOException e) {
        e.printStackTrace();
    }
}

public static void handleFile(String fileName) throws IOException {
    // some logic for processing …
    throw new IOException("File Not Found");
}

Suppose we decide that we need to raise exceptions based on certain failures occurring in our application. In such cases, we use the “throw” keyword to throw an exception ourselves. The thrown exception can be either checked or unchecked. Again, if it is a checked exception, it can be thrown back to the main() method or handled with a try-catch block.

Catch Block Hierarchy

String filename = "/Users/test.txt";
try {
    handleFile(filename);
} catch (IOException e) {
    System.out.println("IOException");
} catch (FileNotFoundException e) {
    System.out.println("File not found");
}

One more thing about catch blocks is the hierarchical nature of exceptions.
In the above case, FileNotFoundException extends IOException, which means that all FileNotFoundExceptions are IOExceptions but not vice versa, as several other classes also extend Java's IOException. IOException is the parent class. Hence the above code will not compile: the first catch block would catch all IOExceptions, including FileNotFoundException, so the second catch block could never be reached. The correct form of the hierarchy is:

try {
    handleFile(filename);
} catch (FileNotFoundException e) {
    e.printStackTrace();
} catch (IOException e) {
    e.printStackTrace();
}

If there is a FileNotFoundException, Java matches the first catch block and executes it. But if any other IOException occurs (excluding FileNotFoundException), Java matches the second catch block. So remember that the parent exception class should always be at the bottom, so that if none of the more specific exceptions match, the parent class will handle the exception.

Java Exception Wrapping
Having looked at the usage of the “throw” and “throws” keywords, let's look at this example.

public static void handleFile(File file) throws FileNotFoundException, EOFException {
    if (!file.exists())
        throw new FileNotFoundException("Invalid file");
    if (!file.canRead())
        throw new EOFException("End of file");
}

This example looks good at first, but what if some other method throws ten different exceptions? Do we list ten different exceptions in the “throws” clause of the method definition? Of course we could, but it looks clumsy and the code becomes unreadable as more checked exceptions are added to the method. There is a much cleaner way to do this, and that is Java exception wrapping. All we do is wrap exceptions in another exception and then throw only that one exception instead of n exceptions. We have a catch block for each checked exception and then use a custom exception to inform the caller of any failure.
So now we have only one exception in the “throws” clause, which makes the code look much cleaner and more understandable. For example:

public static void handleFile(File file) throws CustomException {
    try {
        if (!file.exists())
            throw new FileNotFoundException("Invalid file");
        else if (!file.canRead())
            throw new EOFException("File doesn't exist");
    } catch (FileNotFoundException e) {
        throw new CustomException("File Not found");
    } catch (EOFException e) {
        throw new CustomException("EOF found");
    } catch (IOException e) {
        throw new CustomException("IO Exception occured");
    }
}

Of course, this is not a compulsion. Based on requirements, this can be modified accordingly.

Custom Exception Class
Having looked at Java's built-in exceptions up to this point, let's create a sample custom exception class. An exception class can either extend RuntimeException, if it has to be an unchecked exception, or Exception, if it has to be a checked exception. Checked exceptions are verified at the compile stage, so a method throwing a custom checked exception has to either use the “throws” keyword to pass the exception back to the caller or handle the exception with a try-catch block. Always remember this golden rule. So, based on design needs, choose the parent exception class.

public class CustomException extends RuntimeException {

    private static final long serialVersionUID = 1L;

    public CustomException() {
        super();
    }

    public CustomException(String msg) {
        super(msg);
        System.out.println("Custom Exception called");
    }

    public String getMessage() {
        return super.getMessage();
    }
}

A straightforward implementation of the CustomException class: it extends RuntimeException and overrides some of the methods that are required.

Summary
That's it about exceptions for now. Hope you enjoyed reading the article and know more about exceptions than before you began reading. You will learn more once you start writing programs in Java, so start incorporating exception handling in your Java code from now on, and post your comments.
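To round out the summary, here is a short sketch of the other branch mentioned above — a checked custom exception that extends Exception rather than RuntimeException. The names InvalidAgeException and validateAge are invented for illustration, not from the article.

```java
// A checked custom exception: because it extends Exception (not
// RuntimeException), callers must either catch it or declare it.
class InvalidAgeException extends Exception {
    InvalidAgeException(String msg) {
        super(msg);
    }
}

public class CustomCheckedDemo {

    // Since InvalidAgeException is checked, this method must declare it.
    static void validateAge(int age) throws InvalidAgeException {
        if (age < 0) {
            throw new InvalidAgeException("Age cannot be negative: " + age);
        }
    }

    public static void main(String[] args) {
        try {
            validateAge(-5);
        } catch (InvalidAgeException e) {
            System.out.println("Caught: " + e.getMessage());
        }
    }
}
```

Removing the try-catch (or the throws clause) makes this fail to compile, which is exactly the compile-time enforcement of checked exceptions described above.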
This article was originally published at Java tutorials – Lets jump into the ocean, and is re-posted here with the author's permission as part of the JBC program.
http://www.javabeat.net/java-exception-handling/
This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project. On Mon, Aug 05, 2013 at 06:25:44PM -0400, Rich Felker wrote: > Hi, > > [I'm resending this to linux-api instead of linux-kernel on the advice > of Joseph Myers on libc-alpha. Please see the link to the libc-alpha > thread at the bottom of this message for discussion that has already > taken place.] As told you earlier on linux-kernel just send a patch with your semantics to lkml. We're not going to reserve a value for a namespace that is reserved for the kernel to implement something that should better be done in kernel space.
https://sourceware.org/ml/libc-alpha/2013-08/msg00033.html
This article was contributed by Abhinav Keswani.

Abhinav heads up the Bespoke Solutions team at Trineo, a boutique software development shop based in New Zealand and Australia. After way too many years wrangling the vagaries of massive public-facing web infrastructure, Abhinav and the Trineo team focus on developing applications that use modern cloud technology. Trineo are listed Heroku Dev Partners, and Salesforce Consulting Partners.

Queuing in Ruby with Redis and Resque

The delegation of long-running, computationally expensive and high-latency jobs to a background worker queue is a common pattern used to create scalable web applications. The aim is to serve end-user requests with the fastest possible responses, by ensuring that all long-running jobs are handled outside the request/response cycle. Isolated long-running jobs can be handled by a separate worker pool that can scale independently. These jobs can asynchronously handle tasks like fetching data from remote APIs, reading RSS feeds or handling long-running uploads to S3.

Resque (pronounced ‘rescue’) is a Redis-backed library for creating background jobs, placing those jobs on multiple queues, and handling those jobs outside of the user request/response cycle. If you have questions about Ruby on Heroku, consider discussing it in the Ruby on Heroku forums.

Overview
Resque is designed to be used in scenarios that require a high volume of job entries, and provides mechanisms to ensure visibility and reliability of behavior while providing statistics via a web dashboard. Consider Resque a good choice for applications that run multiple queues, each with many thousands of job entries, where worker behavior can be volatile. Volatile worker behavior is mitigated by forking child processes to handle jobs, which ensures that any out-of-control workers can be dealt with in isolation.

Install locally
To install Redis locally, use the following approach on Mac OS X.
$ brew install redis

To see if you're already running Redis, check the output of the redis-cli ping command. A PONG response indicates you already have Redis installed and running. To test your install, first ensure that the Redis server is running.

$ redis-server

Then use the following simple commands.

$ redis-cli ping
PONG
$ redis-cli
redis 127.0.0.1:6379> set foo bar
OK
redis 127.0.0.1:6379> get foo
"bar"

To install Redis on Linux, and to see more details on an effective quickstart, refer to the documentation on the Redis site. The example application mentioned later in this article has a dependency on ImageMagick. To install ImageMagick on OS X:

$ brew install imagemagick

To install ImageMagick on other operating systems, refer to the ImageMagick site for installation of binary releases or installation from source.

Resque on Heroku
Prior to v1.22 there was a disparity between Resque's signal handling and Heroku's process management that caused orphaned and incomplete jobs between worker restarts. Starting with v1.22, Resque's handling of TERM and QUIT signals is in accordance with UNIX standards and maintains compatibility with the Heroku runtime that previous versions lacked. To run Resque on Heroku you need to specify the correct Resque version and provide a couple of options to the worker process.

Specify version
In your application Gemfile specify Resque v1.22.0.

gem 'resque', "~> 1.22.0"

Run bundle install to update your gems.

Process options
To opt in to UNIX-compatible signal handling in Resque v1.22 you will also need to provide a TERM_CHILD environment variable to the resque worker process. This can be done inline in your application Procfile.

resque: env TERM_CHILD=1 bundle exec rake resque:work

TERM_CHILD tells Resque to pass SIGTERM from the parent to the child process to ensure that all child worker processes have time to execute an orderly shutdown. The default period Resque waits before sending SIGKILL to its child processes is 4 seconds.
To modify this value and give your workers more time to gracefully shut down, modify the RESQUE_TERM_TIMEOUT environment variable.

resque: env TERM_CHILD=1 RESQUE_TERM_TIMEOUT=7 bundle exec rake resque:work

Heroku waits 10 seconds after sending SIGTERM to send SIGKILL, so be sure to use a RESQUE_TERM_TIMEOUT value less than 10.

Reference application
For the remainder of this article, a reference application will be cited to illustrate the working parts of an application that uses Resque. The main premise of the application is to watermark uploaded images and store them on AWS S3. When a user uploads an image, it is saved to S3, and then a Resque job is enqueued to create a watermarked copy of the user's image, which is also stored on AWS S3.

Local installation
To set this app up locally, first fork it on GitHub and then clone it locally, for example:

$ git clone git://github.com/trineo/resque-example.git

To install application dependencies run:

$ bundle install

Local setup
Sign up to AWS S3 if you haven't already done so, following these instructions. To set up your local environment, and connectivity to all backing services, you need to create a .env file.

$ cp .env.example .env

Create two buckets on S3 that will be used to store original and watermarked images respectively. Edit the .env file appropriately to specify:

- the names of these buckets
- AWS S3 secret access key, and access key id
- your local Redis URL (already set to the default Redis value)

Running the app locally
Start Redis in a separate shell.

$ redis-server

Then start the watermarked application.

$ foreman start

This should now start a web process as well as a worker process, both of which are specified in the Procfile of the application.
Verify and use local setup
Using the Resque dashboard, check that the app is running:

- Visit the following URL:
- This page should load the Resque web dashboard and say “0 of 1 Workers Working”
- Visit the following URL:
- See what the worker is up to on this page, and more upon clicking on the worker link

To interact with the application and queue a job, get a test image and run the following curl command (where src.jpeg is the test image):

$ curl -F "file=@src.jpeg" ""

This should have the following effect:

- The worker should show it is working
- An image called src.jpeg should be present in both “originals” and “watermarked” buckets
- The image in the watermarked bucket should be … watermarked!
- The index page of the reference app () should show some statistics about file upload and watermarking activity

Queueing jobs
Jobs are queued by the upload post method in the main Sinatra file, resque-example-app.rb. The upload method saves the file to S3 and then enqueues a job to watermark the uploaded file.

Resque.enqueue(Watermark, file_token.key)

Now, any worker that is bound to the same Redis backing service can process this job. The enqueue method has the following method signature:

Resque.enqueue(klass, *args)

In calling enqueue, the perform method of klass is called with all the original enqueue arguments. These arguments are serialized as JSON, and therefore it is important to ensure that the arguments can in fact be serialized as JSON. Arguments like symbols or entire ActiveRecord objects will not work. Try instead to send object IDs or, as is the case in the example application, an AWS file token key.

Processing jobs
The Watermark worker
The Watermark class is responsible for watermarking files in the background. Once a job has been enqueued as shown above, and when a worker is ready, the perform method of the Watermark class is invoked with the S3 file token being sent as an argument.
Watermark#perform is a class method that instantiates a new instance of this class, so that useful instance variables are set up.

def self.perform(key)
  (new key).apply_watermark
end

perform then chains the invocation of the apply_watermark method on this new instance, which applies and saves the watermarked file.

def apply_watermark
  Dir.mktmpdir do |tmpdir|
    ...
    save_watermarked_file(watermarked_local_file)
  end
end

def save_watermarked_file(watermarked_local_file)
  ...
end

Note that it is considered bad practice to store any files on the Heroku filesystem. What is happening in the reference application is:

- The file is transient, which is to say that the filesystem is used to store the file so that it can be watermarked, and once this is complete the file is deleted (refer to Dir#mktmpdir)
- Local file processing that is briefly conducted in the background, using the filesystem of a process, is not considered to be bad practice (reference)

Define and provision workers
Resque workers are defined in the reference application's Procfile:

resque: env TERM_CHILD=1 bundle exec rake resque:work

These workers can be provisioned by running a simple scaling command such as:

heroku ps:scale resque=2 --app appname

Alternatively, one can run multiple worker threads within a single process/dyno by modifying the Procfile accordingly:

resque: env TERM_CHILD=1 COUNT=2 bundle exec rake resque:workers

Note two main differences:

- The COUNT variable specifies the number of worker threads to run
- The resque:workers rake task is invoked

Running locally
As per the prior instructions, start the application and issue the given curl command to post a file to the upload handler. Subsequently, this is the log stream generated by the web and resque workers.
20:47:15 web.1    | 127.0.0.1 - - [14/Aug/2012 20:47:15] "POST /upload HTTP/1.1" 200 - 3.1093
20:47:19 resque.1 | Initialized Watermark instance
20:47:19 resque.1 | Opening original file locally: /var/folders/kq/src.jpeg
20:47:19 resque.1 | Writing watermarked file locally: /var/folders/kq/watermarked_src.jpeg
20:47:22 resque.1 | Persisted watermarked file to S3:

Job failures
Resque leaves it to the developer to decide how best to handle job failure, using an on_failure hook. A common approach is to re-enqueue a job if it fails. The reference application shows a simple example of this:

module RetriedJob
  def on_failure_retry(e, *args)
    ...
    Resque.enqueue self, *args
  end
end

class Watermark
  extend RetriedJob
  ...
end

More complex implementations would involve stipulating a number of retries, so that a job is given n attempts before giving up, and additionally stipulating a delay between retries. The resque-retry plugin provides much of this functionality. Failed jobs are visible on the Resque dashboard, which is described below.

Job termination
When worker termination is requested via the SIGTERM signal, Resque throws a Resque::TermException exception. Handling this exception allows the worker to cease work on the currently running job and gracefully save state by re-enqueueing the job so it can be handled by another worker. Handling this exception on Heroku is recommended due to the frequency of dyno restarts and the likelihood that high-throughput worker processes will encounter termination requests. The reference application indicates the method to cleanly shut a worker down on receiving the SIGTERM signal.

require 'resque/errors'
...
def self.perform(key)
  (new key).apply_watermark
rescue Resque::TermException
  Resque.enqueue(self, key)
end

Introspection
There are a number of ways to introspect Resque's behavior. The best place to do this is the built-in Resque web dashboard.
As mentioned earlier, the reference application exposes the dashboard here:. The dashboard allows you to inspect the following:

- queues
- workers
- failed jobs and stack traces
- current working jobs
- useful redis stats

Using the console, you can inspect the above by using the following commands:

Resque.info
Resque.queues
Resque.redis
Resque.size(queue_name)
Resque.peek(queue_name, start=1, count=1)
Resque.workers
Resque.working

Finally, the reference application itself has a small dashboard on its index page that shows file upload and watermarking activity:.

Deployment
To deploy the reference application to Heroku, after it has been cloned locally, first create a new Heroku app and provision a new instance of Redis:

$ heroku create --stack cedar-14
$ heroku addons:add rediscloud

Push the app to Heroku.

$ git push heroku master

Set up the required AWS bucket and security config vars.

$ heroku config:set \
>   AWS_S3_BUCKET_ORIGINALS=<insert your originals bucket name> \
>   AWS_S3_BUCKET_WATERMARKED=<insert your watermarked bucket name> \
>   AWS_ACCESS_KEY_ID=<insert your access key id> \
>   AWS_SECRET_ACCESS_KEY=<insert your secret access key>

Check that this worked.

$ heroku config | grep AWS
AWS_ACCESS_KEY_ID:         access key id
AWS_S3_BUCKET_ORIGINALS:   your originals bucket name
AWS_S3_BUCKET_WATERMARKED: your watermarked bucket name
AWS_SECRET_ACCESS_KEY:     secret access key

Scale up the web and background workers.

$ heroku scale web=1 resque=1

Open the app in a browser.

$ heroku open

Open the Resque web dashboard in a new browser tab using the “Resque Dashboard” link. Open your AWS Management Console, and open the watermarked bucket. Taking note of your Heroku app's name (e.g. vast-sierra-1234), post an image to the upload endpoint.

$ curl -F "file=@src.jpeg" ""

A watermarked version of the image that was posted should be in your watermarked bucket in a few seconds (refresh the bucket view in the AWS console).
Furthermore, check the index page of the reference application to see a list of public URLs of recently watermarked images, or examine the app logs.

$ heroku logs --tail

Look for a line such as this, which indicates the public URL of your watermarked image.

app[resque.1]: Persisted watermarked file to S3: ...

Visit the public URL to see the watermarked image.
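As a closing aside (not part of the original article): the JSON-serialization caveat from the queueing section — that symbols and rich objects do not survive the round trip — can be demonstrated with the Ruby standard library alone. The argument values below are illustrative, not from the reference application.

```ruby
require 'json'

# Resque serializes enqueue arguments as JSON, so only JSON-native types
# (strings, numbers, booleans, nil, arrays, hashes) survive the round trip.
args = [:watermark, { user_id: 42 }]

# Simulate what happens between Resque.enqueue and perform:
restored = JSON.parse(JSON.generate(args))

# The symbol and the symbol hash key both came back as plain strings.
puts restored.inspect
```

This is why the article recommends passing object IDs or plain string keys (like the AWS file token key) rather than symbols or ActiveRecord objects.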
https://devcenter.heroku.com/articles/queuing-ruby-resque
For help getting started with Flutter, view our online documentation. For help on editing plugin code, view the documentation.

example/README.md
Demonstrates how to use the wifi plugin. For help getting started with Flutter, view our online documentation.

Add this to your package's pubspec.yaml file:

dependencies:
  wifi: ^0.1.5

You can install packages from the command line with Flutter:

$ flutter pub get

Alternatively, your editor might support flutter pub get. Check the docs for your editor to learn more. Now in your Dart code, you can use:

import 'package:wifi/wifi.dart';

We analyzed this package on Jul 15, 2019, and provided a score, details, and suggestions below. Analysis was completed with status completed using:

Detected platforms: Flutter
References Flutter, and has no conflicting libraries.

Document public APIs. (-1 points) 19 out of 19 API elements have no dartdoc comment. Providing good documentation for libraries, classes, functions, and other API elements improves code readability and helps developers find and use your API.

Format lib/wifi.dart. Run flutter format to format lib/wifi.dart.

The package description is too short. (-20 points) Add more detail to the description field of pubspec.yaml. Use 60 to 180 characters to describe the package, what it does, and its target use case.
https://pub.dev/packages/wifi
This module bundles everything that you might need in order to implement a Froody service. To start with, we provide an API definition in Time::API. We have to provide an XML description of the publicly facing methods for our service. In this case, our API methods are:

froody.demo.hostname
froody.demo.localtime
froody.demo.uptime

We implement all the methods in the froody.demo namespace, as defined in Time::API. As per the documentation in Froody::Quickstart, you can see that for simple values, we can just return a scalar, which will be magically placed inside the top-level node of our response. More complex structures require returning a HASHREF populated with the secondary elements and attributes of the top-level XML node. After we've loaded the implementation, we can start the standalone server. The current implementation of the standalone server will walk @INC to discover all Froody::Implementation subclasses, and register all required implementations. Once the server has started, you can test the functionality of the server by using the froody script to connect to it:

froody -u'' froody.demo.localtime

to get the current time.
http://search.cpan.org/~fotango/Froody-42.034/examples/time/Time.pm
Object Oriented Python – Object Serialization

In the context of data storage, serialization is the process of translating data structures or object state into a format that can be stored (for example, in a file or memory buffer) or transmitted, and reconstructed later. In serialization, an object is transformed into a format that can be stored, so as to be able to deserialize it later and recreate the original object from the serialized format.

Pickle
Pickling is the process whereby a Python object hierarchy is converted into a byte stream (usually not human readable) to be written to a file; this is also known as serialization. Unpickling is the reverse operation, whereby a byte stream is converted back into a working Python object hierarchy. Pickle is operationally the simplest way to store an object. The Python pickle module is an object-oriented way to store objects directly in a special storage format.

What can it do?
- Pickle can store and reproduce dictionaries and lists very easily.
- It stores object attributes and restores them back to the same state.

What can't pickle do?
- It does not save an object's code, only its attribute values.
- It cannot store file handles or connection sockets.

In short, pickling is a way to store and retrieve data variables into and out of files, where variables can be lists, class instances, etc.

To pickle something you must:
- import pickle
- Write a variable to a file, with something like pickle.dump(mystring, outfile, protocol), where the third argument, protocol, is optional.

To unpickle something you must:
- import pickle
- Read a variable from a file, with something like mystring = pickle.load(inputfile)

Methods
The pickle interface provides four different methods.

dump() − The dump() method serializes to an open file (file-like object).
dumps() − Serializes to a string.
load() − Deserializes from an open file-like object.
loads() − Deserializes from a string.

Based on the above procedure, below is an example of “pickling”.
Output

My Cat pussy is White and has 4 legs
Would you like to see her pickled? Here she is!
b'\x80\x03c__main__\nCat\nq\x00)\x81q\x01}q\x02(X\x0e\x00\x00\x00number_of_legsq\x03K\x04X\x05\x00\x00\x00colorq\x04X\x05\x00\x00\x00Whiteq\x05ub.'

So, in the example above, we have created an instance of a Cat class and then pickled it, transforming our “Cat” instance into a simple array of bytes. This way we can easily store the byte array in a binary file or in a database field, and restore it to its original form from our storage at a later time.

Also, if you want to create a file with a pickled object, you can use the dump() method (instead of the dumps() one), passing an opened binary file, and the pickling result will be stored in the file automatically:

[….]
binary_file = open('my_pickled_Pussy.bin', mode='wb')
my_pickled_Pussy = pickle.dump(Pussy, binary_file)

To retrieve the pickled object, use the load() function as in our previous example.

Output

MeOw is black
Pussy is white

JSON
JSON (JavaScript Object Notation), part of the Python standard library, is a lightweight data-interchange format. It is easy for humans to read and write, and easy to parse and generate. Because of its simplicity, JSON is a way by which we store and exchange data, which is accomplished through its JSON syntax, and it is used in many web applications. Its human-readable format may be one of the reasons for using it in data transmission, in addition to its effectiveness when working with APIs. An example of JSON-formatted data is as follows:

{"EmployID": 40203, "Name": "Zack", "Age": 54, "isEmployed": true}

Python makes it simple to work with JSON files. The module used for this purpose is the json module. This module should be included (built-in) within your Python installation.
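The original Cat example code was lost in extraction; the following is a reconstruction-style sketch consistent with the Output shown above (a round trip with dumps() and loads(), not the author's exact code):

```python
import pickle

class Cat:
    def __init__(self, number_of_legs, color):
        self.number_of_legs = number_of_legs
        self.color = color

# Serialize the instance to a byte string, then restore it.
pussy = Cat(4, "White")
data = pickle.dumps(pussy)       # bytes, not human readable
restored = pickle.loads(data)

print(restored.color, restored.number_of_legs)  # White 4
```

Note that unpickling works here because the Cat class definition is available in the current module; pickle stores attribute values, not the class's code.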
The json library parses JSON into a dictionary or list in Python. In order to do that, we use the loads() function (load from a string), as follows −

Output

Below is one sample JSON file, data1.json:

{"menu": {
  "id": "file",
  "value": "File",
  "popup": {
    "menuitem": [
      {"value": "New", "onclick": "CreateNewDoc()"},
      {"value": "Open", "onclick": "OpenDoc()"},
      {"value": "Close", "onclick": "CloseDoc()"}
    ]
  }
}}

The above content (data1.json) looks like a conventional dictionary. We could use pickle to store this file, but its output is not in human readable form. JSON (JavaScript Object Notation) is a very simple format, and that is one of the reasons for its popularity. Now let's look into the JSON output through the program below.

Output

Above, we open the JSON file (data1.json) for reading, obtain the file handler and pass it on to json.load, getting back the object. When we try to print the output of the object, it is the same as the JSON file. Although the type of the object is dictionary, it comes out as a Python object. Writing to JSON is as simple as it was with pickle. Above, we load the JSON file, add another key-value pair, and write it back to the same JSON file. Now if we look at data1.json, it looks different, i.e. not in the same format as we saw previously. To make our output look the same (human readable format), add a couple of arguments to the last line of the program:

>json.dump(conf, fh, indent = 4, separators = (',', ': '))

Similarly to pickle, we can print the string with dumps and load with loads. Below is an example of that.

YAML

YAML may be the most human friendly data serialization standard for all programming languages. The Python yaml module is installed via the pyaml package (you import it as yaml).

YAML is an alternative to JSON −

Human readable code − YAML is the most human readable format, so much so that even its front-page content is displayed in YAML to make this point.

Compact code − In YAML we use whitespace indentation to denote structure, not brackets.
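The load/dump programs this section refers to were lost in extraction; the sketch below shows the same round trip on a small in-memory example (the variable name conf follows the snippet above):

```python
import json

# Parse a JSON string into a Python object (a dict here).
conf = json.loads('{"menu": {"id": "file", "value": "File"}}')
print(type(conf))  # <class 'dict'>

# Add another key-value pair, then serialize back to JSON text
# with indent/separators for a human readable layout.
conf["menu"]["language"] = "English"
text = json.dumps(conf, indent=4, separators=(',', ': '))
print(text)
```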
Syntax for relational data − for internal references we use anchors (&) and aliases (*).

One of the areas where YAML is used widely is for viewing/editing of data structures − for example configuration files, dumping during debugging, and document headers.

Installing YAML

As yaml is not a built-in module, we need to install it manually. The easiest way to install yaml on a Windows machine is through pip. Run the command below in your terminal to install yaml:

pip install pyaml (Windows machine)
sudo pip install pyaml (*nix and Mac)

On running the above command, the screen will display something like the following, depending on the current latest version:

Collecting pyaml
Using cached pyaml-17.12.1-py2.py3-none-any.whl
Collecting PyYAML (from pyaml)
Using cached PyYAML-3.12.tar.gz
Installing collected packages: PyYAML, pyaml
Running setup.py install for PyYAML ... done
Successfully installed PyYAML-3.12 pyaml-17.12.1

To test it, go to the Python shell and import the yaml module: import yaml. If no error is found, we can say the installation was successful.

After installing pyaml, let's look at the code below, script_yaml1.py.

Above we created three different data structures: a dictionary, a list and a tuple. On each of the structures we call yaml.dump. The important point is how the output is displayed on the screen.

Output

The dictionary output looks clean, i.e. key: value, with white space to separate different objects. A list is notated with a dash (-). A tuple is indicated first with !!python/tuple and then in the same format as lists.

So let's say I have one yaml file, which contains:

--- # An employee record
name: Raagvendra Joshi
job: Developer
skill: Oracle
employed: True
foods:
  - Apple
  - Orange
  - Strawberry
  - Mango
languages:
  Oracle: Elite
  power_builder: Elite
  Full Stack Developer: Lame
education: |
  4 GCSEs
  3 A-Levels
  MCA in something called com

Now let's write code to load this yaml file through the yaml.load function. Below is the code for the same.
As the output doesn't look very readable, we prettify it by using json at the end. Compare the output we got with the actual yaml file we have.

Output

One of the most important aspects of software development is debugging. In this section we'll see different ways of Python debugging, either with the built-in debugger or with third party debuggers.

PDB – The Python Debugger

The module pdb supports setting breakpoints. A breakpoint is an intentional pause of the program, where you can get more information about the program's state. To set a breakpoint, insert the line

pdb.set_trace()

Example pdb_example1.py

import pdb
x = 9
y = 7
pdb.set_trace()
total = x + y
pdb.set_trace()

We have inserted a few breakpoints in this program. The program will pause at each breakpoint (pdb.set_trace()). To view a variable's contents, simply type the variable name.

c:\Python\Python361>Python pdb_example1.py
> c:\Python\Python361\pdb_example1.py(8)<module>()
-> total = x + y
(Pdb) x
9
(Pdb) y
7
(Pdb) total
*** NameError: name 'total' is not defined
(Pdb)

Press c or continue to go on with the program's execution until the next breakpoint.

(Pdb) c
--Return--
> c:\Python\Python361\pdb_example1.py(8)<module>()->None
-> total = x + y
(Pdb) total
16

Eventually, you will need to debug much bigger programs – programs that use subroutines. And sometimes, the problem that you're trying to find will lie inside a subroutine. Consider the following program.

import pdb

def squar(x, y):
    out_squared = x^2 + y^2
    return out_squared

if __name__ == "__main__":
    pdb.set_trace()
    print (squar(4, 5))

Now on running the above program,

c:\Python\Python361>Python pdb_example2.py
> c:\Python\Python361\pdb_example2.py(10)<module>()
-> print (squar(4, 5))
(Pdb)

We can use ? to get help; the arrow indicates the line that's about to be executed. At this point it's helpful to hit s to step into that line.

(Pdb) s
--Call--
> c:\Python\Python361\pdb_example2.py(3)squar()
-> def squar(x, y):

This is a call to a function.
If you want an overview of where you are in your code, try l −

(Pdb) l
  1     import pdb
  2
  3     def squar(x, y):
  4  ->     out_squared = x^2 + y^2
  5
  6         return out_squared
  7
  8     if __name__ == "__main__":
  9         pdb.set_trace()
 10         print (squar(4, 5))
[EOF]
(Pdb)

You can hit n to advance to the next line. At this point you are inside the squar() function, and you have access to the variables declared inside it, i.e. x and y.

(Pdb) x
4
(Pdb) y
5
(Pdb) x^2
6
(Pdb) y^2
7
(Pdb) x**2
16
(Pdb) y**2
25
(Pdb)

So we can see the ^ operator (bitwise XOR) is not what we wanted; instead we need the ** operator to compute squares. This way we can debug our program inside functions/methods.

Logging

The logging module has been a part of Python's standard library since Python version 2.3. As it's a built-in module, all Python modules can participate in logging, so that our application log can include your own messages integrated with messages from third party modules. It provides a lot of flexibility and functionality.

Benefits of Logging

- Diagnostic logging − it records events related to the application's operation.
- Audit logging − it records events for business analysis.

Messages are written and logged at levels of "severity" −

- DEBUG (debug()) − diagnostic messages for development.
- INFO (info()) − standard "progress" messages.
- WARNING (warning()) − detected a non-serious issue.
- ERROR (error()) − encountered an error, possibly serious.
- CRITICAL (critical()) − usually a fatal error (program stops).

Let's look at the simple program below.

import logging

logging.basicConfig(level=logging.INFO)

logging.debug('this message will be ignored')  # This will not print
logging.info('This should be logged')          # It'll print
logging.warning('And this, too')               # It'll print

Above we are logging messages by severity level. First we import the module, call basicConfig and set the logging level. The level we set above is INFO. Then we have three different statements: a debug statement, an info statement and a warning statement.
Output of logging1.py

>INFO:root:This should be logged
WARNING:root:And this, too

Because the configured level is INFO, which is higher than DEBUG, the debug message is not shown. To get the debug statement in the output too, all we need to change is the basicConfig level:

>logging.basicConfig(level = logging.DEBUG)

And in the output we can see,

>DEBUG:root:this message will be ignored
INFO:root:This should be logged
WARNING:root:And this, too

The default behavior, if we don't set any logging level, is WARNING. Just comment out the second line of the program above and run the code.

#logging.basicConfig(level = logging.DEBUG)

Output

>WARNING:root:And this, too

Python's built-in logging levels are actually integers.

>>> import logging
>>>
>>> logging.DEBUG
10
>>> logging.CRITICAL
50
>>> logging.WARNING
30
>>> logging.INFO
20
>>> logging.ERROR
40
>>>

We can also save the log messages to a file.

logging.basicConfig(level = logging.DEBUG, filename = 'logging.log')

Now all log messages will go to the file (logging.log) in your current working directory instead of the screen. This is a much better approach, as it lets us do post-analysis of the messages we got.

We can also set a date stamp with our log message.

logging.basicConfig(level=logging.DEBUG, format = '%(asctime)s %(levelname)s:%(message)s')

The output will look something like,

>2018-03-08 19:30:00,066 DEBUG:this message will be ignored
2018-03-08 19:30:00,176 INFO:This should be logged
2018-03-08 19:30:00,201 WARNING:And this, too
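Since the named levels are plain integers, their ordering can be checked directly; a small sketch:

```python
import logging

# Each named level is just an integer; higher means more severe.
print(logging.DEBUG, logging.INFO, logging.WARNING,
      logging.ERROR, logging.CRITICAL)  # 10 20 30 40 50

# A record is emitted only when its level is >= the configured level,
# which is why a configured level of INFO suppresses DEBUG messages.
assert logging.DEBUG < logging.INFO < logging.WARNING \
       < logging.ERROR < logging.CRITICAL
```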
Benchmarking

Benchmarking, or profiling, is basically testing how fast your code executes and where the bottlenecks are. The main reason to do this is optimization.

timeit

Python comes with a built-in module called timeit. You can use it to time small code snippets. The timeit module uses platform-specific time functions so that you get the most accurate timings possible.

It allows us to compare the time taken by two pieces of code and then optimize the scripts for better performance.

The timeit module has a command line interface, but it can also be imported. There are two ways to call it. Let's use a script first; run the code below and see the output.

import timeit

print('by index: ', timeit.timeit(stmt="mydict['c']",
      setup="mydict = {'a':5, 'b':10, 'c':15}", number=1000000))
print('by get: ', timeit.timeit(stmt='mydict.get("c")',
      setup='mydict = {"a":5, "b":10, "c":15}', number=1000000))

Output

>by index:  0.1809192126703489
by get:  0.6088525265034692

Above we use two different methods, i.e. subscript access and get(), to access the dictionary key's value. We execute the statement 1 million times, as a single execution would be too fast to measure for such small data. Now we can see that index access is much faster compared to get(). We can run the code multiple times, and there will be slight variation in the execution time, to get a better understanding.

Another way is to run the above test on the command line. Let's do it:

c:\Python\Python361>Python -m timeit -n 1000000 -s "mydict = {'a': 5, 'b':10, 'c':15}" "mydict['c']"
1000000 loops, best of 3: 0.187 usec per loop

c:\Python\Python361>Python -m timeit -n 1000000 -s "mydict = {'a': 5, 'b':10, 'c':15}" "mydict.get('c')"
1000000 loops, best of 3: 0.659 usec per loop

The above output may vary based on your system hardware and which applications are currently running on your system.

We can also use the timeit module to time a call to a function, which lets us test multiple statements inside the function.

import timeit

def testme(this_dict, key):
    return this_dict[key]

print(timeit.timeit("testme(mydict, key)",
      setup="from __main__ import testme; mydict = {'a':9, 'b':18, 'c':27}; key = 'c'",
      number=1000000))

Output

>0.7713474590139164
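For steadier numbers than a single timeit.timeit call, timeit.repeat runs the measurement several times so you can take the minimum; a small sketch (the dictionary mirrors the example above, with a reduced number to keep it quick):

```python
import timeit

# Repeat the measurement 5 times, 100000 executions each; the
# minimum is the least disturbed by other load on the machine.
times = timeit.repeat(stmt="mydict['c']",
                      setup="mydict = {'a': 5, 'b': 10, 'c': 15}",
                      repeat=5, number=100000)
print('best of 5:', min(times))
```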
https://scanftree.com/tutorial/python/object-oriented-python/object-oriented-python-object-serialization/
CC-MAIN-2022-40
refinedweb
2,670
65.83
With more than two decades of history, the Linux kernel is one of the biggest and fastest developing open source projects, with about 53,600 files and around 20 million lines of code. To understand the story of Linux better and to learn about future open source technologies, Ankita K.S. from OSFY interacted with Kaiwan N. Billimoria, proprietor of kaiwanTECH and author of the book, ‘Hands-On System Programming with Linux’. Q What technologies are Indian developers focusing on? Many developers are now diving into what is considered by pundits as the ‘hot and exciting’ technologies — machine learning (ML), artificial intelligence (AI), VR, and possibly, blockchain and industrial IoT, would top this list. However, in all the excitement over the ‘latest and greatest’, it should not be forgotten that certain fundamental ‘old’ technologies continue to be the ones critical for business and academia. To deviate a bit, I find that many freshers nowadays focus on Web technologies, Java, along with ML and AI to some extent. While this is good, especially for young talent, it is also important not to lose sight of the fundamentals of how computer and software engineering works at the ‘bare metal’ level, as it were. Focusing on infrastructure technologies, operating systems (OS), low-level and systems programming (and even the hardware, to some extent) is very important to ensure a strong career in technology. Q Is industry welcoming open source with open arms? Yes, of course. Open source is now no longer seen as something very new and radical. Industry acceptance is growing rapidly. Having said that, there are also players who delay and adopt a ‘let’s wait and watch’ approach, possibly analysing the risks. But most companies have already realised that open source is now a mature and key part of the software ecosystem. So adopting it, or perhaps gently phasing it in, is something that must occur. 
Q Tell us about your journey with open source I used to work on UNIX (as a developer) back in the early 90s. When I struck out on my own, a pain point (besides the usual suspects) was that I thought my days of enjoying the super powerful UNIX operating system were gone — obviously I couldn’t afford to purchase a UNIX system. Then, around 1997, I discovered this upstart new OS called Linux, and that I could purchase it for just the cost of the media it was on — floppy disks or CD-ROMs! I’ve been hooked to Linux since then. Being a corporate trainer, I still teach, work on projects and write books. So, I quickly ramped up, teaching myself not just systems programming but also delving deep into the internals of the Linux kernel. This had me slowly teaching my hard-won skills — which led to me learning a lot more and doing so at a faster pace. I realised that a quick way to find out how much one knows about something is by trying to teach it. By 2007, I was able to contribute a couple of device drivers to the mainline kernel, and last year, helped with a script aimed at security. I remain very happy with the ability to contribute — it’s one of the best things about open source. Q What is new with respect to Linux today? Everything and nothing. Linux constantly evolves as an OS. There are no ‘road maps’ or rigid rules when it comes to Linux, to define what happens next. This, perhaps, is one of the reasons why it outpaces all others. To give a concrete example, while virtualisation is really nothing new, Linux’s take on very lightweight virtualisation (which is container technology, of course) is a relatively new and rapidly evolving space. Technically, at the level of the OS, containers are nothing but a marriage, and a brilliant one, of two Linux kernel technologies that have existed for many years – namespaces and control groups. The bringing together of cool tech to create something even better is a great example of how Linux just evolves. 
Q What makes Linux technology better than various others in the market?

It comes down to how the open source model, in general, and the Linux kernel in particular, is developed. The kernel community is not inherently and even helplessly biased towards one or the other organisation's goals or to any particular chipset or software design. It is a true meritocracy where good ideas and, more importantly, good code, can automatically move ahead and take their place in the mainline kernel. Not just that, the reality is that several Linux 'distros' out there serve as test beds for their enterprise cousins. Interested and motivated users test features and report bugs, which are then addressed. How many organisations can boast of a quality assurance team of thousands of people? Recall Linus's Law, which is 'given enough eyeballs, all bugs are shallow' (Eric S. Raymond said that).

Q Can you briefly talk about coding for security?

Today, the industry is highly concerned with security. What many do not realise, perhaps, is that security issues are, at heart, nothing but bugs! Linus has himself written about this on the LKML, the Linux Kernel Mailing List (November 17, 2017): "As a security person, you need to repeat this mantra: 'security problems are just bugs', and you need to internalise it, instead of scoffing at it." Nevertheless, learning to write not just code, but secure code, is a critical and required skill today.

Q Do you see India as a contributor or consumer of open source? And why?

India is both, of course. Really, the question isn't 'why', but more about how competitive products and services cannot afford to stay away. Another thing is, it's not just about India, or any nation; disruptive technology like Linux and the waves of AI that are to come are not localised, not for long at least — all these are very much global disruptors.
To take a page from Yuval Noah Harari’s recent book, ‘21 Lessons for the 21st Century’, we today belong to one global civilisation. Having said that, our country has been slow to contribute in the initial years to open source, but that has been slowly changing over the last few years. Most of the kernel contributors are employees of companies. Also, a lot of it is to do with passion at a personal level — the individual must want to give back, just want to ‘scratch an itch’. Doing this takes passion, and a willingness to put in a real effort at the level of the code. Q Any other points that you would like to add? A point I’d like to make is one that’s often swept under the proverbial rug — engineers should enjoy and do with their career what they set out to, not to mention the four to seven years spent studying. Unfortunately, in our country, there tends to be an unspoken negative bias towards that geeky engineer who is still cutting code after working for several years, and who is perceived as not having ‘evolved’ to managerial duties. This certainly is not true of all organisations but is for many. Hopefully, this attitude will change quickly.
https://opensourceforu.com/2019/04/there-are-no-road-maps-or-rigid-rules-when-it-comes-to-linux/?shared=email&msg=fail
model_point¶

- model_point()[source]¶

Target model points

Returns as a DataFrame the model points to be in the scope of calculation.

By default, this Cells returns the entire model_point_table without change. To select model points, change this formula so that this Cells returns a DataFrame that contains only the selected model points.

Examples

To select only the model point 1:

def model_point():
    return model_point_table.loc[1:1]

To select model points whose ages at entry are 40 or greater:

def model_point():
    return model_point_table[model_point_table["age_at_entry"] >= 40]

Note that the shape of the returned DataFrame must be the same as the original DataFrame, i.e. model_point_table. When selecting only one model point, make sure the returned object is a DataFrame, not a Series, as seen in the example above where model_point_table.loc[1:1] is specified instead of model_point_table.loc[1]. Be careful not to accidentally change the original table.
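The DataFrame-versus-Series distinction can be verified with plain pandas; the table below is a stand-in for model_point_table:

```python
import pandas as pd

# A stand-in for model_point_table, indexed by model point ID.
model_point_table = pd.DataFrame(
    {"age_at_entry": [38, 42, 51]}, index=[1, 2, 3])

# loc[1:1] keeps the 2-D shape: a one-row DataFrame.
one_row_frame = model_point_table.loc[1:1]
# loc[1] drops a dimension and returns a Series instead.
one_row_series = model_point_table.loc[1]

print(type(one_row_frame).__name__)   # DataFrame
print(type(one_row_series).__name__)  # Series
```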
https://lifelib.io/libraries/generated/basiclife.BasicTerm_ME.Projection.model_point.html
Joined: 9/12/2019
Last visit: 4/21/2020
Posts: 4
Rating: (0)

Hi all,

Do you have an Arduino code sample for getting the date/time from the RTC?

------------------------------------------------------------------------------------------
Split from RTC in IOT2040. New subject after splitting

Joined: 2/10/2017
Last visit: 11/20/2020
Posts: 48

Dear K.Chawana,

I would recommend the example from this website. I just removed the line that sets the date & time, because setting the date & time from a PuTTY command is much easier. Per my test, it works well.

/////////////////////////////////////////////////////////////////////////////////////////////////
char buf[9];

void setup() {
  Serial.begin(115200);
}

void loop() {
  system("date '+%H:%M:%S' > /home/root/time.txt");
  FILE *fp;
  fp = fopen("/home/root/time.txt", "r");
  fgets(buf, 9, fp);
  fclose(fp);
  Serial.print("The current time is ");
  Serial.println(buf);
  delay(1000);
}

From the above example, to get more information such as day, month and year, please refer to the % meanings below. Just don't forget to change xx in char buf[xx] and fgets(buf, xx, fp) to increase the buffer size depending on your maximum string length.

%a Abbreviated weekday name
%A Full weekday name
%b Abbreviated month name
%B Full month name
%c Date and time representation for your locale
%d Day of month as decimal number (01-31)
%U Week of year as decimal number, Sunday as first day of week (00-53)
%w Weekday as decimal number (0-6; Sunday is 0)
%W Week of year as decimal number, Monday as first day of week (00-53)
%% Percent sign

Thanks, but I am a beginner at using it, and the date/time is not correct. How do I set the date and time? Please advise.

- I recommend setting the date/time via PuTTY instead of using an Arduino sketch. To set the time manually, please see this post. Be aware to set the time in the UTC zone (minus 7 hours from Thailand time). Or you can choose to sync the time automatically when the IOT connects to the internet. Please see the method in this post.
Joined: 5/2/2019
Last visit: 2/17/2020
Posts: 13

Hey chawana, try this code:

#include <iostream>
#include <string>   // for std::string
#include <cstdlib>  // for system()

std::string date = "date 091613362019"; // e.g. 16.09.2019 13:36 (syntax MMDDhhmmYYYY)
std::string init = "hwclock --systohc --utc";

system(date.c_str());
system(init.c_str());

1 thankful Users
https://support.industry.siemens.com/tf/ww/en/posts/arduino-code-sample-to-get-datetime-from-rtc/220285/?page=0&pageSize=10
This one time, at band camp, Jacek Laskowski said: JL>Aaron Mulder wrote: JL> JL>> My sense is that none of the tools is a "perfect fit" out of the JL>> box, so it's just a matter of working through the necessary JL>> customizations. I think I'm quite close to being satisfied with XMLBeans JL>> -- I suspect I just need to straighten out our Geronimo namespace and then JL>> wait for the XMLBeans project to put something in the Maven repository. JL>> But just to state my requirements, here's what I can think up off JL>> the top of my head: JL> JL>Hi, JL> JL>Since Geronimo "is bringing together leading members of the Castor, JL>JBoss, MX4J and OpenEJB communities" I couldn't believe we couldn't JL>start with Castor as the best fit (even if it's not already). If JL>there're something missing in the tool I'd bet it's going the highest JL>priority to fix it as the tool's members are also Geronimo ones, and who JL>else would support it better. XMLBeans is incubating and the jar's size JL>is equals to the one of Castor, so unless there're features in XMLBeans JL>that outweighs the ones found in Castor I wouldn't even bother to take a JL>look at XMLBeans. Although, I remember I had some problems with XSD of JL>standard DDs, but did manage to work them out very quickly. JL> JL>So, hey, Castor's members, wouldn't you mind to comment on the JL>requirements Aaron was kind to write down? Otherwise, I'll have to JL>figure out if Castor does what Aaron is expecting from the tool (and it JL>will certainly take some time). Jacek, Some very good points, Jacek. Just today both Arnaud and Keith from the Castor Project (committers and architects for Castor XML) have joined the geronimo-dev list. Each of them told me that Castor XML supports the full XML Schema as well as nearly all of the requirements set forth by Aaron. If there are any additional features we'd like to have in Castor XML that they are extremely willing to add those features to Castor XML in a high priority manner. 
JL>AM>the tool must write DDs with namespace indicators (reading and then JL>AM>rewriting a DD should not lose information in the document element or JL>AM>header) The requirement listed above by Aaron has already been discussed amongst Arnaud, Keith and I and we're talking about some options. Let's discuss anything other questions people may have about Castor XML so that Arnaud and Keith can jump in and lend a hand in answering them and helping out with any issues that may arise. Bruce -- perl -e 'print unpack("u30","<0G)U8V4\@4VYY9&5R\"F9E<G)E=\$\!F<FEI+F-O;0\`\`");' The Castor Project Apache Geronimo
http://mail-archives.apache.org/mod_mbox/geronimo-dev/200310.mbox/%3CPine.BSF.4.58.0310101825040.16765@io.frii.com%3E
Team, I am trying to write a script for a field behaviour in JIRA. As part of it, I am trying to fetch the Summary (system field) value from a JIRA ticket. But as a result, the value in the log shows a null value for Summary. I tried using the below, but no luck.

-- FormField ExtractType = getFieldById("summary")
-- FormField ExtractType = getFieldByName("summary")

Can someone help me to show how to get the value of the summary field? Thanks.

It's: getFieldById("summary") or getFieldByName("Summary"). Once you have the field, you call getFormValue() on it to get the actual summary. On which field have you attached this script?

Summary is the name of the system summary field. Do you have a custom field called Summary?

No, we don't have any Summary custom field. We are using the Summary system field only. We even tried this with other system fields but are getting the same error - null value.

> On which field have you attached this script?

Please answer my question.

Hi Moin, or Jamie - did anything ever come of this question? I believe I am having the same problem; my code is as so:

import com.atlassian.jira.component.ComponentAccessor

def summary = getFieldById("summary")
summary.setHelpText("Summary is: " + getFieldByName("Summary").getFormValue())

I have tried a range of syntax:

getFieldByName("Summary").getFormValue()
getFieldById("summary").getFormValue()
getFieldByName("Summary").getValue()
getFieldById("summary").getValue()

Unfortunately in all cases my help text simply states the summary is null, even when editing a ticket which has a summary set. Any help would be greatly appreciated.
https://community.atlassian.com/t5/Marketplace-Apps-questions/how-to-get-values-of-a-summary-field/qaq-p/341246
# $NetBSD: Makefile,v 1.126 2001/08/02 06:13:33 enami.
#
# Sub targets of `make build,' in order:
#   buildstartmsg: displays the start time of the build.
#   beforeinstall: creates the distribution directories.
#   do-force-domestic: checks that FORCE_DOMESTIC isn't set (deprecated.)
#   do-share-mk: installs /usr/share/mk files.
#   do-cleandir: cleans the tree.
#   do-make-obj: creates object directories if required.
#   do-check-egcs: checks that we have a modern enough compiler (deprecated.)
#   do-make-includes: installs include files.
#   do-lib-csu: builds & installs startup object files.
#   do-lib: builds & installs system libraries.
#   do-gnu-lib: builds & installs gnu system libraries.
#   do-dependall: builds & installs the entire system.
#   do-domestic: builds & installs the domestic tree (deprecated.)
#   do-whatisdb: builds & installs the `whatis.db' man database.
#   buildendmsg: displays the end time of the build.

buildstartmsg:
	@echo -n "Build started at: "
	@date

buildendmsg:
	@echo -n "Build finished at: "
	@date

build:
	@${MAKE} ${_M} buildstartmsg
	@${MAKE} ${_M} beforeinstall
	@${MAKE} ${_M} do-force-domestic
	@${MAKE} ${_M} do-share-mk
	@${MAKE} ${_M} do-cleandir
	@${MAKE} ${_M} do-make-obj
	@${MAKE} ${_M} do-check-egcs
	@${MAKE} ${_M} do-make-includes
	@${MAKE} ${_M} do-lib-csu
	@${MAKE} ${_M} do-lib
	@${MAKE} ${_M} do-gnu-lib
	@${MAKE} ${_M} do-dependall
	@${MAKE} ${_M} do-domestic
	@${MAKE} ${_M} do-whatisdb
	@${MAKE} ${_M} buildendmsg
.endif

do-force-domestic:
.if defined(FORCE_DOMESTIC)
	@echo '*** CAPUTE!'
	@echo '    The FORCE_DOMESTIC flag is not compatible with "make build".'
	@echo '    Please correct the problem and try again.'
@false .endif do-share-mk: .if ${MKSHARE} != "no" (cd ${.CURDIR}/share/mk && ${MAKE} install) .endif do-cleandir: .if !defined(UPDATE) && !defined(NOCLEANDIR) ${MAKE} ${_J} ${_M} cleandir .endif do-make-obj: .if ${MKOBJDIRS} != "no" ${MAKE} ${_J} ${_M} obj .endif do-check-egcs: do-make-includes: .if !defined(NOINCLUDES) ${MAKE} ${_M} includes .endif do-lib-csu: (cd ${.CURDIR}/lib/csu && \ ${MAKE} ${_M} ${_J} MKSHARE=no dependall && \ ${MAKE} ${_M} MKSHARE=no install) do-lib: (cd ${.CURDIR}/lib && \ ${MAKE} ${_M} ${_J} MKSHARE=no dependall && \ ${MAKE} ${_M} MKSHARE=no install) do-gnu-lib: (cd ${.CURDIR}/gnu/lib && \ ${MAKE} ${_M} ${_J} MKSHARE=no dependall && \ ${MAKE} ${_M} MKSHARE=no install) do-dependall: ${MAKE} ${_M} ${_J} dependall && ${MAKE} ${_M} _BUILD= install do-domestic: .if defined(DOMESTIC) && !defined(EXPORTABLE_SYSTEM) (cd ${.CURDIR}/${DOMESTIC} && ${MAKE} ${_M} ${_J} _SLAVE_BUILD= build) .endif do-whatisdb: ${MAKE} ${_M} whatis.db release snapshot: build (cd ${.CURDIR}/etc && ${MAKE} ${_M} INSTALL_DONE=1 release) .include <bsd.subdir.mk>
http://cvsweb.netbsd.org/bsdweb.cgi/src/Makefile?rev=1.126&content-type=text/x-cvsweb-markup&only_with_tag=MAIN
En Wed, 27 Feb 2008 16:52:57 -0200, mrstephengross <mrstevegross at gmail.com> escribi�: >> class Foo: >> foo = Foo() >> >> You have to live with that. Just do >> Outer.foo = Outer.Parent() >> after your class-statement to achieve the same result. > > Hmmm. Well, I see why that works. It's too bad, though. If I want to > keep all executed code safely within a "if __name__ == '__main__'" > block, it ends up a bit ugly. Then again, I guess this is just an > aspect of python I'll have to get used to. Is there a specific reason > it works this way, by chance? class statements (and def, and almost everything in Python) are *executable* statements, not declarations. When you import a module, it is executed. If it contains a class statement, it is executed as follows: create an empty namespace (a dict), execute the class body in it, create a new class object with __dict__ = that namespace, and finally, bind the class name to the newly created class object in the module namespace. Until that last step, you can't refer to the class being created by name. I don't get your issue with "if __name__==__main__", but I hope that you now understand a bit better why a late initialization is required. (Of course someone could come up with a metaclass to perform that late initialization, but the important thing is to understand that you cannot create an instance before the class itself exists) -- Gabriel Genellina
https://mail.python.org/pipermail/python-list/2008-February/492696.html
Opened 5 years ago
Closed 5 years ago
Last modified 3 years ago

#10122 closed (wontfix)

setting locales for a single HTTP context

Description

Currently there's no good explanation on how to set the locale temporarily, e.g. when rendering one template in a different language for use in emails. I recently needed this as my users trigger email notifications for other users. These should be translated into the receiving users' languages. The code I used was:

from django.utils.translation import check_for_language, activate, deactivate, to_locale, get_language, ugettext as _
from django.http import HttpRequest
from django.template.loader import render_to_string
from django.template import RequestContext
from django.contrib.sessions.middleware import SessionMiddleware

# 1. Get the current language: (if any)
cur_locale = to_locale(get_language())

# 2. create a request (& optionally add session middleware)
request = HttpRequest()
SessionMiddleware().process_request(request)

# 3. activate the user's language: this loads the translation files
if check_for_language(target.user.language):
    activate(target.user.language)
locale = to_locale(get_language())

# 4. set the language code in the request (& optionally session)
request.LANGUAGE_CODE = locale
request.session['django_language'] = locale

# 5. translate any string here (passed into the template):
msg_dict = {
    'title': _('This is the title and will be translated'),
    'body': _('Message body')
}

# 6. render template with correct language:
rendered = render_to_string("some_template.html", msg_dict, RequestContext(request))

# 7.
reset back to the old locale:
deactivate()

I don't know if there is any easier way to do this; if so let me know ;) if not, this could be included in the docs.

Attachments (0)

Change History (8)

comment:1 Changed 5 years ago by mathijs
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset

comment:2 Changed 5 years ago by mathijs
- milestone set to post-1.0

comment:3 Changed 5 years ago by mathijs
- Needs documentation set

comment:4 Changed 5 years ago by anonymous
- milestone post-1.0 deleted
Milestone post-1.0 deleted

comment:5 Changed 5 years ago by jacob
- milestone set to 1.1
- Triage Stage changed from Unreviewed to Accepted

comment:6 Changed 5 years ago by jacob
- Owner changed from nobody to jacob
- Status changed from new to assigned

comment:7 Changed 5 years ago by jacob
- Resolution set to wontfix
- Status changed from assigned to closed

This is a pretty specific need, so it's not of interest to most doc readers. I'd suggest posting this on djangosnippets.org; that's a better place for it.

comment:8 Changed 3 years ago by jacob
- milestone 1.1 deleted
Milestone 1.1 deleted

I forgot to mention that target.user is my UserProfile class and the language property is thus the language preference for that user.
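For reference, the activate/deactivate bracket in steps 3-7 is naturally expressed as a context manager (modern Django ships one as django.utils.translation.override). A dependency-free sketch of the same pattern, with simple stand-ins for Django's activate/get_language:

```python
from contextlib import contextmanager

# Stand-ins for django.utils.translation.activate / get_language,
# so the pattern can be shown without a Django install.
_current = ["en"]

def activate(lang):
    _current[0] = lang

def get_language():
    return _current[0]

@contextmanager
def override(lang):
    """Switch language for one block, restoring the previous one afterwards."""
    old = get_language()
    activate(lang)
    try:
        yield
    finally:
        activate(old)

with override("sq"):
    print(get_language())  # → sq
print(get_language())      # → en
```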
https://code.djangoproject.com/ticket/10122
This page describes how to delete an Anthos clusters on VMware (GKE on-prem) admin cluster. Before you begin Before you delete an admin cluster, complete the following steps: - Delete its user clusters. See Deleting a user cluster. - Delete any workloads that use PodDisruptionBudgets (PDBs) from the admin cluster. - Delete all external objects, such as PersistentVolumes, from the admin cluster. Set a KUBECONFIG environment variable pointing to the kubeconfig of the admin cluster that you want to delete: export KUBECONFIG=[ADMIN_CLUSTER_KUBECONFIG] where [ADMIN_CLUSTER_KUBECONFIG] is the path of the admin cluster's kubeconfig file. Deleting logging and monitoring Anthos clusters on VMware's logging and monitoring Pods, deployed from StatefulSets, use PDBs that can prevent nodes from draining properly. To properly delete an admin cluster, you need to delete these Pods. To delete logging and monitoring Pods, run the following commands: kubectl delete monitoring --all -n kube-system kubectl delete stackdriver --all -n kube-system Deleting monitoring cleans up the PersistentVolumes (PVs) associated with StatefulSets, but the PersistentVolume for Stackdriver needs to be deleted separately. Deletion of the Stackdriver PV is optional. If you choose not to delete the PV, record the location and name of the associated PV in an external location outside of the user cluster. Deletion of the PV will get propagated through deleting the Persistent Volume Claim (PVC).
To find the Stackdriver PVC, run the following command: kubectl get pvc -n kube-system To delete the PVC, run the following command: kubectl delete pvc -n kube-system [PVC_NAME] Verifying logging & monitoring are removed To verify that logging and monitoring have been removed, run the following commands: kubectl get pvc -n kube-system kubectl get statefulsets -n kube-system Cleaning up an admin cluster's F5 partition Deleting the gke-system namespace from the admin cluster ensures proper cleanup of the F5 partition, allowing you to reuse the partition for another admin cluster. To delete the gke-system namespace, run the following command: kubectl delete ns gke-system Then delete any remaining Services of type LoadBalancer. To list all Services, run the following command: kubectl get services --all-namespaces For each Service of type LoadBalancer, delete it by running the following command: kubectl delete service [SERVICE_NAME] -n [SERVICE_NAMESPACE] Then, from the F5 BIG-IP console: - In the top-right corner of the console, switch to the partition to clean up. - Select Local Traffic > Virtual Servers > Virtual Server List. - In the Virtual Servers menu, remove all the virtual IPs. - Select Pools, then delete all the pools. - Select Nodes, then delete all the nodes. Verifying F5 partition is clean CLI Check that the VIP is down by running the following command: ping -c 1 -W 1 [F5_LOAD_BALANCER_IP]; echo $? which will return 1 if the VIP is down. F5 UI To check that the partition has been cleaned up from the F5 user interface, perform the following steps: - From the upper-right corner, click the Partition drop-down menu. Select your admin cluster's partition. - From the left-hand Main menu, select Local Traffic > Network Map. There should be nothing listed below the Local Traffic Network Map. - From Local Traffic > Virtual Servers, select Nodes, then select Nodes List. There should be nothing listed here as well. 
If there are any entries remaining, delete them manually from the UI. Powering off admin node machines First, run this command to get the names of the machines, before you power them off. kubectl get machines -o wide The output lists the names of the machines. You can now find them in the vSphere UI. To delete the admin control plane node machines, you need to power off each of the remaining admin VMs in your vSphere resource pool. vSphere UI Perform the following steps: - From the vSphere menu, select the VM from the vSphere resource pool. - From the top of the VM menu, click Actions. - Select Power > Power Off. It may take a few minutes for the VM to power off. Deleting admin node machines After the VM has powered off, you can delete the VM. vSphere UI Perform the following steps: - From the vSphere menu, select the VM from the vSphere resource pool. - From the top of the VM menu, click Actions. - Click Delete from Disk. Deleting the data disk After you have deleted the VMs, you can delete the data disk. vSphere UI Perform the following steps: - From the vSphere menu, select the data disk from the datastore. - From the middle of the datastore menu, click Delete. After you have finished After you have finished deleting the admin cluster, delete its kubeconfig.
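The "delete each Service of type LoadBalancer" step above is easy to script; a sketch, with canned output standing in for a live `kubectl get services --all-namespaces` call (check flag names against your kubectl version):

```shell
# Find every Service of type LoadBalancer as "namespace name" pairs,
# ready to feed into `kubectl delete service`. The here-string stands
# in for real kubectl output in this sketch.
sample='NAMESPACE     NAME       TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)
default       web-lb     LoadBalancer   10.0.0.1     10.1.1.1      80/TCP
kube-system   kube-dns   ClusterIP      10.0.0.10    <none>        53/UDP'

echo "$sample" | awk '$3 == "LoadBalancer" { print $1, $2 }'
# → default web-lb

# Against a real cluster the same filter would drive the deletes:
# kubectl get services --all-namespaces --no-headers \
#   | awk '$3 == "LoadBalancer" { print $1, $2 }' \
#   | while read -r ns name; do kubectl delete service "$name" -n "$ns"; done
```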
https://cloud.google.com/anthos/clusters/docs/on-prem/1.8/how-to/deleting-an-admin-cluster
/*
 * .
 */
/* "@(#)zip_lolist.c: 2.0, 1.12; 2/10/93; Copyright 1988-89, Apple Computer, Inc." */

/*
 * Title:    zip_locallist.c
 *
 * Facility: AppleTalk Zone Information Protocol Library Interface
 *
 * History:
 */

#include <stdio.h>
#include <unistd.h>
#include <mach/boolean.h>
#include <fcntl.h>
#include <sys/errno.h>
#include <sys/param.h>
#include <sys/socket.h>
#include <sys/ioctl.h>
#include <sys/sockio.h>
#include <net/if.h>

#include "at_proto.h"

#define SET_ERRNO(e) errno = e

/* zip_getlocalzones() will return the zone count on success, and -1 on failure. */
int
zip_getlocalzones(
    char *ifName,   /* If ifName is a null pointer (ZIP_DEF_INTERFACE) the
                       default interface will be used. */
    int *context,   /* *context should be set to ZIP_FIRST_ZONE for the first
                       call. The returned value may be used in the next call,
                       unless it is equal to ZIP_NO_MORE_ZONES. */
    u_char *zones,  /* Pointer to the beginning of the "zones" buffer. Zone
                       data returned will be a sequence of at_nvestr_t
                       Pascal-style strings, as it comes back from the
                       ZIP_GETLOCALZONES request sent over ATP */
    int size        /* Length of the "zones" buffer; must be at least
                       (ATP_DATA_SIZE+1) bytes in length. */
)
{
    SET_ERRNO(ENXIO);
    return (-1);
}
http://opensource.apple.com/source/AppleTalk/AppleTalk-95/zip_lolist.c
#include <channel.h>

List of all members.

Scan for the next key in the input.
Check if the next key in the input is the given key.
Write this key to the other side of the connection or the file.
Write this numeric key to the other side of the connection or the file.
Send the finished data to the other side.
Read data from this file.
Write the next data to this file.
Finish writing to the file or close the socket.
Poll for new data, with the current time.
Scan this channel for input.
Pointer to the current key.
Connection class with read/write/close functions.
Don't change connection type on a running channel.
Indication that the channel is finished; only needed for the communicate class.
Use finish() to indicate it is finished.
Function to call when new data is ready to read.
Type of the connection (1=client_socket, 2=file, 3=server_socket, 4=import).
Lock till correct credentials are found.
http://moros.sourceforge.net/doxygen/classchannel.html
Hi, I am using a webservice that other applications already use. When I create the datastore, I use a SOAP webservice datastore, providing the user, password, proxy URL, proxy user and proxy password. It creates the datastore alright, but when I try to import the metadata I get the error in the attached file. Could anybody let me know how I could solve this issue? Thanks in advance.

Check Kbase 2322321; it may be related to the same namespace and multiple schema locations in the XML. Also, are you using DS 4.2 SP5 or above? Check the schema location somewhere on E:\ - does it exist on the designer box?
https://answers.sap.com/questions/72304/getting-error-bodi-1111469.html
Anonymous: The anonymous, unknown client using a service.
Author: The author of a document or service.
Client Role: Roles taken on by clients.
Component: Part of a policy statement dealing with its rule.
Grammar: A schema.
Group
Identity
May: A policy statement that can be followed or ignored.
Meeting: Class of meetings.
Must: A policy statement that is an absolute requirement.
Negotiated Policy: A policy that is the result of negotiation.
Occupation: Class of jobs.
Optional
Policy: A metadata policy concerning a document.
Policy Authority: The role with the final word about policies for a service.
Policy Statement: Part of a policy dealing with one resource.
Priority: A set of priorities.
Recommended
Requester: The role making a request.
Required
Role: An abstraction over identities to detail what part an identity plays.
Server Role: Roles taken on by a service.
Shall
Should: A policy statement that is strongly requested, but a service which has other overriding requirements may ignore it.
Abbreviated As: A helpful, brief name for a grammar; unnecessary, but more friendly looking than a generated name.
Attended: Predicate indicating a user was present at a meeting.
Authored By: Who authored a policy.
Contributed By: Who contributed to a policy.
Employed As: Predicate indicating a user's occupation.
For Metadata Of: Which resource's metadata a policy governs.
For Policy: Which policy a policy statement relates to.
For Resource: Relating which resource(s) a policy statement is about.
Has Component: Connecting statement to component.
Intended Behavior: The priority the author gives to a policy statement when it comes to negotiating policies.
Has Role: Relating roles to identities.
Has Username: Predicate indicating a user's username.
Has Warning: Flags policy statements as not having any nicely resolvable conflicts with somebody else's statements.
Hidden To: A resource is hidden to a role.
Member Of
Policy Name: Name of a policy.
This is a hack.
Because DAML lists don't sit well with cwm yet, list processing of a component list to nicely generate rules with multiple predicates can't function. So instead of generating rule subjects, just list the rule subject as the object of this. Sad.
Rule Subject
Supports: Allow the service to specify which namespace of grammars it understands.
Visibility: Group the visibilities into one class.
Visible To: A resource is visible to a role.
Warn Author: Which policy author to flag with a warning.
With Predicate: Connecting a component to whatever rule component it describes.
With Range: Connecting a component to whatever rule component's range it describes.
With Visibility: Relating a statement to whether it is hiding or revealing.

Ryan Lee
2001-12-20
en
The World Wide Web Consortium (W3C) Personal Data Access Logic (PEDAL) Schema
http://www.w3.org/2002/01/pedal/pedal
pagure-2-openshift is a webhook proxy service that allows triggering an Openshift build using Pagure's webhook feature. The service acts as a bridge between Pagure and Openshift: it receives Pagure's webhook POST requests, sent on every commit made to the git repository, and sends a POST request in the format expected by Openshift to trigger a new build.

pagure-2-openshift uses the following configuration keys:

PAGURE_SECRET: Dictionary with the key being the Pagure project name and the value being the private key provided by Pagure. This key is used to authenticate the POST requests received by the service.
OPENSHIFT_SECRET: Secret used to trigger the generic webhook of Openshift.
OPENSHIFT_CLUSTER: Hostname of the Openshift cluster used to build an application.
OPENSHIFT_PROJECT: Name of the project (namespace) in Openshift.
OPENSHIFT_APP: Name of the application to build with Openshift.
JENKINS_URL: URL address of the Jenkins instance to use to trigger builds.
JENKINS_JOB_COMMIT: Name of the Jenkins job to trigger for every commit.
JENKINS_JOB_PR: Name of the Jenkins job to trigger when a new PR is opened.
JENKINS_SECRET: Secret token used by Jenkins to authenticate the build requests.
LOG_DEBUG: Set to True to turn on DEBUG traces.

In the project's settings, generate a private web-hook key; this is your PAGURE_SECRET. Then in the project options add the following url to the Activate Web-hooks input box. Finally, in the hooks, activate the Fedmsg hook.

To install the application, use the provided requirements.txt and pip in a virtual environment.

$ python3 -m venv .p2o
$ source .p2o/bin/activate
(.p2o) $ pip install -r requirements.txt

To run the application, use the gunicorn web server inside the virtual environment.

(.p2o) $ gunicorn --worker 1 --bind 0.0.0.0:5000 wsgi:application

Change the number of workers if you need to scale the service.
To deploy a pagure-2-openshift service on Openshift, use the provided openshift.json; this will create everything required except for the configMap that will hold the configuration items. Before importing the openshift.json file, create a configMap using the provided configmap.yml and update the configuration values.
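For reference, the PAGURE_SECRET check described above amounts to comparing an HMAC of the raw request body against a signature header. A dependency-free sketch; the digest (SHA1) and header name (X-Pagure-Signature) are assumptions to verify against your Pagure version:

```python
import hashlib
import hmac

def verify_pagure_signature(payload: bytes, secret: bytes, signature_hex: str) -> bool:
    """Compare the hex HMAC-SHA1 of the raw request body against the
    signature header, in constant time."""
    expected = hmac.new(secret, payload, hashlib.sha1).hexdigest()
    return hmac.compare_digest(expected, signature_hex)

body = b'{"msg": {"branch": "master"}}'
key = b"project-private-key"  # illustrative value, not a real secret
good = hmac.new(key, body, hashlib.sha1).hexdigest()
print(verify_pagure_signature(body, key, good))        # → True
print(verify_pagure_signature(body, key, "deadbeef"))  # → False
```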
https://pagure.io/pagure-2-openshift/tree
Debouncing is used for optimizing the performance of a web app. It is done by limiting the rate of execution of a particular function (also known as rate limiting). We will learn about debouncing by implementing it on an input box with an onChange event. When we type something inside this input box, on every keystroke an API call will be made. If we go to the network tab in Chrome and type results in the input box, we'll see that for every letter we type, a network/API call is being made. This is not good from a performance point of view, because if the user types, let's say, 50 or 100 characters in the input box, then 50-100 API calls will be made. We will fix it with the help of debouncing.

Method 1: Implementing from scratch

Let's make a function debounce. It will return us another function (an optimized function). The logic behind this function is that the data is fetched from the API only when the time between two keypress events is greater than 500 milliseconds; only then will the handleChange function be called. We use apply to fix our context. If the delay between two keystroke events is less than 500 ms, we clear the setTimeout (the timer variable). We will use this optimized function (returned from the debounce function) instead of directly calling the handleChange method.

const debounce = (func) => {
  let timer;
  return function (...args) {
    const context = this;
    if (timer) clearTimeout(timer);
    timer = setTimeout(() => {
      timer = null;
      func.apply(context, args);
    }, 500);
  };
};

const handleChange = (value) => {
  fetch(`${value}`)
    .then((res) => res.json())
    .then((json) => setSuggestions(json.data.items));
};

Now we can go to the network tab in Chrome and type results. The call is made only once or twice by the time we have fully typed results inside the input box. Our debounce will be returning us a new function on every rendering.
That we do not want, so we will use the useCallback hook. It will provide us the memoized callback.

const optimizedFn = useCallback(debounce(handleChange), []);

Below is the full code for implementing debouncing from scratch.

import React, { useCallback, useState } from "react";

const DebounceScratch = () => {
  const [suggestions, setSuggestions] = useState("");

  const debounce = (func) => {
    let timer;
    return function (...args) {
      const context = this;
      if (timer) clearTimeout(timer);
      timer = setTimeout(() => {
        timer = null;
        func.apply(context, args);
      }, 500);
    };
  };

  const handleChange = (value) => {
    fetch(`${value}`)
      .then((res) => res.json())
      .then((json) => setSuggestions(json.data.items));
  };

  const optimizedFn = useCallback(debounce(handleChange), []);

  return (
    <>
      <h2 style={{ textAlign: "center" }}>Debouncing in React JS</h2>
      <input
        type="text"
        className="search"
        placeholder="Enter something here..."
        onChange={(e) => optimizedFn(e.target.value)}
      />
      {suggestions.length > 0 && (
        <div className="autocomplete">
          {suggestions.map((el, i) => (
            <div key={i}>
              <span>{el.name}</span>
            </div>
          ))}
        </div>
      )}
    </>
  );
};

export default DebounceScratch;

Method 2: Using lodash

Another way to implement debouncing is using lodash. Lodash provides a debounce method that we can use to limit the rate of execution of the handleChange function. Just wrap the callback function with the debounce method and provide the amount of delay we want between two events. Now if you go back to Chrome and check, it works the same.

import { debounce } from "lodash";

const handleChangeWithLib = debounce((value) => {
  fetch(`${value}`)
    .then((res) => res.json())
    .then((json) => setSuggestions(json.data.items));
}, 500);

Method 3: Using react-debounce-input

There is one more npm package called react-debounce-input that we can use. It is the simplest way compared to the previous two methods. Just use DebounceInput provided by the react-debounce-input library instead of using the normal input tag.
And provide the delay as an attribute. Now if we go back to Chrome and check again, calls are still made only when we have fully typed results.

<DebounceInput
  minLength={2}
  className="search"
  placeholder="Enter something here..."
  debounceTimeout={500}
  onChange={(e) => handleChange(e.target.value)}
/>

So these are the 3 different methods of implementing debouncing in React. I hope some of you will find these as useful as they have been for me. Check out this GitHub repository for the source code: source code link

Video explanation:
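As a final check, the timing behavior of the scratch debounce can be exercised outside React; a minimal Node sketch (shorter delay, an array instead of fetch):

```javascript
const debounce = (func, delay = 50) => {
  let timer;
  return function (...args) {
    const context = this;
    if (timer) clearTimeout(timer);
    timer = setTimeout(() => {
      timer = null;
      func.apply(context, args);
    }, delay);
  };
};

const calls = [];
const record = debounce((v) => calls.push(v));

// Three rapid "keystrokes"...
record("r");
record("re");
record("res");

setTimeout(() => {
  // ...collapse into one trailing call with the latest value.
  console.log(calls); // → [ 'res' ]
}, 120);
```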
https://plainenglish.io/blog/implementing-debouncing-in-react
Opened 7 years ago
Last modified 3 years ago

#15156 new New feature

Ordinal numbers in English and in other locales

Description

Some of the strings in contrib-humanize.po cannot be used correctly for the Albanian locale (sq). I suppose other locales have the same issue with the way English builds ordinal numbers using th, st, nd and rd (as in 4th, 15th, 2nd, etc). To make things even more complicated, there is gender to be considered here. As an example, the word used for "month" and the one used for "week" have different genders in Albanian.

Change History (15)

comment:1 Changed 7 years ago by
comment:2 Changed 7 years ago by
comment:3 Changed 7 years ago by

Marking this DDN. Like claudep, I'm not convinced this is even possible, but i18n isn't my specialty.

comment:4 Changed 6 years ago by
comment:5 Changed 6 years ago by

We might improve things slightly by using pgettext_lazy instead of ugettext_lazy. That would make it possible to distinguish the 'th' for 0 from the 'th' for 5 in this list:

suffixes = (_('th'), _('st'), _('nd'), _('rd'), _('th'), _('th'), _('th'), _('th'), _('th'), _('th'))

Would this be useful? If it isn't, then I think we should just close this as "wontfix". Handling gender in the ordinal filter is another topic; I suggest opening another ticket for this feature if you deem it useful.

comment:6 Changed 5 years ago by

I agree with aausgustin's idea of using pgettext. It's better than nothing.

comment:7 Changed 4 years ago by

Given the severe differences between how different languages "count", wouldn't that in the worst case require the context to actually include the number itself?
I skimmed a little bit through Wikipedia and some pages about the Japanese and Chinese languages; going with at worst 1-10 plus everything above should be enough, except for the following exceptions:

- Swedish: Here every number that ends with a 1 or 2 has to be suffixed with ":a" instead of ":e"
- Russian
- Catalan
- Galician, Italian, Portuguese, and Spanish: The suffix depends on the gender of the noun to be counted

That being said, the list on Wikipedia looks very incomplete, but if the information about very widely used languages like Spanish is correct here, the context would have to include the whole number, which looks rather impractical to me :-/ Since the gender could be indicated by the context, it is not as problematic as the irregular suffix/prefix in languages like Russian and Catalan.

comment:8 Changed 4 years ago by

I'm not surprised at all by your findings. I maintain my original opinion: the rules are too diverse to be able to solve this by mere translation.

comment:9 Changed 4 years ago by

Yeah, the fix proposed here is "better than nothing, fixes it for a few languages". It's by no means a definitive solution.

comment:10 Changed 4 years ago by
comment:11 Changed 4 years ago by

Proposal: Allow defining a def ordinal(value): function in conf/locale/<lang>/formats.py. We could then generate the ordinal string by calling get_format('ordinal')(value).

comment:12 Changed 4 years ago by

Yes, I was thinking along the same line. formats.py is probably the best place for this too. At first I thought that maybe localflavor would be better, but since that is based around countries and not languages, formats.py sounds to me like the ideal place. I will try to come up with an implementation in the next week or so. Sorry for the delay on this.

comment:13 Changed 4 years ago by

One issue I see there so far is how to deal with setups where USE_L10N is disabled.
Should in this case a fallback implementation just be used from locales.en.formats, or should this be integrated in the global_settings module? IMO the first one makes more sense, since this would be the one and only callable in the whole settings module right now, which would break the style there.

Something else I'd like to get in there is having the system still be based around messages, in order to allow users to get their custom "context" (referencing the noun the ordinal is intended for) into the message. This way languages like Spanish that don't have a gender-neutral ordinal indicator could detect the necessary gender from something like {{ num|ordinal:"email" }}. I don't know if the ordinal would always reference the same noun in a sentence independent of the language, but it should get around at least some of the edge cases.

@Besnik Would this help in your situation?

comment:14 Changed 4 years ago by

Sorry, but I didn't find and probably won't find the time this ticket requires. I've therefore deassigned it.

The ordinal template tag of contrib.humanize is not fully localizable in several languages, indeed. But I think this is an unresolvable issue. There are just too many different cases to consider, as the algorithm may be different for each language. I suggest looking at the code and implementing it for your own specific needs (languages). This reminds me about a similar discussion in the GNOME Nautilus File Manager (see e.g. and).
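As a concrete starting point for the per-locale callable proposed in comment 11, here is an English-only sketch; each locale's formats.py would supply its own rule, and this is exactly where languages with gender or irregular suffixes outgrow the approach:

```python
def ordinal(value):
    """English rule: 1st, 2nd, 3rd, 4th, ... with 11-13 always taking 'th'."""
    n = abs(int(value))
    if n % 100 in (11, 12, 13):  # the teens are always "th"
        suffix = "th"
    else:
        suffix = {1: "st", 2: "nd", 3: "rd"}.get(n % 10, "th")
    return f"{value}{suffix}"

print([ordinal(n) for n in (1, 2, 3, 4, 11, 21, 102, 113)])
# → ['1st', '2nd', '3rd', '4th', '11th', '21st', '102nd', '113th']
```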
https://code.djangoproject.com/ticket/15156
I've inherited a moderate size network that I'm trying to bring some sanity to. Basically, it's 8 public class Cs and a slew of private ranges all on one VLAN (vlan1, of course). Most of the network is located throughout dark sites. I need to start separating some of the network. I've changed the ports from the main Cisco switch (3560) to the Cisco router (3825) and the other remote switches to trunking with dot1q encapsulation.

I'd like to start moving a few select subnets to different VLANs. To get some of the different services provided on our address space (and to separate customers) onto different VLANs, do I need to create a subinterface on the router for each VLAN and, if so, how do I get the switch port to work on a specific VLAN? Keep in mind, these are dark sites and getting console access is difficult if not impossible at the moment. I was planning on creating a subinterface on the router for each VLAN, then setting the ports with services I want to move to a different VLAN to allow only that VLAN. Example of vlan3:

3825:

interface GigabitEthernet0/1.3
 description Vlan-3
 encapsulation dot1Q 3
 ip address 192.168.0.81 255.255.255.240

the connection between the switch and router:

interface GigabitEthernet0/48
 description Core-router
 switchport trunk encapsulation dot1q
 switchport mode trunk

show interfaces gi0/48 switchport
Name: Gi0/48
Unknown unicast blocked: disabled
Unknown multicast blocked: disabled
Appliance trust: none

So, if the boxen hanging off of gi0/18 on the 3560 are on an unmanaged layer 2 switch, all within the 192.168.0.82-95 range, and are using 192.168.0.81 as their gateway, what is left to do, especially to gi0/18, to get this working on vlan3? Are there any recommendations for a better setup without taking everything offline?

Pardon, in your cut and pasted configs, you appear to be describing Gi0/48 - your uplink to your router - but in your question refer specifically to hosts connected to Gi0/18.
I'm going to assume you're describing two different ports here. Further, I'm assuming from details in your config statements and question that vlan 3 is being used for the 192.168.0.80/28 traffic. I'm going to assume that the VLAN has already been declared on your 3560. (Check sh vlan.)

First of all, your port Gi0/18 should be configured for access mode on vlan 3. Likely, something like this:

interface GigabitEthernet 0/18
 switchport access vlan 3
 switchport mode access

As for other recommendations: will all or most of your traffic from your IP subnets be to and from the internet? Basically, if you have enough traffic between subnets, it may suit you to have the 3560 act as your internal router and then dedicate your 3825 to be your border router.

The problem is that if your router is bearing the entire load for all routing, then a packet from one subnet will arrive at your switch, then be forwarded via the dot1q trunk to your router on some vlan X; the router then makes a routing decision and sends the same packet back along the dot1q trunk on some new vlan Y, now destined for the destination machine. Btw, I'm simply describing the situation of internal traffic to your customers/organization that crosses your different subnets.

Instead, you can configure the 3560 at the (assuming normal conventions) first address of each vlan/subnet, e.g. 192.168.0.81, and enable ip routing. The next step is to create a new subnet specifically for the link between the router and switch. For convenience, I'd use something completely different; for example, 192.0.2.0/24 is reserved for documentation examples. Configure the router at 192.0.2.1 and the switch at 192.0.2.2. Have the switch use 192.0.2.1 as the default route. Configure the router to reach 192.168.0.0/16 via the switch at 192.0.2.2. If your network is small enough, static routes should be sufficient. No need for OSPF or anything.
It all depends on the nature of your traffic. For reference, cisco lists the Cisco Catalyst 3560G-48TS and Catalyst 3560G-48PS having a 38.7 Mpps forwarding rate and the Cisco 3825 as having 0.35Mpps forwarding rate. Mpps, just in case you don't know, is millions of packets per second. It's not bandwidth, but it's how many 64 byte packet routing decisions the device can make a second. The length of the packet doesn't affect how long it takes to making a routing decision. So the peak performance in bits/bytes will be somewhere in a range. In terms of bandwidth, it means that 350kpps is 180Mbps w/ 64byte packets and 4.2Gbps w/ 1500 byte packets. Mind you, that's in bits per second, so think of it as 18 Megabytes or 420 Megabytes per second in regular file-size terms. In theory, this means that your 3560G can route somewhere between 19.8Gbps and 464Gbps or rougly 2GBps and 45GBps. Actually, looking at those numbers, you most definitely should consider the plan I described above. Dedicate your 3825 to handling, presumably, NAT'd external traffic and let your 3560 handle the rest. I'm sorry this is so long; I'm bored at work waiting for tapes to finish.
http://serverfault.com/questions/216562/vlans-and-subinterfaces
allegations very seriously" and is "continuing to investigate so we can respond to the City."

"Though we can't comment on the specific claims at this early stage, what we are seeing alleged here is completely at odds with the integrity of our team and the commitment they have to taking care of our customers every day," T-Mobile said.

82 Reader Comments

"How dare Liberal democrats admonish T-Mobile for being Innovative! *Drinks from Big Recess Cup*. It is their First Amendment right to lie to consumers!"

Last edited by TechCrazy on Thu Sep 05, 2019 4:18 pm

Is this isolated to a group of regional crooks? Or is it institutionalized throughout?

My most recent two phones are from Google. Shop online. Study. Research. Then buy. Easy financing if you qualify. The phones are unlocked. I go in to my carrier and get a SIM card and install it. Only the bare minimum applications are installed. No crapware / bloatware. If I want any additional apps installed (and I do) then I can find them easily enough in the Play store and install them. But most importantly, I don't have a ton of unwanted non-removable apps eating up my storage.

Last edited by kurkosdr on Fri Sep 06, 2019 12:01 pm

"How dare Liberal democrats admonish T-Mobile for being Innovative! *Drinks from Big Recess Cup*. It is their First Amendment right to lie to consumers!"

It's their First Amendment right to provide alternative facts and innovations to consumers... One must use politically correct language...

Is this isolated to a group of regional crooks? Or is it institutionalized throughout?

Scam artists, greedy managers, a bunch of rogues operating in the interstices between 'MetroPCS' as was and their new corporate overlords, T-Mobile. This is going to be a headache for T-Mobile, who have branded themselves all over the former MetroPCS but are going to have to pay dearly for this. On the other hand, all the main wireless providers are scam merchants and bastards, with the amount they charge customers.
It's just that they have shiny stores and nice corporate logos, very little competition, and effectively no federal regulation. God I miss European telcos.

You have it the wrong way around, my friend. Post-paid in the US is just a way to screw over the middle class with excessive prices and fees. Pre-paid can be *incredibly* cheap if you do your research (I get unlimited voice, 3gb/mo LTE, unlimited texts for $15/mo with Mint for example). I wish more low-income people were aware of the super-cheap deals going on out there - I think there should be a PSA about this, rather than let the big Telcos screw everyone so hard.

I have no idea if it's possible to get the data, but Apple knows every device out there. That's one reason why it's stupid to steal iPhones.

For those not in the know, "re-wrapping" phones is a highly profitable business because a used phone's intrinsic value is so much lower than a new one's (with phones being prone to damage, having their batteries worn out if changed improperly, and being so hard to repair). Of course, these are the very same reasons re-wrapping is so unethical as a practice. PS: is there some vendor-independent way to tell if a phone has been used before? Preferably one that doesn't go away with a firmware or bootloader reflash. If not, I think there should be.

I suspect you'll find these phones are all returns.

Last edited by IllegitimateLobsterParty on Thu Sep 05, 2019 5:08 pm

Is this isolated to a group of regional crooks? Or is it institutionalized throughout?

It must be a regional thing. This didn't happen to me. I know my iPhone X is new. I bought it directly from T-Mobile online rather than in a store. Two more payments and it's mine! And then my bill goes way down.

Not to excuse anything, but this is a legacy problem dating back to before the T-Mobile acquisition.
Most of the original Metro PCS stores were franchises, and located where their target customers lived: in less-affluent urban ("Metro") areas, where retailers routinely prey on a captive market that doesn't have access to banking or consumer credit. We can fault T-Mobile for acquiring a predatory business model and not doing enough to fix it, but it's an old problem. This sort of thing is usually driven by substantial commission, probably heavily loaded toward the extras so that the salespeople get little or nothing on the basic package. Presumably this will emerge at trial. I can't say I've been very happy about T-Mobile's "service" but yeah, the stores here seemed like they wanted to help...they just seemed unable to fix the problems with the back-end that wouldn't let me activate my account completely. When I asked about a USB modem type device the store told me they didn't have any and suggested I try Verizon, which surprised me. It's sad to see them pulling the same tactics under T-Mo's watch. It's sad to see them pulling the same tactics under T-Mo's watch. T-mobile is in for hell by NY's AG and t-mobile has no clean legs to stand on.... People need to go to prison for fraud like this. I do the same, though I use One Plus. You need to verify your device is whitelisted to use Tmobile VoLTE. Only iPhones and Tmobile branded phones can use US Cellular. They have an IMEI whitelist. There isn't that much exclusive USC territory.... Thanks, I wasn’t aware they had their own network, I thought it was Sprint because it was PCS. They absolutely did charge activation fees back then though. I was charged $45 to activate my phone, and another $40 for the first month’s service. The second month I was again charged $85 total, shown on my emailed bill as “$45 new device activation” and the regular monthly fee.
When it happened again I cancelled the service, and when they refused to refund me for the multiple activations and the phone that never worked, I did a chargeback on my card for all of it. European telcos aren't perfect. Last year I bought a Vodaphone chip in Portugal and was assured it would work all over Europe. I specifically asked for that assurance. When I got to Spain a few days later the chip stopped working. I visited a Vodaphone store and was told the chip wouldn't work in Spain and that was the end of the story, my loss, no recourse. 100% agreed. I just upgraded my Sprint phone from an S8 to S10, and because I neglected to call in last August (2018) when my lease was up, I've been paying $31.25 to "lease" the S8. In order to upgrade, I was told I had to pay $187.50 to "own" the S8. After spending an hour on the phone with Sprint in an attempt to confirm that upgrading to the S10 would only increase my monthly bill by $6.25 (S8 was $31.25 a month, S10 is $37.50 a month) and getting 5 different answers from 5 different reps, I finally went through with the upgrade... and was told that it would actually be $85 to "purchase" my old phone. I had asked multiple times whether I would have to send my old phone in as a return if I paid it off (it had a damaged screen) and had been told over and over that I wouldn't have to return it. I get my new phone, and a return kit for the old phone. So I contact Sprint, and I'm told that the payoff price was $187.50 and that it doesn't matter what I was told, the phone is not paid off until I make an additional $102.50 payment. I sent them the paperwork that I had emailed to me stating that it was $85, but they are not budging on this. I have to take time tomorrow to call them and see if I can't get this resolved... which should be an interesting call. I've been with Sprint since 2007 when Nextel was absorbed into the company, before that I had been with Nextel for several years.
So 12 years gets a loyal customer nothing (new deals are for new customers), and I refuse to drop my plan as it's the original unlimited plan (unmetered bandwidth, unlimited minutes, unlimited texts, international plan) and always get the "Thanks for being a long time loyal customer" but then get raked over the coals every time I have an issue. I flat out told them this time around that if it wasn't for the plan and the fact that I get a 25% discount due to where I work, I would happily switch to another carrier. Not that another carrier would be any better. They are all crooks, run by crooks. And their customer service is as cheap as it comes... pure scripted crap, outsourced to cheap labor countries where communication is made difficult by accents and language differences. As for "stores", Sprint stores are no better... upgrading via the website/phone had a special waiving the $30 activation/upgrade fee, but because ALL Sprint stores are "3rd party owned and operated" they don't have to offer ANY specials that Sprint has. Almost every single t-mobile store I've seen in a mall in particular is owned by someone franchising the space and not by t-mobile themselves. It sounds like your store was sketchy; that doesn't mean t-mobile themselves were.
Summary: An obscure bug in Gecko causes list-item markers to be sized differently than the text of the list item, but there is a fix authors can use. Learn how to correctly size list item markers in Gecko 0.9.4, the basis of Netscape 6.2.x and CompuServe 7. Shortly before Mozilla 0.9.4 was finished, a bug was introduced that affects the sizing of list item markers (such as the numbers in an ordered list). While this bug was corrected shortly after 0.9.4 was finished, the bug still affects all Netscape 6.2.x versions, as well as CompuServe 7.0. The Problem In affected browsers, list item markers will very often appear to be too big compared to the text in the list item itself. This is most obvious in ordered lists, where the number that precedes each list item may be obviously different than the content that follows it. In fact, the markers were set to be a uniform size that did not change to match the content of the list items, so in rare cases the marker might actually appear to be smaller. See bug 110360, which explains the problem and shows when the fix was applied. In addition, when a document is displayed in "quirks" mode in Mozilla 0.9.4 and later, the markers of lists will not use the font size of the list item text, but will instead stay the same as the user's default font size. See bug 97351 for more details. The Solution Fortunately, there are Mozilla-specific CSS-like rules that can be used to correct both problems. The following rule is derived from Mozilla's html.css file: *|*:-moz-list-bullet, *|*:-moz-list-number {font-size: 1em;} This rule tells Gecko-based browsers to use the computed value of font-size for the marker's parent, which is the list item itself. Thus the font sizes of the marker and the content will be the same. Since this rule is not valid CSS, it will prevent the validation of any stylesheet that contains it.
One solution is to move the rule into its own stylesheet, and accept that the stylesheet in question will never validate. This might also be a place to put any Explorer-specific CSS-like rules (such as scrollbar styling rules), which also will not validate. Authors who are not concerned with making sure the rule applies across all namespaces can use a slightly more simplified rule: *:-moz-list-bullet, *:-moz-list-number {font-size: 1em;} Recommendations - If it is important to make list item markers match the font size of the content, use one of the suggested rules. - In situations where validation is a priority, segregate the rule into a separate stylesheet so that the rule will not confuse CSS validators run against the main stylesheet. Related Links Original Document Information - Author(s): Eric A. Meyer, Netscape Communications - Last Updated Date: Published 04 Oct 2002; Revised 07 Mar 2003 - Note: This reprinted article was originally part of the DevEdge site.
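The separate-stylesheet approach can look like this in practice (a sketch; the file names are illustrative, not from the original article):

```html
<!-- main.css stays valid and can be run through a CSS validator;
     mozilla-fixes.css holds only the Mozilla-specific rule(s). -->
<link rel="stylesheet" href="main.css">
<link rel="stylesheet" href="mozilla-fixes.css">
```

Here `mozilla-fixes.css` would contain nothing but the `*|*:-moz-list-bullet, *|*:-moz-list-number {font-size: 1em;}` rule, so validators are only ever pointed at `main.css`.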
#define min(a,b) ((a)<(b)?(a):(b))
#define max(a,b) ((a)>(b)?(a):(b))
#define abs(x) ((x)>0?(x):-(x))
#define constrain(amt,low,high) ((amt)<(low)?(low):((amt)>(high)?(high):(amt)))
//#define round(x) ((x)>=0?(long)((x)+0.5):(long)((x)-0.5))
#define radians(deg) ((deg)*DEG_TO_RAD)
#define degrees(rad) ((rad)*RAD_TO_DEG)
#define sq(x) ((x)*(x))

MyNumber=5; max(++MyNumber);

kirbergus: I don't mind a good discussion but please watch your language. Definitely if you make statements which are not solid.

1) Your language: It is not macroses but macros. You do not use WTF. Definitely not in a title.

2) The "must be a function": The C89 standard you refer to is more commonly known as ANSI C. ANSI C was officially released in 1989. I have been programming in C from 1991 till 2000 for my living. In all those years I used a define for max. It is new to me that this must be a function. If you are sure that ANSI C states that max must be a function, please tell me where this is stated. C99 is the newest version of C. The compiler used by the Arduino team is (like most other C compilers) not 100% compliant with C99. As far as I know there is no statement that max must be a function. Please state where it states min, max and abs must be functions. Also refer to where it states all C compilers must comply to C99.

3) No macros but inline: You state that these macros must be rewritten as inline. Inline is new in C99. It comes from C++. Inline is class related and on an AVR you do not want to have classes for something as basic as integers.

4) Why a macro? A macro has a great advantage. That is, it works with any type that implements the methods you use. So you do not need a maxint, maxlong, maxstring.....
5) You are right there is a risk with macros. On the content you are right that

Code: [Select]
MyNumber=5; max(++MyNumber);

will result in MyNumber being 7 and not 6. This is however a design decision that has been taken a long time ago on very good grounds by people that didn't know about Arduino because it simply didn't exist yet. So blaming Arduino is not the way to go. It has been like this for decades now. I feel no sense of urgency.

6) Overloaded functions from C++ std: I see no reason why you could not overload. You can use the #undef instructions if you want.

bar = sq(foo++);
bar = abs(foo());

If they are not macros, how should they be implemented? Templates? Inline-functions?

template<typename T>
const T& max(const T& a, const T& b) {
    return (a > b) ? a : b;
}

class wtf {
    int max(void) { return 1; }
};

void setup(void) {
    wtf w;
    Serial.print(w.max());
}

sketch_dec26a.cpp:6:15: error: macro "max" requires 2 arguments, but only 1 given
sketch_dec26a.cpp:15:22: error: macro "max" requires 2 arguments, but only 1 given
sketch_dec26a:2: error: function definition does not declare parameters

It uses some C++11 features so the -std=gnu++0x flag should be passed to gcc.

P.S. Using it shouldn't have any impact on program size or speed if you are using macros only with single variables.

template <class T>
const T& minT ( const T& a, const T& b ){
    return( (a<b) ? a : b );
}

No round? Did you exclude it on purpose?

//#define round(x) ((x)>=0?(long)((x)+0.5):(long)((x)-0.5))

You allow the parameters to be different types. You're not concerned about the possibility of implicit type-conversions?

template<typename _Tp>
inline const _Tp&
max(const _Tp& __a, const _Tp& __b)
{
    // concept requirements
    __glibcxx_function_requires(_LessThanComparableConcept<_Tp>)
    //return __a < __b ? __b : __a;
    if (__a < __b)
        return __b;
    return __a;
}

Quote: It uses some C++11 features so the -std=gnu++0x flag should be passed to gcc.
I suspect that will not happen.

Quote: P.S.
Using it shouldn't have any impact on program size or speed if you are using macros only with single variables.

In my testing it does. In some cases this...

Code: [Select]
template <class T>
const T& minT ( const T& a, const T& b ){
    return( (a<b) ? a : b );
}

...produces less code than the min macro.

int value = constrain(Serial.readByte(), 0, 5);

#define max(a,b) \
({ typeof (a) _a = (a); \
   typeof (b) _b = (b); \
   _a > _b ? _a : _b; })

#define min(a,b) \
({ typeof (a) _a = (a); \
   typeof (b) _b = (b); \
   _a < _b ? _a : _b; })

#define abs(x) \
({ typeof (x) _x = (x); \
   _x > 0 ? _x : -_x; })
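The double-evaluation risk and the template alternative discussed in this thread can be checked with a small desktop C++ program (an illustrative sketch; `MAX` here mirrors the Arduino macro):

```cpp
#include <cassert>

// Arduino-style macro: each argument is textually substituted,
// so an argument with side effects may be evaluated twice
#define MAX(a,b) ((a)>(b)?(a):(b))

// Template alternative: each argument is evaluated exactly once
template <class T>
const T& maxT(const T& a, const T& b) {
    return (a < b) ? b : a;
}

// MAX(++n, 3) expands to ((++n)>(3)?(++n):(3)):
// the condition increments n to 6, then the true branch increments it again
int evalWithMacro() {
    int n = 5;
    int r = MAX(++n, 3);
    assert(r == 7);
    return n;  // 7: incremented twice
}

// maxT(++n, 3) evaluates ++n once before the call
int evalWithTemplate() {
    int n = 5;
    int r = maxT(++n, 3);
    assert(r == 6);
    return n;  // 6: incremented once
}
```

This is exactly the behavior the thread describes: the macro silently double-increments, while the template version has plain function-call semantics.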
The Sandbox: HD Photo Command Line Encoder (4 posts)
This forum has been made read only by the site admins. No new threads or comments can be added.

I created a very basic command line encoder to convert images to the HD Photo format to do some basic tests. I did it because I don't have Photoshop so I can't use the new beta HD Photo plugin. Vista's new Photo Gallery is able to read these .wdp files, but I don't think it's possible to view them on XP without someone whipping up a WPF image viewer or by using Photoshop, so I recommend you use the Windows Photo Gallery app in Vista to view the output files. The app uses the new managed classes for Windows Imaging Component that are part of WPF (in the System.Windows.Media.Imaging namespace), so you'll need the .NET Framework 3.0 installed before you can use it. I included the full source code as well as a compiled EXE, so feel free to mess around with the code. The code is commented a bit (mostly function explanations), but this was thrown together in a couple of hours (including reading through the docs) so don't expect any miracles. Error handling and input checking is very basic as well, so doing silly things and breaking the app isn't hard. I also do not support the advanced codec options, just the basic quality level. The app saves files into the same folder as the input file(s) with the same filename, except they have a .wdp extension (Vista still doesn't recognize the .hdp extension). The app should be runnable on XP, but don't expect to be able to view the files on XP unless you have Photoshop and the HD Photo beta plugin installed. Here are some examples of the usage:

HdpEncoder "C:\Users\Stebet\Pictures\test.png" -q 1.0 - Saves the picture as a lossless .wdp file.
HdpEncoder "C:\Users\Stebet\Pictures\test.png" - Saves the picture using a quality setting of 0.9 (default).
HdpEncoder "C:\Users\Stebet\Pictures\test.png" -q 0.5 - Saves the picture using a quality of 0.5. The quality can range from 0.0 to 1.0, with 1.0 being lossless and 0.0 being horrid quality.
HdpEncoder "C:\Users\Stebet\Pictures\SomePictureFolder" -q 0.8 - Saves all the images in a folder as .wdp's with a quality setting of 0.8.
HdpEncoder "C:\Users\Stebet\Pictures\test.png" -s 102400 - Saves the picture as a 100 kilobyte .wdp file, using binary search to find the optimal quality for that file size with a 2% file size error margin: brute force, by encoding again and again and looking at the resulting file size. This is good if you want to compare .wdp to .jpg for quick comparison purposes. Do note though that it is not using any of the advanced codec options, so the comparison isn't very accurate except for quick and dirty testing.

As input you should be able to use .bmp, .jpg, .png, .gif, .tiff and .wdp at least (can't remember what other file formats are supported with the WIC by default). That's all for now, enjoy the code and feel free to mess around with it and experiment.

Ugh, I just noticed that the file didn't get uploaded and editing the post doesn't seem to work. So grab the file here!

Someone whipped up an HDP image viewer

Hey Stebet, you wouldn't be able to update the link to the HDP encoder, would you? It has gone stale, but I could really use the encoder you wrote. Thanks for anything you can put up!
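The `-s` size-targeting option described above is just a binary search over the quality value. A rough language-neutral sketch of the idea in Python (illustrative only; `encoded_size` stands in for "encode the image at quality q and measure the resulting file"):

```python
def find_quality(encoded_size, target, tolerance=0.02, max_iter=50):
    """Binary-search the quality in [0.0, 1.0] whose encoded file size
    lands within `tolerance` (e.g. 2%) of `target` bytes.

    `encoded_size(q)` must return the byte size produced at quality q
    and be non-decreasing in q (higher quality -> larger file).
    """
    lo, hi = 0.0, 1.0
    q = 0.5
    for _ in range(max_iter):
        size = encoded_size(q)
        if abs(size - target) <= tolerance * target:
            return q          # close enough to the requested size
        if size > target:
            hi = q            # too big: lower the quality
        else:
            lo = q            # too small: raise the quality
        q = (lo + hi) / 2
    return q

# Stand-in "encoder" for demonstration: file size grows linearly with quality.
def fake_encoder(q):
    return int(10_000 + q * 190_000)
```

Because the search halves the interval each round, it converges in a handful of re-encodes, which matches the "encode again and again and look at the file size" behavior the post describes.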
Thanks, Brent

On Thu, 19 Jul 2007, Brent A Nelson wrote:

Patch 322 seems to have fixed the stray ls errors, but not the cp -a complaints. A "cp -a" strace is attached.

Thanks, Brent

On Wed, 18 Jul 2007, Brent A Nelson wrote:

Aha, it looks like GlusterFS is giving odd/varying error responses to queries for ACL information (I assume it should be giving an "operation not supported" error). This must be related to my previously reported problem copying from GlusterFS to GlusterFS where it was complaining about preserving ACLs for every file copied. See attached strace.

Thanks, Brent

PS At least in this simple case where glusterfs is directly mounting a storage/posix, NFS reexport works fine. I haven't had a chance to test a full setup with recent GlusterFS tlas, but I will once the ACL glitch is squashed.

On Wed, 18 Jul 2007, Anand Avati wrote:

Brent, very interesting diagnosis! is it possible for you to re-create the 'posixonly' setup (no server/client) and again do 'strace ls -ial /beast'? we are not able to reproduce this error at our setup.

thanks
avati

2007/7/17, Brent A Nelson <address@hidden>:

Just a quick note that this doesn't seem to be any sort of corruption issue. I completely emptied all my shares (even removing lost+found) and my namespace and rsynced the corresponding AFR shares and namespace. The only thing different between the AFRs would be ctimes. I restarted everything, and did:

ls -al /beast
ls: /beast: File exists
ls: /beast/.: File exists
total 8
drwxr-xr-x 2 root root 4096 2007-07-17 09:27 .
drwxr-xr-x 27 root root 4096 2007-07-02 10:18 ..

I also tried disabling readahead and writebehind (my only performance translators). It didn't help. Changing the unify from alu to rr also didn't help. I then tried "glusterfs -f /etc/glusterfs/beast -n mirror0 /beast" to mount a single AFR, no unify. It STILL produces the same messages. I then tried "glusterfs -f /etc/glusterfs/beast -n share0-0 /beast" to mount a simple, single share used as half of an AFR.
Same issue. I then stripped down a server to serve out one single storage/posix share, with no posix locks (I wasn't using any other translators on the server side, apart from protocol/server, of course). I mounted that share as in the previous attempt. No difference! So, this issue occurs even with just protocol/client, protocol/server, and storage/posix in use. As barebones as you can get. Almost. One more try. No glusterfsd, and glusterfs accesses a single storage/posix directly:

ls -al /beast
ls: /beast: File exists
ls: /beast/.: File exists
total 8
drwxr-xr-x 2 root root 4096 2007-07-17 09:27 .
drwxr-xr-x 27 root root 4096 2007-07-02 10:18 ..

No difference, even with just glusterfs directly accessing a single, local storage/posix, with no other translators. Spec is simply:

volume share0
  type storage/posix        # POSIX FS translator
  option directory /share0  # Export this directory
end-volume

Ubuntu Feisty, Fuse 2.6.3. Any ideas?

Thanks, Brent

On Sat, 14 Jul 2007, Brent A Nelson wrote:
> It's the same spec I was using previously (AFRed namespace cache, unified
> AFRs spread across four servers, posix-locks, readahead, and writebehind).
> It's not just the top-level directory; it's everywhere.
>
> Thanks,

--
Anand V. Avati
We can overload constructors, just as we overloaded methods. The parameter lists must be different. This can mean that they take a different number of parameters and/or that the types are different. In the above code we have two constructors. One takes no parameters, and one takes a String object, which is the name to use. Here we have added another constructor that takes the name and an array of grades. public class Student { //////////// fields ////////////////// private String name; private double[] gradeArray; //////////// constructors /////////// public Student() {} public Student(String theName) { this.name = theName; } public Student(String theName, double[] theGradeArray) { this.name = theName; this.gradeArray = theGradeArray; } /////////// methods /////////////// public String toString() { return "Student object named: " + this.name; } } To use this constructor we need to pass in a name as a String object and an array of grades of the type double.
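A small driver (not from the book; class and variable names here are illustrative) shows which constructor each call selects:

```java
public class StudentDemo {
    // Mirror of the Student class from the text
    static class Student {
        private String name;
        private double[] gradeArray;

        public Student() {}

        public Student(String theName) {
            this.name = theName;
        }

        public Student(String theName, double[] theGradeArray) {
            this.name = theName;
            this.gradeArray = theGradeArray;
        }

        public String toString() {
            return "Student object named: " + this.name;
        }
    }

    // Helper used in the examples below
    static String describe(String name) {
        return new Student(name).toString();
    }

    public static void main(String[] args) {
        Student anonymous = new Student();              // no-arg constructor
        Student named = new Student("Ada");             // String constructor
        double[] grades = {90.0, 85.5};
        Student graded = new Student("Ada", grades);    // String + array constructor
        System.out.println(named);                      // Student object named: Ada
    }
}
```

The compiler picks the constructor whose parameter list matches the argument types at the call site, which is exactly how overloaded methods are resolved.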
Flask Cookies

Cookies
- Cookies are stored in the form of text files on the client's machine. They are used to track the user's activities on the web.
- In Flask, cookies are associated with the Request object as a dictionary object of all the cookie variables transmitted by the client.

Set Cookie

The set_cookie() method is called on the response object; the response object can be formed by using the make_response() method in the view function.

response.set_cookie(key, value, expires)

It can also be used to specify the expiry time, path, and the domain name of the website.

Read Cookie

Additionally, we can read the cookies stored on the client's machine using the get() method of the cookies attribute associated with the Request object.

request.cookies.get(key)

Sample code

from flask import *

app = Flask(__name__)

@app.route('/cookie')
def cookie():
    res = make_response("<h1>cookie is set</h1>")
    res.set_cookie('foo', 'bar')
    return res

if __name__ == '__main__':
    app.run(debug=True)

Output: the page shows "cookie is set", and we can track the cookie details in the content settings of the browser.

Login application in Flask
- We will create a login application in Flask where a login page (login.html) is shown to the user, which prompts them to enter an email and password.
- If the password is "jtp", the application will redirect the user to the success page (success.html), where a message and a link to the profile (profile.html) are given; otherwise it will redirect the user to the error page.
Sample code

login.py

from flask import *

app = Flask(__name__)

@app.route('/error')
def error():
    return "<p><strong>Enter correct password</strong></p>"

@app.route('/')
def login():
    return render_template("login.html")

@app.route('/success', methods=['POST'])
def success():
    if request.method == "POST":
        email = request.form['email']
        password = request.form['pass']
        if password == "jtp":
            resp = make_response(render_template('success.html'))
            resp.set_cookie('email', email)
            return resp
        return redirect(url_for('error'))

if __name__ == '__main__':
    app.run(debug=True)

Output
- Run the python script by using the command python login.py and visit localhost:5000/ on the browser.
- Click Submit. It will display the "Login Successful" message and provide a link to the profile.html.
- Click on view profile. It will read the cookie set as a response from the browser and show the message.
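Under the hood, set_cookie() just adds a Set-Cookie header to the HTTP response, and request.cookies parses the Cookie header the browser sends back. The standard library's http.cookies module shows the same mechanics without running a server (illustrative only, not part of the tutorial's code):

```python
from http.cookies import SimpleCookie

# Roughly what res.set_cookie('foo', 'bar') emits as a response header value
outgoing = SimpleCookie()
outgoing['foo'] = 'bar'
outgoing['foo']['path'] = '/'
header_value = outgoing['foo'].OutputString()

# Roughly what request.cookies.get('foo') does with the incoming Cookie header
incoming = SimpleCookie('foo=bar')
value = incoming['foo'].value
```

Inspecting header_value shows the familiar "foo=bar; Path=/" form that appears in the browser's developer tools after the /cookie view runs.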
A periodic function has at least as many real zeros as its lowest frequency Fourier component. In more detail, the Sturm-Hurwitz theorem says that

f(x) = \sum_{k=n}^{N} (a_k \sin kx + b_k \cos kx)

has at least 2n zeros in the interval [0, 2π) if a_n and b_n are not both zero. You could take N to be infinity if you'd like.

Note that the lowest frequency term can be written as

a_n \sin nx + b_n \cos nx = c \sin(nx + φ)

for some amplitude c and phase φ as explained here. This function clearly has 2n zeros in each period. The remarkable thing about the Sturm-Hurwitz theorem is that adding higher frequency components can increase the number of zeros, but it cannot decrease the number of zeros.

To illustrate this theorem, we'll look at a couple random trigonometric polynomials with n = 5 and N = 9 and see how many zeros they have. Theory says they should have at least 10 zeros.

The first has 16 zeros:

And the second has 12 zeros:

(It's difficult to see just how many zeros there are in the plots above, but if we zoom in by limiting the vertical axis we can see the zeros more easily. For example, we can see that the second plot does not have a zero between 4 and 5; it almost reaches up to the x-axis but doesn't quite make it.)

Here's the code that made these plots.

import matplotlib.pyplot as plt
import numpy as np

n = 5
N = 9
np.random.seed(20210114)

for p in range(2):
    a = np.random.random(size=N+1)
    b = np.random.random(size=N+1)
    x = np.linspace(0, 2*np.pi, 200)
    y = np.zeros_like(x)
    for k in range(n, N+1):
        y += a[k]*np.sin(k*x) + b[k]*np.cos(k*x)
    plt.plot(x, y)
    plt.grid()
    plt.savefig(f"sturm{p}.png")
    plt.ylim([-0.1, 0.1])
    plt.savefig(f"sturm{p}zoom.png")
    plt.close()
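The zero counts in the plots can also be checked numerically by counting sign changes over one period; a sketch (not part of the original post):

```python
import numpy as np

def count_zeros(a, b, n, N, samples=20000):
    """Count sign changes of f(x) = sum_{k=n}^{N} (a_k sin kx + b_k cos kx)
    over one period, as a proxy for the number of real zeros."""
    x = np.linspace(0, 2*np.pi, samples, endpoint=False)
    y = np.zeros_like(x)
    for k in range(n, N+1):
        y += a[k]*np.sin(k*x) + b[k]*np.cos(k*x)
    # Compare each sample with its cyclic successor so the wrap-around
    # from 2*pi back to 0 is counted too; the total is then even.
    return int(np.sum(np.sign(y) != np.sign(np.roll(y, -1))))
```

For any coefficients with a_n and b_n not both zero, the Sturm-Hurwitz bound guarantees the count is at least 2n, and adding the higher-frequency terms can only push it up.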
# Machine learning in browser: ways to cook up a model

![Demo web page](https://habrastorage.org/r/w1560/getpro/habr/upload_files/9d1/bb0/2d3/9d1bb02d38571035769fd433a21e9dc8.png "Demo web page")

With ML projects still on the rise we are yet to see integrated solutions in almost every device around us. The need for processing power, memory and experimentation has led to machine learning and DL frameworks targeting desktop computers first. However once trained, a model may be executed in a more constrained environment on a smartphone or on an IoT device. A particularly interesting environment to run the model on is the browser. Browser-based solutions may be used on a wide range of devices, desktop and mobile, online and offline. The topic of this post is how to prepare a model for in-browser usage.

This post presents an end-to-end implementation of model creation in Python and Node.js. The end goal is to create a model and to use it in a browser. I'll use TensorFlow and TensorFlow.js as the main frameworks. One could train a model in Python and convert it to JS. The alternative is to train a model directly in javascript, hence omitting the conversion step.

I have more experience in Python and use it in my everyday work. I occasionally use javascript, but have very little experience in contemporary front-end development. My hope for this post is that python developers with little JS experience could use it to kick start their JS usage.

What is Node.js?
----------------

[Node.js](https://nodejs.org/en/about/) is a runtime environment/engine that executes JavaScript code outside a browser. JavaScript is a dynamically typed programming language that conforms to the [ECMAScript specification](https://en.wikipedia.org/wiki/ECMAScript). This may not tell you a lot, but what this means in plain English is that there are different specifications of JavaScript.
Node.js definition through analogies in the Python world:

* Node.js is an interpreter that processes files written in JavaScript.
* NPM is the package manager, think `pip` for JavaScript. NPM was the original package manager. These days several alternatives exist; `yarn` gained popularity.
* ECMAScript specifications are Python versions.
* A package in node.js must have a `package.json` file, which is somewhat similar to `setup.py`.
* A release version of js code intended for usage in a browser is usually *minified*. Think of creating a `.pyc` file in Python. Python does it behind the scenes, but in javascript one has to do it explicitly.

Why Node.js and javascript?
---------------------------

[IBM](https://developer.ibm.com/tutorials/an-introduction-to-ai-in-nodejs/) gives a couple of reasons:

* The large community of JavaScript developers can be effective in using AI on a large scale.
* The smaller footprint and fast start time of Node.js can be an advantage when deployed in containers and IoT devices.
* AI models process voice, written text, and images, and when the models are served in the cloud, the data must be sent to a remote server. Data privacy has become a significant concern recently, so being able to run the model locally on the client with JavaScript can help to alleviate this concern.
* Running a model locally on the client can help make browser apps more interactive.

For the project I have been working on, the possibility to use the model offline was the selling point. We had to use the model inside a web application in the wild with no connection. End-users would sync the data/model once in a while when they do get an internet connection.

This post refers to two implementations of a training pipeline for a text classification model. One is written in Python, another one in Node.js; both use Tensorflow. I will not focus much on the models, but rather on the code around them.
Code
----

All code ready to be run (locally and in Docker) is available from the [mobileml](https://gitlab.com/mobileml) group on Gitlab. This is a collection of repositories that contain fully implemented training pipelines and a web client. Below I will walk through the most interesting parts of it.

Architecture
------------

Project architecture includes data in the DB and the code that will fetch the data, train the model and evaluate it. The trained model will be consumed by a web app.

![Overall architecture](https://habrastorage.org/r/w1560/getpro/habr/upload_files/052/7a5/348/0527a5348666745355bbdd4355ea22b7.png "Overall architecture")

Incoming data consists of posts, which we need to preprocess before handing to the model. The preprocessing is rather simple and consists of:

1. Text cleaning (removing and replacing some patterns in the text).
2. Tokenization. Our model will analyze words, hence we need to split text into words.
3. Finally we will need to convert text into numbers. This involves building a vocabulary. Multiple approaches are available here ([bag of words](https://en.wikipedia.org/wiki/Bag-of-words_model), [TF-IDF](https://en.wikipedia.org/wiki/Tf%E2%80%93idf)). We are going to use the simplest model: each word from the training set will get an integer number in ascending order.

The preprocessing step is necessary both during the model training and usage (inference). During inference we do not need to create a vocabulary and use the one obtained during training.
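A minimal sketch of these three preprocessing steps in Python (illustrative; the real implementation in the repository is more involved):

```python
import re

def clean_text(text):
    # 1. text cleaning: lowercase, keep only letters and spaces
    text = text.lower()
    return re.sub(r"[^a-z ]+", " ", text).strip()

def tokenize(text):
    # 2. tokenization: split the cleaned text into words
    return clean_text(text).split()

def build_vocab(posts):
    # 3. simplest indexing: each new word gets the next integer,
    #    in the order it is first seen in the training set
    vocab = {}
    for post in posts:
        for word in tokenize(post):
            if word not in vocab:
                vocab[word] = len(vocab) + 1  # 0 kept free for padding
    return vocab

def to_sequence(text, vocab):
    # Inference reuses the training vocabulary; unseen words are dropped
    return [vocab[w] for w in tokenize(text) if w in vocab]
```

This is also the part that has to behave identically in the javascript inference code, which is why the regex choices matter.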
> > ### Data We will use a subset of the [20 newsgroups](https://scikit-learn.org/stable/datasets/#the-20-newsgroups-text-dataset) text dataset. The dataset comprises around 18000 newsgroups posts on 20 topics. We will use just 8 topics instead of 20. As the very first step (independent of ML) we will store all the data in a database (sqlite for simplicity). This is done in order to mimic a real-life environment where one often does not have a clean train/test files as input, but a database. So train and test splits are created from the data in the database based on some criteria. We will assign a random `year` value for every post. Train and test data have disjoint year distributions, i.e. it is easy to figure out which data corresponds to train and which to test. Python code to create the database is found in file [news\_tosqlite.py](https://gitlab.com/mobileml/pynews/-/blob/70947ff8f5b27b0bd1c84fc4a284395af9edc715/src/pynews/news_to_sqlite.py). Database has a single table `news` with fields: * Data (string) - raw post text. * Category (string) - newsgroup topic as string. Our classification target. * Year (integer) - a random year. Train data have year in range (1922-1953), test data have year in range (1990-2000). Python ------ Our team leverages [Luigi](https://luigi.readthedocs.io/) for python code orchestration. We would normally have one task called `Train`, which we run with a local scheduler. The following figure shows the pipeline: ![Training graph](https://habrastorage.org/r/w1560/getpro/habr/upload_files/005/662/c40/005662c401ffc96ad000b7542d33928a.png "Training graph")Training graphIt is a direct acyclic graph (DAG) meaning that task dependencies do not need to be linear. The most interesting code is in `ConvertToJs` task, which prepares the model for inference in javascript. One has two options to convert model from within the python code: 1. Function `tensorflowjs.converters.save_keras_model` acts on a model instance (i.e. 
model loaded into memory) and saves it as a [Keras model](https://www.tensorflow.org/js/guide/conversion). One has to use the `tf.loadLayersModel` function to load it in javascript.
2. Function `tensorflowjs.converters.convert_tf_saved_model` is able to convert a model stored on disc. It saves the model in the [SavedModel](https://www.tensorflow.org/js/guide/conversion) format and one has to use `tf.loadGraphModel` on the client side.

Regardless of the saved format, a model in javascript form consists of two logical parts: the model definition in a json file (`model.json`) and one or several binary files with the model weights. Function `save_keras_model` may be better during development, as the resulting json file is more human-readable. Another difference is that Keras models are fine-tunable, whereas a `SavedModel` is saved in a frozen state.

The `ConvertToJs` task also generates a `constants.js` file which contains the vocabulary and the necessary text preprocessing constants.

The easiest way to run the whole pipeline, including the demonstration website, is to use the provided dockerfile:

```
# Build the image
docker build -t tf_demo .
# Run it. After training is done, visit http://localhost:5000/
docker run --rm -itp 5000:80 tf_demo
```

The provided demo is quite simple. For a more modern approach one could think of generating an NPM package. An NPM package would allow front-end developers to integrate the model more easily.

Inference
---------

Before we come to the node.js code, let's have a look at how to use our model in a browser. This is, in fact, quite easy. I assume vanilla javascript. We reference the TensorFlow.js library and our js code in the html:

```
<!-- e.g. TensorFlow.js from a CDN, plus our own scripts -->
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs"></script>
<script src="constants.js"></script>
<script src="predict.js"></script>
```

`constants.js` contains the vocabulary, `predict.js` - the javascript code to load the model, run preprocessing and inference.
In a nutshell it is:

```
// global variable to hold our model
let model;

(async function() {
    // load the model after the page has loaded
    model = await tf.loadLayersModel('assets/model.json');
})();

async function predict(input = '', count = 5) {
    // equivalent of argsort
    const dsu = (arr1, arr2) => arr1
        .map((item, index) => [arr2[index], item]) // add the args to sort by
        .sort(([arg1], [arg2]) => arg2 - arg1) // sort by the args
        .map(([, item]) => item); // extract the sorted items

    // Convert input text to words
    const words = cleanText(input).split(' ');
    // Model works on sequences of MAX_SEQUENCE_LENGTH words, make one
    const sequence = makeSequence(words);
    // convert JS array to TF tensor (note: JS has no keyword arguments,
    // the dtype is just the second positional argument)
    let tensor = tf.tensor1d(sequence, 'float32').expandDims(0);
    // run prediction, obtain a tensor
    let predictions = await model.predict(tensor);
    const numClasses = predictions.shape[1];
    // array of sequential indices into predictions
    const indices = Array.from(Array(numClasses), (x, index) => index);
    // convert tensor to JS array
    predictions = await predictions.array();
    // we predicted just a single instance, get its results
    predictions = predictions[0];
    // prediction indices sorted by score (probability)
    const sortedIndices = dsu(indices, predictions);
    const topN = sortedIndices.slice(0, count);
    const results = topN.map(function(tagId) {
        const topic = getKey(TAGS_VOCAB, tagId);
        const prob = predictions[tagId];
        return { topic: topic, score: prob }
    });
    return results;
}
```

Function `predict` classifies the input text and outputs the top `count` topics. The inference code is the same regardless of the programming language used for model training.

Node.js
-------

Node.js is based on packages. In this example, we can say that a package is our project. All npm packages contain a file [package.json](https://nodejs.org/en/knowledge/getting-started/npm/what-is-the-file-package-json/) (usually in the project root), which holds various metadata about the package and a list of package dependencies.
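The `dsu` helper above is a generic "decorate-sort-undecorate" argsort: it sorts one array by the values of another, descending. Extracted on its own (the body is copied from the snippet above), it behaves like this:

```javascript
// Sort arr1 by the corresponding values in arr2, descending.
const dsu = (arr1, arr2) => arr1
  .map((item, index) => [arr2[index], item]) // decorate with the sort key
  .sort(([arg1], [arg2]) => arg2 - arg1)     // sort by the key, descending
  .map(([, item]) => item);                  // undecorate

const classIndices = [0, 1, 2, 3];
const scores = [0.1, 0.7, 0.05, 0.15];
console.log(dsu(classIndices, scores)); // [ 1, 3, 0, 2 ]
```

Slicing the first `count` elements of the result then gives the top-K class indices, which is exactly how `predict` builds its answer.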
The first step in consuming the package is to install its dependencies. A package manager is used for this. Traditionally, this was the NPM package manager. At some point *a better alternative* appeared, called Yarn. Yarn is currently on version 2, but with backwards compatibility in mind you get version 1 by default on a fresh installation. I found the ideas in yarn 2 interesting, however I was not able to use it in this project. There were two errors I wasn't able to overcome:

1. it couldn't handle `"type": "module"` in the `package.json` file. More on this later.
2. it was constantly complaining that I have to build the package `@tensorflow/tfjs-node` from source.

So Yarn v1.x it is.

Once we have our dependencies installed, we need to reference them in the code. Here comes a tricky and annoying part for someone with a Python background. For some time there was no official way to write modular js code. Variables and functions were global. Naturally, people tried to overcome this problem, and so module loaders were born (there are [module bundlers](https://stackoverflow.com/questions/38864933/what-is-difference-between-module-loader-and-module-bundler-in-javascript/42317497) as well). The JavaScript language didn't have a native way of organizing code before the ES2015 standard. The way code is organized dictates how one references dependencies, i.e. defines and uses modules. Nowadays, we can broadly speak about two syntaxes: import and require.

An example of the import syntax:

```
import tf from "@tensorflow/tfjs-node";
```

An example of the require syntax:

```
const tf = require('@tensorflow/tfjs-node');
```

Looks innocent. However, when trying to execute code with a plain `node` command (i.e. `node train.js`), I found these two syntaxes to be mutually exclusive (and it is indeed described in the documentation). Once you try to use `import`, node will complain that you can't use import outside a module!
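The switch between the two module systems is controlled by the `type` field of `package.json`. A minimal, hypothetical example that opts the whole package into ES modules (name and versions are made up for illustration):

```json
{
  "name": "news-trainer",
  "version": "1.0.0",
  "type": "module",
  "dependencies": {
    "@tensorflow/tfjs-node": "^2.0.0"
  }
}
```

Without the `"type": "module"` line, node treats `.js` files as CommonJS and only `require` works; with it, only `import` does.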
A commonly proposed solution on the Internet is to add `"type": "module"` to your `package.json`. Once that is done, you won't be able to use `require` anymore. The error message was: `ReferenceError: require is not defined`.

And here is another downside of Node.js for me. I was trying to google solutions for the errors that appeared (we all do it, right?!). With Node.js I often found myself with 100 proposed solutions, all of them different and not working, which means that there could be many underlying causes resulting in the same error message. I see two possible solutions for the `import/require` issue:

1. Carefully read the documentation for [ECMAScript modules](https://nodejs.org/api/esm.html) and [CommonJS](https://nodejs.org/api/modules.html) modules.
2. Use [babel-node](https://babeljs.io/docs/en/babel-node). Then it will be possible to mix `import` with `require`. Not only is this an extra dependency for your project, but if you want to use import, you will have to write it in a special form anyway: `import * as tf from "@tensorflow/tfjs-node";`.

Not bad for just starting a new project and getting dependencies in place!

Let's finally move towards the code. In order to mimic the python pipeline I used a couple of libraries:

* `better-sqlite3` - to access the sqlite database.
* `dataframe-js` - to process data in dataframes, somewhat similar to Pandas.
* `pipeline-js` - to organize code into a pipeline. This is a linear pipeline and not a DAG. It aims to replace code like `trainModel(makeFeatures(preprocess(fetchData(database))))` with

```
var pipeline = new Pipeline([
    fetchData,
    preprocess,
    makeFeatures,
    trainModel,
]);
pipeline.process(database);
```

I didn't find a DAG package with a built-in task executor. Perhaps this is due to my low familiarity with the ecosystem. Once we know the above, the rest is quite similar to python.
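A linear pipeline of the kind `pipeline-js` provides can itself be sketched in a few lines; `runPipeline` here is a hypothetical stand-in for illustration, not the actual package:

```javascript
// Run a list of stages left to right, feeding each stage the previous result.
// This replaces nested calls like trainModel(makeFeatures(preprocess(fetchData(db)))).
function runPipeline(stages, input) {
  return stages.reduce((value, stage) => stage(value), input);
}

// Toy stages standing in for fetchData / preprocess / makeFeatures.
const fetchData = db => db.rows;
const preprocess = rows => rows.filter(r => r.text.length > 0);
const makeFeatures = rows => rows.map(r => r.text.split(' ').length);

const database = { rows: [{ text: 'hello world' }, { text: '' }] };
console.log(runPipeline([fetchData, preprocess, makeFeatures], database)); // [ 2 ]
```

The design choice is the same as in the python/Luigi setup, minus the DAG: each stage is a plain function, so stages stay independently testable.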
Here is a function to preprocess the data that we get from the database:

```
function preprocessData(rawData) {
    const columns = ["Data", "Category", "Year"];
    let df = new DataFrame.DataFrame(rawData, columns);

    // filter out missing values
    df = df.dropMissingValues(["Data", "Category"]);
    df = df.filter(row => row.get("Category") !== " ");

    // split into train and test sets
    df = df.withColumn("testData", () => false);
    df = df.map((row) => row.set("testData", row.get("Year") > config.testsSplitYear));

    // convert sentences/categories to words
    df = df.withColumn("Words", (row) => cleanText(row.get("Data")).split(" "));
    df = df.chain(row => row.set("Category", row.get("Category").split(" ")))

    const featuresFile = `${getBuildFolder('data')}/features.json`;
    df.select('Category', 'testData', 'Words').toJSON(true, featuresFile);
    console.log(`Features saved to: ${featuresFile}`);
    return df;
};
```

Note that it uses the global variable `config`. We may save intermediate data to disc; here I save the preprocessed data into a JSON file.

The model definition is straightforward:

```
export function createModel(wordsVocabSize, outputVocabSize, inputLength, embeddingDim = 64) {
    const model = tf.sequential({
        layers: [
            tf.layers.embedding({
                inputDim: wordsVocabSize,
                outputDim: embeddingDim,
                inputLength: inputLength,
            }),
            tf.layers.globalAveragePooling1d(),
            tf.layers.dense({
                activation: "relu",
                units: embeddingDim * 2,
            }),
            tf.layers.dense({
                activation: "softmax",
                units: outputVocabSize,
            }),
        ],
    });

    model.compile({
        optimizer: tf.train.adam(),
        loss: "sparseCategoricalCrossentropy",
        metrics: ["acc"],
    });
    return model;
}
```

![Visualized model architecture](https://habrastorage.org/r/w1560/getpro/habr/upload_files/0c6/e74/325/0c6e7432552ac2a374fc796d493747eb.png "Visualized model architecture")

We build a sequential model with an embedding layer and one hidden (dense) layer. When saving the model we do not need to convert it and can simply do `await model.save('file://nodejs_model')` (in tfjs-node the destination is given as a `file://` URL).
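`cleanText`, used in `preprocessData` above, is the shared preprocessing that has to behave identically at training and inference time. Its actual implementation lives in the repository; a minimal hypothetical sketch that lowercases, drops non-letters and collapses whitespace might look like this:

```javascript
// Hypothetical minimal text cleaner: lowercase, replace anything that is
// not a letter or whitespace with a space, then collapse runs of whitespace.
function cleanText(text) {
  return text
    .toLowerCase()
    .replace(/[^a-z\s]/g, ' ')
    .replace(/\s+/g, ' ')
    .trim();
}

console.log(cleanText('Hello, World!!  42')); // "hello world"
```

Whatever the real rules are, the point from the beginning of the article holds: the same regexes must be expressible on every inference target, which is one argument for writing them in javascript from the start.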
Please refer to the complete source code for all the details. The Node.js pipeline produces a model and reports the recall@K metric for K being 1, 3, and 5.

Further improvements
--------------------

The demonstrated pipelines may be further improved:

* Separate training from inference, i.e. use different docker images. The training image may save the model to some sort of persistent storage and the inference image may download it.
* Generate an npm package with the embedded model instead of a bunch of files, and publish this package.
* Tests. They have been omitted here.
* Docker tweaks, like not running as the root user.

Conclusion
----------

Benefits of training in Node.js when you are going to run inference in javascript:

1. You have one preprocessing code base.
2. You omit the model conversion step. When making a model in Python, one always has to double-check which layers are implemented in TensorFlow.js. The conversion tool won't report any problems, but loading the model in javascript may fail. Not only are there layers that are not implemented, you may also get **differently** [implemented](https://github.com/tensorflow/tfjs/issues/2442) ones.
3. You could perhaps create a single npm package containing both training and inference code, and publish it on an npm repository (public or private). This may make your life a lot easier. Our current solution for a client includes an npm inference package; preparing such a package with python in docker was a tedious process.

Downsides of working in node.js (from a python developer's perspective):

1. A steep learning curve in order to use it right. Perhaps you start with node and vanilla javascript. Soon babel kicks in and you will need to learn about all the es5/6/2020 standards. Then you may want to move towards TypeScript (a typed javascript language), where you would begin to compile `.ts` files into `.js`.
After that you may find out that not all of your dependencies are typescript-compatible (although, as I understand it, they will still run OK, you just miss some benefits of typescript).
2. Debugging difficulties. Node.js is very asynchronous, whereas python is mostly synchronous by default. Loading data from disc in order to apply different functions to it may be hard. This may be just a habit. Currently a big chunk of my work consists of trying out things on data. By this I mean that I would very often do:

```
df = pd.read_parquet('data.parquet')
preprocess_simple(df)
do_more(df)
...
df2 = pd.read_parquet('another_data.parquet')
do_even_more(df, df2)
```

In node.js data loading will probably be asynchronous and the code looks like this:

```
let df = dataframe.loadParquet('data.parquet');
do_more(df); // Oops! You'll get an error here as df is a promise

df.then(df => {
    do_more(df)
});
```

The real `df` object exists only inside the `then` callback. It's possible to overcome this and save `df` globally, but doing so is not natural for people coming from a synchronous programming world.
3. The Node.js REPL. In python I can just copy-paste chunks of my code from a working file into the REPL, including import statements. The Node.js REPL uses the `CommonJS` module loader. If you use ECMAScript modules, then you won't be able to paste `import` chunks. Alternatively, you may start a `babel-node` REPL in order to overcome the `import/require` issue. `Babel-node` has its own peculiarities: only `var` variables are supported in the REPL.

If I am asked whether to choose Python or Node.js for a new project involving inference in the browser, it will still be a hard question to answer. I feel like it is easier to experiment in python, but writing in Python and rewriting to JS afterwards is time consuming. So if your project is relatively small and straightforward, go with JS from the very beginning. You will be closer to the consumers of your model and will understand their needs better.
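The asynchronous loading pain from downside 2 can be tamed by wrapping the exploratory code in an async function and `await`-ing each load, which restores a python-like linear flow. `loadParquet` here is a hypothetical promise-returning loader, standing in for whatever dataframe library is used:

```javascript
// A promise-returning stand-in for an asynchronous data loader.
function loadParquet(path) {
  return Promise.resolve([{ file: path, rows: 3 }]);
}

async function explore() {
  // awaiting each load gives back a linear, synchronous-looking flow
  const df = await loadParquet('data.parquet');
  const df2 = await loadParquet('another_data.parquet');
  return df[0].rows + df2[0].rows;
}

explore().then(total => console.log(total)); // 6
```

The cost is that everything has to live inside `async` functions, which is exactly the habit shift the article describes.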
As always, choose the tool that suits the job. I hope that the provided code helps you start your journey into node.js and mobile machine learning.

References
----------

For a deeper dive into the topic, have a look at these links:

* [codelabs.developers.google.com/codelabs/tensorflowjs-nodejs-codelab](https://codelabs.developers.google.com/codelabs/tensorflowjs-nodejs-codelab/#0) - a gentle introduction to TensorFlow.js + node.js from Google.
* [github.com/tensorflow/tfjs-examples/](https://github.com/tensorflow/tfjs-examples/) - repository with official TensorFlow.js examples. Get inspired!
* [auth0.com/blog/javascript-module-systems-showdown/](https://auth0.com/blog/javascript-module-systems-showdown/) - information about modules in node.js. Use it to understand what modules are and how to use them.
https://habr.com/ru/post/520386/
Hi all,

----> I want to install my application automatically, and after installation it should start without user permission.
----> And also after a system reboot it should start. Please help me, anyone.

Thanks and Regards
Sunitha Devi.M
LnT Infotech

I think there is no way to install a sis file on the phone without the user being aware. We need the system installer to install a sis file, but the next things you can do without user permission:
1> For autostart: set the RI,FR parameters in the pkg file after the exe, i.e. the exe will start without any permission.
2> For system reboot: implement a recognizer and launch the exe at boot time.

Hi kishore,

Thanks for the reply. I made some changes in my application to auto start after reboot. I am writing those changes below, please let me know my mistakes. I created a sisx file with a certificate for the N65. When I tried to install it, it is not installing on the system (I am getting an "Unable to install" pop up). I am using the same certificate for other applications, and those applications installed properly.
In the mmp file:
--> Changed UID from 0x678A57AB to 0x278A57AB and added the following lines:

SOURCEPATH ..\data
START RESOURCE 278A57AB.rss
END //RESOURCE

Created a 278A57AB.rss file in the Data folder with the following code:

#include <startupitem.rh>
RESOURCE STARTUP_ITEM_INFO startexe
{
executable_name = "c:\\sys\\bin\\Auto.exe";
recovery = EStartupItemExPolicyNone;
}

In the pkg file I added this code:

"c:\symbian\9.1\s60_3rd\Epoc32\Data\z\278A57AB.Rsc" - "!:\private\101f875a\import\[278A57AB].rsc"
"c:\symbian\9.1\s60_3rd\Epoc32\release\gcce\urel\Auto.exe" - "!:\sys\bin\Auto.exe",FR,RI

Thanks and Regards
Sunitha Devi.M
LnT Infotech

At reboot time there is no need for FR,RI — Auto.exe is launched automatically from STARTUP_ITEM_INFO; you can use FR,RI to launch another exe/application at installation time:

"c:\symbian\9.1\s60_3rd\Epoc32\release\gcce\urel\Auto.exe" - "!:\sys\bin\Auto.exe"

If you are using a developer certificate then you will need a protected UID, and if self signed then you will need a non-protected UID. It may be that you are using the same UID as another application which is already installed on the phone...

Please refer to the forum/wiki for working example snippets. This has been discussed many times already.

Hi Sunitha,

Kishore is right: check the UID in your application, and if it is right, then format your mobile and install again; it will install if everything is perfect. Use the following code in your rss file to auto start the application after booting up the mobile:

#include <startupitem.rh>
RESOURCE STARTUP_ITEM_INFO exename
{
executable_name = "!:\\sys\\bin\\exename.exe";
recovery = EStartupItemExPolicyNone;
}

With best regards
Amit

Hi Kishore,

I tried with that also. If I try to install it, installation starts normally and in the middle it gives an error message.

Thanks and Regards
Sunitha Devi.M
LnT Infotech

Go through the below link and compare your source code.....
Hi Kishore,

Sorry for the late reply. If I create a new project with UID 0x2xxxxxxx, autorun is working, but the application whose UID starts with 0xExxxxxxxx is not working.

Thanks and Regards
Sunitha Devi.M
LnT Infotech
http://developer.nokia.com/community/discussion/showthread.php/126202-Auto-start-and-Auto-installation
16 June 2011 17:42 [Source: ICIS news]

HOUSTON (ICIS)--LSB Industries has halted production of urea ammonium nitrate (UAN) at its site in Pryor, Oklahoma.

LSB Industries said that Pryor's nitric acid facility now required maintenance. The unit is expected to resume production "on or about 21 June", LSB said. The company estimates that the outage will cost about 20,000 tonnes in lost UAN production in the second quarter of 2011.

The site's anhydrous ammonia plant was shut down on 30 May for compressor repairs and production resumed on 12 June, LSB said. The company also said that the Pryor site is due for a planned turnaround in the third quarter of the year.

For more on urea ammonium nitrate, visit ICIS
http://www.icis.com/Articles/2011/06/16/9470349/lsb-industries-halts-uan-production-at-oklahoma-ammonia-site.html
Due to popular demand, mkdong has been modularized!! I present to you dong.py version 0.0.1. Enjoy:

#!/usr/bin/env python
'''
MKdong turned into a module. I think I was smoking crack and/or high on Red Bull this day.

Dong goes into underpants
Dong goes into boxers
Dong goes into vagina
Sperm comes out of dong
Sperm goes into vagina
Sperm goes into egg
Dong goes into mouth
Pee comes out of dong
Pee goes into toilet

Why am I iterating this crap?
'''
import os, sys

MAXLEN = 40
blue = '\\e[0;34m' # blue

class Dong:
    def __init__(self, maxlen=MAXLEN, color=blue):
        self.maxlen = maxlen
        self.color = color
        self.dong = None

    def mkdong(self):
        self.dong = '(_)/(_)'
        for i in range(self.maxlen):
            self.dong += '/'
        self.dong += 'D'
        os.system('echo -e "%s%s"' % (self.color, self.dong))

class Sperm(Dong):
    def __init__(self, count=500):
        self.count = count
        self.spermcount()
        Dong.__init__(self)

    def __repr__(self):
        return '<Sperm count: %d>' % self.count

    def tighty_whities(self):
        print 'Tighty Whities lowered your sperm count!'
        self.count -= 50

    def boxers(self):
        print 'Boxers raised your sperm count!'
        self.count += 50

    def spermcount(self):
        print 'Sperm count is %d' % self.count

    def bike_seat(self):
        print 'asdjfoisadjs'

    def radiation(self):
        print ':-x'

    def castration(self):
        print ':('

    def smoke(self):
        print 'awwww yeahhhhh.'

class Egg(Sperm):
    pass

if __name__ == '__main__':
    try:
        donglen = int(sys.argv[1])
    except:
        print "usage: mkdong <length>"
        sys.exit()
    if donglen > MAXLEN:
        print 'warning: a %s" dong is too big! cannot be longer than %s"!' % (donglen, MAXLEN)
        sys.exit()
    else:
        d = Dong(donglen)
        d.mkdong()

There's really no excuse. I should be ashamed of myself but I'm not.
http://jathan.com/2009/10/
On Tue, Oct 18, 2016 at 03:50:32PM +0200, Michal Hocko wrote:
> [...]

You're conflating things. Threads always share memory, but sharing memory doesn't imply being part of the same thread group.

> What prevents those two to sit in different user
> namespaces?

For thread groups: You can't change user namespace in a thread group with more than one task.

For shared mm: Yeah, I think that could happen - but it doesn't matter. The patch just needs the mm to determine the namespace in which the mm was created, and that's always the same for tasks that share mm.
https://www.mail-archive.com/linux-kernel@vger.kernel.org/msg1251824.html
This is the mail archive of the gdb-patches@sourceware.org mailing list for the GDB project.

On Tuesday 23 June 2009 08:37:14, Doug Evans wrote:
> On Sat, Jun 20, 2009 at 4:55 PM, Pedro Alves <pedro@codesourcery.com> wrote:
> > On Tuesday 02 June 2009 16:36:05, Doug Evans wrote:
> >
> > I don't think none of these forward declarations is needed?
> ok
> [once upon a time, they were ok. the new rules haven't been locked in
> memory yet]

It's easy --- less redundant code to maintain is good. In this case, in addition to the unnecessary function prototypes, even those large descriptions were duplicated.

> > >> +
> > >> +int
> > >> +i386_low_stopped_by_watchpoint (struct i386_debug_reg_state *state)
> > >> +{
> > >> +  CORE_ADDR addr = 0;
> > >> +  /* NOTE: gdb version passes boolean found/not-found result from
> > >> +     i386_stopped_data_address.  */
> > >> +  addr = i386_low_stopped_data_address (state);
> > >> +  return (addr != 0);
> > >> +}
> >
> > Same as above.  You've probably thought about that too...
> >
> > >> +
> > >> +/*.
> > >> Index: utils.c
> > >> ===================================================================
> >
> > +char *
> > +paddr (CORE_ADDR addr)
> >
> > This isn't documented in neither server.h or here?
>
> Just "going with the flow".

The flow says: "look closer and you'll see that all functions in utils.c have description comments."

> > +{
> > +  char *str = get_cell ();
> > +  xsnprintf (str, CELLSIZE, "%lx", (long) addr);
> >
> >                                     ^^^^
> >
> > Note: this will be wrong on Win64...  BTW, Ulrich
> > was removing several of these functions from GDB
> > in the removing-current_gdbarch series.  Will this one
> > stay?  Might be worth it to use the one that is going
> > to stay in GDB.
>
> I think a higher order bit is that gdb and gdbserver cannot share
> code. Bringing over all the smarts to handle all the different
> portability issues is painful/depressing..
> > >> Index: linux-low.h
> > >> ===================================================================
> > >> RCS file: /cvs/src/src/gdb/gdbserver/linux-low.h,v
> > >> retrieving revision 1.30
> > >> diff -u -p -r1.30 linux-low.h
> > >> --- linux-low.h  12 May 2009 22:25:00 -0000  1.30
> > >> +++ linux-low.h  1 Jun 2009 22:02:43 -0000
> > >> @@ -56,8 +56,13 @@ struct process_info_private
> > >>
> > >>   /* Connection to the libthread_db library.  */
> > >>   td_thragent_t *thread_agent;
> > >> +
> > >> +  /*.
> > >> --- linux-x86-low.c  13 May 2009 19:11:04 -0000  1.2
> > >> +++ linux-x86-low.c  1 Jun 2009 22:02:43 -0000
> >
> > >> +static unsigned long
> > >> +x86_linux_dr_get (ptid_t ptid, int regnum)
> > >> +{
> > >> +  int tid;
> > >> +  unsigned long value;
> > >> +
> > >> +  tid = TIDGET (ptid);
> > >> +  if (tid == 0)
> > >> +    tid = PIDGET (ptid);
> >
> > The tid == 0 case is dead code coming from GDB, isn't it?
> > Likewise in other places.
>
> Perhaps. There's similar code in linux-low.c:same_lwp.
> == 0 code deleted.

Yes, but same_lwp really handles cases where ptid_get_lwp == 0, due to find_lwp_pid. IIUC, these functions are always called with a full thread ptid.

> >
> > >> +/* Update the inferior's debug register REGNUM from STATE.  */
> > >> +
> > >> +void
> > >> +i386_dr_low_set_addr (const struct i386_debug_reg_state *state, int regnum)
> > >> +{
> > >> +  struct inferior_list_entry *lp;
> > >> +  CORE_ADDR addr;
> > >> +
> > >> +  if (! (regnum >= 0 && regnum <= DR_LASTADDR - DR_FIRSTADDR))
> > >> +    fatal ("Invalid debug register %d", regnum);
> > >> +
> > >> +  addr = state->dr_mirror[regnum];
> > >> +
> > >> +  /* ???.

> +  return 0;  /* ??? fatal?  */

This just means not-stopped-by-watchpoint?  Certainly not fatal.

> Setting aside breakpoints+watchpoints -> "points",

Right.

An extra point:

On Tuesday 23 June 2009 08:37:14, Doug Evans wrote:
> +    default:
> +      error ("Z_packet_to_hw_type: bad watchpoint type %c", type);

This should not call error, but return unsupported.

> how about this?
It looks goodish, but I'd really like to see the points I raise be addressed, instead of just ignored. It just makes us waste the (narrow already) review bandwidth... -- Pedro Alves
https://sourceware.org/legacy-ml/gdb-patches/2009-06/msg00848.html
Hi everyone!

Recently I made a post about react-monetize and what I'm trying to achieve. Today I reached a working MVP and I'd love to share it with you to receive feedback, contributions, ideas or whatever you like.

What is react-monetize?

It's a set of helpers and hooks to easily integrate the new Web Monetization API in your React project.

Can I use it on SSR?

It's been tested on standalone React, Create React App and Next.js. Further testing is required to see if it works on Gatsby, Preact and other frameworks.

What technologies is it built on?

Currently Typescript, React (>=16.8) and Rollup. ESLint, Prettier and Jest are coming soon!

How can I use it?

Installation, usage information and examples can be found on the Github repo:

guidovizoso / react-monetize

Helpers and hooks to speed up your integration with Web Monetization API 💸

react-monetize

Helpers and hooks to speed up your integration with Web Monetization API

Install

Currently supports React, Create React App and Next.Js. Not yet tested in Gatsby or Preact.

yarn add react-monetize

Usage

Wrap your app with the MonetizeProvider and add your payment pointer. You can read how to get one here:

import { MonetizeProvider } from 'react-monetize'

function App() {
  return (
    <MonetizeProvider paymentPointer="myPaymentPointer">
      <YourApp />
    </MonetizeProvider>
  )
}

export default App;

Now you have two hooks available to use anywhere in your app:

useStatus

State is the current state provided by the Web Monetization API according to this list.

import { useStatus } from 'react-monetize'

function Component() {
  const { state, events } = useStatus();
  return (
    <>
      <p>State: {state}…

Final thoughts

Please feel free to leave a comment or reach out to me on Twitter. Hope you like it and have a good week!
https://dev.to/guidovizoso/introducing-react-monetize-29hp
Warning

This component will be available in the Palette of Talend Studio on the condition that you have subscribed to one of the Talend Platform products.

The tRecordMatching component enables you to use a user-defined matching algorithm for obtaining the results you need. A custom matching algorithm is written manually and stored in a .jar file (Java archive). Talend provides an example .jar file on the basis of which you can easily develop your own file. To do this, proceed as follows:

In Eclipse, check out the test.mydistance project from svn at:

In this project, navigate to the Java class named MyDistance.java. Open this file, which has the code below:

package org.talend.mydistance;

import org.talend.dataquality.record.linkage.attribute.AbstractAttributeMatcher;
import org.talend.dataquality.record.linkage.constant.AttributeMatcherType;

/**
 * @author scorreia
 *
 * Example of Matching distance.
 */
public class MyDistance extends AbstractAttributeMatcher {

    /*
     * (non-Javadoc)
     *
     * @see org.talend.dataquality.record.linkage.attribute.IAttributeMatcher#getMatchType()
     */
    @Override
    public AttributeMatcherType getMatchType() {
        // a custom implementation should return the type AttributeMatcherType.CUSTOM
        return AttributeMatcherType.CUSTOM;
    }

    /*
     * (non-Javadoc)
     *
     * @see org.talend.dataquality.record.linkage.attribute.IAttributeMatcher#getMatchingWeight(java.lang.String,
     * java.lang.String)
     */
    @Override
    public double getWeight(String arg0, String arg1) {
        // Here goes the custom implementation of the matching distance between the two given strings.
        // The algorithm should return a value between 0 and 1.
        // In this example, we consider that 2 strings match if their first 4 characters are identical.
        // The arguments are not null (the check for nullity is done by the caller).
    }
}

In this file, type in the class name for the custom algorithm you are creating in order to replace the default name.
The default name is MyDistance and you can find it in the line: public class MyDistance implements IAttributeMatcher.

In the place where the default algorithm is in the file, type in the algorithm you need to create to replace the default one. Save your modifications.

Using Eclipse, export this new .jar file. Then this user-defined algorithm is ready to be used by the tRecordMatching component.

This scenario describes a six-component Job that aims at:

* matching entries in the name column against the entries in the reference input file by dividing strings into letter blocks of length q, where q is 3, in order to create a number of q-grams. The matching result is given as the number of q-gram matches over the number of possible q-grams;
* checking the edit distance between the entries in the email column of an input file against those of the reference input file.

The outputs of these two matching types are written to three output files: the first for match values, the second for possible match values and the third for the values for which there are no matches in the lookup file.

In this scenario, we have already stored the main and reference input schemas in the Repository. For more information about storing schema metadata in the Repository, see Talend Studio User Guide.

The main input table contains seven columns: code, name, address, zipcode, city, email and col7. We want to carry out the fuzzy match on two columns: name and email.

In the Repository tree view, expand Metadata - DB Connections where you have stored the main input schemas and drop the relevant file onto the design workspace. The [Components] dialog box appears. Select tMysqlInput and click OK to drop the tMysqlInput component onto the workspace.

The input table used in this scenario is called person. It holds several columns, including the two columns name and email we want to do the fuzzy match on.
The following capture shows the basic properties of the main input component:

Do the same for the second input table you want to use as a reference, customer in this scenario. The following capture shows the basic properties of the reference input component:

Drop the following components from the Palette onto the design workspace: tRecordMatching and three tLogRow.

Connect the main and reference input components to tRecordMatching using Main links. The link between the reference input table and tRecordMatching displays as a Lookup link on the design workspace.

Connect tRecordMatching to the three tLogRow components using the Matches, Possible Matches and Non Matches links.

Double-click tRecordMatching to display its Basic settings view and define its properties.

Click the Edit schema button to open a dialog box. Here you can define the data you want to pass to the output components. In this example we want to pass to the tRecordMatching component the name and email columns from the first tMysqlInput component, and the ref_name and ref_email columns from the second tMysqlInput component.

The MATCHING_DISTANCE and the MATCHING_WEIGHT columns in the output schema are defined by default. The MATCHING_WEIGHT column is always between 0 and 1. It is a global distance between sets of columns (defined by the columns to be matched). The MATCHING_DISTANCE column will print a distance for each of the columns on which we use an algorithm. The results will be separated by a vertical bar (pipe).

Click OK to close the dialog box and proceed to the next step.

In the Key Definition area of the Basic settings view of tRecordMatching, click the plus button to add two columns to the list.

Select the input columns and the output columns you want to do the fuzzy matching on from the Input key attribute and Lookup key attribute lists respectively. In this example, select name and email as input attributes and ref_name and ref_email as lookup attributes.
Click in the Matching type column and select q-gram from the list; this is the method to be used on the first column to check the incoming data against the reference data. Set the matching type for the second column, Levenshtein in this example. The minimum and maximum possible match values are defined in the Advanced settings view. You can change the default values. In the Confidence Weight column, set a numerical weight for each of the columns used as key attributes. Click in the cell of the Handle Null column and select the null operator you want to use to handle null attributes in the columns. If required, click the plus button below the Blocking Selection table to add one or more lines in the table, then click in the line and select from the list the column you want to use as a blocking value. Using a blocking value reduces the number of pairs of records that need to be examined. The input data is partitioned into exhaustive blocks based on the blocking value. This decreases the number of pairs to compare, as comparison is restricted to record pairs within each block. See Scenario 2: Comparing columns and grouping in the output flow duplicate records that have the same functional key for a use case of the blocking value. Double-click the first tLogRow component to display its Basic settings view, and select Table in the Mode area to display the source file and the tRecordMatching results together to be able to compare them. Do the same for the other two tLogRow components. Save your Job and press F6 to execute it. Three output tables are written to the console. The first shows the match entries, the second shows the possible match entries and the third shows the non-match entries, according to the matching method used on the defined columns. The figure below illustrates extracts of the three output tables. The first table lists all the names and emails that could be matched with identical entries in the reference table.
Thus the matching distance and the matching weight are equal to "1.0". The second table lists all the names and emails that have a possible match in the reference table. The matching distance column prints the distances for the name and email columns and separates them by a vertical bar. The third table lists all the names and emails that do not have a match in the reference table. In this scenario, reuse the previous Job to load and apply a user-defined matching algorithm. As a prerequisite, follow the steps described in Creating a custom matching algorithm to manually write a custom algorithm and store it in a .jar file (Java archive). The mydistance.jar file is used here to provide the user-defined matching algorithm, MyDistance.class. You will also need to use the tLibraryLoad component to import the Java library into the Job. On the previous Job, drop the tLibraryLoad component from the Palette to the design workspace. Delete the tLogRow components named possible and none. Connect the tLibraryLoad component to the tMysqlInput (person) component using a Trigger > On Subjob Ok link. Double-click tLibraryLoad to open its Component view. Click the [...] button and browse to the mydistance.jar file. Click Window > Show View... to open the Modules view. In the Modules view, import the user-defined mydistance.jar file created for this Job: browse to it in the dialog box that opens and click Open. The user-defined .jar file is imported and listed in the Modules view. You will get an error message if you try to run the Job without installing the external user-defined .jar file. Double-click tRecordMatching to open its Component view. In the Key Definition table of this view, click the name row in the Matching Type column and select custom... from the drop-down list. In the Custom matcher class column of this name row, type in the path pointing to MyDistance.class in the mydistance.jar file. In this example, this path is org.talend.mydistance.MyDistance. Press F6 to run this Job.
In the Run view, the matched entries are identified and listed as follows:
https://help.talend.com/reader/hCrOzogIwKfuR3mPf~LydA/l15VtpZFMmiL48008j13gA?section=Raa86106
How to Build a Credit Card Form Using Stripe.js with React.js in Next.js
September 24th, 2021

What You Will Learn in This Tutorial
How to create a credit card form using Stripe.js and Stripe Elements, as well as how to retrieve that credit card form's value and generate a Stripe source token.

Getting started

For this tutorial, to give us a starting point for our work, we're going to use the CheatCode Next.js Boilerplate. Let's clone a copy from GitHub now:

Terminal

git clone

cd into the project and install its dependencies:

Terminal

cd nextjs-boilerplate && npm install

Finally, go ahead and start up the development server:

Terminal

npm run dev

With that, we're ready to get started. From your Stripe dashboard, locate your "Publishable key." Copy this key (don't worry, it's intended to be exposed to the public). Next, once we have our publishable key, we need to open up the project we just cloned and navigate to the /settings/settings-development.js file:

/settings/settings-development.js

const settings = {
  graphql: { ... },
  meta: { ... },
  routes: { ... },
  stripe: {
    publishableKey: "<Paste your publishable key here>",
  },
};

export default settings;

In this file, alphabetically at the bottom of the exported settings object, we want to add a new property stripe and set it to an object with a single property: publishableKey. For the value of this property, we want to paste in the publishable key we copied from the Stripe dashboard. If we were deploying to production, we would make this change in our settings-production.js file, using our live publishable key as opposed to our test publishable key like we see above. Next, in order to use Stripe in the browser, we need to load the Stripe.js library via the Stripe CDN.
Initializing Stripe.js in the browser For security purposes, when it comes to hosting the Stripe.js library—what we'll use below to generate our credit card form and retrieve a credit card token with—Stripe does not allow us to self-host. Instead, we need to load the library via a CDN (content delivery network) link, hosted by Stripe. To load the library, we're going to open up the /pages/_document.js file in our boilerplate which is where Next.js sets up the base HTML template for our site: /pages/_document.js import Document, { Html, Head, Main, NextScript } from "next/document"; import { ServerStyleSheet } from "styled-components"; export default class extends Document { static async getInitialProps(ctx) { ... } render() { const { styles } = this.props; return ( <Html lang="en"> <Head> ... > <script src=" </Head> <body> <Main /> <NextScript /> </body> </Html> ); } } Here, toward the lower middle-half of the <Head></Head> tag we see here (beneath the cdn.jsdelivr.net/npm/bootstrap script), we want to paste in a script tag that points to the CDN-hosted version of Stripe.js: <script src=" This is all we need to do. When we load up our app now, Next.js will load this script tag. When it runs, this script will automatically load Stripe in the browser and give us access to the library via the global variable Stripe. Writing a script to initialize Stripe Now that we have access to Stripe itself, next, we need to write a script that will allow us to initialize Stripe with the publishable key that we copied earlier and then easily re-use that initialized copy of the library. /lib/stripe.js import settings from "../settings"; const stripe = typeof Stripe !== "undefined" ? 
Stripe(settings.stripe.publishableKey) : null;

export default stripe;

Here, in the /lib folder of the boilerplate we cloned earlier, we're adding a file stripe.js which will pull in our publishableKey that we set in our settings file and then, after checking that the global Stripe variable is defined, call it as a function Stripe(), passing in our publishableKey. Then, assuming we get back an instance (or null if for some reason Stripe.js fails to load), we export that from our file. As we'll see next, this will allow us to import a "ready to go" copy of Stripe.js without having to rewrite the above code every time we want to access the library (helpful if you're building an app and intend to use Stripe in multiple project files).

Creating a credit card component with Stripe Elements

Now for the fun part. One of the nice parts about using Stripe.js is that it gives us access to their Elements library. This allows us to quickly set up a card form in our app without having to write a lot of boilerplate HTML and CSS. To get started, we're going to set up a class-based component in React.js (this will give us better control over initializing Stripe and Elements than we'd get with a function-based component).

/pages/index.js

import React, { useEffect, useState } from "react";
import StyledIndex from "./index.css";

class Index extends React.Component {
  state = {
    token: "",
    cardError: "",
  };

  componentDidMount() {
    // We'll set up Stripe Elements here...
  }

Getting set up, here, we're creating a rough skeleton for the page where we'll render our credit card via Elements. Fortunately, the bulk of the component is quite simple. Here, we're doing a few things:

- Adding the HTML markup that will be used to display our form.
- Adding default/placeholder values for two state values that we'll use: token and cardError.
- Adding placeholder functions for componentDidMount() (where we'll load up Stripe and mount our card form) and handleSubmit() (which we'll use to generate our Stripe card token).

Of note, here, we should call quick attention to the <StyledIndex></StyledIndex> component that's wrapping the entirety of our component's markup. This is a styled component, which is a React component generated by the library styled-components. This library allows us to create custom React components that represent some HTML element (e.g., a <div></div> or a <p></p>) and then attach CSS styles to it. Let's take a look at the file where that's being imported from real quick:

/pages/index.css.js

import styled from "styled-components";

export default styled.div`
  .credit-card {
    border: 1px solid #eee;
    box-shadow: 0px 0px 2px 2px rgba(0, 0, 0, 0.02);
    padding: 20px;
    border-radius: 3px;
    font-size: 18px;

    &.StripeElement--focus {
      border: 1px solid #ffcc00;
      box-shadow: 0px 0px 2px 2px rgba(0, 0, 0, 0.02);
    }
  }

  .card-error {
    background: #ea4335;
    color: #fff;
    padding: 20px;
    border-radius: 3px;
    margin-top: 10px;
  }

  .token {
    background: #eee;
    padding: 20px;
    border-radius: 3px;
    font-size: 16px;
    color: #444;
  }
`;

Here, we import the object styled from the styled-components library (this is pre-installed in the boilerplate we cloned earlier). On this object, we can find a series of functions named after the standard HTML elements, for example: styled.div(), styled.p(), or styled.section(). For our credit card form, we're going to use a plain <div></div> tag, so we're using the styled.div() function here. Though it may not look like it, the styled.div`` part here is equivalent to styled.div(``). The idea being that in JavaScript, if we're going to call a function where the only argument is a string, we can omit the parentheses and replace our single or double-quotes with backticks, passing our string as normal.
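Strictly speaking, the backtick form invokes the function as a tagged template: the function receives an array of literal string segments (plus any interpolated values as extra arguments) rather than one plain string, and a library like styled-components stitches those segments back together. A quick stand-alone demo of the mechanics, runnable with Node:

```javascript
// With backticks, JavaScript invokes `tag` as a tagged template:
// it receives the literal string segments as an array, and any
// interpolated values as the remaining arguments, not one plain string.
function tag(strings, ...values) {
  return { segments: Array.from(strings), values };
}

const size = 18;
const result = tag`font-size: ${size}px;`;

console.log(result.segments); // [ 'font-size: ', 'px;' ]
console.log(result.values);   // [ 18 ]
```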
In this file, this is purely a syntactic choice to keep our code in line with the examples offered by styled-components and its authors. Focusing on the contents of the string we're passing to styled.div(), we're just adding a little bit of polish to our card form (by default, Stripe gives us a very stripped down form with no styles). Of note, here, you'll see the StripeElement--focus class having styles applied to it (we use a nested CSS selector with & to say "if the .credit-card element also has the class StripeElement--focus, apply these styles"). This is an auto-generated class that Stripe automatically applies when a user focuses or "clicks into" our card form. We use this to change the border color of our card form to acknowledge the interaction.

/pages/index.js

import React, { useEffect, useState } from "react";
import stripe from "../lib/stripe";
import StyledIndex from "./index.css";

class Index extends React.Component {
  state = {
    token: "",
    cardError: "",
  };

  componentDidMount() {
    const elements = stripe.elements();

    this.creditCard = elements.create("card", {
      style: {
        base: {
          fontSize: "18px",
        },
      },
    });

    this.creditCard.on("change", (event) => {
      if (event.error) {
        this.setState({ cardError: event.error.message });
      } else {
        this.setState({ cardError: "" });
      }
    });

    this.creditCard.mount(".credit-card");
  }

Picking back up at the <div className="credit-card" /> component where we're rendering the markup for our credit card, now we're ready to actually mount our credit card. By "mount," we mean telling Stripe to replace the <div className="credit-card" /> tag on our page with the actual credit card form from Stripe Elements. Up top, we can see that we're importing the /lib/stripe.js file we set up earlier. Down in our componentDidMount() method, we use this to get access to the .elements() function, which creates an instance of the Stripe elements library for us.
Next, in order to "mount" our credit card, we first need to create the element that represents it (think of this like the in-memory representation of the card form before it's been "drawn" on screen). To do it, we call elements.create(), passing in the type of element we want to create as a string "card" as the first argument and then an options object as the second argument. For the options, we're setting a slightly larger-than-default font size (due to how Stripe mounts our card form, unfortunately, we can't set the font-size with the rest of the CSS in our styled component). Finally, once our element is created, we store it on our <Index></Index> component class as this.creditCard. This will come in handy later when we need to reference this.creditCard in order to access its value and generate a token. Below this code, next, in order to "catch" or handle the errors generated by Stripe elements, we need to add an event listener to this.creditCard. To do it, Stripe gives us a .on() method on that instance. This takes the name of the event we want to listen for ("change" here) and a callback function to call whenever that event occurs. For our needs, the only change we care about is if this.creditCard produces an error. Inside of our change callback, this will be available as event.error. If it exists, here, we grab the event.error.message value (text describing the error that's occurring) and set it onto state. If there's not an error (meaning a previous error was corrected or there was never an error to begin with), we make sure to reset cardError on state to be an empty string. Finally, beneath this change event handler, we finally get to the point where we mount our Stripe elements form via this.creditCard.mount(). Notice that we pass in the className we set on the <div></div> down in our render() method to this function. This tells Stripe to inject or "mount" the elements form in this spot.
Just beneath this, we can also see that we conditionally render our cardError if it has a value (remember, we styled this up earlier inside of our /pages/index.css.js file). While this technically gets us a credit card form on the page, to finish up, we're going to learn how to access the value typed into our credit card form and convert that into a Stripe source token.

Generating a Stripe token

In order to make our form useful, now, we're going to learn how to generate what's known as a Stripe source token. Because of various laws around the transmission of financial data (e.g., PCI compliance), offering a credit card form involves a bit more legal complexity than collecting more innocuous forms of data like a name or email address. Because complying with this sort of regulation is a significant burden on small businesses and independent operators, companies like Stripe step in to solve the problem. They act as a middleman between your customer's credit card data and your servers. Instead of copying credit card data directly to your own server (and thus having to comply with PCI laws), you hand the data off to Stripe, whose servers and code are already PCI compliant (and promise to be in the future). The mechanism that Stripe uses to manage this process is known as a source token (here, source being a "payment source" like a credit card or bank account). When we use Stripe.js, we establish a secure connection over HTTPS back to Stripe's servers, send them the card data our user inputs, and then Stripe responds with a unique token that represents that credit card. In order to actually charge that card, we pass that unique token along with our other requests to Stripe on our own server. When we do, Stripe "looks up" the actual credit card data associated with that token on their own secure servers/database.
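To make the shape of this exchange concrete before wiring it into React, here is a toy stand-in you can run with plain Node. The stub below only mimics the { error, token } response shape that stripe.createToken resolves with; it is not the real API call, no network is involved, and the token id is invented:

```javascript
// Toy simulation of the token round-trip described above.
// stripeStub.createToken is a stand-in for the real stripe.createToken:
// the real call sends the card data to Stripe's servers, while this stub
// only mimics the promise/response shape so the handling logic can run anywhere.
const stripeStub = {
  createToken(card) {
    if (!card || !card.complete) {
      return Promise.resolve({ error: { message: "Your card number is incomplete." } });
    }
    return Promise.resolve({ token: { id: "tok_simulated123" } }); // invented id
  },
};

// Mirrors the handleSubmit logic: store either the token id or the error message.
function handleSubmit(card, state) {
  return stripeStub.createToken(card).then(({ error, token }) => {
    if (error) {
      state.cardError = error.message;
    } else {
      state.token = token.id;
    }
    return state;
  });
}

// Usage: a complete card yields a token id; an incomplete one yields an error.
handleSubmit({ complete: true }, { token: "", cardError: "" })
  .then((s) => console.log(s.token)); // → tok_simulated123
```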
/pages/index.js

import React, { useEffect, useState } from "react";
import stripe from "../lib/stripe";
import StyledIndex from "./index.css";

class Index extends React.Component {
  state = {
    token: "",
    cardError: "",
  };

  componentDidMount() { ... }

  handleSubmit = () => {
    stripe.createToken(this.creditCard).then(({ error, token }) => {
      if (error) {
        this.setState({ cardError: error.message });
      } else {
        this.setState({ token: token.id });
      }
    });
  };

Back in our <Index></Index> component, focusing on our handleSubmit() method, we call the stripe.createToken() method, passing in the this.creditCard value we set up earlier. From this, Stripe knows how to retrieve the current input value. Behind the scenes, it takes this value, transmits it to its own servers, and then responds. That response is captured here in the .then() callback (we expect stripe.createToken() to return a JavaScript Promise) here in our code. To that callback, we expect to get passed an object with a token property on it that is itself an object which has our actual source token stored in its .id property. Here, assuming that the error value also included on this response object is not defined, we take that token.id and set it back onto the state of our component as this.state.token (this.setState() modifies the this.state value on our component). That's it! At this point, we'd take the token.id we've received and relay it to our own servers to then pass on to Stripe. To test it out, we can enter the card number 4242 4242 4242 4242, passing in any future expiration date and CVC.

Wrapping up

In this tutorial, we learned how to generate a credit card form using the Stripe Elements library bundled inside of Stripe.js. We learned how to include Stripe.js in our HTML, initialize it with the publishable key we obtained from the Stripe dashboard, and then import that instance to generate our form.
We also learned how to retrieve our user's input via Stripe.js and then pass that to Stripe's .createToken() method to generate a secure card token for use elsewhere in our app.
https://cheatcode.co/tutorials/how-to-build-a-credit-card-form-using-stripe-js-with-react-js-in-next-js
really cool it is not what i meant but this is good news for me thanks

On Mon, Sep 6, 2010 at 3:35 AM, kirby urner <kirby.urner at gmail.com> wrote:
> On Sat, Sep 4, 2010 at 7:09 AM, Fahreddın Basegmez <mangabasi at gmail.com> wrote:
>>
>> Could it be Mekanimo? It lets you create circles and polygons and
>> join them together with connectors while automatically generating
>> Python code. Created objects behave like agents. Here are some
>> videos.
>
> Hey this Mekanimo thing is fantastic. Amazingly cool use of the wx API for GUI. Really, Python?
> Thanks Fahri!
> I relayed my pleasure to mathfuture, a Google group.
> Maria D. also expressed appreciation, replying on naturalmath:
> mathfuture is where I do some of my Martian Math writing, a curriculum that uses Python quite
> a bit (including VPython [1]), but is far enough afield to sometimes make more sense in
> another namespace.
> Speaking of Martian Math, I feel obliged to cluck about the Buckyball on Google yesterday.
> I yakked with Josh Cronmeyer about it by email. He and I met up at an OS Bridge before he
> took off for Australia (that's the Josh mentioned in this blog post -- he's a Python
> programmer of note, works with Thoughtworks.com):
> In a couple hours I'm off to PDX (our airport) to fetch Steve Holden, PSF chairman. Holden
> Web is the organizer of this year's DjangoCon in Portland.
>
> Kirby
>
> <historica type = "biographica" >
>
> [1]
> If you dig back in edu-sig you will find Arthur Siegel and I doing a lot of the talking. He
> was some high-powered guy in the financial district, NYC, who wisely devoted much of his
> remaining time to raising his son and doing some esoteric Python programming to explore
> projective geometry. Pygeo is the name of his free / open source project, which makes heavy
> use of VPython. Can't think of anything quite like it either before or since. Check it out.
>
> Arthur was a passionate and colorful character and our debates on this list were
> free-ranging (much to the dismay of some). We met twice in New York, also talked on the
> phone. This old blog post chronicles our 2nd and last meeting:
>
> (paragraphs 2,3)
>
> [2]
>
> Holden Web provided me with an exceptional opportunity in April, to lead a 3-day workshop
> for the Space Telescope Science Institute (Johns Hopkins campus, Baltimore). I'd expressed
> admiration for Hubble and the astronomer groups using Python, but never dreamed I'd be able
> to do a Python training with them.
> I also got to look over Steve's shoulder as he did some curriculum writing for O'Reilly
> School of Technology. This school offers for-credit distance education courses using a
> customized student version of Eclipse called Ellipse.
>
> </historica>
>
>> Physical proof of the Pythagorean theorem
>>
>> Ragdolls
>>
>> James Watt's linkage
>>
>> This shows how to make a platform game with it
>>
>> Fahri
>
> _______________________________________________
> Edu-sig mailing list
> Edu-sig at python.org

--
roberto
https://mail.python.org/pipermail/edu-sig/2010-September/010078.html
PSL1GHT is a fantastic SDK, growing day after day thanks to the help of talented developers like phiren, MattP, AerialX and others. What I need to start porting Scogger is to print some debug information (like screen size, sprites information and such), but unfortunately for me STDOUT and STDERR are redirected to the lv2 TTY interface. Right now there are two ways I know of for printing debug information:

- using Kammy
- using the libcairo font support provided by ps3libraries

Although these are valid alternatives, they represent a complexity level that is too much for my laziness: Kammy requires the PS3 attached to the router via an ethernet cable, plus it prints information to a PC, not to the screen. Most importantly, it needs a peek/poke capable payload, and my PSJailbreak dongle doesn't have it. Libcairo is new to me; it has amazing power, but for now I don't want to learn another library, and it is also a waste to use it just for replacing printf. That's why I created a very simple console, called Sconsole, whose job is to print some text on the framebuffer using 8×16 fonts. Let's see how to use it:

Using Sconsole

Using Sconsole is very simple: just add the three files included in the zip file to your source directory and you are almost done. Let's see an example. The first thing to do is actually import the header file:

#include "sconsole.h"

Then we need to initialize it; here is the syntax:

void sconsoleInit(int bgColor, int fgColor, int screenWidth, int screenHeight);

- bgColor is the background color of the printed string,
- fgColor is the actual font color,
- screenWidth and screenHeight are your screen resolution.

Colors are in the 0xAARRGGBB format (alpha channel not available). A few colors are defined in sconsole.h but you can use yours. Also FONT_COLOR_NONE means no color ("transparent"). screenWidth and screenHeight are actually the screen resolution you get with the videoGetResolution function.
sconsoleInit(FONT_COLOR_NONE, FONT_COLOR_BLACK, res.width, res.height);

The printing function needs X and Y coordinates, plus the pointer to the framebuffer to write into:

print(400, 80, "Hello world", framebuffer);

Or use sprintf first to compose more complex strings before actually printing them:

sprintf(tempString, "Video resolution: %dx%d", res.width, res.height);
print(540, 160, tempString, framebuffer);

Dead simple 🙂

Please note: 108 ASCII characters are included (the most used ones). They are from 32 (space) to 126 (~) of the ASCII table. They include digits, uppercase & lowercase letters, plus other ones like parentheses, plus, minus, commas… Hope it will help someone while waiting for screen output of STDOUT/STDERR. Happy coding!

Download Demo pkg + sources (version 0.1)

Thanks
Bitmap fonts courtesy of libogc.

10 thoughts on “Sconsole: a simple function for printing strings on PS3”

Hey – if anyone is interested, I wrote a wrapper class in C++ for this that simulates an actual console buffer with line pruning, line wrap and so on, check it out

Hey, I was looking for something like this. Can you please update this to the latest version of PSL1GHT? Thanks.

Sorry, I don't maintain it anymore, feel free to modify it. If you have some question you can ask here, I hope to be helpful 😛

No worries, and thanks for the fast response. Here's the updated code that works with the latest version of PSL1GHT, feel free to check and update for this post.

sample psl1ght app: teaser
https://scognito.wordpress.com/2010/11/07/sconsole-a-simple-function-for-printing-strings-on-ps3/
I am developing a secure Rails app on a secure internal server, though I still want to protect it from any kind of SQL injection or XSS attack. I know that if I have a search box I can use something like this in my MODEL to protect the app from SQL injection:

def self.search(search)
  Project.where("project_title LIKE ?", "%#{search.strip}%")
end

But do I also need to protect the form at projects/new?

You have to care about SQL injection whenever you use string concatenation with any kind of user input to construct SQL fragments. If you use parameters, you're fine. For example, this is not vulnerable to SQL injection:

Project.where("project_title LIKE ?", "%#{search.strip}%")

But this is vulnerable, because a request parameter is written directly into the SQL query and the database has no way to know where the intended query ends, so a user could inject additional parts into this query through the search parameter:

Project.where("project_title LIKE %#{search.strip}%")

Similarly, if you post a form, the question is how you use values from the request in the resulting queries. If you always use parameters like in the first example above (with ? or named ones with symbols) and any request parameter is always assigned through parameters, your app is not vulnerable. If you ever mix request parameters into SQL query strings like in the second example, your application will be vulnerable to SQL injection. So just to clarify: any Rails method call is secure against SQL injection if you are using ActiveRecord. You only have to worry when you write parts of SQL statements yourself as strings and incorporate request parameters into that string. The above example with LIKE is somewhat special; you usually do not need to create SQL strings yourself with an ORM like ActiveRecord.
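One subtlety with LIKE queries in particular: even with parameterized queries, % and _ inside the user's input still act as wildcards in the pattern. ActiveRecord ships a helper, sanitize_sql_like, for exactly this. The plain-Ruby sketch below mirrors the idea behind it (it is an illustration of the escaping, not the Rails source):

```ruby
# Escape LIKE metacharacters so user input is matched literally inside
# a LIKE pattern. Mirrors the idea behind ActiveRecord's sanitize_sql_like
# (illustration only, not the Rails implementation).
def sanitize_like(string, escape_character = "\\")
  # Match the escape character itself plus the LIKE wildcards % and _,
  # and prefix each occurrence with the escape character.
  pattern = Regexp.union(escape_character, "%", "_")
  string.gsub(pattern) { |match| [escape_character, match].join }
end

puts sanitize_like("50%_off") # => 50\%\_off
```

In a Rails model you would then combine the escaped input with a parameterized query, e.g. Project.where("project_title LIKE ?", "%#{sanitize_sql_like(search.strip)}%"); the exact call site for the built-in helper depends on your Rails version.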
https://codedump.io/share/gmMHW6CLrWsL/1/does-rails-submit-form-need-protection-from-sql-injections-or-xss-attacks
On Fri, 1 Oct 1999, Klaus Peter Wegge wrote: (quoting me)

> > If your version of lynx was built with PDCURSES (the lynx386 from
> > fdisk is), put the following in your environment:
> > PDCURSES_BIOS=TRUE
> This should go to the documentation.

The question is "which documentation". I think that it is in the blynx docs. Do we put it in the INSTALLATION file (which most DOS end users will never see)? The binary that I distribute is built with SLang, so I don't have it there. This is a similar problem to the docs on how to install a Win32 binary. It either needs to be put in by whomever is distributing the binary, or someone needs to sponsor a web page with the information, so that a pointer to the URL can be given.

> But another question: Is the env setting
> set term=vt100
> still necessary for the DJGPP lynx?
> This variable seems to have no effect, except for the telnet client.
> If it is not set, lynx (pdcurses) also works fine.
> Is this variable a relic?

From LYMain.c:

#ifdef DOSPATH
    if (getenv("TERM") == NULL)
        putenv("TERM=vt100");
#endif

Doug
__
Doug Kaufman
Internet: address@hidden (preferred)
          address@hidden
http://lists.gnu.org/archive/html/lynx-dev/1999-10/msg00011.html
Talk:Main Page/Archive 1

Do you think that another category dedicated to utilities and libraries could be included here? Something like tutorials/examples for make, autoconf, where to find libraries, etc. Just so that the user can obtain all the resources in a single place. Alexhairyman 07:47, 24 April 2012 (PDT)

- I don't think that these subjects would fit into this wiki. We specialize in C and C++ and the wiki is structured specifically for these languages. -- P12 08:22, 24 April 2012 (PDT)
- Not even a development process/miscellaneous page? Something for coding standards/style; that may be too opinionated for a site with reference in its name, but I feel it would benefit many. I know an integral part of my learning C/C++ was the various tools available to me. Or some basic data on windows.h and the POSIX standard documentation, at least where you can find out more information. Alexhairyman 14:04, 24 April 2012 (PDT)
- Style wouldn't be opinionated if it was backed by reliable references... but what would they be? Besides a few books (Sutter/Alexandrescu, Sutter alone, Meyers, Lakos), there's not much else - you're left with slides from ACCU conferences (where available), Dr. Dobbs, C++ Source, and other online magazines, Sutter's GOTW, which are all opinionated to varying degrees. Working through all those primary sources would be like writing your own book. Now the documentation idea could be useful, I think: there already are occasional links to POSIX and MSDN in some of the existing pages here. We could use a small dedicated page listing available online C++ references (IBM, Comeau, Apache, MSDN, cplusplus.com, STL) and related standards (POSIX, Unicode). --Cubbi 15:05, 24 April 2012 (PDT)
- I think a 'links to resources' page is a fine idea, as long as we keep the scope limited to the best of the best -- maybe one rule of thumb would be to not let it grow past one screenful.
What I think we should avoid is having a dumping ground for everything vaguely related to C or C++, which won't help people looking for good information and will be hard to maintain. --Nate 19:01, 24 April 2012 (PDT)

- But I think the problem is where to put it. Do we want to add another category? Or simply link in a page where relevant? (which would kind of suck for end-users) I would be happy to do the research for the Linux/POSIX side. The easiest link to throw down is the documentation for Glibc; although organized a little oddly, it makes finding out how to solve your Unix problem much easier, and is super complete. Links to boost.org would be a must also, the Apache Portable Runtime, etc. Rather than a new section, a respective page for each language may work too. I think, though, that we should come to a common consensus before anyone makes some major change to the wiki. Btw, thank you all for being so welcoming to a new member in your community; I've been watching the site for a few months now, and it has really become much more expansive and useful! Alexhairyman 22:26, 26 April 2012 (PDT)
- We're always happy to have more people helping out. :) As far as where to put a single 'links to resources' page, perhaps we could start by putting one at the top level of the C++ site: (I'm assuming that it would make sense to have C++-specific resources, given the links that e.g. Cubbi proposed above.) Once it seems solid, we could start by linking to it from the FAQ. --Nate 04:04, 27 April 2012 (PDT)
- "cpp/links" works for me. --Nate 15:55, 27 April 2012 (PDT)
- All in favor say aye! I say aye! cpp/links Alexhairyman 09:28, 29 April 2012 (PDT)
- I agree that there are valid reasons for having such additional information somewhere. I object to this particular solution of the problem, not the idea itself. May I explain this using an example. Imagine two extreme cases. The first one: there's a single wiki about C, C++, make, autoconf, etc. - everything mixed.
The second one: there are a number of different wikis, one for each particular subject. Which one would be easier to manage and better for the reader? The first one, believe me, would eventually become a huge mess, because the C/C++ material would slowly mix with other material and vice-versa. Thus I'm advocating for a set of wikis with strictly defined boundaries, each specializing in a particular subject. We specialize in C and C++ reference. I'd be happy if someone created a similar, specialized wiki for make/autoconf (I might even help to set everything up). But I'm against including the same material here. -- P12 15:45, 24 April 2012 (PDT)

Template arguments

Most of the templates in C++ have requirements on their template arguments. Some of these requirements are quite complex. Therefore, I think it's worth adding a separate section outlining these requirements, with links to concepts, if appropriate. I am, however, unsure in which format the arguments should be described. Here's what I have thought of so far: Option 1: A separate section before the parameters section. Template parameters Option 2: Additional explanation after the main description, before the parameters section: Template parameters: InputIt1, InputIt2: must meet the requirements of InputIterator. BinaryPred: must meet the requirements of BinaryPredicate. In addition to that, I'm unsure what exact terminology we should use: "template parameters", or "template arguments", or "type requirements", or something else. Finally, after adding the longer explanations of type requirements, the names of the types can be shortened, as there won't be any need to convey the full explanation just by the name of the identifier. So we probably should agree what consistent names we will use throughout the wiki. Here's the initial list: InputIt - input iterator. OutputIt - output iterator. ForwardIt - forward iterator. BidirIt - bidirectional iterator. RandomIt - random access iterator. Pred - unary predicate.
BinaryPred - binary predicate. Op - operation. BinaryOp - binary operation. Traits - traits: string traits, regex traits, etc. Alloc - allocator. CharT - base character type. - ... What do others think about this? -- P12 08:01, 17 May 2012 (PDT) - I'd prefer option 1 as it puts template parameters closer to the function parameters. There are cases where template parameters *are* the inputs, e.g. in the ratio library, the type traits, std::get (and other functions with non-type template parameters), and I kinda feel that they should be treated similarly. And yes, consistent and precise naming for template params would be great. --Cubbi 10:37, 17 May 2012 (PDT) - You should call them template parameters. When you declare a template the names in <> are template parameters. When you instantiate a template the names in <> are template arguments. -- Ville 02:28, 11 June 2012 (PDT)

Page width

The wiki pages are too wide and thus place important information to the right of a page. The main page, at least with the default theme, places the section headers "C++ reference" and "C reference" to the left of their respective indices, thus pushing the interesting parts to the right and possibly out of view. Those section headers should be placed above their respective indices. Also, the wiki-related link sections "Navigation", "Toolbox" and "In other languages" aren't at all that interesting when using this site as a reference. However the links on the right hand side on the individual pages like: are. Again those are pushed unnecessarily to the right and are again possibly out of view. -- Ville 02:43, 11 June 2012 (PDT) - Hi. This problem is known to us. A new layout solving this issue has already been developed and will be deployed in the following days. You can see several examples of the new layout in action: [1] (the main page), [2] (shows the replacement of the right sidebar).

How to write a class that uses a structure as a private datatype and uses its pointers in the implementation.
class PQueue {
public:
    /* other public functions and the constructors, destructors */
private:
    struct chunkT {
        vector<int> nums;
        chunkT *next;
    };
    /* other private functions */
};

The implementation code uses the struct chunkT to generate linked lists of the given type but gives many errors. Basically, the error messages reported demonstrate that the compiler did not identify the data type chunkT. Please suggest a reason for this behaviour or give a sample function to be written in the implementation that uses a pointer to the struct chunkT. Many thanks in advance. :) --Nikunj 07:46, 16 July 2012 (PDT) - This is a discussion page dedicated to the improvement of this wiki. This is not a forum. Besides, you've already asked your question at one. --Cubbi 09:01, 16 July 2012 (PDT)

Pages accessible through built-in Search

Hello everyone -- I'm new to cppreference, just found it and I'm quite interested. I have a question regarding the built-in search function: it seems to exclude keywords and pages that do not refer exactly to a member of a library. Is that what is wanted? Jocelyn 07:15, 20 July 2012 (PDT) - Yes, our custom search engine isn't sophisticated enough to provide sensible results for arbitrary queries. You can use an external search engine such as Google if the results are inadequate. -- P12 14:07, 20 July 2012 (PDT) - Just to elaborate a bit -- we use a custom search because the built-in plaintext search generates a lot of noise with our specific content. We have been working on improving the custom search, however, so hopefully it will continue to improve. In the meantime, the search results include links to external engines that might get closer to what you expected.
--Nate 14:49, 20 July 2012 (PDT)

Suggested page(s)/table(s) - "Since C++11" summary and "C++11 via Boost" summary

I think it would be a good idea to have a page with a table (or separate tables) that lists all of the "since C++11" elements as well as those that came to C++11 via Boost. The table entries would be clickable links to the language elements themselves. I am just gauging interest so I'll leave formatting ideas out of this for now, as well as my reasoning for having such a table. --Arbalest 10:29, 5 October 2012 (PDT) - "C++11 via Boost" describes the history (for which we have cpp/language/history), but I don't think it's particularly helpful as part of a reference. Quite a few bits and pieces changed as the libraries moved from boost to TR1 and from TR1 to C++11, and then boost keeps improving and expanding its libraries, while C++11 is fixed, for now. We could have a special "See also" category just like we have a "See C reference" for boost-originated components, something like "See also: xxxxx in Boost". --Cubbi 10:40, 5 October 2012 (PDT) - I think third-party libraries should not be linked from the reference itself, because this would result in a mess in the long term. There are many libraries that perform similar tasks to the standard libraries. Moreover, boost is not the only one that has an API very similar to one of the standard libraries. It would be hard to establish criteria that define which libraries can be linked to and which cannot. Therefore I believe this isn't a good idea. A better place for such links would be cpp/links/libs. -- P12 09:49, 20 October 2012 (PDT) - In my opinion having a list of all changes between standard revisions is worthwhile. Maybe not in cpp/language/history, but cpp/changelog for example.
-- P12 09:53, 20 October 2012 (PDT)

More visible search box

The search box is really useful and in most cases, the user wants to find a function (printf, scanf, pow, … for instance) and as of now, it's quite hidden. (BTW, very nice, very well organized, a big thanks to the creators (yes, you!)) 93.20.177.195 16:21, 14 November 2012 (PST) - Thanks for the comment. Do you have any suggestions on how the search box might be made less hidden? I think keeping it in its current location is good, as it's fairly standard for websites to have search in the top-right. One option might be giving the background of the box a light shade of color, or gray. Another option might be outlining the box in a darker color. -Nate 08:23, 15 November 2012 (PST) - As a casual user of this site I think the search box could have more width, and maybe a bit more height. - But it would be much more useful if you could selectively search only the page titles, or the page contents. - Anyway: great work you're doing! --87.148.222.140 13:31, 12 January 2013 (PST)

no vandalism?

Why does this wiki not suffer from vandalism, or only a little? I just did some minor fixes, and it appears that the page contents are only protected by some easy "cpp captcha". Honestly, everyone who skimmed through en.wikipedia.org/wiki/C++ can know what the output of cout <<... is. I'm used to wikipedia, and I'm missing (I don't really miss it) the trolls who happily post ads, or garbage information... if that happens in the future, one should consider a more complex security question. 87.148.222.140 13:46, 12 January 2013 (PST) - Sure, that's in our plans. Hopefully we won't need to enact harsher measures, as they don't help the contributors :) P12 16:26, 12 January 2013 (PST)

C++ keywords search with search box

Tried to search some C++ keywords (constexpr, typename) and it doesn't work with built-in search, but searching for STL stuff works OK. Thanks for this awesome site, btw.
- The search currently only indexes methods and classes. Adding keyword support to search results is in the works, though. Until that happens, you can always use the links to external search engine results or just use an external search engine directly. --Nate 07:20, 9 June 2013 (PDT)

more lang support possible ?

Need a translation for Korean... is it possible? - 07:27, 15 February 2013 (PST) - Create Main Page of [ko:] like this. - How do I change the URL to ? --Kkspoint 21:09, 19 February 2013 (PST) - It's looking like it's coming along nicely. Once the c/language and cpp/language pages are completed, we'll do the server-side updates that will create the actual ko.cppreference.com/* pages. --Nate 06:48, 20 February 2013 (PST) - As the required pages have been translated, I've initialized a Korean wiki at [3]. Good luck! P12 07:43, 2 July 2013 (PDT)

Viewing multiple versions

I think that the approach on this page to presenting historical information is a bit awkward, and is only going to get worse as future standards are released. I'm worried that the historical revision boxes add a lot of clutter and will increasingly not be what most people are looking for. I propose that for inline text like this, we describe behavior according to the current standard, and use the revboxen for historical information. Any thoughts or objections? --Nate 07:34, 2 January 2013 (PST) - In a few months we'll have a draft C++14, so yes, this will get harder. Maybe there's another way? How about a page that shows everything as current, with a clickable tag that unhides the changes from older revisions? Or even tabs that redraw the page completely? --Cubbi 08:31, 2 January 2013 (PST) - This is a good idea in principle, I like it. Here are some thoughts: - Client-side implementation using Javascript is the best option, I think. Server-side implementation would be slower and hard to implement.
- If a user selects a non-default standard version, we probably want to remember his choice. Will need cookies to implement. - It's probably worthwhile to allow direct linking to standard versions via an additional URL parameter. - This would be quite a major change. It may be non-trivial to sync this with the translations. We need to keep this in mind. - Interactive features may not play nice with offline viewers. - These are all good ideas, and I think it's true that they could require some work to fully implement. It seems like a relatively easy first step would be to write the main content for the current standard and include historical notes in revboxes. Then, as we have time, we can figure out how to implement appropriate client-side functionality to show/hide the historical notes. Any objections? --Nate 08:39, 6 January 2013 (PST) - Hmm, but wouldn't it become ambiguous which version of the standard specifies what? At least now we can precisely say that, e.g. x = 1 in C++11, but x = 0 in the previous versions. - In my opinion it's worth dividing the problem into two parts: how we use revboxes at the source level and how they appear to the reader. I'd like to put all changes between standard revisions into revboxes at the wikitext level, because then we have much greater freedom in specifying which information we show. - As for appearance, my vision is that we either show the information about one standard (and explicitly note which one somewhere in a visible location), or we show all the changes between standards in an ugly, but unambiguous table. If a single wikitext definition is used, then implementing switching between these two views wouldn't be hard. - What do you think? - That could definitely work. My main concern is that whatever version we end up showing by default to non-logged-in users should be easy to parse (i.e.
not riddled with diffs or awkward formatting due to a bunch of hidden revboxen), but I think that's achievable as long as we're careful about where we insert revboxen. - To achieve this, we'd need to (a) elaborate the revlist template a bit and (b) add some client-side UI/state that could control the display of whatever the revlist template spits out. Anything else? Off the top of my head, one way to do the UI would be a dropdown similar to the "actions" menu with e.g. C++98, C++03, C++11 in it. --Nate 08:12, 11 January 2013 (PST) - My opinion is almost exactly the same on all points. We don't need much more than a bit of Javascript. As for drop-down menu, it would be quite hard to add it next to the actions menu, because that part of the UI is controlled by the skin file. A possible way around this could be to add the menu below the actions button instead of next to it. The menu could look like the current navbar and possibly integrate with it. - One additional thing that we need to discuss is how we should present the descriptions of classes and lists of functions in the global namespace. For example, should we hide/show the member functions depending on the selected standard? - Assuming you meant e.g. "should cpp/container/vector display a listing for emplace_back when C++98 is selected", that's an excellent question. In addition, if we have a numbered list of overloads (e.g. at the top of cpp/container/vector/insert) then hiding some of them will mean renumbering subsequent references. Also, we should consider the case when the user is viewing e.g. cpp/container/vector/emplace_back -- what should happen if they try to switch to C++98? 
- I think a good option would be having different CSS styles available which can be selected in the user preferences - Some styles that may be useful: - show C++11 without special formatting, C++03 as a collapsible box (and vice-versa) - visually mark all the alternatives - show just C++11 or just C++03 - (As C++11 is not fully implemented someone may be interested in the C++03 reference and only that) - --Bazzy 09:52, 2 January 2013 (PST) - (UPD: looks like cplusplus.com is going with tabs, so we can see how that works out: --Cubbi 13:24, 29 January 2013 (PST)) - That's interesting. I don't really like the "one tab per diff" approach that they use; I feel like if I want to see e.g. C++11 content, I want to see it consistently throughout the current page. Having separately-clickable tabs makes it easier to explore the differences, but I'm still skeptical about how many people really want to examine diffs, versus how many will just want the most recent content. :) --Nate 17:26, 29 January 2013 (PST) - I have pretty much the same opinion. I wonder how they'll implement versioning of member function listings (if they do this at all). - By the way, we can also look at their implementation of declaration block versioning (example with several declarations). Quite unclear, I'd say. Since there's no per-overload description, one needs to look at the parameters section to understand what exactly each overload does. P12 20:37, 29 January 2013 (PST)

Multiple versions brainstorming

Summarizing the above section: - multiple versions are probably worth displaying, but clutter is bad - storing all of the revision information in a canonical format (like revboxen) will allow it to be displayed in a variety of ways - client-side javascript (possibly with cookies) could drive the website UI Here is a (partial) list of page/template types that our solution should consider: - pages with version-specific information - e.g. - how should version differences in the text be displayed?
- pages that describe multiple version-specific items - e.g. - (contains both begin and cbegin, but cbegin is C++11 only) - if we selectively display items from the syntax list, how should numbering work? - pages that describe items that only appear in a single version - e.g. (C++11 only) - should there be an option to view C++98 content on a C++11-only page? - pages that display lists of links - e.g. - summaries with multiple versions - e.g. - (begin and cbegin, but cbegin is C++11 only) - version-specific summaries - version-specific sections of summaries - e.g. - (non-member "numeric conversions" section is all C++11) - should the entire section be selectively displayed? - all pages - sections like "Exceptions", "See also": if they only contain version-specific information, should the entire section be displayed for other versions? - version-specific links: - contains the link: - - ...which jumps to a version-specific location on the type_traits page This is a lot of stuff to consider, so I'd imagine that our ideal solution will be something that we can implement incrementally. For example: - modify revbox/dcllist/etc. templates to allow explicit version tagging, but have templates continue to display current output - change existing version-specific content to use the new templates (some of this could possibly be done by robot) - implement javascript UI to allow version selection --Nate 19:01, 2 February 2013 (PST) - I think that versioning could be implemented entirely in javascript without any new support from MediaWiki templates. Javascript isn't a limited language like they are, so matching of much more complex patterns could be implemented. This means that the necessary information is already included via {{mark since/until **}} templates, so the wikicode doesn't need to be changed (well, except an occasional new html class attribute).
Therefore, we don't need to spend much time syncing with translations and there won't be any new compatibility issues to solve. It's win-win. P12 04:19, 27 February 2013 (PST) - After looking at some examples, I think that we can do a lot of the versioning in javascript, but there are a few places that will warrant template changes. The best examples I've found so far are the use of <br> tags as "fake" table rows. For example, the links to begin and cbegin in Template:cpp/string/basic_string/navbar_content produce HTML with multiple nested parallel table cells, each with <br>-delimited content. Writing rules to match content in these cells to each other while ensuring that no stray newlines were left behind would be difficult and would produce a fairly brittle solution. And it would only get worse when we try to add additional versions. - I think we would have a much more robust system if we replaced e.g. the "fake <br> multiline" begin/cbegin entries with two separate entries. Both entries could still point to the same page. -Nate 17:12, 4 March 2013 (PST) - <br> are used in dcl list templates too. Replacing the newlines with divs or tables would need not only these templates to be changed but also all inclusions where one entry describes several members. <br> is also used in ddcl list templates where one entry describes two different versions of the same function. - The above means that a lot of work would need to be done to adapt the templates and the wikicode would become quite complex. Also in the cases of navbar and dcl list, we would lose the visual feedback that the items within an entry all lead to the same page. - I think that JavaScript should handle all these cases just fine. The script could interpret the <br> tag as a delimiter when traversing the DOM tree.
It shouldn't be very hard to implement the script to handle <td>A<br/>B</td> <td>C<br/>D</td> - as if it was <td><div>A</div><div>B</div></td> <td><div>C</div><div>D</div></td> - So one of the problems with <br>s is that (as far as I know) text children in the DOM ("A" and "B" in your example) can't be manipulated in the same way as element children ("<br/>" in your example) -- specifically, their visibility can't be changed. To deal with this, I think we'd need something like a pre-processing step that actually modifies the DOM, replacing text children (e.g. "A") with wrapped element children (e.g. "<span>A</span>"). - Having to add markup after the fact with javascript seems like a reasonable sign that the wiki is missing some key information. In this case, we're using <br>s -- which are normally used for presentation -- to separate semantically different items. - I don't think that we need to get rid of all of the newline formatting, at least not initially. It's currently only a problem for things like begin/cbegin, where the two items have different semantics. Most of the entries in Template:cpp/string/basic_string/navbar_content that use newlines have the same semantics (e.g. stoi/stol/stoll) so they could remain unchanged. - One thing that I'm a little confused by is how such a change could make the wikicode more complex. I can see it making it longer, since we'll e.g. replace a single begin/cbegin item with two items, but that doesn't seem too horrible. Am I missing something? -Nate 15:52, 6 March 2013 (PST) - The complexity is with dcl and ddcl list templates. Navbar is pretty much safe to change. - I agree with your statement that the wiki misses some information that is indeed necessary. I've concluded that my previous suggestion is quite wrong in multiple ways. - I think that we can do the same with some clever use of templates. 
Essentially we want a<br/>b to be changed to <span>a</span><span>b</span> (divs don't work within links, spans can be display:block via css). The solution could be similar to <span>{{#replace:{{{1|}}}|<br/>|<span/><span>}}</span>. An even better thing would be to introduce some kind of dedicated separator, say '@' or '%%' characters. This could even increase the readability of templates. P12 18:46, 6 March 2013 (PST)

Proposed changes and additions

Fixing this issue is going to be a major change. The list below includes a lot of unrelated changes that could go along. So here it goes: {{rev list begin}}, {{rev list end}} and {{rev list item}} to {{rev begin}}, {{rev end}} and {{rev|since=std-rev|until=std-rev}}. std-rev defaults to the first and the latest standards, respectively. The behavior would be the same as before. Each of the {{rev std-name}} could carry a style attribute that would make hiding items much easier. {{dcl list ***}} templates and all the accompanying templates to {{desc ***}}. {{ddcl list begin}}, {{ddcl list item}}, {{ddcl list end}}. {{ddcl list ***}} templates to {{dcl ***}}. Rename {{ddcl list item}} to {{dcl|since=std-rev|until=std-rev}}. std-rev defaults to the first and the latest standards, respectively. Add {{cpp/std rev}} that would return the revision of the standard the specific feature is defined in for the first time (this could be used to retrieve std-rev in templates). Add a set of templates for displaying changes between revisions, so e.g.
the declarations here could be implemented as:

{{dcl begin}}
{{cpp/container/if ord | {{{1|}}}
| {{dcl rev begin | num=1}}
{{dcl | until=c++11 | void erase( iterator pos ); }}
{{dcl | since=c++11 | iterator erase( const_iterator pos ); }}
{{dcl rev end}}
{{dcl rev begin | num=2 }}
{{dcl | until=c++11 | void erase( iterator first, iterator last ); }}
{{dcl | since=c++11 | iterator erase( const_iterator first, const_iterator last ); }}
{{dcl rev end}}
{{dcl | num=3 | size_type erase( const key_type& key ); }}
| {{dcl | num=1 | since={{cpp/std|{{{1|}}}}} | iterator erase( const_iterator pos ); }}
{{dcl | num=2 | since={{cpp/std|{{{1|}}}}} | iterator erase( const_iterator first, const_iterator last ); }}
{{dcl | num=3 | since={{cpp/std|{{{1|}}}}} | size_type erase( const key_type& key ); }}
}}
{{dcl end}}

{{param list ***}} to {{param ***}} except for {{param list item}} which becomes just {{param}}. {{sparam list ***}} to {{sparam ***}} except for {{sparam list item}} which becomes just {{sparam}}. {{sb list ***}} to {{sb ***}} except for {{nv list item}} which becomes just {{nv}}. {{desc ***}} and {{nv ***}} to separate {{mark std-rev}} templates. The templates themselves will search/replace $$ with a proper HTML construct that CSS can be applied on selectively. {{rev ***}} templates in the Exceptions section in places where we use a lone {{noexcept}} template now. Most of the renames of templates need not be done at once. The transition could be incremental. P12 17:23, 23 May 2013 (PDT) - I like it, and I'm willing to do some grinding to convert over. However, I'm not sure exactly how much work it will be to do this, mostly because I'm unsure how much can be achieved by robots. Care to comment on what parts will have to be done manually vs. automatically? --Nate 16:13, 24 May 2013 (PDT) - Everything except points (10) and (12) has been completed.

Highlighting inline code snippets

We have a lot of places where text is mixed with lots of code snippets.
This leads to loss of readability, since it's hard to tell where a snippet begins and ends. I propose to use a different background color to highlight such snippets. This would apply to all snippets that use the {{c}} template. Compare: No highlighting With highlighting P12 10:47, 18 May 2013 (PDT) - A little highlighting seems like it could be an improvement, at least in the cases that you posted. Will it work as well across all the locations where the "c" template is currently used? For example, inside the various tables on ios_base or on top of the already-darkened background boxes on basic_stringstream? --Nate 12:58, 19 May 2013 (PDT) - I think it should apply everywhere. My reasoning is that by using {{c}} instead of {{tt}}, one asks for highlighting anyway, thus additional background shouldn't make matters worse. The background should be applied to code in darkened boxes too (this will be a bit harder to do properly, but possible -- e.g. using something like semi-transparent black color). - The only problem that I see is that the {{c}} template is currently used to make links, e.g. std::string within text. The issue could be solved by adding a new template for such snippets which would not be highlighted. - The above idea of a new template could be taken even further by developing a specialized extension for link processing. A specialized implementation could work around the current limitations of Geshi. For instance, automatic linking to related member functions could be implemented -- so, e.g. the wikitext {{lc|cbegin()}} in cpp/container/vector/front would automatically link to cpp/container/vector/begin. Such an extension would be faster than Geshi too, thus it's a win-win scenario. - Okay, I think that could work. I do like the idea of a specialized extension for link processing.
My only concern is an aesthetic one; I like the idea of our "minimal" look, and I think we should keep an eye on pages like those I mentioned above to make sure they don't get too visually cluttered. But as you pointed out we can iterate a bit to make sure that we get it right. --Nate 07:32, 20 May 2013 (PDT)

Diff with last translation ?

Hi, when updating translations, it's hard to know whether a page is in sync with the current English version. Is it possible to create tools/macro/template/other to list pages that need an update? For example, add You can help to correct and verify the translation. Click here for instructions. Thanks, Guillaume Belz 11:10, 26 May 2013 (PDT) - I'll see what can be done. There's a possibility that this won't be implemented soon though, since there are a lot of other things in the pipeline. For now, you can translate the current text without checking whether it's out of date or not -- it will be possible to say which translations need to be updated again. The more translations are verified, the higher priority your request will get. P12 07:56, 27 May 2013 (PDT) - Thanks for the response - So I am going to work :) - Guillaume Belz 08:21, 28 May 2013 (PDT) - (Off topic: how do I create my user page?)

News section in the main page

I think it's worth adding a news section to the end of the main page and listing any changes that might be interesting to a lot of people (e.g. new offline package version, support of a new standard, large layout changes, etc.). P12 13:09, 28 May 2013 (PDT) - That could be useful. I think the main page should still be focused on the C++ and C top-level sections, but we can probably find a non-emphatic style for a news section. (Actually, just having such a section below the other two will go a long way in this regard.) One idea is an indented third box below the "C" box with a smaller header. --Nate 17:34, 28 May 2013 (PDT)
In response to my description of how you can use the #error directive to check whether the compiler even sees you, some commenters proposed alternatives. I never claimed that my technique was the only one, just that it was another option available to you. Here are some other options.

scott suggested merely typing asdasdasd into the header file and seeing if you get an error. This usually works, but it can be problematic if the code does not already compile. And of course it doesn't compile, because the reason why you're doing this investigation in the first place is that you can't get your code to compile and you're trying to figure out why. Consequently, it's not always clear whether any particular error was due to your asdasdasd or due to the fact that, well, your code doesn't compile. For example, after adding your asdasdasd to line 41 of file problem.h, you get the error Error: Semicolon expected at line 412 of file unrelated.h. Was that caused by your asdasdasd? Doesn't seem that way, but it actually was, because the preprocessed output looked like this:.

I suppose that, for very large projects at least, pre-compiled headers can be useful to reduce compile times. Most of my projects are not that large, however; and I have usually found that using pre-compiled headers sometimes leads to subtle compile or even link errors that I spend far too much time trying to track down. Naturally, the solution is to "Rebuild All" whenever one of these problems arises, but that certainly seems to make using pre-compiled headers kind of pointless, doesn't it?

Thanks for the link to the pre-compiled header consistency rules. I never knew about that stuff, and it goes some way to explaining how I introduced all those subtle compile and link errors into my own code. ;)

This reminds me of the time I joined a project with 500+ source files. The OBJ directory had compile dates spread across 5 months. And when I asked the "lead" if I could do a clean compile, he asked "why?".
The only good thing about that response was it allowed me to assign the correct assessment level of his skills….

#error is usually the best way of detecting compile errors, for at least as many reasons as Raymond pointed out. Probably the only reason not to is if you have a compiler that doesn't support it… Or does that date me too well.

@doug I'm curious what will happen if the compiler doesn't support #error? Does it just ignore a preprocessor command it doesn't understand? And what answer did you give your lead? And what was your assessment of his skills?

You could always add: asdasdasd++; then the error will clearly say something along the lines of "unrecognized symbol: asdasdasd"

I understand the reasoning behind precompiled headers, but it seems like the most frustrating problems I run into involve precompiled headers. The easiest are ones to which the compiler barfs and identifies a precompiled header as the cause, but I've had obscure build errors that I fixed by disabling precompiled headers. Most annoying yet is when I get fresh source code, build it, and get a precompiled header error – it's a frickin' clean build and the precompiled header is already out of date?

@doug So, let's say it takes the rebuild like 10 hours and the old obj files are perfectly ok. Why do a rebuild? It's not that older obj files rot or rust over time. Unless there is a compiler or build options change, there is no reason to rebuild all.
"the reason why you're doing this investigation in the first place is that you can't get your code to compile and you're trying to figure out why"

I'm not arguing against #error, but sometimes I go the "asdasdasd" route if the project is compiling but the changes I'm making don't seem to have any effect and I want to prove that the file/part I'm editing is actually used. (There are a few ways to fall into that situation, but a simple one is using Find In Files to find some code which turns out to be in an "old/unused code backup" file that's no longer in the project, with the real code elsewhere.) #error definitely generates a clearer error message, and works in more situations, but it also takes slightly more thought*/effort to type than mashing the keyboard in frustration and hitting F7. :-D Sometimes I write a little message to the compiler, like "Hello!? Are you seeing this!??", which takes much longer to write, and produces very spurious error messages, but has a pleasant venting effect on the programmer. :) (*The main reason is habit, really. I didn't learn about #error until after I was used to mashing, so I still instinctively mash.)

I'd have to agree with Raymond that #error is generally the easiest way. However, I've met situations where #error didn't seem to help, and I worked my way around the generated source, which was tedious but doable, mostly thanks to the preprocessor inserting #line directives and such so you don't lose your way. In both cases it turned out the #error was reached both in the good version and in the bad version, but in a different way. But most of the time it works, and it's certainly the easiest, so that makes it the first thing I try in such situations.

@error asdasdasd

There could be latent binary incompatibilities between the old OBJs and the ones that have been recompiled.
A header:

```c
struct old_school {
    int field1;
    int field2_was_short_previously;
    int field3;
};

void useful(struct old_school*);
```

Source to the five-month-old OBJ:

```c
void i_never_change()
{
    struct old_school x;
    x.field3 = 1;
    useful(&x);
}
```

I know of one piece of software that does some hacks with #include. What it does in one file, called let's say abc.c:

```c
#ifndef x
#define x
#define abc(x) <blah>
#include "abc.c"
#undef abc
#define abc(x) <blah 2>
#include "abc.c"
#else
abc("123")
abc("456")
abc("789")
...
#endif
```

If there was ever a weird hack, this is it…

I have experienced, many times on some of my projects (all of which are stock Visual C++ 2008 projects with no special build stuff set up), a bug showing up which disappears when I do a "rebuild all".

"when I asked the "lead" if I could do a clean compile, he asked "why?"… it allowed me to assign the correct assessment level of his skills…."

You'd likely get me asking why as well, for two reasons: 1 – I believe my build system works properly. And hence 2 – If you think it doesn't, I'd like to know *why* you do, so I can find out if it's a problem which needs fixing. Clean builds are a solution to a symptom, not the actual underlying problem. Please don't turn "why" questions into the sort of nerd pissing contest that means problems don't get solved.

@Jonathan Wilson: then you have a regular, common-or-garden dependency bug, which you should track down and fix. I will always do a rebuild all for an actual handover-to-someone-else release, but for development, an incremental build is fine. Virtually any Windows program (that uses windows.h) will benefit from precompiled headers.

Hey, I remember that comment… Over a year later and I just found myself doing it again yesterday, so I guess I haven't changed my ways much. I often do it to make sure that I have the correct define (#ifdef AMD64 was the one I was looking for yesterday) and that I'm compiling what I think I'm compiling.
In the case where the code isn't compiling at all, though, I'd agree that #error makes more sense; why confuse the mess even further?

I always use #pragma message("SOME TEXT"), which has the side effect of not breaking the build, and turns up for every file in which it gets included. One way I use it is if I have some code I've included temporarily, e.g. some OutputDebugString calls. I add some #pragma message stuff in all-caps so that I can see in the build output a reminder to remove it when I'm done testing.

There's a small but nice compiler switch, /showIncludes. Guess what it does. This is useful if you don't know from WHERE an include file was included, for example.

@Mike Dimmick: one cause of dependency problems is Resource.h, which in VC++ is excluded from dependency tracking by default. If you remove a #define from it, code that relies on the #define will no longer compile, and will probably crash when run, but if you don't rebuild all manually you may not notice.

My reasons for having a clean build: VC itself has some holes in the dependency check (e.g. #pragma comment lib, handling resource files). Also, a clean build will turn up some dependency / build-order bugs that can go unnoticed for a long time. A clean build process is de-facto documentation of *how* to build your sources, in case that's all that's left in the source code repository. It also makes Joel happy. Still, incremental build should work in the general case – waiting for full rebuilds is no fun.

"one cause of dependency problems is Resource.h, which in VC++ is excluded from dependency tracking by default."

For anyone who hasn't run into this before, that is because of this line at the top:

//{{NO_DEPENDENCIES}}

which you can add to your own files to get the same behavior. There's nothing magical about the name resource.h. (See TechNote TN035.) I've done that for a similar header file full of function IDs that is automatically generated and then included everywhere.
The existing IDs are stable, but new ones can be added.
https://blogs.msdn.microsoft.com/oldnewthing/20090529-00/?p=18103
CC-MAIN-2018-30
refinedweb
1,749
70.13
VNDB api client implementation and dumps helper

VNDB Thigh-highs

This module provides a VNDB API client implementation. It aims to provide some high-level features to easily use the VNDB API. It also includes some helper functions and classes to easily use database dumps.

API Quick start

```python
from vndb_thigh_highs import VNDB
from vndb_thigh_highs.models import VN

vndb = VNDB()
vns = vndb.get_vn(VN.id == 17)
vn = vns[0]
print(vn.title)
```

Check the documentation for more details.

Dumps Quick start

```python
from vndb_thigh_highs.dumps import TraitsDatabaseBuilder

builder = TraitsDatabaseBuilder()
trait_db = builder.build_with_archive("path/to/traits.json.gz")
trait_id = 186
trait = trait_db.get_trait(trait_id)
print(trait.name)
```

Check the documentation for more details.

Testing

Run test/main.py. By default, tests are run using predefined responses. It is possible to run them against VNDB by editing use_mock_socket = True in test/test_case.py, though logged-in tests require valid credentials in data/login.json. A few troublesome tests are also skipped when using VNDB. Database dump tests need the dumps, compressed and decompressed, in data/.

License

This module is licensed under the AGPLv3.
https://pypi.org/project/vndb-thigh-highs/
Adding gradient details to your application graphics can make a big difference. The out-of-the-box support for this in the Compact Framework is not very good, and you need to custom-build controls with gradient support. This post explains how to create your own gradient buttons in the .NET Compact Framework.

I have written an earlier post about Creating gradient background with transparent labels in .NET Compact Framework, and the approach when creating a gradient button is quite similar. As a base for drawing the gradient I use the same GradientFill.cs and Win32Helper.cs that I used and explained when creating the gradient background. I got GradientFill.cs and Win32Helper.cs from the Microsoft MSDN site: How to Display a Gradient Fill. Their example also has a gradient button, but the example I will show here is a very simple gradient button that is easy to continue developing to suit your needs.

GradientFill.cs begins with the following using directives:

```csharp
using System;
using System.Diagnostics;
using System.Drawing;
using System.Runtime.InteropServices;
```
http://breathingtech.com/tag/net/
On Thu, 11 Sep 2008, Serge E. Hallyn wrote:

> Is it really a problem? The admin can always go ahead and kill the
> user, which already takes care of any mounts in private namespaces,

It's not necessarily against the admin; it could deny service to another user or a script running as root.

The nasty thing is: an unprivileged user can cause unlink/rmdir to fail, even when otherwise it would have succeeded. It's not a huge issue: unprivileged fuse mounts have the same effect, and nobody has complained yet. But I think we have to deal with this in some way, not just leave it to the admin.

Thanks,
Miklos
https://lkml.org/lkml/2008/9/11/183
Building Visual Studio Extensions with Roslyn

Yesterday we talked about the Roslyn Compiler and Workspace APIs. Today we take a look at the Roslyn Service APIs and how they can be used to extend Visual Studio. The extensions we will look at today are Code Issue, Code Refactoring, Completion Provider, and Outliner.

Like all modern Visual Studio extensions, Service APIs are registered using MEF. This means developers simply need to implement certain interfaces and include matching MEF-style attributes, a much-welcomed change from previous versions of Visual Studio, which required code signing and COM registration.

Code Issue

The Code Issue extension allows developers to write their own compiler warnings and errors. The sample project included with the CTP shows compiler warnings wherever the letter 'a' appears in the syntax tree. As you can see from the images below, they integrate quite nicely into Visual Studio's overall workflow.

The ICodeIssueProvider interface is quite simple, consisting only of three variants of the GetIssues method. Each variant receives an IDocument containing all of the information about the file being processed, including the raw text, the syntax tree, the semantic model, and a back-reference to the containing project. A cancellation token is also included in case the IDE needs to abort analysis, perhaps because the user edited a file.

The three overloads also accept one of the three types of syntax: a node, a token, or a trivia. Most of the analysis work will probably be done at the node level. Trivia represents information not needed by the compiler, such as whitespace and comments, and tokens lack the context to be informative. Nodes, on the other hand, represent everything from the top-level namespace declaration down to the smallest expression. When errors are detected, they can be returned to the IDE via an enumeration of CodeIssue.
A CodeIssue consists of a severity level (Information, Warning, or Error), a Span object indicating where the error is located, and a description of the error.

Quick Fix

Code Issues may also include one or more ICodeAction objects. These objects allow developers to provide auto-correct options much like what is seen in the image below. Building an ICodeAction and matching ICodeActionEdit is significantly more difficult than creating code issues. One needs to learn how to edit syntax trees and publish the updates via the IWorkstation interface. The Roslyn site includes a walk-through for writing Quick Fixes.

Code Refactoring

The support for Code Refactoring looks a lot like the "quick fix" support for Code Issues, but it is applied at the text level. The ICodeRefactoringProvider is supplied with a document and TextSpan and is expected to return a CodeRefactoring object. This object simply contains a collection of ICodeAction objects, just like the CodeIssue object discussed above. The project template for code refactoring doesn't include a usable demo, but the same techniques shown in the Quick Fix walkthrough can be used here.

Completion Provider

The ICompletionProvider interface has a single method called GetItems. This accepts an IDocument and a position parameter represented as an integer. From this an enumeration of CompletionItem is returned. Each CompletionItem requires the text to be displayed. Developers may also include an icon, description, and/or alternate text to be inserted. (Presumably the insertionText, if not supplied, defaults to the display text.) While nowhere near as useful as the other providers, one could still do interesting tricks like building templates that are too complex for the normal code snippet infrastructure.

Outlining

The final project template is the Syntax Outliner, exposed via the ISyntaxOutliner interface.
This is used to create collapsible outlines in the text editor, much like we have for regions, classes, and methods. The interface receives a syntax node and is expected to return an enumeration of OutliningSpan objects, each of which has a TextSpan to encompass, a HintSpan (for mouse-over text), banner text, and the option to AutoCollapse.
https://www.infoq.com/news/2011/10/Rosyln-Extensions
How to get MD5 Sum of a String in Python?

In this article, we will learn how to get the MD5 sum of a given string in Python. We will use a built-in function to find the sum. Let's first have a quick look at what MD5 is in Python.

MD5 Hash in Python

MD5 is one of the hash functions available in Python's hashlib library. It is mainly used in cryptographic functions to perform hash calculations. Hashing is also used to check the checksum of a file, for password verification, fingerprint verification, building caches of large data sets, etc. The MD5 function accepts a byte string and outputs the equivalent hexadecimal string of the encoded value. Encoding a string to an MD5 hash produces a 128-bit hash value.

Hashing algorithms typically act on binary data rather than text data, so you should be careful about which character encoding is used to convert from text to binary data before hashing. The result of a hash is also binary data.

In this article, we will import the hashlib library and use the hashlib.md5() function to find the MD5 sum of the given string in Python. Three functions are mainly used here:

1. encode(): encodes and converts the given string into bytes, making it acceptable to the hash function.
2. digest(): returns the encoded data in byte format.
3. hexdigest(): returns the encoded data in hexadecimal format. It returns a 32-character-long digest.

Example: Use hashlib.md5() to get MD5 Sum of a String

This method imports the hashlib library. The example below calls hashlib.md5() with a byte-string argument to return an MD5 hash object. It calls str.encode() on the string to return an encoded byte string. The hexdigest() function is then called to display the encoded data in hexadecimal format; alternatively, you can call the digest() function to display the data in byte format. The md5 hash function encodes the string, and the byte-equivalent encoded string is printed.
Python 2.x Example

```python
import hashlib

# using hexdigest()
print hashlib.md5("This is a string").hexdigest()
print hashlib.md5("000005fab4534d05key9a055eb014e4e5d52write").hexdigest()
```

Output:

```
41fb5b5ae4d57c5ee528adb00e5e8e74
f927aa1d44b04f82738f38a031977344
```

Python 3.x Example

```python
import hashlib

# using hexdigest()
print(hashlib.md5("This is a string".encode('utf-8')).hexdigest())
print(hashlib.md5("000005fab4534d05key9a055eb014e4e5d52write".encode('utf-8')).hexdigest())

# using digest()
print(hashlib.md5("This is a string".encode('utf-8')).digest())
print(hashlib.md5("000005fab4534d05key9a055eb014e4e5d52write".encode('utf-8')).digest())
```

Output:

```
41fb5b5ae4d57c5ee528adb00e5e8e74
f927aa1d44b04f82738f38a031977344
b'A\xfb[Z\xe4\xd5|^\xe5(\xad\xb0\x0e^\x8et'
b"\xf9'\xaa\x1dD\xb0O\x82s\x8f8\xa01\x97sD"
```

Note:

1. If you need byte-type output, use digest() instead of hexdigest().
2. You must have noticed in the above examples that Python 2 does not require utf-8 encoding but Python 3 does. If you run the program in Python 3 without encode(), you will get an error.

Reason: The MD5 function takes a byte string and does not accept Unicode. Python 3 is explicit, so str ("") is Unicode and has to be encoded to a byte string. Strings in Python 2 can be interpreted as either byte strings or Unicode strings, and a passed str ("") is interpreted as a byte string. If the string has Unicode characters, it will raise an exception. Encoding a byte string will leave ASCII characters untouched and convert Unicode correctly.

Conclusion

In this article, we learned about the hashlib.md5() function to get the MD5 sum of a string. We discussed the MD5 hash function and why it is used. We saw the implementation of the hash function in both Python 2 and 3.
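Since the article notes that MD5 is also used to check the checksum of a file, here is a small companion sketch (the helper name and chunk size are ours, not from the article) that streams a file through hashlib.md5 so large files are never loaded into memory at once:

```python
import hashlib

def md5_of_file(path, chunk_size=8192):
    """Return the hex MD5 digest of a file, reading it in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        # iter() with a b"" sentinel keeps calling f.read() until end of file
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

A file whose contents are the bytes b"This is a string" hashes to the same 41fb5b5ae4d57c5ee528adb00e5e8e74 shown above, since MD5 sees only bytes either way.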
https://www.studytonight.com/python-howtos/how-to-get-md5-sum-of-a-string-in-python
Gregory Leake, Microsoft Corporation
James Duff, Vertigo Software, Inc.
May 2003

Applies to:
Microsoft® .NET Framework 1.0 and 1.1
Microsoft® Windows 2000 and Windows Server™ 2003
Microsoft® Internet Information Services
Microsoft® SQL Server™ 2000
Oracle® 9i Database

Summary: Version 3.x of the .NET Pet Shop addresses the valuable feedback given by reviewers of the .NET Pet Shop 2.0 and was created to ensure that the application is aligned with the architecture guideline documents being produced by Microsoft. (20 printed pages)

Download the Pet Shop 3.0 Installer.msi code sample.

Contents
Abstract
What is the Java Pet Store?
Microsoft .NET Pet Shop Business Requirements
Application Data Model
.NET Pet Shop 2.0 Architecture
.NET Pet Shop 3.0 Architecture
Conclusions
Appendix A: Changes between Versions 2 and 3

Abstract

The purpose of the original .NET Pet Shop study was to take Sun's primary J2EE blueprint application, the Sun Java Pet Store, and implement the same application functionality with Microsoft .NET. Based on the .NET implementation of Sun's J2EE best-practice sample application, customers can directly compare Microsoft's .NET technology to J2EE-based application servers across a variety of fronts while learning about the similarities and differences in the recommended design patterns for building Web-based applications.

The .NET Pet Shop application is now in its third revision and is designed to show .NET best practices for building enterprise n-tier applications which may need to support a variety of database platforms and deployment models. The .NET Pet Shop 3.0 has been re-architected, based on community feedback on the .NET Pet Shop 2.0, to comply with Microsoft's Prescriptive Architecture Guidance as published on MSDN.
The third revision is also fully compliant with the Middleware Company Application Server Benchmark Specification, and will serve as Microsoft's entry in the upcoming Middleware Application Server Benchmark this spring: a second round of testing by the Middleware Company to compare the scalability of .NET and the J2EE platforms for building and hosting enterprise Web applications.

What is the Java Pet Store?

The Java Pet Store is a reference implementation of a distributed application according to the J2EE blueprints maintained by Sun. The sample application was originally created to help developers and architects understand how to use and leverage J2EE technologies, and how the J2EE platform components fit together. The Java Pet Store demo includes documentation and full source code for the Enterprise Java Beans (EJB) architecture, Java Server Pages (JSP) technology, tag libraries, and servlets that build the application. In addition, the Java Pet Store blueprint application demonstrates certain models and design patterns through specific examples. The full Java Pet Store contains three sample applications.

The original version of the Java Pet Store was designed to work with the following databases: Oracle, Sybase, and Cloudscape. IBM has created a DB2 port of the application. The application is publicly available at the Java 2 Platform Enterprise Edition Blueprints.

The premise of the main application, the Java Pet Store, is an e-commerce application where you can buy pets online. When you start the application you can browse and search for various types of pet, from dogs to reptiles. A typical session using the Java Pet Store would be:

- Homepage: the main page that loads when the user first starts the application.
- Category View: there are five top-level categories: Fish, Dogs, Reptiles, Cats, and Birds. Each category has several products associated with it. If we select Fish as the category, we might see "Angelfish", etc.
- Products: if you now select a product, the application will display all variants of the product. Typically the product variant is either male or female.
- Product Details: each product variant (represented as an item) has a detailed view that displays the product description, a product image, the price, and the quantity in stock.
- Shopping Cart: allows the user to manipulate the shopping cart (add, remove, and update line items).
- Checkout: the checkout page displays the shopping cart in a read-only view.
- Login Redirect: when the user selects "Continue" on the checkout page, they are redirected to the login page if they have not signed in yet.
- Verify Sign In: after being authenticated onto the site, the user is redirected to the credit card and billing address form.
- Confirm Order: the billing and shipping addresses are displayed.
- Commit Order: this is the final step in the order-processing pipeline. The order is committed to the database at this point.

Figure 1. Java Pet Store

The goal of the .NET Pet Shop was to focus solely on the Java Pet Store (the administration and mailer components were not implemented in .NET). In addition to reproducing the functionality of the Java Pet Store application, two additional objectives were added:

Figure 2. .NET Pet Shop

The overall logical architecture of the .NET Pet Shop is given in Figure 3. The main focus of the design was to use ASP.NET Web Forms for the presentation tier, which communicate with C# business components in a logical middle tier. In turn, the business components access a back-end database through ADO.NET and a SQL Server helper class known as the Data Access Application Block (DAAB); see the DAAB documentation for more information and to download the full DAAB source code. Data access is fully abstracted into a Data Access Layer (DAL), separate from the business logic layer (BLL). New with the .NET Pet Shop 3.0, we have introduced a DAL layer for both Oracle 9i and SQL Server 2000 databases.
Class loading of the appropriate DAL occurs dynamically at runtime based on an application configuration setting in Web.config. Note that .NET Pet Shop 3.0 uses two back-end databases, and the order process involves a distributed transaction across these two databases. Using simple Web.config application settings, users can deploy the .NET Pet Shop to work with single or multiple back-end databases, and can freely mix SQL Server and Oracle back-end databases, with the distributed transaction handled by .NET serviced components using COM+ Enterprise Services.

Figure 3. .NET Pet Shop high-level logical architecture

Figure 4 shows how the Microsoft .NET Pet Shop might be physically deployed. In this case, inbound network traffic is split between the two application servers using Network Load Balancing (NLB) or perhaps hardware load balancing. Once a network request has reached one of the machines in the cluster, all the work for that request will be performed on that particular machine. The business logic and data access components will be installed as assemblies on both servers, which in essence will be identical clones of one another. If the load-balancing software is configured to use "sticky IPs", then each server can have its own session-state store, because a second request is guaranteed to return to the server where the first request was fulfilled. For a more fault-tolerant solution, the two application servers can share a common session-state store such as SQL Server or a dedicated session server (not shown on the diagram). The type and location of the session-state store are determined by the values in the sessionState child node of the system.web element in the web.config file for each Web site.

Figure 4.
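The sessionState setting described above can be sketched as a web.config fragment. This is a hedged illustration only: the mode, server placeholder, and timeout are invented values, not the Pet Shop's actual configuration.

```xml
<!-- web.config fragment: move session state out of process so any
     server in the cluster can answer any request. All values below
     are illustrative placeholders. -->
<system.web>
  <sessionState mode="SQLServer"
                sqlConnectionString="data source=SESSION_SERVER;user id=USER;password=PASSWORD"
                cookieless="false"
                timeout="20" />
</system.web>
```

With mode="StateServer" the same element points at a dedicated session server instead, while mode="InProc" keeps state on each Web server, which is the sticky-IP scenario.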
.NET Pet Shop physical deployment diagram

As part of the architecture documentation for Pet Shop 3, we have laid out what we considered to be the business requirements for the .NET Pet Shop, so that developers and customers can understand some of the trade-offs involved in our design decisions. What are the functional requirements of the Pet Shop application?

Application Data Model

The database schema used in the .NET Pet Shop is ported directly from the Java Pet Store. The Java Pet Store supports several database vendor formats, so we took the schema for the Sybase version and created it in a Microsoft SQL Server 2000 instance. This required no changes to the Sybase version of the schema. For the Oracle version of the .NET Pet Shop, we took the original Oracle implementation of the Java Pet Store database. The database has the overall structure of tables shown in Table 1.

Table 1. Database tables in the Pet Shop

With version 2 of the .NET Pet Shop, the application was modified to create a scenario where a distributed transaction would be required to complete the order process. For the distributed transaction scenario, the Orders, OrderStatus, and LineItem tables have been split off into a separate database instance that can be installed on a separate machine. We have kept this distributed design pattern in the third version of the .NET Pet Shop.

Figure 5. .NET Pet Shop orders database schema

Figure 6. .NET Pet Shop accounts and products database schema

There are some changes that could be made to the schema used in the application; however, we have not made them, in order to stay in line with the schema provided by the Java Pet Store reference implementation. These possible changes are listed in Table 2.

Table 2. Possible improvements to the Pet Shop schema

Figure 7.
.NET Pet Shop 2 Application Architecture

.NET Pet Shop 2.0 was designed to be deployed in a physical two-tier deployment and took advantage of this fact in the implementation of certain parts of the application. The application consisted of a Web tier built with ASP.NET Web Forms, which used "code-behind" to separate the application HTML from the user-interface code. The middle tier contained business components to control the application logic, which talked to a SQL Server database through a customized version of the Microsoft Data Access Application Block (DAAB). To support distributed transactions, some of the middle-tier business components were implemented using Enterprise Services. With Microsoft .NET this is the recommended way to support distributed transactions. However, not all classes extended the ServicedComponent class, as there is a performance overhead in implementing all classes as Enterprise Serviced Components.

When publishing the .NET Pet Shop 2.0, which was used in the first Middleware Application Server benchmark, we received much feedback on optimizations that should be made to the architecture to make it more relevant to large-scale enterprises. The major areas of feedback included:

Figure 8. .NET Pet Shop 3.0 application architecture

Table 3. Application areas in the Pet Shop solution

Figure 9 shows the Microsoft Visual Studio .NET solution layout for the .NET Pet Shop application. Each element or tier of the application was given its own project, to make the solution manageable and to clearly define where new classes used in the application should be placed and old ones can be found.

Figure 9. Visual Studio .NET application solution
When designing the database access mechanism for the application, we had a choice as to which database providers we should use; we could either use the generic OLE-DB managed provider or the database-specific, performance-optimized .NET managed providers, such as the SQL Server and Oracle managed providers provided with the .NET Framework 1.1. One of the key requirements for the application was to create a high-performance solution, so we chose to build the application using the database-native .NET managed providers. For an analysis of the difference in performance between the managed providers and the generic OLE-DB providers, readers should refer to Using .NET Framework Data Provider for Oracle to Improve .NET Application Performance, which shows that the vendor-specific providers can perform two to three times better than the equivalent OLE-DB provider. The tradeoff that we made when choosing to go with a database-specific access class was that we would need to write a separate data access layer for each database platform we wanted to support and hence the application would contain more code. While the two data access layers share much common code, each is clearly targeted for use with a specific database (Oracle or SQL Server 2000). In order to simplify the use of the database access classe,s we elected to use the Factory Design Pattern as outlined by the GoF, with reflection being used to dynamically load the correct data access objects at runtime. The factory is implemented as follows: a C# interface is created with a method declared for each method that must be exposed by the database access classes. For each database that we want to support, we create a concrete class that implements the database specific code to perform each of the operations in the interface or "contract." 
To support the runtime determination of which concrete class to load, we create a third class, the factory itself, which reads a value from the configuration file to determine which assembly to load using reflection. With the reflection namespace in .NET, you can load a specific assembly and create an instance of an object from that assembly. To make the application more secure and provide improved support for versioning, we can add "evidence" about the assembly file to load in the application configuration file, in this case web.config. This means that the .NET Framework will only load the assembly that we signed during compilation and that has the correct version number.

Figure 10 shows how the business logic, factory, and data access classes interact with each other. The key advantage of this design is that the database access class can be compiled after the business logic classes, as long as the data access class implements the IDAL interfaces. This means that should we want to create a DB2 version of the application, we do not need to change the business logic tier (or UI tier). The steps to create a DB2-compatible version are:

No changes or recompiles need to be performed on the business logic components.

Figure 10. DAL factory implementation in .NET Pet Shop

Normally we would recommend to customers that they use stored procedures to access tables in the database. There are several reasons for this:

The disadvantage of stored procedures is that they tend to be proprietary and are not portable across platforms. However, to make the best use of the financial investment that you have made in your database software and hardware, developers will tend to optimize the SQL used in their applications for their specific database engine, regardless of whether the SQL is in a stored procedure or generated in the middle tier.
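The interface-plus-factory-plus-reflection arrangement described above can be sketched outside of C#. This is a hedged illustration only: Python's importlib stands in for .NET reflection, and the module, class, and configuration names are invented for the demo, not the real Pet Shop identifiers.

```python
import importlib
import sys
import types

# Stand-in "assembly": in the Pet Shop this would be a separately built,
# signed DLL containing the SQL Server (or Oracle) DAL classes.
sql_dal = types.ModuleType("sql_server_dal")

class SqlOrderDAL:
    """Concrete class honoring the IDAL-style 'contract'."""
    def get_order(self, order_id):
        return {"id": order_id, "source": "SQL Server"}

sql_dal.OrderDAL = SqlOrderDAL
sys.modules["sql_server_dal"] = sql_dal  # make it importable for the demo

# Web.config equivalent: names the concrete DAL to load at runtime.
CONFIG = {"dal": "sql_server_dal.OrderDAL"}

def create_dal():
    """The factory: read config, load the module, instantiate the class."""
    module_name, class_name = CONFIG["dal"].rsplit(".", 1)
    cls = getattr(importlib.import_module(module_name), class_name)
    return cls()

print(create_dal().get_order(17)["source"])  # prints: SQL Server
```

Swapping to an Oracle DAL is then a one-line configuration change with no recompilation of the business tier, which is exactly the property the article claims for the C# implementation.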
A good example of this is generating unique numbers or identity values, as all databases support their own particular mechanisms for doing this, and so the SQL used to generate the unique number tends to be specific to the database being used. There are always alternatives, but they do not perform as fast as the proprietary solutions. With the .NET Pet Shop we took the conscious decision not to use stored procedures in the application, as this might be seen as an unfair advantage for the .NET solution in the Middleware Benchmark. In reality the difference in performance is small, because the application is relatively simple and most of the SQL statement execution plans are cached in the database. However, for the purposes of the Middleware Benchmark specification, which disallows the use of stored procedures even to wrap simple SQL statements, the .NET Pet Shop 3.0 uses no stored procedures. One of the most effective ways to improve the performance of a database-driven application is to avoid accessing the database on every request. ASP.NET offers a variety of caching mechanisms that boost performance in most applications. The two main ways to use caching in ASP.NET are output caching and data caching. Page-level output caching takes the response from an ASP.NET Web page and stores the full page in a cache. Data caching is designed to work between the Web tier and the middle tier and caches the results/data of middle-tier methods, or the results of database calls in the case of two-tier apps. The first .NET Pet Shop distribution was offered in a page-level output-cached version and a noncached version. Version 3 only supports data caching, but can easily be modified to support output caching. With Windows Server 2003 and IIS 6.0, certain output-cached pages (those with VaryByParam="none" and the directive to cache 'Anywhere') can also be kernel-level cached, with even faster access from Internet clients.
Regardless, any output-cached page (kernel or nonkernel cached) is served extremely quickly by the Windows application server with much less resource (CPU) consumption, since no processing actually occurs to recreate the page. ASP.NET Output Caching The original .NET Pet Shop application, version 1.5, also used a variation on page-level output caching, partial page caching (fragment caching), to cache different regions of a page independently. Object Caching The object cache (Cache API) allows you to store the results of a method call or database query in the internal caching engine within the .NET Framework. Since a request has gone further through the application pipeline by this point, the data cache may not offer the same performance boost that output caching yields, since HTML pages still must be dynamically constructed on each request. However, it does offer a good compromise between fully dynamic pages and reducing the load on the database by storing nonvolatile data in the middle tier. For example, you may want to display the same piece of data differently in two Web pages, or use the cached objects or data differently in various pages of the same application. It is also possible to monitor the turnover rate and the hit rate within Perfmon to detect whether an item is being used in the cache. Table 5. Summary of caching options in .NET The Middleware Company has defined strict rules for the benchmark application as to what can be cached and how it can be cached. Basically, page-level output caching is disallowed, but middle-tier data caching is allowed in certain places in the application. The following data was allowed to be cached; it does not need to be refreshed from the database on each request: category information, product information, item information (excluding inventory data), and some account information.
In terms of inventory data, the current quantity in stock should reflect an up-to-date value whenever an item is added to the cart or a user navigates to the 'ItemDetails' page. For the account information, the username and password have to be validated on every single login attempt, and the address used for billing information should always be refreshed from the database to ensure that it is up to date during a checkout process. The implication of this is that the results for most product searches and browse operations can be cached in the middle tier using a data/object cache set to expire on a given interval. The second rule for the benchmark is that page-level output caching should not be allowed; the web server should always be required to recreate the rendered HTML for a page. The reason for this is to test the ability of the application servers to generate dynamic pages, not how fast the server can pull HTML from a cache. For this version of the application, we used data caching in the code-behind of the ASPX pages to cache the results of middle-tier (business logic tier) requests. The following sample code shows how one would access the ASP.NET cache API to retrieve information from the cache. The example is based on the category query, category.aspx.cs, in the Pet Shop application. In the Pet Shop a user can select one of the five predefined categories of pet and view a list of animals in that category. The first thing the code does is check whether the data has already been cached, by using the category ID as a key to look up an element in the data cache. If this element does not exist, either because the data has not yet been cached or because the current cache entry has expired, the lookup will return null. If the element exists, the data from the cache is pulled and cast to the appropriate type. If the data is not in the cache, the middle tier is invoked to query for the data.
The results from the middle-tier query are then added to the cache with an expiry time of 12 hours from now. Alternatively, we could expire the cache based on a fixed time of day, a dependency on another cache item, or by supplying a callback function which could be used to clear the cache.

// Get the category from the query string
string categoryKey = WebComponents.CleanString.InputText(Request["categoryId"], 50);

// Check to see if the contents are in the data cache
if (Cache[categoryKey] != null)
{
    // If the data is already cached, then use the cached copy
    products.DataSource = (IList)Cache[categoryKey];
}
else
{
    // If the data is not cached,
    // then create a new products object and request the data
    Product product = new Product();
    IList productsByCategory = product.GetProductsByCategory(categoryKey);

    // Store the results of the call in the cache
    // and set the time out to 12 hours
    Cache.Add(categoryKey, productsByCategory, null,
              DateTime.Now.AddHours(12), Cache.NoSlidingExpiration,
              CacheItemPriority.High, null);
    products.DataSource = productsByCategory;
}

// Bind the data to the control
products.DataBind();

The architecture of the .NET Pet Shop has been enhanced to provide a more flexible solution in terms of deployment options, and the application is more readily customizable and adaptable to changes in business models. Despite all these changes, the performance of the Pet Shop has remained roughly the same as the 2.0 implementation, and shows that the Microsoft .NET Framework offers a viable alternative to J2EE for enterprise solutions. Using the .NET Pet Shop and J2EE Pet Store, architects and developers can compare .NET and J2EE side by side using functionally identical reference applications that illustrate best coding practices for each platform.
The upcoming Middleware Benchmark will test the new .NET Pet Shop 3.0 implementation to compare its performance to two new J2EE implementations: one based on CMP 2.0 and one based on a pure JSP/Servlet architecture with no EJBs. The testing will be conducted by the Middleware Company and published on TheServerSide with a variety of J2EE application servers. A panel of J2EE experts has been established and is overseeing this benchmark to ensure fairness across tested products and adherence to specification/best practice for all implementations. In addition, the major J2EE application server vendors have been invited, along with Microsoft, to participate by commenting on the specification, providing implementations to be tested, and being on site for tuning/optimizations and the testing process. Microsoft has opted to fully participate in this second round of testing.
http://msdn.microsoft.com/en-us/library/ms954623.aspx
This part of the Bookshelf app tutorial shows how to create a sign-in workflow for users and how to use profile information to provide users with personalized functionality. By using Google Identity Platform, you can easily access information about your users while ensuring their sign-in credentials are safely managed by Google. Use OAuth 2.0 to provide a sign-in workflow for all users of your app. It provides your app with access to basic profile information about authenticated users. This page is part of a multi-page tutorial. To start from the beginning and read the setup instructions, go to Ruby Bookshelf app. Creating a web app client ID A web app client ID lets your app authorize users and access Google APIs on behalf of your users. In the Google Cloud Platform Console, go to Credentials. Click OAuth consent screen. For the product name, enter Ruby Bookshelf App. For Authorized Domains, add your App Engine app name as [YOUR_PROJECT_ID].appspot.com. Replace [YOUR_PROJECT_ID] with your GCP project ID. Fill in any other relevant, optional fields. Click Save. Click Create credentials > OAuth client ID. In the Application type drop-down list, click Web Application. In the Name field, enter Ruby Bookshelf Client. In the Authorized redirect URIs field, enter the following URLs, one at a time. http://[YOUR_PROJECT_ID].appspot.com/auth/google_oauth2/callback https://[YOUR_PROJECT_ID].appspot.com/auth/google_oauth2/callback Click Create. Copy the client ID and client secret and save them for later use. Installing dependencies Go to the getting-started-ruby/4-auth directory, and enter this command: bundle install Configuring settings Copy the example settings file: cp config/settings.example.yml config/settings.yml Edit the settings.yml file. Replace [YOUR_CLIENT_ID] with your web app client ID and [YOUR_CLIENT_SECRET] with your web app secret.
default: &default
  oauth2:
    client_id: [YOUR_CLIENT_ID]
    client_secret: [YOUR_CLIENT_SECRET]

In the settings.yml file, replace the values for project_id and gcs_bucket with the values you used in the Using Cloud Storage step of this tutorial (from your database.example.yml file). The rest of this page walks through the sample code and explains how it works. User sign-in The form for the app's home page has a new link so users can sign in. <%= link_to "Login", login_path %> The /login route is configured in the config/routes.rb file. get "/login", to: redirect("/auth/google_oauth2") When you click Login, you are redirected to the Google OAuth 2.0 user consent screen. The OmniAuth and Google OAuth2 Ruby gems provide support for the sign-in workflow. gem "omniauth" gem "omniauth-google-oauth2" The omniauth.rb initializer file configures OmniAuth to use Google OAuth 2.0 for user sign-in. The code in the omniauth.rb file reads configuration settings from the oauth2 section of config/settings.yml.

Rails.application.config.middleware.use OmniAuth::Builder do
  config = Rails.application.config.x.settings["oauth2"]
  provider :google_oauth2, config["client_id"], config["client_secret"], image_size: 150
end

When you click Allow on the Google OAuth 2.0 user consent screen, the authorization service issues an HTTP GET request for the sample app's /auth/google_oauth2/callback route, which is configured in the routes.rb file. get "/auth/google_oauth2/callback", to: "sessions#create" resource :session, only: [:create, :destroy] The handler for the callback route is the create method of the sample app's SessionsController class. The omniauth.auth request variable provides your profile information, which includes your identifier, display name, and profile image. The create method reads the profile information and stores it in a User object that is serialized into the session.

class SessionsController < ApplicationController
  # Handle Google OAuth 2.0 login callback.
  #
  # GET /auth/google_oauth2/callback
  def create
    user_info = request.env["omniauth.auth"]
    user = User.new
    user.id = user_info["uid"]
    user.name = user_info["info"]["name"]
    user.image_url = user_info["info"]["image"]
    session[:user] = Marshal.dump user
    redirect_to root_path
  end

The app's home page has new elements to display information about the signed-in user.

<% if logged_in? %>
  <% if current_user.image_url %>
    <%= image_tag current_user.image_url, class: "img-circle", width: 24 %>
  <% end %>
  <span>
    <%= current_user.name %>
    <%= link_to "(logout)", logout_path %>
  </span>

The logged_in? and current_user helper methods read the User object in the user's session.

class ApplicationController < ActionController::Base
  helper_method :logged_in?, :current_user

  def logged_in?
    session.has_key? :user
  end

  def current_user
    Marshal.load session[:user] if logged_in?
  end

List a user's books When a signed-in user adds a new book, their user ID is associated with the book:

def create
  @book = Book.new book_params
  @book.creator_id = current_user.id if logged_in?
  if @book.save
    flash[:success] = "Added Book"
    redirect_to book_path(@book)
  else
    render :new
  end
end

The app's home page has a new Mine link, so signed-in users can view only the books they have added.

<% if logged_in? %>
  <li><%= link_to "Mine", user_books_path %></li>
<% end %>

The index action of the UserBooksController queries for books that were added by the signed-in user. Cloud SQL / PostgreSQL

@books = Book.where(creator_id: current_user.id).
limit(PER_PAGE).offset(PER_PAGE * page)

Cloud Datastore

@books, @more = Book.query creator_id: current_user.id, limit: PER_PAGE, cursor: params[:more]

def self.query options = {}
  query = Google::Cloud::Datastore::Query.new
  query.kind "Book"
  query.limit options[:limit] if options[:limit]
  query.cursor options[:cursor] if options[:cursor]
  if options[:creator_id]
    query.where "creator_id", "=", options[:creator_id]
  end

User sign-out The form for the app's home page has a new logout link so that users can sign out. <%= link_to "(logout)", logout_path %> The logout_path is configured in the config/routes.rb file. get "/logout", to: "sessions#destroy" The destroy action of the SessionsController deletes the user from the session.

def destroy
  session.delete :user
  redirect_to root_path
end
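The create action above stores the signed-in user with Marshal.dump, and current_user restores it with Marshal.load. That round-trip can be sketched as a stand-alone Ruby script, with no Rails involved; the User fields mirror the tutorial, and the values here are made up.

```ruby
# Stand-alone sketch of the Marshal round-trip performed by the
# SessionsController#create action and the current_user helper.
class User
  attr_accessor :id, :name, :image_url
end

session = {}  # stand-in for the Rails session store

user = User.new
user.id = "12345"
user.name = "Alex Example"
session[:user] = Marshal.dump(user)          # what create does

current_user = Marshal.load(session[:user])  # what current_user does
puts current_user.name                       # prints Alex Example
```

One design note: serializing the whole User object keeps the sample simple, but production apps more commonly store only a user ID in the session and load fresh profile data per request.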
https://cloud.google.com/ruby/getting-started/authenticate-users?hl=no
A smart pointer to a reference counted object. More... #include <IMP/Pointer.h> A smart pointer to a reference counted object. Any time you store an Object in a C++ program, you should use a Pointer, rather than a raw C++ pointer (or a PointerMember, if the pointer is stored in a class). Using a Pointer manages the reference counting and makes sure that the object is not deleted prematurely when, for example, all Python references go away, and that it is deleted properly if an exception is thrown during the function. Use the IMP_NEW() macro to aid creation of pointers to new objects. For example, when implementing a Restraint that uses a PairScore, store the PairScore like this: When creating Object instances in C++, you should write code like: which is equivalent to There are several important things to note in this code: Definition at line 87 of file Pointer.h. get the raw pointer to the object Relinquish control of the raw pointer stored in the Pointer. Use this to safely return objects allocated within functions. The reference count of the object will be decreased by one, but even if it becomes 0, the object will not be destroyed.
https://integrativemodeling.org/nightly/doc/ref/structIMP_1_1Pointer.html
Attribute names may also be written with '__' before and after the name (for example, __aligned__). Used with no alignment argument, the aligned attribute sets the alignment to the default for the target architecture you are compiling for. The default alignment is sufficient for all scalar types, but may not be enough for all vector types on a target that supports vector operations. deprecated (msg): using the variable anywhere in the source file generates a warning; the optional msg argument, which must be a string, is printed with the warning. section ("name"): changes what section the variable goes into and may cause the linker to issue an error if an uninitialized variable has multiple definitions. When applied to a static data member of a C++ class template, the used attribute also means that the member is instantiated if the class itself is instantiated. vector_size (bytes): int foo __attribute__ ((vector_size (16))); causes the compiler to set the mode for foo to be 16 bytes, divided into int-sized units. Assuming a 32-bit int (a vector of 4 units of 4 bytes), the corresponding mode of foo is V4SI. The weak attribute is described in Function Attributes. dllimport: The dllimport attribute is described in Function Attributes. dllexport: The dllexport attribute is described in Function Attributes. progmem: The progmem attribute is used on the AVR to place read-only data in the non-volatile program memory (flash). The progmem attribute accomplishes this by putting respective variables into a section whose name starts with .progmem. This attribute works similarly to the section attribute but adds additional checking. Notice that, just like the section attribute, progmem affects the location of the data but not how this data is accessed. In order to read data located with the progmem attribute, (inline) assembler must be used.

/* Use custom macros from AVR-LibC */
#include <avr/pgmspace.h>

/* Locate var in flash memory */
const int var[2] PROGMEM = { 1, 2 };

int read_var (int i)
{
    /* Access var[] by accessor macro from avr/pgmspace.h */
    return (int) pgm_read_word (& var[i]);
}

AVR is a Harvard architecture processor, and data and read-only data normally reside in the data memory (RAM). See also the AVR Named Address Spaces section for an alternate way to locate and access data in flash memory.
Three attributes are currently defined for the Blackfin: l1_data, l1_data_A, and l1_data_B. Variables with the l1_data attribute are put into the specific section named .l1.data. Those with the l1_data_A attribute are put into the specific section named .l1.data.A. Those with the l1_data_B attribute are put into the specific section named .l1.data.B. l2: Variables with the l2 attribute are put into the specific section named .l2.data. (On the M32R/D, the compiler otherwise generates seth/add3 instructions to load variables' addresses.) A variable with the based attribute is assigned to the .based section. Bit-field packing: The padding and alignment of members of structures, and whether a bit-field can straddle a storage-unit boundary, are determined by these rules: a member is aligned to its declared alignment (from the aligned attribute or the pack pragma), whichever is less. For structures, unions, and arrays, the alignment requirement is the largest alignment requirement of its members. Every object is allocated an offset so that: offset % alignment_requirement == 0 MSVC interprets zero-length bit-fields in the following ways. For example:

struct {
    unsigned long bf_1 : 12;
    unsigned long : 0;
    unsigned long bf_2 : 12;
} t1;

The size of t1 is 8 bytes with the zero-length bit-field. If the zero-length bit-field were removed, t1's size would be 4 bytes. If a zero-length bit-field follows a normal bit-field (foo), and the alignment of the zero-length bit-field is greater than the member that follows it (bar), bar is aligned as the type of the zero-length bit-field. For example:

struct {
    char foo : 4;
    short : 0;
    char bar;
} t2;

struct {
    char foo : 4;
    short : 0;
    double bar;
} t3;

For t2, bar is placed at offset 2, rather than offset 1. Accordingly, the size of t2 is 4. For t3, the zero-length bit-field does not affect the alignment of bar or, as a result, the size of the structure. Taking this into account, it is important to note the following: t2 has a size of 4 bytes, since the zero-length bit-field follows a normal bit-field and is of type short.

struct {
    char foo : 6;
    long : 0;
} t4;

Here, t4 takes up 4 bytes.
struct {
    char foo;
    long : 0;
    char bar;
} t5;

Here, the zero-length bit-field forces bar to be aligned on a long boundary. below100: The below100 attribute places the variable in the first 0x100 bytes of memory, and the compiler uses special opcodes to access it. Such variables are placed in either the .bss_below100 section or the .data_below100 section.
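The vector_size attribute discussed earlier can be exercised with a few lines of code. The sketch below is a GCC/Clang-specific fragment (the attribute is an extension, not standard C) and assumes a 4-byte int, as the manual's example does; a driver program would declare vectors and call these helpers.

```c
#include <assert.h>

/* A vector of four 4-byte ints, as in the vector_size example above
   (GCC/Clang extension; assumes a 4-byte int). */
typedef int v4si __attribute__ ((vector_size (16)));

/* One + operates on all four lanes at once. */
static v4si add4(v4si a, v4si b)
{
    return a + b;
}

/* Vector elements can be accessed with [], like array elements. */
static int lane(v4si v, int i)
{
    return v[i];
}
```

The mode the compiler assigns (V4SI here) also determines which SIMD registers and instructions are used on targets that support them; on targets without vector hardware, the compiler falls back to synthesizing the operations.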
https://gcc.gnu.org/onlinedocs/gcc-4.8.1/gcc/Variable-Attributes.html
Case #1: You need an empty temporary file. You just want a file object (pointing to an empty file) which has a filename associated with it, and hence you cannot use a StringIO object:

from tempfile import NamedTemporaryFile
f = NamedTemporaryFile()
# use f ..

Once f is garbage collected, or closed explicitly, the file will automatically be removed from disk. Case #2: You need an empty temporary file with a custom name. You need a temporary file, but want to change the filename to something you need:

from tempfile import NamedTemporaryFile
f = NamedTemporaryFile()
# Change the file name to something
f.name = 'myfilename.myextension'
# use f

Since you change the name of the file, this file will automatically not be removed from disk when you close the file or the file object is garbage collected. Hence, you will need to do so yourself:

from tempfile import NamedTemporaryFile
f = NamedTemporaryFile()
# Save original name (the "name" actually is the absolute path)
original_path = f.name
# Change the file name to something
f.name = 'myfilename.myextension'
# use f ..
# Remove the file
os.unlink(original_path)
assert not os.path.exists(original_path)

Case #3: You need a temporary file, write some contents, read from it later. This use case is where you need a temporary file, but you want to work with it like a "normal" file on disk: write something to it and later read from it. In other words, you just want to control when the file gets removed from disk.

from tempfile import NamedTemporaryFile

# When delete=False is specified, this file will not be
# removed from disk automatically upon close/garbage collection
f = NamedTemporaryFile(delete=False)

# Save the file path
path = f.name

# Write something to it
f.write('Some random data')

# You can now close the file and later
# open and read it again
f.close()
data = open(path).read()

# do some work with the data

# Or, make a seek(0) call on the file object and read from it.
# The file mode is by default "w+" which means you can read from
# and write to it.
f.seek(0)
data = f.read()

# Close the file
f.close()
..
# Remove the file
os.unlink(path)
assert not os.path.exists(path)

By default, delete is set to True when calling NamedTemporaryFile(), and thus setting it to False gives more control over when the file gets removed from disk.
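Here is a runnable end-to-end version of Case #3. One nuance the post (written for Python 2) does not mention: on Python 3, NamedTemporaryFile opens in binary mode ("w+b") by default, so writing a str requires requesting text mode explicitly.

```python
import os
import tempfile

# Case #3, end to end: create with delete=False, write, close,
# reopen by path, then clean up ourselves. Text mode is requested
# explicitly because the Python 3 default is binary ("w+b").
f = tempfile.NamedTemporaryFile(mode="w+", delete=False)
path = f.name
f.write("Some random data")
f.close()

# The file survives close() because delete=False; reopen and read it.
with open(path) as fh:
    data = fh.read()

# We are responsible for removing it ourselves.
os.unlink(path)
print(data)  # prints: Some random data
```

On Python 3.12+, NamedTemporaryFile also accepts delete_on_close=False, which keeps the file through close() but still removes it when the context manager exits, covering this use case with less manual cleanup.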
https://echorand.me/tempfilenamedtemporaryfile-in-python.html
Abstract singleton adapter for opening files. More... #include <filemanager.h> Abstract singleton adapter for opening files. This adapter provides an interface for opening files. The tuner produces various files (e.g. log files) which have to be saved at certain standard locations, depending on the platform of the implementation. For example, Windows and Android applications provide different standard locations for writing and reading files. These differences are accounted for in the derived implementations. The FileManager is a singleton class, i.e., there is only a single instance. Definition at line 43 of file filemanager.h. Constructor, setting the singleton pointer to its own instance. Definition at line 47 of file filemanager.h. Virtual destructor. Definition at line 48 of file filemanager.h. Abstract function: Get the file content for an algorithm. In the EPT each tuning algorithm comes with its own xml file. This function opens the xml file and returns its content in the form of a single string. The only parameter is the ID of the algorithm. This function has to be implemented in the derived class. Implemented in FileManagerForQt. Abstract function: Get the standard path for logfiles. Each platform provides a different standard directory for logfiles. This abstract function has to be overridden accordingly in the respective implementation. The tuner software allows for different logs, labelled by certain names passed here as an argument. Implemented in FileManagerForQt. FileManager::getSingleton: Get a reference to the singleton. Definition at line 47 of file filemanager.cpp. Open an input stream. This function opens an input stream.
If the stream can be opened successfully, an information message is sent. If it cannot be opened, a warning is displayed. The function may be overridden in the implementation. Definition at line 106 of file filemanager.cpp. Singleton unique pointer. Singleton pointer. The FileManager is a singleton class, meaning that it admits only a single instance. The unique pointer mSingleton points to this instance. Definition at line 90 of file filemanager.h.
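The singleton shape documented here (a static unique_ptr holding the only instance, returned by reference from getSingleton()) can be sketched in a few lines of C++. The class and method names below are illustrative, not the actual EPT sources, and the path logic stands in for the platform-specific lookup the real derived classes implement.

```cpp
#include <memory>
#include <string>

// Minimal sketch of a unique_ptr-backed singleton, mirroring the
// FileManager::getSingleton / mSingleton arrangement described above.
class Manager
{
public:
    static Manager &getSingleton()
    {
        if (!mSingleton)
            mSingleton.reset(new Manager());  // lazily construct once
        return *mSingleton;
    }

    // Stand-in for a platform-specific standard-path lookup.
    std::string getLogPath(const std::string &name) const
    {
        return "/tmp/" + name + ".log";
    }

private:
    Manager() = default;  // only getSingleton() may construct
    static std::unique_ptr<Manager> mSingleton;
};

std::unique_ptr<Manager> Manager::mSingleton;
```

Because the constructor is private, all callers go through getSingleton(), and the unique_ptr guarantees the instance is destroyed exactly once at program exit.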
http://doxygen.piano-tuner.org/class_file_manager.html
#include "U8glib.h"

U8GLIB_SSD1306_128X64 u8g(U8G_I2C_OPT_NONE);

void draw() {
  u8g.setFont(u8g_font_unifont);
  u8g.drawStr( 0, 20, "Hello World!");
}

void setup() {
}

void loop() {
  u8g.firstPage();
  do {
    draw();
  } while( u8g.nextPage() );
  delay(1000);
}

Did you apply pull up resistors for the display? Which Arduino board do you use? Oliver

Can you run the i2c_scanner? If it doesn't find an I2C address, you have a display that can not pull the SDA low for an acknowledge. I don't know if the u8glib can do that. I have seen a library for that, perhaps I can try to find it if you want. I reported this thread to a moderator; it has been put in the Display section. Have a look at this thread (click). It is about this display, but on an other pcb. At the end u8glib is discussed there, and u8glib has its own topic in this Display section.

No, there are no pull up resistors. Do I need one?

Quote: No, there are no pull up resistors. Do I need one? Once these resistors are there, the scanner should find a device at 0x3c. In that case, you might have a super cheap OLED that can only read I2C and can not acknowledge it. Does it say 'heltec.cn' on the backside? The rar file on this page contains code for it. The code doesn't do anything special, so perhaps the u8glib will work with it. They say it is Adafruit compatible, so perhaps you can use the Adafruit library or the u8glib with Adafruit mode. For the u8glib, you have to ask in the u8glib thread if the display is supported. I only started recently using u8glib with another super cheap OLED.
http://forum.arduino.cc/index.php?topic=219419.0
Null conditional operator ?. The new ?. operator checks whether the expression is null before continuing to evaluate it; this is great to clean up pieces of code like

if (something != null) something.callMethod();

to

something?.callMethod();

If the operator is in an expression that should return something, it returns null when the evaluation cannot continue. Events are a good example of usage for this new operator. Let's rewrite the OnCall method using a null conditional operator and a body expression:

protected virtual void OnCall() => Call?.Invoke(this, EventArgs.Empty);

Operator nameof() The constructor throws an exception when the var "Name" is null. This exception has a string parameter to specify the name of the parameter that is null. The new nameof() operator allows us to remove this "magic" string. This helps to avoid errors during renaming refactoring:

if (name == null)
{
    throw new NullReferenceException(nameof(name));
}

String projections String projections are an improvement over string.Format. They allow us to rewrite the logger call:

Logger.WriteInfo("New Id = \{lastId}");

Exception filters and await in catch and finally It is easier to illustrate these new features with a different example; have a look at the next piece of code:

public void RunAlarm()
{
    try
    {
        // Run my phone alarm every morning,
        // if something was wrong run the backup alarm
        RunPhoneAlarm();
    }
    catch (Exception)
    {
        // Run backup alarm
        RunBackupAlarm();
    }
}

/// <summary>
/// Run the phone alarm.
/// If there is a problem with the phone and today is not Sunday -> throw an exception.
/// </summary>
public async void RunPhoneAlarm()
{
    try
    {
        phone.RunAlarm();
    }
    // New C# 6.0: catch conditions
    catch (Exception) if (DateTime.Now.DayOfWeek == DayOfWeek.Sunday)
    {
        Debug.WriteLine("Phone's alarm is not working... but I can fix it later...");
        // New C# 6.0: await inside catch or finally block
        await CreateRemainder("Repair phone.");
        Debug.WriteLine("A remainder was created.");
    }
}

Index initializers This is a great feature when we have to deal with JSON objects in our code. We can write our person.ToJson() method much more easily using index initializers. Index initializers allow creating the value to return on the fly; we can get rid of the local variable and use a body expression:

public JObject ToJSon => new JObject() {["id"] = Id, ["name"] = Name, ["age"] = Age};

– There is also a new feature that I didn't mention yet in this post, and the reason is that I don't like it... but... Using a static class Now it is possible to add a static class in our "using" declarations and call its methods directly; for me it is better to see the name of the class that implements the static method I am calling... I prefer

Debug.WriteLine("My message");

than:

using System.Diagnosis.Debug;
// 500 lines of code in between ...
WriteLine("My message"); // write where??

Conclusion I am missing primary constructors; Microsoft didn't mention them during the "Connect()" event. (Or I didn't hear it...) C# 6.0 has no big improvements, but some of the new features will help to clean up our code and improve readability. Visual Studio 2015 has some cool new features (some of them already covered by Resharper). I will now test this in depth, but that will be a different post :).
This is our class "Person" written with some C# 6.0 features:

public class Person
{
    // Constructor
    public Person(string name, int age)
    {
        if (name == null) throw new NullReferenceException(nameof(name));
        Name = name;
        Age = age;
    }

    // Events
    public event EventHandler Call;

    // Properties
    public string Name { get; set; }
    public int Id { get; } = GetNewId();
    public int Age { get; set; } = 18;

    // Calculated Properties
    public int GetYearOfBirth => DateTime.Now.Year - Age;
    public JObject AsJSon => new JObject() {["id"] = Id, ["name"] = Name, ["age"] = Age};

    #region privates

    // Note: 'private virtual' does not compile in C#, so OnCall is
    // protected virtual here, matching the earlier snippet.
    protected virtual void OnCall() => Call?.Invoke(this, EventArgs.Empty);

    private static int lastId;

    private static int GetNewId()
    {
        lastId++;
        Logger.WriteInfo("New Id = \{lastId}");
        return lastId;
    }

    #endregion
}
https://softwarejuancarlos.com/2014/11/16/csharp6-new-features/
Consider the following code snippet:

int main()
{
    int var;
    var = 5;
    float var = 7.0; // Error
}

There is an error in the statement float var = 7.0; because it declares a variable using a name which has already been used in this scope. This is not allowed. As our programs increase in size, the number of identifiers also increases, and sooner or later we run out of meaningful identifiers that have not already been used to name our variables or functions. Here's what namespaces are used for. A namespace is simply an area of code which guarantees that every identifier inside it is unique. So if we want to use two variables of the same name in one scope, we can put them in different namespaces. Note: By default, all global variables and global function definitions are in the global namespace. To define a namespace, use the keyword namespace and follow this syntax:

namespace namespace-name
{
    ... some definitions ...
}

Example:

namespace n1
{
    int x = 5;
    void foo()
    {
        cout << x;
    }
}

We can put variables, function definitions and even classes inside a namespace. To access the variable 'x' of namespace n1, we use the scope resolution operator (::), like n1::x; and similarly, to call the foo() function of namespace n1, we use n1::foo(); We can also define classes within a namespace. Let's see an example:

namespace n2
{
    class A
    {
        int a;
    public:
        int getA()
        {
            return a;
        }
    };
}

Now there is a class 'A' inside namespace 'n2'. We can create an object of this class like n2::A obj; and then use this object to call member functions of the class: obj.getA(); Note: std is a namespace that we have been using from the very beginning without knowing much about namespaces. It is provided by the C++ standard library. Namespaces can also be nested, i.e. one namespace inside another. Here is an example of nested namespaces:

namespace n3
{
    int x;
    namespace n4
    {
        int y;
    }
}

Now as we see, 'x' is a variable that belongs to namespace n3, and 'y' is a variable that belongs to n4 inside n3.
To access ‘x’, we use n3 :: x; But to access ‘y’, we use a chain command like n3 :: n4 :: y;. Like this, we can handle any level of nesting.
https://boostlog.io/@sophia91/namespaces-5a9a57be8575ad004e55bced
Hello, I am wondering how to combine `Vector3[] thingA` and `Vector3[] thingB` into one `Vector3[]`. Would anyone know how to do this?

```csharp
List<Vector3> l = new List<Vector3>(arrayA);
l.AddRange(arrayB);
Vector3[] arrayC = l.ToArray();
```

Answer by JoshuaMcKenzie · Sep 11, 2016 at 08:42 AM

I'm assuming that you mean merging one array of Vector3 with another array of Vector3. The `System.Linq` namespace not only has a bunch of methods that deal with various types of collections (arrays, lists, dictionaries, etc.), including converting between those types and, of course, concatenating multiple collections; it also uses a lot of syntactic sugar and chained calls so that you don't have to write as much code to do so:

```csharp
using System.Linq;

Vector3[] thingA;
Vector3[] thingB;
Vector3[] thingC = thingA.Concat(thingB).ToArray();
```
https://answers.unity.com/questions/1242131/how-do-i-set-twoor-multiple-as-one.html
Hello, I have to make a small game in C++ as a school project. Although I've made many 2D games before, I'm very new to C++, and I ran into a few problems.

One: I need dynamic key input. I now use getch() and it makes the program wait for key input, but I want other code to keep running while no key is pressed yet.

Two: I think I have a memory leak or something, because if I keep running my game a few times, my PC becomes slower. (I have a Dutch PC so I can't describe it very clearly, but if you press CTRL+ALT+DEL and look at the Performance tab, the value beneath the CPU graph gets higher every time I play: from 178 MB to 200, to 300, and then my PC becomes kind of slow.)

My code (the game's a very silly, not-yet-working-correctly car-racing game):

```cpp
#include <cstdlib>
#include <iostream>
#include <string>
#include <conio.h>
#include <process.h>

using namespace std;

//car variables
string car("|=|");
string car_frontback("0-0");
string car_space_left("");
string car_space_right("");
int car_space_int = 10;
//int car_health = 100;

//enemy variables
string enemy("|=|");
string enemy_frontback("X=X");
string enemy_space_left("");
string enemy_space_right("");
int enemy_space_int = 0;
int enemy_pos = 0;

//game variables
int row_total = 15;
int space_total = 20;
string game_space(" ");
string line("|");
string show_nothing(" ");
int nothing[] = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
int i = 0;
int num = 0;

//keyboard variables
const char KEYRIGHT = 75;
const char KEYLEFT = 77;
char arrow = 0;

int main(int argc, char *argv[])
{
    //title screen
    cout << " -----------------" << endl
         << " - Welcome -" << endl
         << " - to -" << endl
         << " - Ultra Car -" << endl
         << " - Racing -" << endl
         << " - 3000 -" << endl
         << " -----------------" << endl
         << "" << endl
         << "Use the left and right keys to dodge cars." << endl
         << "Press any key to start.";

    srand(time(NULL));

    //check keyboard
    arrow = kbhit();
    while (arrow != 27)
    {
        //variables
        i = 0;
        car_space_left = "";
        car_space_right = "";
        enemy_space_left = "";
        enemy_space_right = "";

        //wait for kb input... why wait?! :( ..stupid c++
        arrow = getch();
        switch (arrow)
        {
        case KEYLEFT: //left arrow key
            if (car_space_int <= space_total)
                car_space_int += 1;
            break;
        case KEYRIGHT: //right arrow key
            if (car_space_int > 0)
                car_space_int -= 1;
            break;
        }

        //space left to the car
        for (i = 0; i < car_space_int; i++)
        {
            car_space_left = car_space_left + " ";
        }
        //space right to the car
        for (i = 0; i <= space_total - car_space_int; i++)
        {
            car_space_right = car_space_right + " ";
        }

        //set enemy position
        if (enemy_pos == 14)
        {
            num = 1 + rand() % (20 - 1 + 1);
            enemy_space_int = num;
        }
        //space left to the enemy
        for (i = 0; i < enemy_space_int; i++)
        {
            enemy_space_left = enemy_space_left + " ";
        }
        //space right to the enemy
        for (i = 0; i <= space_total - enemy_space_int; i++)
        {
            enemy_space_right = enemy_space_right + " ";
        }

        enemy_pos += 1;
        if (enemy_pos > row_total)
        {
            enemy_pos = 0;
        }

        //clear screen
        system("cls");

        //place game screen lower
        cout << endl;
        cout << endl;
        cout << endl;

        //position of enemy
        for (i = 0; i < row_total; i++)
        {
            nothing[i] = 0;
        }
        nothing[enemy_pos] = 1;
        nothing[enemy_pos + 1] = 2;
        nothing[enemy_pos + 2] = 1;

        //show road, enemy
        for (i = 0; i < row_total; i++)
        {
            if (nothing[i] == 0)
            {
                show_nothing = " ";
            }
            else if (nothing[i] == 1)
            {
                show_nothing = enemy_frontback;
            }
            else if (nothing[i] == 2)
            {
                show_nothing = enemy;
            }
            cout << game_space << line << enemy_space_left << show_nothing << enemy_space_right << line << endl;
        }

        //show car
        cout << game_space << line << car_space_left << car_frontback << car_space_right << line << endl;
        cout << game_space << line << car_space_left << car << car_space_right << line << endl;
        cout << game_space << line << car_space_left << car_frontback << car_space_right << line << endl;

        cout << endl << "check " << enemy_pos << " + " << i;
    }

    //end game
    system("PAUSE");
    return EXIT_SUCCESS;
}

//I HAVE A MEMORY LEAK SOMEWHERE!!! :(
```

Yes, I know my code is very, very bad... this is the first time I've used C++. And yes, it's a text-based racing game... :eek:

Basically, what it does now is that you can steer a car left and right, and other cars run down the road from top to bottom, but they only come down if you keep steering (no keyboard input and they stop moving). Any help would be much appreciated!
https://www.daniweb.com/programming/software-development/threads/73242/school-project-simple-c-game-need-help-with-keyboard-input
We had a Python meeting tonight (18 April 2013) in Delft at the Fox-IT offices. I especially liked the open source marketing talk, telling us how to properly tell others about our great projects. (Note: my brother Maurits also made a summary.)

I gave a quick lightning talk about my githubinfo script that gives you reports on the number of tests in the last week, per project and per developer. Handy for raising awareness of testing among your colleagues if you mail around the results once a week! The source code is at . I already blogged about it, so you can read more there.

Ronald works on detact, fraud detection for online banking: it tries to detect fraud in HTTP traffic for online banking, and they sell it to banks. The low-level stuff is in C++, the rest in Python.

How do they make it? They use git, so they can use Gerrit, a code review tool. Every commit gets pushed first into a special gerrit branch on github and ends up in the gerrit tool. Every commit needs two upvotes before it gets included. One of the votes comes from their continuous integration tool.

Continuous integration tool? They use jenkins for that. It integrates really well with gerrit, which has a plugin for it.

Deploying? They try to deploy continuously. They have to play cat-and-mouse with online criminals, so they need to respond quickly. That's why they put so much effort into testing, continuous integration and code review. Everything has to be deployable most of the time.

Some of the tools they use for operations: irc, fabric, sentry, munin, nagios, request tracker (for sending out SMSs at night). Sentry aggregates all your error messages from your various servers. So you deploy and start monitoring sentry for a while. Works quite well.

About irc: look at for some easy boilerplate to quickly create your own simple bot. If you need more functionality, use github's hubot. It has lots of functionality; you can do a lot with it.
(Update: I mis-interpreted what he said initially, but he mailed me and said he was positive about hubot, so the original text in my summary (which was negative about hubot) was wrong.)

Fabric uses SSH to execute commands on your servers. You write the commands in Python. They use it to roll out their software on their servers. They do some weird, hacky, great Python globals hacking here:

```python
for customer in customers:
    globals()[customer] = some_method_that_returns_a_customer_function
```

Closing comment: use requests when you talk to something via HTTP. It also makes it easy to consume JSON. HTTP and JSON are your friends.

How to get more attention for your open source project. Many people make open source software, but make small mistakes in telling the world about it. The sad thing is that you're doing yourself a disservice, because you don't help people use your software this way. This talk gives you a few simple marketing tips.

Get your users engaged; talk about them. People care about themselves, so talk about them, not about yourselves. For instance, Ruby "helps you write...". A bad example is sqlalchemy: "it provides a full suite of...". That's impersonal and not about me, the reader. Being personal helps!

Think of the pain. Why did you start your project? What problems does it solve? Talk about pain. "I don't have enterprise software" isn't a pain. "My code isn't beautiful" is a pain. Tell your users what their pain is.

A good example: Django. "Django makes it easier to build better web apps more quickly with less code." So web apps are difficult to write, good apps are hard, good apps take too long and good apps are a lot of work. Hey, those are pain points that I can understand.

A bad example is Flask. It "uses werkzeug and it is BSD licensed". So apparently the pain I feel is that I'm not using werkzeug, and apparently I'm bothered that I'm not using specifically the BSD license?!?
(Somebody commented that the right way to tell it is somewhere else in the docs.)

Think about benefits first. Benefits first, not a list of features. Features are only useful as support for the benefits you claim. A good example: "... rely on MySQL to save you time and money". Words like that make the benefits clear. Benefits are good. A benefit gets you from :-( to :-). PostgreSQL's page just lists a huge number of features. That doesn't help.

Show, don't tell. A picture is worth a thousand words. We all know this. But do we do it? A good example is the requests library. On the home page is a quick code example that shows how easy and good the library is. The page doesn't show a thing. It doesn't show how beautiful or bad it is.

So focus on: your users; their pains; benefits first; show, don't tell.

(Jeroen has an excuse, by the way, for not talking to sqlalchemy and flask yet: they just got a baby :-) )

Nothing? Zero. IT is zeros and ones. He checked his own source code: 58% zeros vs 42% ones. You can find the script at .

None is also nothing. But sometimes you get None back from something that you later want to call, so you first have to check for None. Someone suggested making None callable, returning None again, in PEP 336. It was shot down. So he wrote:

```python
>>> from ninilo import nihil
>>> nihil + 5
5
>>> nihil.something()
nihil()
```

Sigh. Some evil mind in the room suggested `from nihilo import nihil as None`... (Jasper checked later: this doesn't work.)

Onno is a scientist at the UvA in Amsterdam. Scientists, especially physicists, invented the atom bomb. And they also write some of the worst spaghetti code. For their particle accelerators they really need to write their own software, as there's nothing off-the-shelf. Version control means copying folders around: my_code.final.final2. Testing is non-existent. And those people are supposed to cure cancer and so on?

The solution is to educate scientists. Testing. Version control. And when to stop and get a real programmer on board.
There are some people in Canada, Software Carpentry, who teach scientists to program properly, using Python. In a few weeks they come to the Netherlands to give courses here. If you have ideas, please tell him.

A question: isn't it required to publish your code along with your research? Answer: no, it isn't. That's idiotic and weird, but it isn't, despite the "peer review" that's touted in science. Bad.

Fanstatic takes the pain out of static resources for your website: javascript, css, images. You'll see a lot of js.* projects on pypi: those are fanstatic projects that provide javascript/css/images. js.extjs, js.jquery, js.html5shiv, etc. You can use your Python stack to work with those resources, including version numbers: `install_requires=['js.jquery > 1.4', ...]`.

Fanstatic handles combining resources ("bundling") and compilation (minifying, compiling coffeescript, etc.). It saves you time! You "require" resources in your code: `jquery.need()`. And your resources can also require other resources. So if you need jquery, you just specify the requirement and fanstatic takes care of it. Nice.

He demoed it and it all worked like a charm: dependencies, compiling, bundling, minification. Fanstatic works with WSGI, zope, flask, django, etc. You can also have fanstatic serve your static resources via some separate static website.

Note: fan**st**atic instead of fan**t**astic :-)
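To make the `need()` idea concrete, here is a toy sketch of the mechanism — not fanstatic's actual implementation, just an illustration of how requiring one resource can pull in its dependencies in the right order (all class and resource names invented):

```python
class Resource:
    """A toy static resource that knows what it depends on."""

    def __init__(self, name, depends=()):
        self.name = name
        self.depends = list(depends)

    def need(self, needed):
        """Record this resource, after recording its dependencies first."""
        for dep in self.depends:
            dep.need(needed)
        if self not in needed:
            needed.append(self)


jquery = Resource("jquery.js")
jqueryui = Resource("jquery-ui.js", depends=[jquery])

needed = []          # in fanstatic, something like this lives on the request
jqueryui.need(needed)

# jquery.js comes first, automatically, because jquery-ui requires it
print([r.name for r in needed])   # ['jquery.js', 'jquery-ui.js']
```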
https://reinout.vanrees.org/weblog/2013/04/19/pun-delft.html
Java classpath

What is the difference between PATH and CLASSPATH, and how are they set in the environment?

PATH is an operating-system-wide environment variable that tells the shell where to find executable files, such as the javac and java tools in the JDK's bin directory (for example C:\j2sdk1.4.2\bin on Windows). CLASSPATH is the environment variable that tells the JVM and the Java tools where to search for the classes (.class files and packages) that a program uses; setting the class path correctly is necessary to run a Java application.

The class path can be set in two ways: per invocation, with the -classpath option of the java and javac commands, or globally, through the CLASSPATH environment variable. On Windows this is done under My Computer > Properties > Advanced > Environment Variables; for example, add C:\Program Files\Java\jdk1.6.0_32\bin to PATH, and put . (the current directory) plus any needed jar files on CLASSPATH. It is also common to define a JAVA_HOME variable pointing at the JDK installation directory, e.g. C:\Program Files\Java\jdk1.5.0. A frequent cause of "class not found" errors is importing classes from a jar file that has not been added to the class path; in that case, set the class path and confirm that the class being imported exists. In Eclipse, the equivalent per-project setting is under Project > Properties > Java Build Path.

These settings are also visible from inside a program: System.getProperty("java.class.path") returns the class path in use, and the platform-dependent separators can be read with System.getProperty("path.separator") and System.getProperty("line.separator"). Relatedly, for files, file.getAbsolutePath() returns the absolute path of a File object, so a file can be located without further information.
http://www.roseindia.net/tutorialhelp/comment/64816
One of the nice things about programming in Java is that the language and its class library are designed to provide practical solutions to real object-oriented programming problems. In the object programming community, many of these solutions have been well documented as design patterns; each pattern is a generic solution to a common problem. One such pattern is known as the Observer pattern, which is a solution to the updating problem that arises when some objects have a dependency relationship with others. Java provides a ready-made implementation for this pattern through the Observable class and the Observer interface.

However, one occasionally runs into design issues that require careful thinking in order to create a solution appropriate to the task at hand. This article discusses the use of the Observable class and the Observer interface, and shows how to solve some potential problems along the way.

Introducing Observer Patterns

A common scenario for using the Observer pattern involves a subject that has multiple views. Each view object needs to be updated whenever the subject changes. One example is a drawing program, where the subject is the internal representation of the drawing and views are different windows opened on the drawing. Any time you make a change in the drawing, each of the windows needs to be updated. The Observer pattern provides a way for each of the views to be notified whenever the subject has changed, without requiring that the subject know anything about the views, other than that they are observers. If the subject has to know more about the views, the program can quickly become difficult to manage: the code in the subject that handles updates becomes dependent on each of the views it must support. The Observer pattern solves this update problem as follows. The Subject (the object being observed) maintains a list of its observers.
Whenever the subject makes a change to itself, it notifies all observers that a change has been made. It might also provide some generic change information with that notification. Each view gets the same notification. Usually this notification takes the form of a method call on the observer, with update information stored as a parameter to that call. The only thing that the subject knows about an observer is that it understands that method call.

An Observer Pattern Implementation in Java

The Java utilities package provides a ready-made implementation of the Observer pattern: the Observable class, which implements the updating behavior of the subject, and the Observer interface, which contains the update method to be called by the Observable and which can be easily implemented by any candidate observer object. Grouped by function, the essential methods are: on Observable, addObserver and deleteObserver for managing the observer list, notifyObservers for notification, and the protected setChanged, clearChanged, and hasChanged for managing the changed flag; the Observer interface declares a single method, update(Observable o, Object arg), which the Observable calls on each registered observer.

A Simplest Possible Example

Here is a very small example showing how the Observer pattern can be implemented using Observable and Observer. First create a Subject class with a simple data value that inherits from Observable:

```java
public class Subject extends Observable {
    private String value;

    public void setValue(String s) {
        value = s;
        setChanged();            // set changed flag
        notifyObservers(value);  // do notification
    }
}
```

When the Subject is changed through the setValue method, two calls need to be made to ensure notification of all observers: one to set the changed flag, the other to notify observers. If the changed flag is not set, the notifyObservers call will do nothing. When notifyObservers is called, all observer objects will have their update method called. The Subject has no detailed information on the observers at all; it only has to make these two calls when it changes.

Next, a View class is created that accepts notifications whenever its Subject is updated.
For this example, the View class just prints out a message:

```java
class View implements Observer {
    public void update(Observable o, Object str) {
        System.out.println("update: " + str.toString());
    }
}
```

Finally, you need to create a Subject and a View, and link them together. This main method can be inserted as a method in the Subject class in order to run it as a standalone application:

```java
public static void main(String[] args) {
    Subject s = new Subject();
    View v = new View();
    s.addObserver(v);

    // calling setValue on the subject
    // will trigger an update call to the view
    s.setValue("test value");
}
```

A More Realistic Example

The above example shows an Observer pattern implementation using Observable and Observer that is small enough to give an idea of how the pattern works, but is too small for one to run into any real-world problems or get a good understanding of how powerful the Observer pattern is. The next example shows an implementation that requires some problem solving to reach a solution. It involves a counter with two views: a textual view and a scrollbar view. The counter is a GUI widget comprising a label, a button, and an integer value. Whenever the button is pushed, the integer value is incremented and the label is updated. Both views will also be updated. Here is an early design of the Counter, one that does not expect to be viewed by other objects:

```java
class Counter extends Panel {
    Label countLabel;
    Button incButton;
    private int _count = 0;

    /** create a new counter with a Label and an increment Button */
    public Counter() {
        setLayout(new BorderLayout());
        add("Center", countLabel = new Label("Count: 0"));
        add("South", incButton = new Button("Increment"));
    }

    /** return counter value */
    public int value() {
        return _count;
    }

    /** handle button push */
    public boolean action(Event evt, Object arg) {
        ...
    }
}
```

To make the Counter observable, this example has two observers: a textual view and a scroll view.
The obvious way to achieve that behavior is to have Counter inherit from Observable. However, the counter shown already inherits from the AWT class Panel, which investigation reveals that it must do. Otherwise, due to type restrictions, it would not be able to properly fit into a component hierarchy. Defining the Counter so that it satisfies a component interface is not an option, as AWT does not define an interface (only classes) for GUI components, and Java does not allow an interface to extend a class. So a way must be found to add Observable behavior to the counter without using inheritance.

A reasonable question to ask at this point is: "Why not make the Counter object a non-GUI object (so it can inherit from Observable), and have the label and pushbutton part of the original Counter as one of the views?" Well, even though this Counter object could easily be reconfigured in that way, there are many cases where inheritance cannot be shifted around. One likely case is that the subject is already provided and can be subclassed, but cannot have its inheritance modified. Or the subject may be part of a complex data structure (such as an abstract syntax tree) where, as with AWT components, inheritance is used to define a hierarchy for organizing objects.

This example shows a way to use the Observable object without having to inherit from it by delegating the behavior that Counter needs to a contained Observable object. Access to the Observable object is provided through an accessor method so that other objects can add observers. Here are the code additions to the Counter class:

class Counter extends Panel {
    ...
    Observable _observable; // new data member

    public Counter() {
        ...
        // initialize observable in constructor
        _observable = new MyObservable();
    }

    // new method to get access to observable
    public Observable observable() {
        return _observable;
    }

However, in the process of implementing the delegation approach, another problem pops up.
Two important methods of Observable are protected: setChanged and clearChanged; the intention being that only the observed subject should have direct control over this flag. But with the delegation approach shown above, the Counter will not be able to access the changed flag. To solve this problem, a new subclass of Observable is created to open up its interface and make those two methods publicly visible.

class MyObservable extends Observable {
    public void clearChanged() {
        super.clearChanged();
    }

    public void setChanged() {
        super.setChanged();
    }
}

Now the Counter class can contain an Observable object (an instance of MyObservable) and access these methods. The change flag provided by the Observable object is still protected from users other than the Counter, as the Counter's observable method returns an object of class Observable, in which those methods are protected.

The New JDK 1.1 Event Model

By the way, the observer/observable change notification structure is important for reasons beyond the scope of this article. The new event model for the upcoming JDK 1.1 is based on the same relationship between objects that change state and objects that are notified about state change. This new event model is referred to as the delegation-based event model (or delegation event model, for short). The delegation model relies on event source objects and event listener objects. Objects that want to be informed about state changes in AWT components register themselves with event source objects by supplying an event handler or listener object to the source. This registration occurs by calling a listener registration method in the source with the listener object supplied as an argument. The source maintains a collection of all registered listener objects. When a state change occurs in the source, each listener object is notified by the source object calling a method with a predetermined name and signature in the listener object.
In the new AWT event model, event sources are like the observable objects presented here; listener objects play a role similar to observer objects. For more information on the new AWT delegation-based event model, read the section titled "Java AWT: Delegation Event Model" in "JDK 1.1 AWT Enhancements."

Back to the Regularly Scheduled Solution

Here is the code for a textual view of the counter. It implements the Observer interface by providing the necessary update method. For this example, the second argument to the update method is expected to be the counter itself. Whenever update is called, the value of the counter is retrieved and the text field updated. It is up to the programmer to determine how the second argument is used to pass change information, but it should be the same for all views, and clearly documented in all views as well as the subject.

class TextView extends TextField implements Observer {
    public TextView(Counter c) {
        super(10);
        setEditable(false);
        setText(String.valueOf(c.value()));
    }

    /** update method called by observed Counter,
        the second argument is the Counter object */
    public void update(Observable o, Object counter) {
        setText(String.valueOf(((Counter)counter).value()));
    }
}

The ScrollView object is very similar to the TextView; it just replaces the TextField object with a Scrollbar object.

class ScrollView extends Scrollbar implements Observer {
    /** create a horizontal scrollbar with a range from 0 to 10 */
    public ScrollView(Counter c) {
        super(Scrollbar.HORIZONTAL, 0, 1, 0, 10);
        setValue(c.value());
    }

    /** update method called by observed Counter,
        the second argument is the Counter object */
    public void update(Observable o, Object counter) {
        setValue(((Counter)counter).value());
    }
}

Now to wrap things up with a complete listing of the Counter class, as well as an Applet class, to demonstrate how to link the observers to the counter they observe. There is a link at the end of the article to the complete source file.
Note also how the Counter method for event handling notifies the Counter's observers as soon as its value is changed.

/** This is a counter class that delegates observable behavior to a
    contained MyObservable object. A method is provided to access this
    object. The counter object has a private integer variable to hold the
    counter value, a label to display it, and a button to increment the
    value. Note that since this Counter is a GUI widget, it inherits its
    behavior from the AWT class Panel. Since you also want it to be
    observable, that behavior must be provided through delegation. */
class Counter extends Panel {
    Label countLabel;
    Button incButton;
    private int _count = 0;
    MyObservable _observable;

    public Counter() {
        setLayout(new BorderLayout());
        add("Center", countLabel = new Label("Count: 0"));
        add("South", incButton = new Button("Increment"));
        _observable = new MyObservable();
    }

    public int value() {
        return _count;
    }

    /** make observable object accessible */
    public Observable observable() {
        return _observable;
    }

    /** handle clicks on the button, notify observers of change */
    public boolean action(Event evt, Object arg) {
        if (evt.target == incButton) {
            _count++;
            countLabel.setText(String.valueOf(_count));
            _observable.setChanged();
            _observable.notifyObservers(this);
            return true;
        }
        return false;
    }
}

/** A simple applet to show the Counter and its two Observers,
    a TextView and a ScrollView */
public class ObserverTest extends Applet {
    Counter c;
    TextView tv;
    ScrollView sv;

    public ObserverTest() {
        setLayout(new BorderLayout());
        add("North", c = new Counter());
        add("Center", tv = new TextView(c));
        add("South", sv = new ScrollView(c));

        /* link the TextView observer to the Counter observable */
        c.observable().addObserver(tv);
        // do same for the ScrollView object
        c.observable().addObserver(sv);
    }
}

Looking to Scalability in the Real World

Using the Observer pattern helps to keep subjects in touch with their viewers, but what happens when a subject becomes complex, and the views
have very different interests with regard to the subject? Consider an airline ticket reservation system, where the subject is a ticket object. One of many views might be a billing view transmitted to a credit card company, and another might be a seating view used to determine the all-important window or aisle allocation. If there is a change in seating, the ticket object changes, but the billing view does not need to be updated. This is an example of how the use of the Observer pattern can become complicated by real-world issues. The trick is to find solutions that retain the character of the original solution, and not to introduce more problems than they solve.

One approach to the airline ticket problem is the use of aspects, as described in Design Patterns, by Gamma, Helm, Johnson, and Vlissides. Whenever an observer registers with a subject, it is done with respect to an aspect of that subject. Then, whenever the subject determines that an aspect of itself has changed, it notifies only those observers that are interested in that change. This makes coding the update methods easier, as they each have fewer changes to deal with, and makes the program more efficient, as only necessary update methods are called. For the ticket scenario above, the two aspects needed could be billing and seating. The addObserver method would take this into account:

// declaration of addObserver and a couple of aspects
public synchronized void addObserver(Observer o, int aspect);
public final static int BILLING = 1;
public final static int SEATING = 2;

// an example call to the new addObserver method
aTicket.addObserver(aBillingView, AirlineTicket.BILLING);

Implementing this code requires some significant extensions to the Observable class, so you might like to complete that exercise in the privacy of your own home!

References

Arnold, Ken, and James Gosling. The Java Programming Language, Addison-Wesley, 1996.

Gamma, Erich, Richard Helm, Ralph Johnson, and John Vlissides.
Design Patterns, Addison-Wesley, 1994.

Source Code
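As a companion to the aspects exercise suggested above, here is one possible sketch of aspect-based registration. The class and member names below are illustrative assumptions, not code from the article, and the sketch defines its own observer interface rather than extending java.util.Observable:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical observer interface for aspect-based notification
interface AspectObserver {
    void update(Object changeInfo);
}

// A subject that notifies only the observers registered
// for the aspect that actually changed.
class AspectObservable {
    public static final int BILLING = 1;
    public static final int SEATING = 2;

    private final Map<Integer, List<AspectObserver>> observers = new HashMap<>();

    public synchronized void addObserver(AspectObserver o, int aspect) {
        observers.computeIfAbsent(aspect, k -> new ArrayList<>()).add(o);
    }

    public synchronized void notifyObservers(int aspect, Object changeInfo) {
        for (AspectObserver o : observers.getOrDefault(aspect, List.of())) {
            o.update(changeInfo);
        }
    }
}
```

A seating change would then be published with notifyObservers(SEATING, ticket), leaving any billing views untouched.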
Team Foundation Version Control Concepts: Items versus Item Names

Version control is used to store information, generally files and folders. In Team Foundation Server version control, files and folders are known as items. Team Foundation Server rarely makes a distinction between files and folders because both behave so similarly; files and folders are both stored as rows in the same table. This article discusses how Team Foundation version control classifies items and item names so that people who use the product will have a better understanding of these concepts. Items have the following qualities:

- Items are unique. Each item has an ID unique to that item.
- Items are versioned. Like all version control systems, Team Foundation version control makes it easy to store successive versions of the same item and retrieve old versions.
- Items can have more than one name. At a rudimentary level, items have a server path that shows where they live on the server and a local path that shows where they live on each user's local computer. Names are not unique.
- Items have a type. Items can be an ANSI, UTF-16, Binary, or similar file, or they can be a directory.

Understanding Folders as Items

Any operation involving items can potentially be recursive. As a result, many operations, such as deleting a folder, are more complex than they first appear. When you delete a folder, the user interface makes this task appear as a single recursive operation. The server also considers this operation a single recursive operation so that the history for the item will show the same recursive format. If folders were not considered items, the delete operation could not be performed without a large performance cost. For more information, see How to: Delete Files and Folders from Version Control. Non-unique names also apply to folders. This concept can be more difficult to understand for folders than for files.
Typically, we are familiar with the idea that data.txt at one point in time might differ from data.txt in the future. These differences could manifest as a different version of the same file or as two separate files. The same scenario can also exist for folders. In Team Foundation version control, a distinction must be made between adding, deleting, and then renaming a folder to use the same name as the deleted folder, and adding, deleting, and then un-deleting a folder.

Figure 1

In the first case, you actually create two items, whereas in the second case, you create different versions of a single item. When you delete a folder, you also always delete the child items in the folder. Undelete behaves the same way; when you undelete the folder, you also undelete all child items in the folder. Names, or paths, are the user-friendly way in which to identify items. Rarely do you know the ItemID for the item you intend to modify before you modify it. Because names are not first-class objects in the version control system, we need a way to map names back to their corresponding items.

Namespaces

Each consistent class of item names is called a namespace. The rules for translating a name in a certain namespace to an item - or to another namespace - are called mappings. Mappings are also named functions or transformations. The easiest way to describe mappings is by using terms borrowed from math. Some mappings are injective, or one-to-one, and others are surjective, or many-to-one. Sometimes an explicit set of rules defines the mapping; sometimes the mapping is more of a side effect.

Server space

Server space, or committed space, is what you see when you work in Source Control Explorer. Server space represents the server paths of the items checked in to source control. The mapping between items and the server space is one of the fundamental properties tracked with each revision to an item, although there is no API to perform the transform explicitly.
In addition, the mapping of an item to server space is injective. At any given point in time, each item has exactly one name in server space. Mappings become trickier over time as users commit renames.

Local workspace

Local workspace is where local file system paths live. Local workspace is defined in relation to server space by workspace mappings, which are explicit and easily exposed to the user. This mapping is surjective; every item in a local workspace maps back to an item in server space. The mapping is usually not injective, unless you map the whole repository ($/) and have no cloaks. For more information, see Working with Version Control Workspaces and Version Control Workspaces and Mapping Overview.

Pending space

Pending space is also a property of a local workspace. Pending space represents the server paths of items as they would look in server space after you checked in all your currently pending changes. In other words, pending space is defined implicitly, by transforming server space according to any pending renames you have in that workspace. Like local workspace, this mapping is usually surjective only; items in pending space map back to server space, but not vice versa.

Target space

Target space is a reflection of server space from one branch onto another. The Merge operation must generate a mapping between items in the source branch, or server space, and items in the target branch, or target space, in order to determine the correct target items to apply the pending changes to. The mapping is implicitly defined by the history of the two branches; any renames that have occurred, whether they were merged, and if so, how they were resolved. Typically, the mapping is neither injective nor surjective, because items can be moved in and out of the two branches at will.

Namespace Examples

The scenarios in this section illustrate the namespace concepts discussed in this article.
Suppose you performed the following operations, as listed in this history chart: For the purposes of this example, the ItemIDs for each item will stay the same, and the names in server space are specified exactly as shown in the history charts. Based on the information in the history chart, you could create a local workspace with these mappings: After the workspace is synchronized to changeset 20, you can see that the two items have local names: c:\myProj\branch1\test\data.cs and c:\myProj2\test\data.cs. Edits are the simplest kind of pending change in this context. If you make a pending Edit on the items, their names in pending space would just be the same server paths as found in server space. The names are shown in the following table. Pending a namespace operation, such as an Add, Branch, Delete, Undelete, or Rename, provides more interesting results. For example, if you created a third branch, that branch would have a local path and a server path in pending space, c:\myProj\branch3 and $/project/branch3, but would not occur in server space. Add and Undelete work similarly; Delete is the reverse. For more information, see Tf Command-Line Utility Commands. Rename is the most complex. For the purposes of this example, let's check in one Rename and make another pending Rename as shown in this history chart. At this point, the original two items have the following names: One of the downstream effects of recursive operations is that the first item has changed names even though it has never been renamed. When you want to determine the name of a specific item at a specific time, you cannot just view the history of the item. You have to consider the history of its parent folder, grandparent folder, and so on, and also any parent folder it was ever part of. While the changeset that became #30 was still pending, a similar process had to be followed in order to compute its name in pending space. 
Second, notice that the names of related items have diverged between the two branches. Target space exists because that relationship must be preserved. For the purposes of this example, let's check in a second rename and make another pending merge, as shown in this history chart: Merge sees that there has been a rename to data.cs in the source, branch2, and must make the pending change in the target, branch1. By looking at the merge history, it determines which target item is related to branch2/test/data-ren.cs; that is the name of the item in target space. During the first merge, the process was straightforward; merge just substituted the branch root, $/project/branch2. This time, the process is more complex, as shown in the following table:
It's safe to say that Kubernetes is the de facto standard for orchestrating containers and the applications running in them. As the standard, a variety of managed services and orchestration options are available to choose from. In this blog post, we're going to take a look at running the Elastic Stack on Azure Kubernetes Service (AKS) using Elastic Cloud on Kubernetes (ECK) as the operator. Elastic Cloud on Kubernetes is the official operator for running the Elastic Stack on Kubernetes. ECK helps to manage, scale, upgrade, and deploy the Elastic Stack securely. In the steps below, we will deploy ECK on AKS and then use that deployment to collect logs, metrics, and security events from a virtual machine on Azure.

Here's what we'll do:

- Create an Azure Kubernetes Service cluster
- Install Elastic Cloud on Kubernetes
- Create an Elasticsearch cluster
- Deploy Kibana
- Create an Azure VM for us to monitor
- Deploy Metricbeat to collect VM metrics and events

Deploying AKS, ECK, Elasticsearch, and Kibana

Note: You need to have an Azure account and the Azure CLI for Microsoft Azure installed to run some platform-specific commands. The CLI is what you'll use to create your cluster with the following command.

Step 1: Create an AKS cluster

az aks create --resource-group resourceGroupName --name clusterName --node-count 3 --generate-ssh-keys

Step 2: Connect to the AKS cluster

az aks get-credentials --resource-group resourceGroupName --name clusterName

Step 3: Install the ECK operator

kubectl apply -f <ECK operator manifest URL>
kubectl -n elastic-system logs -f statefulset.apps/elastic-operator

Step 4: Create an Elasticsearch cluster with an external IP

We're using the default load balancer that is available with Azure Kubernetes Service.
cat <<EOF | kubectl apply -f -
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.9.2 # Make sure you use the version of your choice
  http:
    service:
      spec:
        type: LoadBalancer # Adds an external IP
  nodeSets:
  - name: default
    count: 1
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false
EOF

Step 5: Monitor the cluster creation

kubectl get elasticsearch
kubectl get pods -w

Step 6: Check the logs of the pod created

kubectl logs -f quickstart-es-default-0
kubectl get service quickstart-es-http

Step 7: Retrieve the password of the Elasticsearch cluster

PASSWORD=$(kubectl get secret quickstart-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode)
curl -u "elastic:$PASSWORD" -k "https://<IP_ADDRESS>:9200"

Note: The public IP address of Elasticsearch can be picked up by running: kubectl get svc quickstart-es-http

Step 8: Deploy Kibana

cat <<EOF | kubectl apply -f -
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 7.9.2 # Make sure Kibana and Elasticsearch are on the same version.
  http:
    service:
      spec:
        type: LoadBalancer # Adds an external IP
  count: 1
  elasticsearchRef:
    name: quickstart
EOF

Step 9: Monitor the Kibana deployment

kubectl get kibana

Ingesting and analyzing Azure metrics

Now that we have created an Elasticsearch cluster with Kibana in AKS, let's go ahead and ingest some observability data from Azure Cloud itself. Filebeat and Metricbeat make this easy by coming with an out-of-the-box Azure module, helping to easily gather logs (activity, sign-in, audit) and metrics (VM, container registry, billing) from the Azure Cloud Platform. In this tutorial, we will install Metricbeat on an Azure VM and enable the Azure cloud module. Before that, we also need to have credentials to authenticate with the Azure Monitor REST API, which uses the Azure Resource Manager authentication model.
We need to have client_id, client_secret, subscription_id, and tenant_id, which can be obtained by creating an Azure Active Directory app. You can use this guide to create an Azure AD application and service principal that can access resources.

Step 1: Create an Azure VM and SSH into the VM

az vm create \
  --resource-group myResourceGroup \
  --name myVM \
  --image UbuntuLTS \
  --admin-username azureuser \
  --generate-ssh-keys

ssh azureuser@<IP_ADDRESS>

Step 2: Install Metricbeat

wget

Step 3: Configure Elasticsearch and Kibana credentials in Metricbeat

This helps us to ship the data to the Elasticsearch cluster created on AKS as well as load dashboards in Kibana.

vim /etc/metricbeat/metricbeat.yml

setup.kibana:
  host: "https://<public_ip_addr>:5601"

Note: The public IP address of Kibana can be picked up by running: kubectl get svc quickstart-kb-http

vim /etc/metricbeat/modules.d/azure.yml.disabled

Replace the client_id, client_secret, subscription_id, and tenant_id for all the metricsets listed in the yml file. A sample might look like this:

- module: azure
  metricsets:
  - monitor
  enabled: true
  period: 300s
  client_id: '8dec1ab1-1691-48a6-af43-f87de68e971b'
  client_secret: '~fwL-MhOcguaD2yK1e_.OWHhhqwdp-p974'
  tenant_id: 'aa40685b-417d-4664-b4ec-8f7640719adb'
  subscription_id: '70bd6e77-4b1e-4835-8896-db77b8eef364'
  refresh_list_interval: 600s
  resources:
  - resource_query: "resourceType eq 'Microsoft.DocumentDb/databaseAccounts'"
    metrics:
    - name: ["DataUsage", "DocumentCount", "DocumentQuota"]
      namespace: "Microsoft.DocumentDb/databaseAccounts"

Step 4: Enable the Azure module and start Metricbeat

cd /usr/bin/
./metricbeat modules enable azure
./metricbeat setup --dashboards
./metricbeat -e

Step 5: Monitor Azure metrics in Kibana

Log into Kibana and head over to Dashboards. Search for "Azure" to look at several preconfigured dashboards covering storage, database, and billing. Here is what a sample monitoring dashboard would look like:

Wrapping up

And that's that!
You have successfully built a secure Elastic Stack deployment on a managed Kubernetes service. You can also deploy other applications like Elastic APM or Elastic Workplace Search. In addition, you can enable cross-cluster search and replication, which lets you deploy the Elastic Stack on Kubernetes clusters across regions to serve users. We encourage you to try ECK for yourself (on any Kubernetes service), and if you have further questions related to this blog post, you can always reach out to us on our community discussion forums or Slack workspace. If you'd like to learn more about how Elastic can help with Kubernetes observability, check out our Best of Elastic Observability webinar series. You can follow me at Aravind Putrevu.
{-| Use `Model`s, `View`s, and `Controller`s to build concurrent applications.
-}

{-# LANGUAGE RankNTypes #-}

module MVC (
    -- * Controllers
    -- $controller
      Controller
    , asInput
    , keeps

    -- * Views
    -- $view
    , View
    , asSink
    , handles

    -- * Models
    -- $model
    , Model
    , asPipe

    -- * MVC
    -- $mvc
    , runMVC

    -- * Managed resources
    -- $managed
    , Managed
    , managed

    -- * ListT
    , loop
    -- $listT

    -- * Re-exports
    -- $reexports
    , module Data.Functor.Constant
    , module Data.Functor.Contravariant
    , module Data.Monoid
    , module Pipes
    , module Pipes.Concurrent
    ) where

import Control.Applicative (Applicative(pure, (<*>)), liftA2)
import Control.Category (Category(..))
import Control.Monad.Morph (generalize)
import Control.Monad.Trans.State.Strict (State, execStateT)
import Data.Functor.Constant (Constant(Constant, getConstant))
import Data.Functor.Contravariant (Contravariant(contramap))
import Data.Monoid (Monoid(mempty, mappend, mconcat), (<>), First)
import qualified Data.Monoid as M
import Pipes
import Pipes.Concurrent
import Prelude hiding ((.), id)

{- $controller
    `Controller`s represent concurrent inputs to your system. Use the
    `Functor` and `Monoid` instances for `Controller` and `Managed` to unify
    multiple `Managed` `Controller`s. `Controller`s interleave their values.
-}

{-| A concurrent source

> fmap f (c1 <> c2) = fmap f c1 <> fmap f c2
>
> fmap f mempty = mempty
-}
newtype Controller a = AsInput (Input a)
-- This is just a newtype wrapper around `Input` because:
--
-- * I want the `Controller` name to "stick" in inferred types
--
-- * I want to restrict the API to ensure that `runMVC` is the only way to
--   consume `Controller`s. This enforces strict separation of `Controller`
This enforces strict separation of `Controller` -- logic from `Model` or `View` logic -- Deriving `Functor` instance Functor Controller where fmap f (AsInput i) = AsInput (fmap f i) -- Deriving `Monoid` instance Monoid (Controller a) where mappend (AsInput i1) (AsInput i2) = AsInput (mappend i1 i2) mempty = AsInput mempty -- | Create a `Controller` from an `Input` asInput :: Input a -> Controller a asInput = AsInput {-# INLINABLE asInput #-} {-| -} keeps :: ((b -> Constant (First b) b) -> (a -> Constant (First b) a)) -- ^ -> Controller a -- ^ -> Controller b keeps k (AsInput (Input recv_)) = AsInput (Input recv_') where recv_' = do ma <- recv_ case ma of Nothing -> return Nothing Just a -> case match a of Nothing -> recv_' Just b -> return (Just b) match = M.getFirst . getConstant . k (Constant . M.First . Just) {-# INLINABLE keeps #-} {- $view `View`s represent outputs of your system. Use `handles` and the `Monoid` instance of `View` to unify multiple `View`s `View`s sequences their outputs. If a @lens@ dependency is too heavy-weight, then you can manually generate `Traversal`s, which `handles` will also accept. Here is an example of how you can generate `Traversal`s -} newtype View a = AsSink (a -> IO ()) instance Monoid (View a) where mempty = AsSink (\_ -> return ()) mappend (AsSink write1) (AsSink write2) = AsSink (\a -> write1 a >> write2 a) instance Contravariant View where contramap f (AsSink k) = AsSink (k . f) -- | Create a `View` from a sink asSink :: (a -> IO ()) -> View a asSink = AsSink {-# INLINABLE asSink #-} {-| -} handles :: ((b -> Constant (First b) b) -> (a -> Constant (First b) a)) -- ^ -> View b -- ^ -> View a handles k (AsSink send_) = AsSink (\a -> case match a of Nothing -> return () Just b -> send_ b ) where match = M.getFirst . getConstant . k (Constant . M.First . 
Just)
{-# INLINABLE handles #-}

{- $model
    `Model`s are stateful streams, and they sit in between `Controller`s and
    `View`s.
-}

newtype Model s a b = AsPipe (Pipe a b (State s) ())

instance Category (Model s) where
    (AsPipe m1) . (AsPipe m2) = AsPipe (m1 <-< m2)
    id = AsPipe cat

{-| Create a `Model` from a `Pipe`

> asPipe (p1 <-< p2) = asPipe p1 . asPipe p2
>
> asPipe cat = id
-}
asPipe :: Pipe a b (State s) () -> Model s a b
asPipe = AsPipe
{-# INLINABLE asPipe #-}

{- $mvc
    Connect a `Model`, `View`, and `Controller` and an initial state together
    using `runMVC` to complete your application. `runMVC` is the only way to
    consume `View`s and `Controller`s: the way to add `View`s and
    `Controller`s to your program is by unifying them into a single `View` or
    `Controller` by using their `Monoid` instances. See the \"Controllers\" and
    \"Views\" sections for more details on how to do this.
-}

{-| Connect a `Model`, `View`, and `Controller` and initial state into a
    complete application.
-}
runMVC
    :: s
    -- ^ Initial state
    -> Model s a b
    -- ^ Program logic
    -> Managed (View b, Controller a)
    -- ^ Effectful output and input
    -> IO s
    -- ^ Returns final state
runMVC initialState (AsPipe pipe) viewController =
    _bind viewController $ \(AsSink sink, AsInput input) ->
        flip execStateT initialState $ runEffect $
                fromInput input
            >-> hoist (hoist generalize) pipe
            >-> for cat (liftIO . sink)
{-# INLINABLE runMVC #-}

{- $managed
-}

-- | A managed resource
newtype Managed r = Managed { _bind :: forall x . (r -> IO x) -> IO x }

-- `Managed` is the same thing as `Codensity IO` or `forall x .
ContT x IO`
--
-- I implement a custom type instead of reusing those types because:
--
-- * I need a non-orphan `Monoid` instance
--
-- * The name and type are simpler

instance Functor Managed where
    fmap f mx = Managed (\_return ->
        _bind mx (\x ->
        _return (f x) ) )

instance Applicative Managed where
    pure r = Managed (\_return ->
        _return r )
    mf <*> mx = Managed (\_return ->
        _bind mf (\f ->
        _bind mx (\x ->
        _return (f x) ) ) )

instance Monad Managed where
    return r = Managed (\_return ->
        _return r )
    ma >>= f = Managed (\_return ->
        _bind ma (\a ->
        _bind (f a) (\b ->
        _return b ) ) )

instance Monoid r => Monoid (Managed r) where
    mempty = pure mempty
    mappend = liftA2 mappend

-- | Create a `Managed` resource
managed :: (forall x . (r -> IO x) -> IO x) -> Managed r
managed = Managed
{-# INLINABLE managed #-}

{-| Create a `Pipe` from a `ListT` transformation

> loop (k1 >=> k2) = loop k1 >-> loop k2
>
> loop return = cat
-}
loop :: Monad m => (a -> ListT m b) -> Pipe a b m r
loop k = for cat (every . k)
{-# INLINABLE loop #-}

{- $listT
    `ListT` computations can be combined in more ways than `Pipe`s.
-}

{- $reexports
    "Data.Functor.Constant" re-exports `Constant`

    "Data.Functor.Contravariant" re-exports `Contravariant`

    "Data.Monoid" re-exports `Monoid`, (`<>`), `mconcat`, and `First` (the type only)

    "Pipes" re-exports everything

    "Pipes.Concurrent" re-exports everything
-}
clone 115400 -1 reassign -1 gwrapguile severity -1 wishlist retitle -1 gwrapguile should run test suite (test/run-test.scm) in debian/rules severity 115400 grave thanks First, this isn't remotely a gnucash bug. It's trivially observable by running "test/run-test.scm" in the gwrapguile build directory. ] aj@ia64au:~/nmu/gwrapguile/gwrapguile-1.1.11/test$ ./run-test.scm ] ERROR: In procedure hash_fn_create_handle_x: ] ERROR: Argument out of range: 4294967295 The bug appears to occur when: ] scm_protect_object(gw__module_gw_runtime_scm_ulongmax); is called from g-wrapped/gw-runtime.c:gw_init_module_gw_runtime(). From there on, it's guile internals. They basically look like: gc.c:scm_protect_object hashtab.c:scm_hashq_create_handle_x hashtab.c:scm_hash_fn_create_handle_x(_, _, _, scm_ihashq, _, _) hash.c:scm_ihashq(obj, _, _); hash.c:scm_ihashq looks like: unsigned int scm_ihashq (SCM obj, unsigned int n) { return (SCM_UNPACK (obj) >> 1) % n; } The problem now is pretty simple. SCM_UNPACK casts obj (which is basically 64 bits of something) to an scm_bits_t, which is a long, and the value happens to be ~0 (ULONG_MAX). ((long)-1) >> 1 == -1 (sign preserving), and -1 % n == -1 too (I think n's 31 here, for concreteness, but it shouldn't matter), and casting that to an unsigned int gives 2^32-1, which is out of range. QED. :) The reason this happens on ia64 (and alpha) but not i386 is due to differences in casting: (SCM_UNPACK(obj) >> 1) is a long, and n is an unsigned int. On i386, these are the same size, so the unsigned int takes precendence. On ia64, they're not, and the long can represent the entire range of the unsigned int, so n is upcast, and the sign is preserved [0]. So. Gnucash doesn't seem to be buggy at all. The only reason it showed up there was because it has some decent self tests at build time. Yay gnucash! Gwrapguile also has some self tests that show up the bug, but doesn't run them at build time. Boo gwrapguile. 
This should be fixed; hopefully the control commands above will remind Monsieur Goerzen to do so sometime.

It's not clear to me what the fix is as far as guile is concerned. Changing scm_bits_t from long to unsigned long would fix it, but would probably cause hosts of other issues. Going through and changing all the hash functions to have "n" be unsigned long instead of unsigned int should work. Otherwise going through and adding explicit casts would be a possibility. Upstream probably need to decide on this. It'd also be nice if they had some self tests of their own.

I'm not sure if a compiler warning for this would be possible, or worthwhile. A lint warning presumably would never be seen. :-/

Cheers,
aj

[0] K&R 2, A6.5 Arithmetic Conversions ... Otherwise,].

--
Anthony Towns <aj@humbug.org.au> <>
We came. We Saw. We Conferenced.
``Debian: giving you the power to shoot yourself in each toe individually.''
   -- with kudos to Greg Lehey

--
To UNSUBSCRIBE, email to debian-ia64-request@lists.debian.org
with a subject of "unsubscribe". Trouble? Contact listmaster@lists.debian.org
https://lists.debian.org/debian-alpha/2002/02/msg00140.html
The Java String split() method is used to split a string into a string array based on the provided regular expression.

Java String split

Sometimes we have to split a String in Java, for example splitting the data of a CSV file to get all the different values. The String class provides two useful methods to split a string in Java.

public String[] split(String regex): This java string split method is used to split the string based on the given regular expression string. The whole string is split and returned in the form of a string array. This method was introduced in Java 1.4. Notice that trailing empty strings are not included in the returned string array. Let's look at a simple java string split example to understand this.

String s = "abcaada";
System.out.println(Arrays.toString(s.split("a")));

The above code will produce the output [, bc, , d]. Notice that the last empty string is excluded from the returned string array.

public String[] split(String regex, int limit): This java string split method is used when we want the string to be split into a limited number of strings. For example, let's say we have a String variable that contains a name and address with a comma as the delimiter. Now, the address can have commas in it, so we can use this method. A short example of this string split is given below.

String s = "Pankaj,New York,USA";
String[] data = s.split(",", 2);
System.out.println("Name = "+data[0]); //Pankaj
System.out.println("Address = "+data[1]); //New York,USA

Note that the first method above actually utilizes the second method by passing limit as 0.

public String[] split(String regex) {
    return split(regex, 0);
}

Java String split important points

Some important points about the java String split method:

- If the regex doesn't match any part of the input string, the returned array contains a single element: the entire input string.

Java String split example

Here is an example of the java String split method.
package com.journaldev.util;

import java.util.Arrays;

public class JavaStringSplit {

    /**
     * Java String split example
     *
     * @param args
     */
    public static void main(String[] args) {
        String line = "I am a java developer";

        String[] words = line.split(" ");
        String[] twoWords = line.split(" ", 2);

        System.out.println("String split with delimiter: " + Arrays.toString(words));
        System.out.println("String split into two: " + Arrays.toString(twoWords));

        // split string delimited with special characters
        String wordsWithNumbers = "I|am|a|java|developer";
        String[] numbers = wordsWithNumbers.split("\\|");

        System.out.println("String split with special character: " + Arrays.toString(numbers));
    }
}

The above String split example program produces the following output:

String split with delimiter: [I, am, a, java, developer]
String split into two: [I, am a java developer]
String split with special character: [I, am, a, java, developer]

We can use a backslash to use java regular expression special characters as normal characters, like I have used the pipe (|) special character in the above program.

That's all for a quick roundup on the java string split example.

What if we want to split from the last occurrence to the first? For ex: 59273_96923_1066382_1050766_1073327-01. Output should be: first: 1073327-01, second: 1050766, third: 1066382, fourth: 59273_96923. The reason to print last to first is, I do not want to split the first one with (-).

Hi, I am a student. I want to split the four timings per day, separated by '-', but the four timings are fetched from the database. Thanks.

Hi! I want to input an integer, then my output will be all separated by a comma. Can you help with this problem?

Hi, I would like to print only the string which meets a criteria, in this case a special character (-). Please help
String line = "I am a ja-va developer";
Expected: ja-va

Hi Zoom, I suppose if there are multiple instances then you need all of them. One way is to create a similar array by splitting and then keeping the instances that you need.
Or you can simply search for the special character instances and print the element around it by splitting based on the condition, whether it is a space character or ".". Thanks

I am doing an MPhil. In my research work, I want to split on the comma only four times; the remaining text is needed as-is, then I perform the other remaining steps.

Hi S.GokulaGokhila, You can split the string into such segments by using something like:
stringName.split("*", 4);
Where * is the basis of the split. Thanks

Nice Articles.. Good work..
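Returning to the article's point about trailing empty strings: their removal depends on the limit argument. A short sketch (the example string "a,b,," is my own, but the behavior follows the String.split contract):

```java
import java.util.Arrays;

public class SplitLimitDemo {
    public static void main(String[] args) {
        String s = "a,b,,";
        // limit 0 (what the one-argument form uses):
        // trailing empty strings are removed from the result
        System.out.println(Arrays.toString(s.split(",")));     // [a, b]
        // negative limit: the pattern is applied as many times
        // as possible and trailing empty strings are kept
        System.out.println(Arrays.toString(s.split(",", -1))); // [a, b, , ]
        // positive limit: at most that many elements; the last
        // element holds the remainder, delimiters included
        System.out.println(Arrays.toString(s.split(",", 2)));  // [a, b,,]
    }
}
```

So split(regex, -1) is the call to use when trailing empty fields (say, empty CSV columns) must be preserved.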
https://www.journaldev.com/791/java-string-split
SVG Essentials/The XML You Need for SVG From WikiContent Current revision The purpose of this appendix is to introduce you to XML. A knowledge of XML is essential if you wish to write SVG documents directly rather than having them generated by some graphics utility. If you're already acquainted with XML, you don't need to read this appendix. If not, read on. The general overview of XML given in this appendix should be more than sufficient to enable you to work with the SVG documents that you will create. For further information about XML, the O'Reilly books, Learning XML by Erik T. Ray and XML in a Nutshell by Elliotte Rusty Harold and W. Scott Means, are invaluable guides, as is the weekly online magazine XML.com. Note that this appendix makes frequent reference to the formal XML 1.0 specification, which can be used for further investigation of topics that fall outside the scope of SVG. Readers are also directed to the "Annotated XML Specification," written by Tim Bray and published online at, which provides illuminating explanation of the XML 1.0 specification; and to "What is XML?" by Norm Walsh, also published on XML.com. What Is XML? XML, the Extensible Markup Language, is an Internet-friendly format for data and documents, invented by the World Wide Web Consortium (W3C). The "Markup" denotes a way of expressing the structure of a document within the document itself. XML has its roots in a markup language called SGML (Standard Generalized Markup Language), which is used in publishing and shares this heritage with HTML. XML was created to do for machine-readable documents on the Web what HTML did for human-readable documents — that is, provide a commonly agreed-upon syntax so that processing the underlying format becomes a commodity and documents are made accessible to all users. Unlike HTML, though, XML comes with very little predefined. 
HTML developers are accustomed both to the notion of using angle brackets < > for denoting elements (that is, syntax), and also to the set of element names themselves (such as head, body, etc.). XML shares only the former feature (i.e., the notion of using angle brackets for denoting elements). Unlike HTML, XML has no predefined elements, but is merely a set of rules that lets you write other languages like HTML.[1] Because XML defines so little, it is easy for everyone to agree to use the XML syntax, and then to build applications on top of it. It's like agreeing to use a particular alphabet and set of punctuation symbols, but not saying which language to use. However, if you're coming to XML from an HTML background, then prepare yourself for the shock of having to choose what to call your tags! Knowing that XML's roots lie with SGML should help you understand some of XML's features and design decisions. Note that, although SGML is essentially a document-centric technology, XML's functionality also extends to data-centric applications, including SVG. Commonly, data-centric applications do not need all the flexibility and expressiveness that XML provides and limit themselves to employing only a subset of XML's functionality. Anatomy of an XML Document The best way to explain how an XML document is composed is to present one. The following example shows an XML document you might use to describe two authors: <?xml version="1.0" encoding="us-ascii"?> <authors> <person id="lear"> <name>Edward Lear</name> <nationality>British</nationality> </person> <person id="asimov"> <name>Isaac Asimov</name> <nationality>American</nationality> </person> <person id="mysteryperson"/> </authors> The first line of the document is known as the XML declaration. This tells a processing application which version of XML you are using (the version indicator is mandatory) and which character encoding you have used for the document. In the previous example, the document is encoded in ASCII. 
(The significance of character encoding is covered later in this chapter.) If the XML declaration is omitted, a processor will make certain assumptions about your document. In particular, it will expect it to be encoded in UTF-8, an encoding of the Unicode character set. However, it is best to use the XML declaration wherever possible, both to avoid confusion over the character encoding and to indicate to processors which version of XML you're using. Elements and Attributes The second line of the example begins an element, which has been named "authors." The contents of that element include everything between the right angle bracket (>) in <authors> and the left angle bracket (<) in </authors>. The actual syntactic constructs <authors> and </authors> are often referred to as the element start tag and end tag, respectively. Do not confuse tags with elements! Note that elements may include other elements, as well as text. An XML document must contain exactly one root element, which contains all other content within the document. The name of the root element defines the type of the XML document. Elements that contain both text and other elements simultaneously are classified as mixed content. The SVG <text> element is such an element; it can contain text and <tspan> elements. The sample "authors" document uses elements named person to describe the authors themselves. Each person element has an attribute named id. Unlike elements, attributes can only contain textual content. Their values must be surrounded by quotes. Either single quotes (') or double quotes (") may be used, as long as you use the same kind of closing quote as the opening one. Within XML documents, attributes are frequently used for metadata (i.e., "data about data") — describing properties of the element's contents. This is the case in our example, where id contains a unique identifier for the person being described. 
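The "authors" document above can be explored with any XML parser. As a sketch, here it is read with Python's standard-library ElementTree — a tool chosen for illustration, not something the appendix prescribes — showing the element/attribute distinction in code:

```python
import xml.etree.ElementTree as ET

doc = """<?xml version="1.0" encoding="us-ascii"?>
<authors>
  <person id="lear">
    <name>Edward Lear</name>
    <nationality>British</nationality>
  </person>
  <person id="asimov">
    <name>Isaac Asimov</name>
    <nationality>American</nationality>
  </person>
  <person id="mysteryperson"/>
</authors>"""

root = ET.fromstring(doc)
print(root.tag)                # authors -- the single root element
for person in root:            # child elements keep document order
    # attributes arrive as a plain dict: their order is insignificant
    print(person.get("id"), person.findtext("name"))
```

Note that findtext returns None for the empty mysteryperson element, mirroring the fact that all we know about that author is the id attribute.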
As far as XML is concerned, it does not matter in what order attributes are presented in the element start tag. For example, these two elements contain exactly the same information as far as an XML 1.0 conformant processing application is concerned: <animal name="dog" legs="4"/> <animal legs="4" name="dog"/> On the other hand, the information presented to an application by an XML processor on reading the following two lines will be different for each animal element because the ordering of elements is significant: <animal><name>dog</name><legs>4</legs></animal> <animal><legs>4</legs><name>dog</name></animal> XML treats a set of attributes like a bunch of stuff in a bag — there is no implicit ordering — while elements are treated like items on a list, where ordering matters. New XML developers frequently ask when it is best to use attributes to represent information and when it is best to use elements. As you can see from the "authors" example, if order is important to you, then elements are a good choice. In general, there is no hard-and-fast "best practice" for choosing whether to use attributes or elements. The final author described in our document has no information available. All we know about this person is his or her ID, mysteryperson. The document uses the XML shortcut syntax for an empty element. The following is a reasonable alternative: <person id="mysteryperson"></person> Name Syntax XML 1.0 has certain rules about element and attribute names. In particular: - Names are case-sensitive: e.g., <person/> is not the same as <Person/>. - Names beginning with "xml" (in any permutation of uppercase or lowercase) are reserved for use by XML 1.0 and its companion specifications. - A name must start with a letter or an underscore, not a digit, and may continue with any letter, digit, underscore, or period.[2] A precise description of names can be found in Section 2.3 of the XML 1.0 specification, at. 
Well-Formed An XML document that conforms to the rules of XML syntax is known as well-formed. At its most basic level, well-formedness means that elements should be properly matched, and all opened elements should be closed. A formal definition of well-formedness can be found in Section 2.1 of the XML 1.0 specification, at. Table A-1 shows some XML documents that are not well-formed. Table A-1. Examples of poorly formed XML documents As in HTML, it is possible to include comments within XML documents. XML comments are intended to be read only by people. With HTML, developers have occasionally employed comments to add application-specific functionality. For example, the server-side include functionality of most web servers uses instructions embedded in HTML comments. XML provides other means of indicating application processing instructions,[3] so comments should not be used for any purpose other than those for which they were intended. The start of a comment is indicated with <!--, and the end of the comment with -->. Any sequence of characters, aside from the string --, may appear within a comment. Comments tend to be used more in XML documents intended for human consumption than those intended for machine consumption. The <desc> and <title> elements in SVG obviate much of the need for comments. Entity References Another feature of XML that is occasionally useful when writing SVG documents is the mechanism for escaping characters. Because some characters have special significance in XML, there needs to be a way to represent them. For example, in some cases the < symbol might really be intended to mean "less than" rather than to signal the start of an element name. Clearly, just inserting the character without any escaping mechanism would result in a poorly formed document because a processing application would assume you were starting another element. 
Another instance of this problem is needing to include both double quotes and single quotes simultaneously in an attribute's value. Here's an example that illustrates both these difficulties:

<badDoc>
  <para>
    I'd really like to use the < character
  </para>
  <note title="On the proper 'use' of the " character"/>
</badDoc>

XML avoids this problem by the use of the predefined entity reference. The word entity in the context of XML simply means a unit of content. The term entity reference means just that, a symbolic way of referring to a certain unit of content. XML predefines entities for the following symbols: left angle bracket (<), right angle bracket (>), apostrophe ('), double quote ("), and ampersand (&). An entity reference is introduced with an ampersand (&), which is followed by a name (using the word "name" in its formal sense, as defined by the XML 1.0 specification), and terminated with a semicolon (;). Table A-2 shows how the five predefined entities can be used within an XML document.

Table A-2. Predefined entity references in XML 1.0

&lt;    <
&gt;    >
&amp;   &
&apos;  '
&quot;  "

Here's our problematic document revised to use entity references:

<badDoc>
  <para>
    I'd really like to use the &lt; character
  </para>
  <note title="On the proper 'use' of the &quot; character"/>
</badDoc>

Being able to use the predefined entities is all you need for SVG; in general, entities are provided as a convenience for human-created XML. XML 1.0 allows you to define your own entities and use entity references as "shortcuts" in your document. Section 4 of the XML 1.0 specification, available at, describes the use of entities.

Character References

You are likely to find character references in the context of SVG documents. Character references allow you to denote a character by its numeric position in the Unicode character set (this position is known as its code point). Table A-3 contains a few examples that illustrate the syntax. Table A-3.
Example character references in UTF-8 Note that the code point can be expressed in decimal or, with the use of x as a prefix, in hexadecimal. Character Encodings The subject of character encodings is frequently a mysterious one for developers. Most code tends to be written for one computing platform and, normally, to run within one organization. Although the Internet is changing things quickly, most of us have never had cause to think too deeply about internationalization. XML, designed to be an Internet-friendly syntax for information exchange, has internationalization at its very core. One of the basic requirements for XML processors is that they support the Unicode standard character encoding. Unicode attempts to include the requirements of all the world's languages within one character set. Consequently, it is very large! Unicode Encoding Schemes Unicode 3.0 has more than 57,700 code points, each of which corresponds to a character.[4] If one were to express a Unicode string by using the position of each character in the character set as its encoding (in the same way as ASCII does), expressing the whole range of characters would require 4 octets[5] for each character. Clearly, if a document is written in 100 percent American English, it will be four times larger than required — all the characters in ASCII fitting into a 7-bit representation. This places a strain both on storage space and on memory requirements for processing applications. Fortunately, two encoding schemes for Unicode alleviate this problem: UTF-8 and UTF-16. As you might guess from their names, applications can process documents in these encodings in 8- or 16-bit segments at a time. When code points are required in a document that cannot be represented by one chunk, a bit-pattern is used that indicates that the following chunk is required to calculate the desired code point. In UTF-8 this is denoted by the most significant bit of the first octet being set to 1. 
This scheme means that UTF-8 is a highly efficient encoding for representing languages using Latin alphabets, such as English. All of the ASCII character set is represented natively in UTF-8 — an ASCII-only document and its equivalent in UTF-8 are byte-for-byte identical. This knowledge will also help you debug encoding errors. One frequent error arises because of the fact that ASCII is a proper subset of UTF-8 — programmers get used to this fact and produce UTF-8 documents, but use them as if they were ASCII. Things start to go awry when the XML parser processes a document containing, for example, characters such as Á. Because this character cannot be represented using only one octet in UTF-8, this produces a two-octet sequence in the output document; in a non-Unicode viewer or text editor, it looks like a couple of characters of garbage. Other Character Encodings Unicode, in the context of computing history, is a relatively new invention. Native operating system support for Unicode is by no means widespread. For instance, although Windows NT offers Unicode support, Windows 95 and 98 do not have it. XML 1.0 allows a document to be encoded in any character set registered with the Internet Assigned Numbers Authority (IANA). European documents are commonly encoded in one of the ISO Latin character sets, such as ISO-8859-1. Japanese documents commonly use Shift-JIS, and Chinese documents use GB2312 and Big 5. A full list of registered character sets may be found at. XML processors are not required by the XML 1.0 specification to support any more than UTF-8 and UTF-16, but most commonly support other encodings, such as US-ASCII and ISO-8859-1. Although most SVG transactions are currently conducted in ASCII (or the ASCII subset of UTF-8), there is nothing to stop SVG documents from containing, say, Korean text. You will, however, probably have to dig into the encoding support of your computing platform to find out if it is possible for you to use alternate encodings. 
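Both the ASCII-subset property and the multi-octet sequence for a character like Á can be checked directly. A quick sketch using Python's encoder (the sample strings are mine):

```python
# ASCII text is byte-for-byte identical when encoded as UTF-8...
print("abc".encode("utf-8"))       # b'abc'

# ...but 'Á' (U+00C1) requires a two-octet sequence, whose first
# octet has its most significant bit set to signal that the next
# chunk is needed to compute the code point.
encoded = "\u00c1".encode("utf-8")
print(encoded, len(encoded))       # b'\xc3\x81' 2
print(bin(encoded[0]))             # 0b11000011 -- high bit set
```

This is also exactly the debugging symptom described above: viewed as ASCII, those two octets look like a couple of characters of garbage.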
Validity In addition to well-formedness, XML 1.0 offers another level of verification, called validity. To explain why validity is important, let's take a simple example. Imagine you invented a simple XML format for your friends' telephone numbers: <phonebook> <person> <name>Albert Smith</name> <number>123-456-7890</number> </person> <person> <name>Bertrand Jones</name> <number>456-123-9876</number> </person> </phonebook> Based on your format, you also construct a program to display and search your phone numbers. This program turns out to be so useful, you share it with your friends. However, your friends aren't so hot on detail as you are, and try to feed your program this phone book file: <phonebook> <person> <name>Melanie Green</name> <phone>123-456-7893</phone> </person> </phonebook> Note that, although this file is perfectly well-formed, it doesn't fit the format you prescribed for the phone book, and you find you need to change your program to cope with this situation. If your friends had used number as you did to denote the phone number, and not phone, there wouldn't have been a problem. However, as it is, this second file is not a valid phonebook document. For validity to be a useful general concept, we need a machine-readable way of saying what a valid document is; that is, which elements and attributes must be present and in what order. XML 1.0 achieves this by introducing document type definitions (DTDs). For the purposes of SVG, you don't need to know much about DTDs. Rest assured that SVG does have a DTD, and it spells out in detail exactly which combinations of elements and attributes make up a valid document. Document Type Definitions (DTDs) The purpose of a DTD is to express the allowed elements and attributes in a certain document type and to constrain the order in which they must appear within that document type. A DTD is generally composed of one file, which contains declarations defining the element types and attribute lists. 
(In theory, a DTD may span more than one file; however, the mechanism for including one file inside another — parameter entities — is outside the scope of this book.) It is common to mistakenly conflate element and element types. The distinction is that an element is the actual instance of the structure as found in an XML document, whereas the element type is the kind of element that the instance is. Putting It Together What is important to you is knowing how to link a document to its defining DTD. This is done with a document type declaration <!DOCTYPE ...>, inserted at the beginning of the XML document, after the XML declaration in our fictitious example: <?xml version="1.0" encoding="us-ascii"?> <!DOCTYPE authors SYSTEM ""> <authors> <person id="lear"> <name>Edward Lear</name> <nationality>British</nationality> </person> <person id="asimov"> <name>Isaac Asimov</name> <nationality>American</nationality> </person> <person id="mysteryperson"/> </authors> This example assumes the DTD file has been placed on a web server at example.com. Note that the document type declaration specifies the root element of the document, not the DTD itself. You could use the same DTD to define "person," "name," or "nationality" as the root element of a valid document. Certain DTDs, such as the DocBook DTD for technical documentation,[6] use this feature to good effect, allowing you to provide the same DTD for multiple document types. A validating XML processor is obliged to check the input document against its DTD. If it does not validate, the document is rejected. To return to the phone book example, if your application validated its input files against a phone book DTD, you would have been spared the problems of debugging your program and correcting your friend's XML because your application would have rejected the document as being invalid. 
Most of the programs that read SVG files have a validating XML processor built into them to assure they have valid input (and to keep you honest!). The kinds of XML processors that are available are discussed in Section A.6.

XML Namespaces

XML namespaces let a single document mix element names drawn from different vocabularies by associating each vocabulary with a distinguishing URI. SVG applies the namespace in the root element of SVG documents:

<svg xmlns="" width="100" height="100">
....
</svg>

The xmlns attribute, which defines the namespace, is actually provided as a default value by the SVG DTD, so the declaration isn't required to appear in SVG documents. (If it does appear, it must have the exact value shown earlier.) The namespace declaration applies to all of the elements contained by the element in which the declaration appears, including the containing element. This means that the element named "svg" is in the namespace. SVG uses the "default namespace" for its content, using the SVG element names without any prefix. Namespaces can also be applied using prefixes, as shown here:

<svgns:svg xmlns:
....
</svgns:svg>

In this case, the namespace URI would apply to all elements using a prefix of "svgns". The SVG 1.0 DTD won't validate against such documents, but future versions of SVG may support this feature. Appendix F shows examples of how to use namespaces to integrate SVG with other XML vocabularies. Namespaces are very simple on the surface but are a well-known field of combat in XML arcana. For more information on namespaces, see XML In a Nutshell or Learning XML.

Tools for Processing XML

Many parsers exist for using XML with many different programming languages. Most are freely available, the majority being Open Source.

Selecting a Parser

An XML parser typically takes the form of a library of code that you interface with your own program. The SVG program hands the XML over to the parser, and it hands back information about the contents of the XML document. Typically, parsers do this either via events or via a document object model.
With event-based parsing, the parser calls a function in your program whenever a parse event is encountered. Parse events include things like finding the start of an element, the end of an element, or a comment. Most Java event-based parsers follow a standard API called SAX, which is also implemented for other languages such as Python and Perl. You can find more about SAX at. Document object model (DOM) based parsers work in a markedly different way. They consume the entire XML input document and hand back a tree-like data structure that the SVG software can interrogate and alter. The DOM is a W3C standard; documentation is available at. As XML matures, hybrid techniques that give the best of both worlds are emerging. If you're interested in finding out what's available and what's new for your favorite programming language, keep an eye on the following online sources: - XML.com Resource Guide - - XMLhack XML Developer News - - Free XML Tools Guide - XSLT Processors Many XML applications involve transforming one XML document into another or into HTML. The W3C has defined a special language called XSLT for doing transformations. XSLT processors are becoming available for all major programming platforms. XSLT works by using a stylesheet, which contains templates that describe how to transform elements from an XML document. These templates typically specify what XML to output in response to a particular element or attribute. Using a W3C technology called XPath gives you the flexibility to say not only "do this for every `person' element," but to give instructions as complex as "do this for the third `person' element whose `name' attribute is `Fred.'" Because of this flexibility, some applications have sprung up for XSLT that aren't really transformation applications at all, but take advantage of the ability to trigger actions on certain element patterns and sequencers. 
Combined with XSLT's ability to execute custom code via extension functions, the XPath language has enabled applications such as document indexing to be driven by an XSLT processor. You can see a brief introduction to XSLT in Chapter 12 in Section 12.3. The W3C specifications for XSLT and XPath can be found at and, respectively. Notes - ↑ To clarify XML's relationship with SGML: XML is an SGML subset. By contrast, HTML is an SGML application. SVG uses XML to express its operations and thus is an XML application. - ↑ Actually, a name may also contain a colon, but the colon is used to delimit a namespace prefix and is not available for arbitrary use. Knowledge of namespaces is not required for understanding SVG, but for more information, see Tim Bray's "XML Namespaces by Example," published at. - ↑ A discussion of processing instructions (PIs) is outside the scope of this book. For more information on PIs, see Section 2.6 of the XML 1.0 specification, at. - ↑ You can obtain charts of all these characters online by visiting. - ↑ An octet is a string of 8 binary digits, or bits. A byte is commonly, but not always, considered the same thing as an octet. - ↑ See.
http://commons.oreilly.com/wiki/index.php?title=SVG_Essentials/The_XML_You_Need_for_SVG&diff=7380&oldid=2676
Group fights

If I spawn 20+ people and split them up, is it even possible to make them fight each other? I tried to do it with Menyoo, but instead of fighting group vs group they decide to only target me. I also tried various bodyguard/gang mods, but those require me to be a part of the fight, which I'm trying to avoid. Does anybody know something else I could try, or even a solution?

@ashishcw After some searching, I figured it has to be coded. The thing is my coding is limited. I did find someone's example script which could possibly work for me with a bit of changing of the code (which I'm doing right now).

@RXMY Oh okay, well good luck with it, and if you face any hurdles, just raise the question on the forum, as I am sure many of us modders will be glad to help you further. Take care. (:

@ashishcw I have been messing around and trying to get at least something going, but I guess my limited coding knowledge is not enough. And I was thinking, is it possible to assign relationship groups to peds after they are spawned? Because that would allow me to skip the entire spawning part, because I can do that manually.

- Forrest Gimp
@RXMY You can easily do stuff like that in Build a Mission without coding knowledge. Scripting it manually wouldn't be that easy.

@RXMY Hello RXMY,

Well, I assume assigning them a relationship group is where you're stuck? If that is the case, it is totally possible to assign peds a relationship group before they are spawned. Here is a code snippet from one of my ongoing mods.

First, declare the ped and relationship type as global variables, so later you can use those variables anywhere in your code and not be restricted to where you have declared them. For e.g.

namespace YOURNAMESPACEHERE
{
    public class YOURSCRIPTNAME : Script
    {
        Ped VIP;
        Ped PlayerPed;
        int playerviprelationship;
    }
}

Now, I want to assign a relationship to both these ped types. I will do this in one of my methods.
playerviprelationship = World.AddRelationshipGroup("PLAYERVIPRELATION");
// This code simply adds the relationship group to the GTA World.

// Now simply assign both Player and Ped to the same group.
PlayerPed.RelationshipGroup = playerviprelationship;
VIP.RelationshipGroup = playerviprelationship;

// With the code below, I get to declare who is leader and who is not
GTA.Native.Function.Call(GTA.Native.Hash.SET_PED_AS_GROUP_LEADER, PlayerPed, playerviprelationship);
GTA.Native.Function.Call(Hash.SET_PED_AS_GROUP_MEMBER, VIP.Handle, playerviprelationship);

So now the Ped will always follow the Player as a bodyguard.

Next time, please try to add your code snippet to show what's going wrong. That way it would be easy for us modders to read and pinpoint the wrong code.

P.S. If you are new to scripthook coding I would recommend you to have a look at this native db. It's a bible for the modders who use scripthook as a plugin.

Hope this helped. Take care.

- Zippo Raid
this is definitely possible via editing a ped's relationship characteristics using create-a-mission or map editor like others have stated (or coding if you're capable of that stuff.. i'm certainly not), also: take a look at this thread to bypass the hard-coded AI limit present in the game by default

that thread mainly discusses the AI during melee combat but I'm pretty sure one of the values InfamousSabre says to change also affects gunfighting as a whole. good luck, those gameconfig changes in that thread are absolutely essential to me nowadays
https://forums.gta5-mods.com/topic/7741/group-fights
John Stark wrote: You might have to do:

    import notcert.Cloo

    class Toon { ...

because in your code the Toon class doesn't know about the Cloo class. John

John Jai wrote: Everyone congrats me. I want to show the author of the SCJP 310-065 book.

Good job! Check if the thread below is what you were searching for!

Chiranjeevi Kanthraj wrote: The class Toon is not in the proper package.
1. Even without changes you can execute this by placing Toon.java into the notcert package.
2. Make the Cloo class public; default visibility means it is accessible within the package only.
http://www.coderanch.com/t/552565/java-programmer-SCJP/certification/Compila-errora
SYNOPSIS

    #include <sys/mman.h>

    int mprotect(const void *addr, size_t len, int prot);

ERRORS

ENOMEM Addresses in the range [addr, addr+len-1] are invalid for the address space of the process, or specify one or more pages that are not mapped. (Before kernel 2.4.19, the error EFAULT was incorrectly produced for these cases.)

CONFORMING TO

SVr4, POSIX.1-2001.

EXAMPLE

The program below allocates four pages of memory, makes the third of these pages read-only, and then executes a loop that walks upwards through the allocated region modifying bytes.

    ...
    printf("Loop completed\n");     /* Should never happen */
    exit(EXIT_SUCCESS);
    }

SEE ALSO

mmap(2), sysconf(3)

COLOPHON

This page is part of release 3.23 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at.
http://www.linux-directory.com/man2/mprotect.shtml
Hi, there is a problem. I've installed django-cart into my project, and in views.py I import it as:

    from cart.cart import Cart

But it keeps raising ModuleNotFoundError inside cart.py, at this line:

    import models

There is a models.py located in the same folder as cart.py; the folder is "/home/adrop/venv/lib/python3.6/site-packages/cart". And I also tried to add this path to sys.path, which makes my sys.path look like this:

    ['', '/home/adrop/venv/lib/python36.zip', '/home/adrop/venv/lib/python3.6', '/home/adrop/venv/lib/python3.6/lib-dynload', '/usr/lib/python3.6', '/home/adrop/venv/lib/python3.6/site-packages', '/home/adrop/venv/lib/python3.6/site-packages/cart']

And it refused to find models.py as well. So, is there anything I missed?
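The thread ends without an answer here, but the symptom matches a common cause: `import models` is a Python 2 style implicit relative import, which Python 3 no longer supports, so a package written that way fails under Python 3.6 regardless of sys.path. A minimal sketch of the difference, using a throwaway package built in a temp directory (the `pkg`, `cart.py`, and `models.py` names are just for illustration):

```python
import os
import sys
import tempfile
import textwrap

# Build a tiny package whose module uses an explicit relative import.
tmp = tempfile.mkdtemp()
pkgdir = os.path.join(tmp, "pkg")
os.makedirs(pkgdir)

with open(os.path.join(pkgdir, "__init__.py"), "w") as f:
    f.write("")

with open(os.path.join(pkgdir, "models.py"), "w") as f:
    f.write("ANSWER = 42\n")

with open(os.path.join(pkgdir, "cart.py"), "w") as f:
    f.write(textwrap.dedent("""
        # 'import models' here would raise ModuleNotFoundError on Python 3;
        # an explicit relative import works:
        from . import models

        def answer():
            return models.ANSWER
    """))

sys.path.insert(0, tmp)
from pkg.cart import answer
print(answer())  # -> 42
```

So the fix would be patching the package (or finding a fork of it) that says `from . import models` rather than `import models`, not adding more entries to sys.path.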
https://www.pythonanywhere.com/forums/topic/12244/
Testing¶

Setting up a development environment locally¶

Follow our installation instructions and set up a suitable environment to build statsmodels from source. We recommend that you develop using a development install of statsmodels by running:

    pip install -e .

from the root directory of the git repository. The flag -e is for editable. This command compiles the C code and adds statsmodels to your active Python environment by creating links from your Python environment's libraries to the statsmodels source code. Therefore, changes to pure Python code will be immediately available to the user without a re-install. Changes to C code or Cython code require rerunning pip install -e . before these changes are available.

Test Driven Development¶

We strive to follow a Test Driven Development (TDD) pattern. All models or statistical functions that are added to the main code base are to have tests versus an existing statistical package, if possible.

Introduction to pytest¶

Like many packages, statsmodels uses the pytest testing system and the convenient extensions in numpy.testing. pytest will find any file, directory, function, or class name that starts with test or Test (classes only). Test functions should start with test; test classes should start with Test. These functions and classes should be placed in files with names beginning with test in a directory called tests.

Running Tests¶

Tests are run from the command line by calling pytest. Directly running tests using pytest requires that statsmodels is installed using pip install -e . as described above. Tests can be run at different levels of granularity:

Project level, which runs all tests. Running the entire test suite is slow, and normally this would only be needed if making deep changes to statsmodels.
    pytest statsmodels

Folder level, which runs all tests below a folder:

    pytest statsmodels/regression/tests

File level, which runs all tests in a file:

    pytest statsmodels/regression/tests/test_regression.py

Class level, which runs all tests in a class:

    pytest statsmodels/regression/tests/test_regression.py::TestOLS

Test level, which runs a single test. The first example runs a test in a class; the second runs a stand-alone test:

    pytest statsmodels/regression/tests/test_regression.py::TestOLS::test_missing
    pytest statsmodels/regression/tests/test_regression.py::test_ridge

How To Write A Test¶

NumPy provides a good introduction to unit testing with pytest. A test class following the statsmodels conventions looks like this:

    class TestProbitNewton(CheckDiscreteResults):

        @classmethod
        def setup_class(cls):
            # set up model
            data = sm.datasets.spector.load()
            data.exog = sm.add_constant(data.exog)
            cls.res1 = sm.Probit(data.endog, data.exog).fit(method='newton', disp=0)

            # set up results
            res2 = Spector.probit
            cls.res2 = res2

            # set up precision
            cls.decimal_tvalues = 3

        def test_model_specifc(self):
            assert_almost_equal(self.res1.foo, self.res2.foo, 4)

The main workhorse is the CheckDiscreteResults class. Notice that we can set the level of precision for tvalues to be different than the default in the subclass TestProbitNewton. All of the test classes have a @classmethod called setup_class. Otherwise, pytest would reinstantiate the class before every single test method. If the fitting of the model is time consuming, then this is clearly undesirable. Finally, we have a script at the bottom so that the tests can be run by executing the Python file.

Speeding up full runs¶

Running the full test suite is slow. Fortunately, it is only necessary to run the full suite when making low-level changes (e.g., to statsmodels.base). There are two methods available to speed up runs of the full test suite when needed.
Use the pytest-xdist package:

    pip install pytest-xdist
    export MKL_NUM_THREADS=1
    export OMP_NUM_THREADS=1
    pytest -n auto statsmodels

Skip slow tests using --skip-slow:

    pytest --skip-slow statsmodels

You can combine these two approaches for faster runs:

    export MKL_NUM_THREADS=1
    export OMP_NUM_THREADS=1
    pytest -n auto --skip-slow statsmodels

The test() method¶

The root of statsmodels and all submodules expose a test() method which can be used to run all tests either in the package (statsmodels.test()) or in a module (statsmodels.regression.test()). This method allows tests to be run from an installed copy of statsmodels even if it was not installed using the editable flag as described above. This method is required for testing wheels in release builds and is not recommended for development. Using this method, all tests are run using:

    import statsmodels.api as sm
    sm.test()

Submodule tests are run using:

    sm.discrete.test()
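The setup_class pattern described above is plain pytest, not statsmodels-specific. A stripped-down sketch (the CheckSquare/TestSquareOfThree names are hypothetical, standing in for CheckDiscreteResults/TestProbitNewton) showing how a shared check class plus a per-model setup_class keeps expensive setup out of each test:

```python
class CheckSquare:
    # Shared test logic; subclasses provide res1 (computed) and res2 (expected).
    decimal = 7  # default precision, may be overridden per subclass

    def test_value(self):
        # Runs once per subclass; compares computed vs. expected results.
        assert round(self.res1 - self.res2, self.decimal) == 0


class TestSquareOfThree(CheckSquare):
    decimal = 3  # looser precision, analogous to decimal_tvalues above

    @classmethod
    def setup_class(cls):
        # pytest calls this once per class, not before every test method,
        # so an expensive "model fit" would happen only once.
        cls.res1 = 3.0 ** 2
        cls.res2 = 9.0
```

Running `pytest` on a file containing these classes would collect and run `test_value` once for TestSquareOfThree, after calling its setup_class a single time.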
https://www.statsmodels.org/stable/dev/test_notes
While unit testing in Java is dominated by JUnit, C++ developers can choose between a variety of frameworks. See here for a comprehensive list. Here you can find a nice comparison of the biggest players in the game. Being probably one of the oldest frameworks, CppUnit sure has some usability issues but is still widely used. It is criticised mostly because you have to do a lot of boilerplate typing to add new tests.

In the following I will not repeat how tests can be written in CppUnit, as this is described already exhaustively (e.g. here or here). Instead I will concentrate on the task of how to structure CppUnit tests in bigger projects. "Bigger" in this case means at least a few architecturally independent parts which are compiled independently, i.e. into different libraries. Having independently compiled parts in your project means that you want to compile their unit tests independently, too. The goal is then to structure the tests so that they can easily be executed selectively during development time by the programmer, and at the same time are easy to execute as a whole during CI time (CI meaning Continuous Integration, of course).

As C++ has no reflection or other meta-programming elements like the Java annotations, things like automatic test discovery and how to add new tests become a whole topic of their own. See the CppUnit cookbook for how to do that with CppUnit. In my projects I only use the TestFactoryRegistry approach because it provides the most automation in this regard.

Let's begin with the simplest setup, the Link-Time Trap (see example source code): test runner and result reporter are set up in the "main" function that is compiled into an executable. The actual unit tests are compiled into separate libraries and are all linked to the executable that contains the main function. While this solution works well for small projects, it does not scale.
This is simply because every time you change something during the code-compile-test cycle, the unit test executable has to be relinked, which can take a considerable amount of time the bigger the project gets. You fall into the Link-Time Trap!

The solution I use in many projects is as follows: like in the simple approach, there is one test main function which is compiled into a test executable. All unit tests are compiled into libraries according to their place in the system architecture. To avoid the Link-Time Trap, they are not linked to the test executable but instead are automatically discovered and loaded during test execution.

1. Automatic Discovery

Applying a little convention-over-configuration, all testing libraries end with the suffix "_tests.so". The testing main function can then simply walk over the directory tree of the project and find all shared libraries that contain unit test classes.

2. Loading

If a "_tests.so" library has been found, it simply gets loaded using dlopen (under Unix/Linux). When the library is loaded, the unit tests are automatically registered with the TestFactoryRegistry.

3. Execution

After all unit test libraries have been found and loaded, test execution is the same as in the simple approach above.

Here is my enhanced testmain.cpp (see example source code).

#include ...
    using namespace boost::filesystem;
    using namespace std;

    void loadPlugins(const std::string& rootPath)
    {
        directory_iterator end_itr;
        for (directory_iterator itr(rootPath); itr != end_itr; ++itr) {
            if (is_directory(*itr)) {
                string leaf = (*itr).leaf();
                if (leaf[0] != '.') {
                    loadPlugins((*itr).string());
                }
                continue;
            }
            const string fileName = (*itr).string();
            if (fileName.find("_tests.so") == string::npos) {
                continue;
            }
            void* handle = dlopen(fileName.c_str(), RTLD_NOW | RTLD_GLOBAL);
            cout << "Opening : " << fileName.c_str() << endl;
            if (!handle) {
                cout << "Error: " << dlerror() << endl;
                exit(1);
            }
        }
    }

    int main(int argc, char** argv)
    {
        string rootPath = "./";
        if (argc > 1) {
            rootPath = static_cast<const char*>(argv[1]);
        }
        cout << "Loading all test libs under " << rootPath << endl;
        string runArg = std::string("All Tests");

        // get registry
        CppUnit::TestFactoryRegistry& registry = CppUnit::TestFactoryRegistry::getRegistry();
        loadPlugins(rootPath);

        // Create the event manager and test controller
        CppUnit::TestResult controller;

        // Add a listener that collects test results
        CppUnit::TestResultCollector result;
        controller.addListener(&result);

        CppUnit::TextUi::TestRunner* runner = new CppUnit::TextUi::TestRunner;
        std::ofstream xmlout("testresultout.xml");
        CppUnit::XmlOutputter xmlOutputter(&result, xmlout);
        CppUnit::TextOutputter consoleOutputter(&result, std::cout);
        runner->addTest(registry.makeTest());
        runner->run(controller, runArg.c_str());
        xmlOutputter.write();
        consoleOutputter.write();

        return result.wasSuccessful() ? 0 : 1;
    }

As you can see, the loadPlugins function uses the Boost.Filesystem library to walk over the directory tree. It also takes a rootPath argument which you can give as a parameter when you call the test main executable. This solves our goal stated above. When you want to execute unit tests selectively during development, you can give the path of the corresponding testing library as a parameter.
Like so:

    ./testmain path/to/specific/testing/library

In your CI environment, on the other hand, you can execute all tests at once by giving the root path of the project, or the path where all testing libraries have been installed to:

    ./testmain project/root

Hi volkerkaiser: Thanks for your great post!!! How can I do this in the C language? And how do I write the build script which Hudson will use? Thanks in advance!

@Joey Z: For the most part you can set up a Hudson build script like I described here. You just have to replace the parts specific to CppUnit with the specifics of the unit testing framework you use.

Hi volkerkaiser: It seems that CUnit doesn't have the automatic registration functionality. How can I do that? Would you give me some ideas? Thanks

@Joey Z: AFAIK the automatic registration functionality of CppUnit relies on language constructs specific to C++, so I'm not sure if this could be done at all in C. I'm also not familiar with CUnit, so sorry, no ideas here.

Any suggestions on how to generalize this to other platforms? For instance, the file extension for dynamic libraries on OSX is dylib.
https://schneide.wordpress.com/2009/04/14/structuring-cppunit-tests/
Requirements
- PIC16F688 code
- PIC16F877A code

Hi, thanks. Above R2, there's a wire going to JP4. The transmitter is placed in JP4. When the LED is on, the transmitter is on.

Please can you provide the actual circuit diagram of this? It will be very helpful to me for my college-level project.

The schematic diagrams are in the article…

Can you please provide a schematic diagram of this wireless thermometer? I am failing to understand the provided diagrams, since they are broken into blocks.

If you draw a line between the corresponding labels, you're there.

Can I use a PICkit 3 to transfer the code to the PIC? Also, I realised that the zip file for the receiver station has two programs, one for the LCD and one for the PIC. How does that work? I mean, how do I transfer the code to the LCD?

How can I exclude the ICSP connections? I want to burn the program to the PICs whilst outside the circuit. Can it be possible? Thanks in advance.

If you don't want the ICSP header, you can skip it. The only thing that is important is that you have a 10K resistor from MCLR to +5V.

Hi. Yes, the PICkit 3 would work great. The ZIP file on the receiver is the MPLAB X project folder. You don't transfer the files to the LCD, but to the PIC16F877A. It should work if you open the project in MPLAB X and compile the C file.

Thanks so much for the tutorial. I have tried almost everything I know of. However, the LCD only displays black square blocks. I really don't know what I am doing wrong. I need your help please. I really want to get this to work.

Thanks. Is it possible to use the same source code as yours even if I am going to programme the PICs out of circuit? Is there something that needs to be changed in the configuration of the source code?

AFAIK the source code should work.
'AFAIK'? What do you mean?

AFAIK = As Far As I Know

Please, I wish to attach a second temperature sensor, to be covered with a wet cloth so as to read the wet-bulb temperature, such that both dry-bulb and wet-bulb temperatures are shown on the display. Can you help me with that?

Did you measure the power consumption? It would be great to run the thermometer from a battery. I think the 433 MHz transmitter is the most demanding module; the micro can wait in low-power mode for the next acquisition.

Joseph_Cseh: Hello Jens, I want to make a PC board for this project, but I'm afraid the connection of the transmitter block is not right. I want to use another pair of RF modules (433 MHz) from HOPERF because they are easier for me to get. I checked the pin configuration of your transmitter module from the picture: pin1 = DATA (this is OK), pin2 = GND (not VCC), pin3 = VCC (not GND). Besides this, there is a fourth pin next to pin1. This is the antenna. So pin2 of JP4 should be connected to the collector of transistor T1, and pin3 of JP4 should be connected to R2, and these should be connected to +5V, not to +12V. Although this is not an error. How should the ANTENNA pin of the transmitter be connected? That's all.

Is the firmware written in mikroC? I used mikroC PRO, but the compiler did not compile the include instructions and the return (EXIT_SUCCESS) instruction. Why is that? I'm waiting for your answer.

Hi. I'm using MPLAB X and the XC8 compiler. You can download it for free from the Microchip web site. Please check the first line in the requirements list. With 12V I get a longer transmitting range for my type of transmitter. Please check the datasheet for your transmitter. It would be easy to add an extra pin for the antenna.

Joseph_Cseh: Hello Jens, I want to tell you that the connection of the oscillator blocks is not good. It is not good in either the transmitter or the receiver. You short the crystal to ground if you connect it this way. You have to connect the crystal above the two capacitors C6 and C7, between them.
Sorry, but I can't draw here. I want to ask you: are your hex files good? Because the C compiler cannot open your include instructions, so I can't create the hex files by myself. I would appreciate your answer.

Oh my gosh, this is wild! You're right! I'll make the changes as soon as I can. Thanks. I've compiled both files with MPLAB X and the XC8 compiler. Both are working.

Joseph_Cseh: Hello Jens, I see you've corrected the oscillator blocks. It's right now. But you didn't correct your transmitter block. You directly connected the GND and VCC pins to the power supply, so the transmitter remains on all the time. There's nothing to switch the transmitter on and off. REMEDY: connect the GND pin of the transmitter and the pin of the cap C8 to the collector of transistor T1. The transistor T1 will switch the transmitter on and off; as it is, you just made LED1 blink. I designed PCBs for the transmitter and the receiver with OrCAD. I can send them to you if you want, so you can upload them. But I don't have your email address. I can send in the PCB files. I'm waiting for your answer.

Joseph_Cseh: Hello Jens, I want to tell you that your pin configuration of the LCD module is not good. Pin5 of the LCD module is R/W, not E. Pin6 of the LCD is E, not R/W. In other words, Pin5 and Pin6 are swapped. I want to ask you: is the LCD module connection right in spite of this little mistake? I appreciate your answer.

Joseph_Cseh: Hello Jens, I've built up your project but it doesn't work properly. I corrected your mistakes in the hardware, designed a PCB for it and built it up. I could send in my design, but I don't have your e-mail address. I have one question: why don't you correct your mistakes in the hardware? Not everyone can find these bugs in your design. It is misleading. I wrote the bugs to you. Look Jens, it seems that something is wrong in your software.

In the display, the temperature range and your sentences show up, and they keep scrolling over and over, no matter whether I switch the transmitter on or not. If I switch on the transmitter, the display should show the temperature range and the sentences you wrote, but they keep scrolling over and over again. It should not be so. Now I'm debugging your software. It is not easy to find the bugs in a program written by somebody else. I don't know the XC8 language; I used to write my programs in CCS or mikroC. Could you help me with the debugging? I appreciate your help.
https://www.allaboutcircuits.com/projects/how-to-make-a-wireless-thermometer-with-a-pic-microcontroller/
PyX — Example: graphs/minimal.py

Plotting data contained in a file

    from pyx import *

    g = graph.graphxy(width=8)
    g.plot(graph.data.file("minimal.dat", x=1, y=2))
    g.writeEPSfile("minimal")
    g.writePDFfile("minimal")

Description

This example shows how to draw a graph representing data stored in a file. We assume that the data is arranged in the file minimal.dat in a whitespace-separated two-column form:

    # minimal.dat
    1 2
    2 3
    3 8
    4 13
    5 18
    6 21

The first step is to create an instance of the graphxy class, which can be found in the graph module. By convention, we call it g. The constructor expects at least some information about the desired size of the graph. Here, we specify a width of 8 cm. If we only specify one dimension of the graph size, PyX calculates the other automatically, assuming a ratio corresponding to the golden ratio.

Next, we add some data to the yet empty graph. In order to do so, we first create a graph.data.file instance, which reads the file with the name given as the first argument, i.e., in the present case, "minimal.dat". In addition, we have to specify how the data is organized in the file. To this end, we use the keyword arguments x=1 and y=2, which tell PyX that the first (second) column of the file contains the x (y) values. The graph.data.file instance is then directly passed to the plot method of the graph g.

Note that PyX by default ignores comments starting with a # sign when reading in the data from the file. The previous statement is actually not completely correct, as PyX uses the last comment preceding the actual data to give names to the columns. Thus, for a file looking like

    # my data (this line is ignored by PyX, but not the following)
    # x y
    1 2
    ...

you wouldn't need to label the columns in the graph.data.file call at all.

Finally, we write the graph to an EPS and PDF file. Here, we use the fact that every graph is (by inheritance) an instance of the canvas class as well, such that we can directly write it into a file.
Of course, you can also insert a graph into another canvas and write this canvas to a file later. This way you can, for instance, easily arrange more than one graph on a page. Later examples will make use of this fact.

In PyX, the way data is plotted in a graph is defined by a so-called graph style. A couple of standard graph styles are contained in the module graph.style. Depending on the data source, PyX chooses a default style. Here, we are taking the data from a file and PyX assumes that the values represent a discrete set of data points. Hence, it chooses the symbol style graph.style.symbol to plot the data. To override this default behaviour, you can pass a list of styles as the second argument to the plot method. For instance, to have PyX draw a line through the data points, you can use

    g.plot(graph.data.file("minimal.dat", x=1, y=2), [graph.style.line()])
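The column-naming convention described above (the last comment line before the data names the columns) can be illustrated without PyX itself. The sketch below is a plain-Python stand-in, not PyX's actual implementation, that parses a minimal.dat-style string the same way:

```python
import io

def read_columns(text):
    """Parse whitespace-separated data; the last comment line before
    the data supplies column names, mimicking PyX's graph.data.file."""
    names, rows = None, []
    for line in io.StringIO(text):
        line = line.strip()
        if not line:
            continue
        if line.startswith("#"):
            if not rows:              # only comments before the data count
                names = line.lstrip("#").split()
            continue
        rows.append([float(v) for v in line.split()])
    cols = list(zip(*rows))
    if names is None or len(names) != len(cols):
        # fall back to numeric column names, as with x=1, y=2
        names = [str(i + 1) for i in range(len(cols))]
    return dict(zip(names, cols))

data = read_columns("""\
# my data (this line is ignored, but not the following)
# x y
1 2
2 3
3 8
""")
print(data["x"])  # -> (1.0, 2.0, 3.0)
```

Note how the first comment line is overwritten by the second: only the last comment before the data rows determines the column names, which is exactly the behaviour the documentation describes.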
http://pyx.sourceforge.net/examples/graphs/minimal.html
Opened 11 years ago
Closed 11 years ago

#148 closed bug (Fixed)

Premature Evaluation when using -O

Description

Hi all, the following program crashes (Prelude.head: empty list) when compiled with optimizations by both ghc-6.0 and ghc-5.04.3. When the commented-out line is activated, it runs fine with 5.04.3 but still crashes when compiled by 6.0. Without -O both run perfectly fine. Before each command the file is touched.

Without the commented line:

    % ghc-5.04.3 test1.hs && ./a.out
    % ghc-5.04.3 -O test1.hs && ./a.out
    Fail: Prelude.head: empty list
    % ghc-6.0 test1.hs && ./a.out
    % ghc-6.0 -O test1.hs && ./a.out
    Fail: Prelude.head: empty list

With it:

    % ghc-5.04.3 test1.hs && ./a.out
    % ghc-5.04.3 -O test1.hs && ./a.out
    % ghc-6.0 test1.hs && ./a.out
    % ghc-6.0 -O test1.hs && ./a.out
    Fail: Prelude.head: empty list

Full test done on:

    % ghc --version
    The Glorious Glasgow Haskell Compilation System, version 6.0
    % uname -a
    Linux localhost 2.4.20 #1 Sat Nov 30 14:46:26 CET 2002 i686 unknown

I also tested the prebuilt binaries for linux-x86 and win98, but only with ghc-5.04.3 -O and ghc-6.0 -O (same behaviour). AFAICS this really seems to be a bug in GHC, as a) laziness should prevent `len' from being evaluated until after program exit ;o) b) I fail to see why this behaviour would change when upgrading my compiler c) AFAIK -O shouldn't change semantics (well, not in this way at least) d) I don't see why the "_ <- return ()" would make any difference (except for messing up the optimizer ;)). If it isn't a bug, please enlighten me.

Happy hacking, Remi

    module Main where

    import System
    import Monad

    main = do
        args <- getArgs
        when (null args) $ exitWith ExitSuccess
        -- _ <- return ()
        let len = read (head args) :: Int
        print (show len)
        print len

Change History (1)

comment:1 Changed 11 years ago by simonpj
- Status changed from assigned to closed
https://ghc.haskell.org/trac/ghc/ticket/148
Details
- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Affects Version/s: 1.8-beta-3
- Fix Version/s: 1.8-rc-1, 1.9-beta-1
- Component/s: None
- Labels: None

Description

If I have a class annotated with @Log4j and use the 'log' variable in a static method, I get a compilation error.

Log4jTest.groovy:

    import groovy.util.logging.Log4j

    @Log4j
    class TestLog4j {
        public static void main(String[] args) {
            log.info "Hello World"
        }
    }

Error message:

    Apparent variable 'log' was found in a static scope but doesn't refer to a local variable, static field or class. Possible causes: You attempted to reference a variable in the binding or an instance variable from a static context. You misspelled a classname or statically imported field. Please check the spelling. You attempted to use a method 'log' but left out brackets in a place not allowed by the grammar.

Activity

The logger injected in the classes is a field, but not a static one, hence why you're seeing this error message. So should the logger be a static field instead? I think that's the question to ask.

I think it should be static for several reasons. First, it is a de facto Java standard to have a static logger variable, and I think most Java devs would expect it to be that way. So anyone coming from the Java world would be surprised that it was not static. Secondly, the non-static version does not work in one important use case, which is inside the main() function. Is there any reason for it not to be static?

Guillaume, the injected field is static, but the compiler fails before the phase where the field is injected. It fails at semantic analysis, while the field is injected at canonicalization. Changing from

    @GroovyASTTransformation(phase = CompilePhase.CANONICALIZATION)
    public class LogASTTransformation implements ASTTransformation {

to

    @GroovyASTTransformation(phase = CompilePhase.SEMANTIC_ANALYSIS)
    public class LogASTTransformation implements ASTTransformation {

solves the problem.
I'm not sure why the original transform phase is CANONICALIZATION though.

[update] Changing the phase doesn't break any test. If the phase is important, there should be test cases for that.

Yup, I've changed the compile phase and noticed things were fine too. I haven't added tests yet though. I'm wondering if there was some reason for the canonicalization phase being chosen though.

So it looks like the Log transformation happens in the canonicalization phase, which is after the semantic analysis phase where the static verifier kicks in and throws the exception you've been seeing.

Weird, my comment didn't get through initially... hmmm... the JIRA mysteries...

Hi all, sorry but I think the issue is still valid (at least for the @Log annotation, my use case). I'm using Groovy 1.8.2 (from the latest Eclipse plugin) and I have the same error described here. Tell me if you need another ticket (I'm not able to reopen this). Thanks, Sandro

I am not sure if this is related, but here is a blog that describes the same problem and his attempt to fix it. His fix does not work either, though (you will get an NPE if you use it).
http://jira.codehaus.org/browse/GROOVY-4609
Evaluating Names and Other Worksheet Formula Expressions

This content is outdated and is no longer being maintained. It is provided as a courtesy for individuals who are still using these technologies. This page may contain URLs that were valid when originally published, but now link to sites or pages that no longer exist.

One of the most important features that Excel exposes through the C API is the ability to convert any string formula that can legally be entered into a worksheet to a value, or array of values. This is essential for XLL functions and commands that must read the contents of defined names, for example. This ability is exposed through the xlfEvaluate function, as shown in this example.

    int WINAPI evaluate_name_example(void)
    {
        wchar_t *expression = L"\016!MyDefinedName";
        XLOPER12 xNameText, xNameValue;

        xNameText.xltype = xltypeStr;
        xNameText.val.str = expression;

        // Try to evaluate the name. Will fail with a #NAME? error
        // if MyDefinedName is not defined in the active workbook.
        Excel12(xlfEvaluate, &xNameValue, 1, &xNameText);

        // Attempt to convert the value to a string and display it in
        // an alert dialog. This fails if xNameValue is an error value.
        Excel12(xlcAlert, 0, 1, &xNameValue);

        // Must free xNameValue in case MyDefinedName evaluated to a string.
        Excel12(xlFree, 0, 1, &xNameValue);
        return 1;
    }

Note that when you are evaluating a worksheet name, either on its own or in a formula, you must prefix the name with '!', at least. Otherwise, Excel tries to find the name in a hidden namespace reserved for DLLs. You can create and delete hidden DLL names using the xlfSetName function. You can get the definition of any defined name, whether it is a hidden DLL name or a worksheet name, using the xlfGetDef function. The full specification for a worksheet name takes the following form:

    ='C:\example folder\[Book1.xls]Sheet1'!Name

Note that Office Excel 2007 introduces a number of new file extensions.
You can omit the path, the workbook name, and the sheet name where there is no ambiguity among the open workbooks in this Excel session. The next example evaluates the formula COUNT(A1:IV65536) for the active worksheet and displays the result. Note the need to prefix the range address with '!', which is consistent with the range reference convention on XLM macro sheets. The C API follows this XLM convention:

    =A1                     A reference to cell A1 on the current macro sheet. (Not defined for XLLs.)
    =!A1                    A reference to cell A1 on the active sheet (which could be a worksheet or macro sheet).
    =Sheet1!A1              A reference to cell A1 on the specified sheet, Sheet1 in this case.
    =[Book1.xls]Sheet1!A1   A reference to cell A1 on the specified sheet in the specified workbook.

In an XLL, a reference without a leading exclamation point (!) cannot be converted to a value. It has no meaning because there is no current macro sheet. Note that a leading equals sign (=) is optional and is omitted in the next example.

    int WINAPI evaluate_expression_example(void)
    {
        wchar_t *expression = L"\022COUNT(!A1:IV65536)";
        XLOPER12 xExprText, xExprValue;

        xExprText.xltype = xltypeStr;
        xExprText.val.str = expression;

        // Try to evaluate the formula.
        Excel12(xlfEvaluate, &xExprValue, 1, &xExprText);

        // Attempt to convert the value to a string and display it in
        // an alert dialog. Will fail if xExprValue is an error.
        Excel12(xlcAlert, 0, 1, &xExprValue);

        // Not strictly necessary, as COUNT never returns a string,
        // but it does no harm.
        Excel12(xlFree, 0, 1, &xExprValue);
        return 1;
    }

You can also use the xlfEvaluate function to retrieve the registration ID of an XLL function from its registered name, which can then be used to call that function using the xlUDF function.
https://msdn.microsoft.com/en-us/library/bb687836(v=office.12).aspx
"Think globally, act locally"

Run your GitHub Actions locally!

Why would you want to do this? Two reasons:

- Fast feedback: rather than committing and pushing every time you want to test out changes to your .github/workflows/ files (or any changes to embedded GitHub actions), you can use act to run the actions locally. The environment variables and filesystem are all configured to match what GitHub provides.
- Local task runner: with act, you can use the GitHub Actions defined in your .github/workflows/ to replace your Makefile!

Let's see it in action with a sample repo!

Installation

act depends on docker to run workflows. If you are using macOS, please be sure to follow the steps outlined in Docker Docs for how to install Docker Desktop for Mac. If you are using Windows, please follow the steps for installing Docker Desktop on Windows. If you are using Linux, you will need to install Docker Engine. act is currently not supported with podman or other container backends (it might work, but it's not guaranteed). Please see #303 for updates.

Homebrew:

    brew install act

or, if you want to install a version based on the latest commit, you can run the below (it requires a compiler to be installed, but Homebrew will suggest how to install it if you don't have one):

    brew install act --HEAD

MacPorts:

    sudo port install act

Chocolatey:

    choco install act-cli

Scoop:

    scoop install act

AUR:

    yay -S act

Nix, global install:

    nix-env -iA nixpkgs.act

or through nix-shell:

    nix-shell -p act

If you have Go 1.16+, you can install the latest released version of act directly from source by running:

    go install github.com/nektos/act@latest

or, if you want to install the latest unreleased version:

    go install github.com/nektos/act@master

If you want a smaller binary size, run the above commands with -ldflags="-s -w":

    go install -ldflags="-s -w" github.com/nektos/act@latest

Bash script: run this command in your terminal:

    curl | sudo bash

Manual download: download the latest release and add the path to your binary into your PATH.
# Command structure:
act [<event>] [options]
If no event name is passed, it will default to "on: push"

# List the actions for the default event:
act -l

# List the actions for a specific event:
act workflow_dispatch -l

# Run the default (`push`) event:
act

# Run a specific event:
act pull_request

# Run a specific job:
act -j test

# Run in dry-run mode:
act -n

# Enable verbose-logging (can be used with any of the above commands)
act -v

First act run

When running act for the first time, it will ask you to choose the image to be used as the default. It will save that information to ~/.actrc; please refer to Configuration for more information about .actrc and to Runners for information about used/available Docker images.

-a, --actor string                user that triggered the event (default "nektos/act")
--artifact-server-path string     Defines the path where the artifact server stores uploads and retrieves downloads from. If not specified the artifact server will not start.
--artifact-server-port string     Defines the port where the artifact server listens (will only bind to localhost). (default "34567")
-b, --bind                        bind working directory to container, rather than copy
--container-architecture string   Architecture which should be used to run containers, e.g.: linux/amd64. If not specified, will use host default architecture. Requires Docker server API Version 1.41+. Ignored on earlier Docker server platforms.
--container-cap-add stringArray   kernel capabilities to add to the workflow containers (e.g. --container-cap-add SYS_PTRACE)
--container-cap-drop stringArray  kernel capabilities to remove from the workflow containers (e.g. --container-cap-drop SYS_PTRACE)
--container-daemon-socket string  Path to Docker daemon socket which will be mounted to containers (default "/var/run/docker.sock")
--defaultbranch string            the name of the main branch
--detect-event                    Use first event type from workflow as event that triggered the workflow
-C, --directory string            working directory (default ".")
-n, --dryrun                      dryrun mode
--env stringArray                 env to make available to actions with optional value (e.g. --env myenv=foo or --env myenv)
--env-file string                 environment file to read and use as env in the containers (default ".env")
-e, --eventpath string            path to event JSON file
--github-instance string          GitHub instance to use. Don't use this if you are not using GitHub Enterprise Server. (default "github.com")
-g, --graph                       draw workflows
-h, --help                        help for act
--insecure-secrets                NOT RECOMMENDED! Doesn't hide secrets while printing logs.
-j, --job string                  run job
-l, --list                        list workflows
--no-recurse                      Flag to disable running workflows from subdirectories of specified path in '--workflows'/'-W' flag
-P, --platform stringArray        custom image to use per platform (e.g. -P ubuntu-18.04=nektos/act-environments-ubuntu:18.04)
--privileged                      use privileged mode
-p, --pull                        pull docker image(s) even if already present
-q, --quiet                       disable logging of output from steps
--rebuild                         rebuild local action docker image(s) even if already present
-r, --reuse                       don't remove container(s) on successfully completed workflow(s) to maintain state between runs
--rm                              automatically remove container(s)/volume(s) after a workflow(s) failure
-s, --secret stringArray          secret to make available to actions with optional value (e.g. -s mysecret=foo or -s mysecret)
--secret-file string              file with list of secrets to read from (e.g. --secret-file .secrets) (default ".secrets")
--use-gitignore                   Controls whether paths specified in .gitignore should be copied into container (default true)
--userns string                   user namespace to use
-v, --verbose                     verbose output
-w, --watch                       watch the contents of the local repo and run when files change
-W, --workflows string            path to workflow file(s) (default "./.github/workflows/")

In case you want to pass a value for ${{ github.token }}, you should pass GITHUB_TOKEN as a secret: act -s GITHUB_TOKEN=[insert token or leave blank for secure input].

MODULE_NOT_FOUND

A MODULE_NOT_FOUND during the docker cp command (#228) can happen if you are relying on local changes that have not been pushed. This can get triggered if the action is using a path, like:

- name: test action locally
  uses: ./

In this case, you must use actions/checkout@v2 with a path that has the same name as your repository. If your repository is called my-action, then your checkout step would look like:

steps:
  - name: Checkout
    uses: actions/checkout@v2
    with:
      path: "my-action"

If the path: value doesn't match the name of the repository, a MODULE_NOT_FOUND will be thrown.

docker context support

The current docker context isn't respected (#583). You can work around this by setting DOCKER_HOST before running act, e.g.:

export DOCKER_HOST=$(docker context inspect --format '{{.Endpoints.docker.Host}}')

GitHub Actions offers managed virtual environments for running workflows. In order for act to run your workflows locally, it must run a container for the runner defined in your workflow file. Here are the images that act uses for each runner type and size:

Windows and macOS based platforms are currently unsupported and won't work (see issue #97).

These default images do not contain all the tools that GitHub Actions offers by default in their runners. Many things can work improperly or not at all while running those images.
Additionally, some software might still not work even if installed properly, since GitHub Actions run in fully virtualized machines while act is using Docker containers (e.g. Docker does not support running systemd). In case of any problems, please create an issue in the respective repository (issues with act in this repository, issues with nektos/act-environments-ubuntu:18.04 in nektos/act-environments, and issues with any image from user catthehacker in catthehacker/docker_images).

If you need an environment that works just like the corresponding GitHub runner, then consider using an image provided by nektos/act-environments:

- nektos/act-environments-ubuntu:18.04 - built from the Packer file GitHub uses in actions/virtual-environments. ⚠️ 🐘 *** WARNING - this image is >18GB 😱***
- ghcr.io/catthehacker/ubuntu:full-* - built from the Packer template provided by GitHub; see catthehacker/virtual-environments-fork or catthehacker/docker_images for more information

To use a different image for the runner, use the -P option:

act -P <platform>=<docker-image>

If your workflow uses ubuntu-18.04, the line below is an example of changing the Docker image used to run that workflow:

act -P ubuntu-18.04=nektos/act-environments-ubuntu:18.04

If you use multiple platforms in your workflow, you have to specify them to change which image is used. For example, if your workflow uses ubuntu-18.04, ubuntu-16.04 and ubuntu-latest, specify all platforms like below:

act -P ubuntu-18.04=nektos/act-environments-ubuntu:18.04 -P ubuntu-latest=ubuntu:latest -P ubuntu-16.04=node:16-buster-slim

To run act with secrets, you can enter them interactively, supply them as environment variables or load them from a file. The following options are available for providing secrets:

- act -s MY_SECRET=somevalue - use somevalue as the value for MY_SECRET.
- act -s MY_SECRET - check for an environment variable named MY_SECRET and use it if it exists. If the environment variable is not defined, prompt the user for a value.
- act --secret-file my.secrets - load secret values from the my.secrets file (.env format).

You can provide default configuration flags to act by either creating a ./.actrc or a ~/.actrc file. Any flags in the files will be applied before any flags provided directly on the command line. For example, a file like below will always use the nektos/act-environments-ubuntu:18.04 image for the ubuntu-latest runner:

# sample .actrc file
-P ubuntu-latest=nektos/act-environments-ubuntu:18.04

Additionally, act supports loading environment variables from an .env file. The default is to look in the working directory for the file, but this can be overridden by:

act --env-file my.env

.env:

MY_ENV_VAR=MY_ENV_VAR_VALUE
MY_2ND_ENV_VAR="my 2nd env var value"

Act adds a special environment variable ACT that can be used to skip a step that you don't want to run locally. E.g. a step that posts a Slack message or bumps a version number.

- name: Some step
  if: ${{ !env.ACT }}
  run: |
    ...

Every GitHub event is accompanied by a payload. You can provide these events in JSON format with the --eventpath flag to simulate specific GitHub events kicking off an action. For example:

{
  "pull_request": {
    "head": { "ref": "sample-head-ref" },
    "base": { "ref": "sample-base-ref" }
  }
}

act -e pull-request.json

Act will properly provide github.head_ref and github.base_ref to the action as expected.

Act supports using and authenticating against private GitHub Enterprise servers. To use your custom GHE server, set the CLI flag --github-instance to your hostname (e.g. github.company.com). Please note that if your GHE server requires authentication, we will use the secret provided via GITHUB_TOKEN. Please also see the official documentation for GitHub actions on GHE for more information on how to use actions.

Need help? Ask on Gitter!

Want to contribute to act? Awesome! Check out the contributing guidelines to get involved.

git clone git@github.com:nektos/act.git
make test
make install
https://awesomeopensource.com/project/nektos/act
Even if for some reason you needed a statement similar to Pascal's "with" (and no, I don't know any example where this could be useful, since you'll lose access to anything other than the class), you could implement it with existing syntax:

class Test:
    '''Class to be interrogated.'''
    def __init__(self, value):
        self.value = value

test = Test(10)

class getattrs(dict):
    '''An auxiliary class.'''
    def __init__(self, instance):
        self.instance = instance
    def __getitem__(self, name):
        return getattr(self.instance, name)
    def __setitem__(self, name, value):
        return setattr(self.instance, name, value)

# interrogate test: value += 5
exec('value += 5', getattrs(test))
print(test.value) # prints 15.

On Fri, Aug 14, 2009 at 12:11 PM, Nick Coghlan<ncoghlan at gmail.com> wrote:
> Steven D'Aprano wrote:
>> I don't think there are any problems with a Pascal-style 'with'
>> statement that couldn't be overcome, but I don't think the benefit is
>> great enough to create a new keyword for it. Can you explain in more
>> detail why this proposed feature is useful?
>
> Also, if you just want to be able to chain multiple namespaces together,
> you can do that by implementing an appropriate class with a custom
> __getattr__ method.
>
> Cheers,
> Nick.
>
> --
> Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
> ---------------------------------------------------------------
> _______________________________________________
> Python-ideas mailing list
> Python-ideas at python.org
>
https://mail.python.org/pipermail/python-ideas/2009-August/005546.html
This is the mail archive of the libstdc++@gcc.gnu.org mailing list for the libstdc++ project.

I am trying to patch gcc 2.95.3 with libstdc++ version 2.90.8. The documentation says that this is possible, but that I need to apply a patch to libstdc++ to mirror the changes made in gcc/glibc. The patch is supposed to be located at:

However, this link appears to be broken. Can I get this from some place else? Any help getting this fix will be much appreciated.

Also, the installation instructions say it is better to rebuild all of gcc when applying this patch instead of building libstdc++ by itself. Otherwise, namespaces must be disabled. The question I have is: do namespaces need to be disabled if I already have a working copy of gcc 2.95.3 with STL and namespaces already working?

All help appreciated.

Samir
http://gcc.gnu.org/ml/libstdc++/2002-08/msg00256.html
This seems strange. The maximum residency of 12MB sounds about correct for my data. But what's with the 59MB of "slop"? According to the ghc docs: There's this page also: but it doesn't really make things clearer for me.

Is the slop number above likely to be a significant contribution to net memory usage? Are there any obvious reasons why the code below could be generating so much? The data file in question has 61k lines, and is <6MB in total.

Thanks, Tim

-------- Map2.hs --------------------------------------------

module Main where

import qualified Data.Map as Map
import qualified Data.ByteString.Char8 as BS
import System.Environment
import System.IO

type MyMap = Map.Map BS.ByteString BS.ByteString

foldLines :: (a -> String -> a) -> a -> Handle -> IO a
foldLines f a h = do
    eof <- hIsEOF h
    if eof
      then (return a)
      else do
        l <- hGetLine h
        let a' = f a l
        a' `seq` foldLines f a' h

undumpFile :: FilePath -> IO MyMap
undumpFile path = do
    h <- openFile path ReadMode
    m <- foldLines addv Map.empty h
    hClose h
    return m
  where
    addv m "" = m
    addv m s  = let (k,v) = readKV s
                in k `seq` v `seq` Map.insert k v m
    readKV s  = let (ks,vs) = read s
                in (BS.pack ks, BS.pack vs)

dump :: [(BS.ByteString,BS.ByteString)] -> IO ()
dump vs = mapM_ putV vs
  where putV (k,v) = putStrLn (show (BS.unpack k, BS.unpack v))

main :: IO ()
main = do
    args <- getArgs
    case args of
      [path] -> do
        v <- undumpFile path
        dump (Map.toList v)
        return ()
http://www.haskell.org/pipermail/glasgow-haskell-users/2011-March/020209.html
XML Binding Namespace

The extensions used to describe XML format bindings are defined in the namespace. CXF tools use the prefix xformat to represent the XML binding extensions. Add the following line to your contracts:

Editing

To map an interface to a pure XML payload format, do the following:

- Add the namespace declaration to include the extensions defining the XML binding.
- Add a standard WSDL binding element to your contract to hold the XML binding, give the binding a unique name, and specify the name of the WSDL portType element that represents the interface being bound.
- Add an xformat:binding child element to the binding element to identify that the messages are being handled as pure XML documents without SOAP envelopes.
- Optionally, set the xformat:binding element's rootNode attribute to a valid QName.
- For each operation defined in the bound interface, add a standard WSDL operation element to hold the binding information for the operation's messages.
- For each operation added to the binding, add the input, output, and fault children elements to represent the messages used by the operation. These elements correspond to the messages defined in the interface definition of the logical operation.
- Optionally add an xformat:body element with a valid rootNode attribute to the added input, output, and fault elements to override the value of rootNode set at the binding level.

If any of your messages have no parts, for example the output message for an operation that returns void, you must set the rootNode attribute. For messages with one part, CXF will always generate a valid XML document even if the rootNode attribute is not set. However, the message below would generate an invalid XML document. Without the rootNode attribute specified in the XML binding, CXF will generate an XML document similar to the one below for the message defined above. The generated XML document is invalid because it has two root elements: pairName and entryNum.
If you set the rootNode attribute, as shown below, CXF will wrap the elements in the specified root element. In this example, the rootNode attribute is defined for the entire binding and specifies that the root element will be named entrants. An XML document generated from the input message would be similar to the one shown below. Notice that the XML document now only has one root element.
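Pulling the steps above together, a sketch of such a binding is shown below. The operation, type, and element names, and the xformat namespace URI, are illustrative assumptions based on CXF conventions — the document's own namespace declaration and XML examples did not survive extraction:

```xml
<!-- Hypothetical WSDL fragment; names follow the entrants/entryNum example in the text -->
<wsdl:binding name="entryXMLBinding" type="tns:entryPortType"
              xmlns:xformat="http://cxf.apache.org/bindings/xformat">
  <!-- Treat messages as plain XML (no SOAP envelope); wrap parts in <entrants> -->
  <xformat:binding rootNode="tns:entrants"/>
  <wsdl:operation name="addEntry">
    <wsdl:input name="addEntryRequest"/>
    <wsdl:output name="addEntryResponse">
      <!-- Override the binding-level rootNode for this one message -->
      <xformat:body rootNode="tns:entryReceipt"/>
    </wsdl:output>
  </wsdl:operation>
</wsdl:binding>
```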
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=56199&showComments=true&showCommentArea=true
CPSC 124, Fall 1996 Second Test

This is the second test given in CPSC 124: Introductory Programming, Fall 1996. See the information page for that course for more information. The answers given here are sample answers that would receive full credit. However, they are not necessarily the only correct answers.

Question 1: Define the following terms, as they relate to this course:

a) Reference to an object

Answer: A reference to an object is just the address of that object in memory. A variable can never hold an object, only a reference to an object. The object itself is stored on the heap. A reference acts as a "pointer" to the object, which can be used to find the object when necessary.

b) Subclass

Answer: A subclass is a class that extends (or is based on) another class, which is called its superclass. The subclass inherits all the behaviors and properties of its superclass. It can then add to or modify the inherited behaviors.

c) Polymorphism

Answer: Polymorphism refers to the fact that different objects can respond to the same method in different ways, depending on the actual type of the object. This can occur because a method can be overridden in a subclass. In that case, objects belonging to the subclass will respond to the method differently from objects belonging to the superclass. (Note: If B is a subclass of A, then a variable of type A can refer to either an object of type A or an object of type B. Let's say that var is such a variable and that action() is a method in class A that is redefined in class B. Consider the statement "var.action()". Does this execute the method from class A or the method from class B? The answer is that there is no way to tell! The answer depends on what type of object var refers to, a class A object or a class B object. The method executed by var.action() depends on the actual type of the object that var refers to, not on the type of the variable var. This is the real meaning of polymorphism.)
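The note above can be made concrete with a small pair of classes. The names A, B, and action() follow the note; everything else here is an illustrative sketch, not part of the original test:

```java
// A small demo of the dynamic dispatch described in the note above.
class A {
    String action() { return "A.action"; }
}

class B extends A {
    @Override
    String action() { return "B.action"; }  // overrides A's version
}

public class PolymorphismDemo {
    // The parameter's declared type is A, but the method that runs is
    // chosen by the actual class of the object passed in.
    static String call(A var) {
        return var.action();
    }

    public static void main(String[] args) {
        System.out.println(call(new A()));  // prints A.action
        System.out.println(call(new B()));  // prints B.action
    }
}
```

Even though `call` only ever sees a variable of type A, passing it a B runs B's overridden method.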
d) this (when used in a method in a Java program) Answer: When used in the definition of an instance method, this refers to the object to which the method belongs. Think of calling an instance method as sending a message to an object. Then this refers to the object that received the message. Question 2: What is the relationship between "classes" and "objects"? What is the difference between them? Answer: A class is used as a template for making objects. That is, a class contains information about what sort of data an object should contain and what sort of behaviors an object should have. One class can be used to make many objects. All the objects have the same behavior, as specified by the class, and they all hold the same kind of instance variables (although the data stored in those instance variables can be different from one object to another). Classes and objects are totally different sorts of things! Objects do not exist until they are created, after a program is running, and they are destroyed before the program ends. A class is actually part of the program, so of course it exists the whole time a program is running. If you have a class, you can create a subclass based on that class, but you can't do anything similar with an object. Question 3: Write a subroutine named middle with three parameters of type int. The subroutine should return the middle number among its three parameters, in order of size. For example, middle(3,17,12) would have the value 12; middle(737,-33,1266) would have the value 737; and middle(42,42,42) would have the value 42. Answer: There are many ways to write this subroutine. 
Here is the shortest:

public static int middle(int x, int y, int z) {
    if ( x <= y && y <= z || z <= y && y <= x )
        return y;  // y is in the middle
    if ( y <= x && x <= z || z <= x && x <= y )
        return x;  // x is in the middle
    return z;  // it must be z that is in the middle
}

You might find it easier to think about it this way: There are six different orders in which x, y, and z can occur. You can go through and check each possible order as one case in an if statement. So here is another way to write the subroutine:

public static int middle(int x, int y, int z) {
    if ( x <= y && y <= z )
        return y;
    else if ( x <= z && z <= y )
        return z;
    else if ( y <= x && x <= z )
        return x;
    else if ( y <= z && z <= x )
        return z;
    else if ( z <= x && x <= y )
        return x;
    else  // This is the case where z <= y && y <= x
        return z;
}

(It might be worth noting that Java won't let you use else if on the last line instead of just else. The problem is that with else if, it looks like there is a possibility that this routine won't return any value at all, and Java won't let you write such a routine. Of course, you know that you've covered all the possibilities and that in fact some value will always be returned. But Java isn't smart enough to figure that out.)

Question 4: Suppose that you want a very simple class to represent the money in a bank account. It needs three methods, one to deposit a given amount into the account, one to withdraw a given amount, and one to check how much money is in the account. It also needs a constructor that creates an account containing a specified initial amount of money. The class should have one instance variable, for storing the amount of money in the account. Write a complete Java class satisfying this requirement.
Answer:

public class BankAccount {

    protected double balance;  // amount of money in the account

    public BankAccount(double initialDeposit) {  // constructor
        balance = initialDeposit;
    }

    public void deposit(double amount) {
        balance = balance + amount;
    }

    public void withdraw(double amount) {
        balance = balance - amount;
    }

    public double checkBalance() {
        return balance;
    }

}

Question 5: Draw the picture that will be produced by the following paint() method:

public static void paint(Graphics g) {
    for (int i=10; i <= 210; i = i + 50)
        for (int j = 10; j <= 210; j = j + 50)
            g.drawLine(i,10,j,60);
}

Answer: The outer loop is executed for values of i equal to 10, 60, 110, 160, and 210. For each of these values, the inner loop is executed for j equal to 10, 60, 110, 160, and 210. The drawLine is therefore executed 25 times -- and so, 25 different lines are drawn. These lines connect the five points (10,10), (60,10), (110,10), (160,10), and (210,10) to the five points (10,60), (60,60), (110,60), (160,60), and (210,60) in all possible pairings. Here is the picture:

Question 6: Suppose that you are given a class named Sorter. An object of this class keeps a sorted list of numbers. When a new number is added, it is inserted into its correct place in the list. For example, if 17.42 is added to the list 3.4, 12.1, 19.96, 20.0, then the list will be modified to 3.4, 12.1, 17.42, 19.96, 20.0. The constructor in the Sorter class creates an initially empty list. The class includes the methods

void add(double newNum) -- adds a number to the list
double get(int N) -- returns the Nth number from the list

Write a complete Java program that uses a Sorter object to sort a list of 10 numbers entered by the user. The program should ask the user to enter 10 numbers. It should read each number and add it to the Sorter object. Then it should retrieve the numbers from the Sorter object in order and print out each number on the console.
Answer:

public class TestSorter {

    public static void main(String[] args) {

        Console console = new Console();
        Sorter sort = new Sorter();  // create a Sorter object

        console.putln("Please enter 10 numbers:");

        for (int i = 0; i < 10; i++) {
            console.put("? ");
            double x = console.getlnDouble();  // read a number from the user...
            sort.add(x);                       // ...and add the number to the Sorter
        }

        console.putln();
        console.putln("The numbers in sorted order are: ");

        for (int i = 0; i < 10; i++) {
            double num = sort.get(i);  // get the i-th number from the sorter...
            console.putln(num);        // ...and print it out to the console
        }

        console.close();

    }  // end of main()

}  // end of class TestSorter

Question 7: Programs written for a graphical user interface have to deal with "events." Explain what is meant by the term "events." Give at least two different examples of events, and discuss how a program might respond to those events.

Answer: An event is anything that can occur asynchronously, not under the control of the program, to which the program might want to respond. GUI programs are said to be "event-driven" because for the most part, such programs simply wait for events and respond to them when they occur. In many (but not all) cases, an event is the result of a user action, such as when the user clicks the mouse button on a canvas, types a character, clicks a button, or makes a selection from a pop-up menu. The program might respond to a mouse-click on a canvas by drawing a figure, to a typed character by adding the character to an input box, or to a click on a button by clearing the canvas. More generally, a programmer can set up any desired response to an event by writing an event-handling routine for that event.

Question 8: One of the big problems in software development is the "reuse problem." That is, how can pieces of programs that have already been written be reused in other projects, in order to avoid duplication of effort.
Discuss how object-oriented programming in general, and inheritance in particular, can help to solve the software reuse problem. An example would be helpful. Answer: Reuse is desirable because developing new software is very expensive, so any opportunity to cut down on the size of the job by reusing previous work is a good thing. However, reusing old software can itself be difficult. In object-oriented programming, a class is designed from the beginning to be reusable. The same class can be used over and over in projects that require the same capabilities. More important, inheritance makes it possible to customize a class to a particular project without going into the class itself and modifying its source code (and possibly introducing new errors into the class in the process). All you have to do is make a subclass and put any additions and modifications that you need in the subclass. Question 9: Object-oriented programming means more than just using objects and classes in a program. It also means applying object-oriented ideas to the analysis of problems and the design of programs. Explain briefly how object-oriented analysis and design would work. What is the process used? What is the goal? Answer: The goal of object-oriented analysis and design is to produce a set of classes and objects that can be used to solve the problem under consideration. The basic idea is simple: Identify concepts that are involved in the problem, and create classes to represent those concepts. Then identify behaviors that the objects in the program will need, and write methods to provide those behaviors. One way to approach the problem is to start with an English description of what the program is supposed to do. The nouns in this description are candidates to be classes or objects in the program. The verbs are candidates to be methods. 
You could then try to discover other classes and methods that you might need with a role-playing game, in which you see how the classes you've already designed will carry out their assigned tasks. As the role-playing proceeds, you are likely to find other classes and methods that you can add to your program. David Eck
https://math.hws.edu/eck/cs124/javanotes1/tests96/test2.html
Member 90 Points

Dec 22, 2015 11:37 AM|SautinSoft|LINK

Hi Community!

We are happy to announce the release of the new DOCX Document .Net library! It's a 100% standalone and independent .Net assembly, completely written in C#. DOCX Document .Net helps you to develop any .Net (ASP.Net, SilverLight, WPF, Console ...) application which: The library doesn't require MS Office, it even doesn't use System.Drawing. The main and single requirement is .Net 4.0 or higher.

This easy C# sample shows "How to create a new DOCX document in .Net":

using System;
using System.IO;
using SautinSoft.Document;

namespace Sample
{
    class Sample
    {
        static void Main(string[] args)
        {
            // Let's create a simple DOCX document.
            DocumentCore docx = new DocumentCore();

            // Add new section.
            Section section = new Section(docx);
            docx.Sections.Add(section);

            // Let's set page size A4.
            section.PageSetup.PaperType = PaperType.A4;

            // Add paragraph.
            docx.Content.End.Insert("Hello World!", new CharacterFormat() { Size = 25, FontColor = Color.Blue, Bold = true });

            // Save DOCX to a file.
            docx.Save(@"d:\HelloWorld.docx");
        }
    }
}

Download:. Code Samples:.

You are welcome with your questions and offers.

Best wishes, Max
https://forums.asp.net/p/2080773/6002374.aspx?DOCX+Document+Net+allows+to+create+and+parse+DOCX+documents+in+C+
Hello,

The code initializes an irregular array of integers and prints the length of each row in the array. Then I attempted to print table[i][j], but I run into an out-of-bounds exception. In the for loops, the conditional statements are based on the value of table.length, which changes depending upon the row you're looking at. I believe this has to do with the out-of-bounds exception.

Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 3
at hslengthdemoarray2.HSLengthDemoArray2.main(HSLengthDemoArray2.java:32)

Line 32 is:

System.out.print(table[i][j] + " ");

Output:

2d table array: 1 2 3

The program should output the following:

2d table array: 1 2 3 4 5 6 7 8 9
2d table array with the one value changed: 1 66 3 4 5 6 7 8 9

When I turn this into a regular 2d array, where each row has 3 elements, the code runs fine and the output is as expected. I think this is because the nested for loop only acknowledges one value of table.length for all of i and j. For example, I thought when i = 0, table[0].length would be 3; in the j loop, table.length = 3 based on the i loop. Then, when i = 1, table[1].length = 2; in the j loop, table.length = 2 based on the i loop. On the third step of the outer loop, when i = 2, table[2].length = 4; in the j loop, table.length = 3 based on the i loop. This appears not to be the case. It appears that table.length either holds a value based on i throughout both for loops, or table.length changes based on i and j. It would be good if it only depended on i, I think.

Code Java:

package hslengthdemoarray2;

public class HSLengthDemoArray2 {

    public static void main(String[] args) {

        int[][] table = {
            {1,2,3},
            {4,5},
            {6,7,8,9},
        }; // A variable length table.

        // Print the length of each row within the array.
        System.out.println("length of table[0] is " + table[0].length);
        System.out.println("length of table[1] is " + table[1].length);
        System.out.println("length of table[2] is " + table[2].length);
        System.out.println();

        System.out.print("2d table array: ");
        for(int i=0; i<table.length; i++) {
            for(int j=0; j<table.length; j++) {
                System.out.print(table[i][j] + " ");
            }
        }

        table[0][1] = 66;
        System.out.print("2d table array with the one value changed: ");
        for(int i=0; i<table.length; i++) {
            for(int j=0; j<table.length; j++) {
                System.out.print(table[i][j] + " ");
            }
        }
    }
}
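For what it's worth, the usual fix for the exception described above is to bound the inner loop with table[i].length (the length of the current row) rather than table.length (the number of rows). A sketch — the class and helper-method names here are mine, not from the original post:

```java
// Demonstrates iterating a jagged (irregular) 2D array with the correct bounds.
public class JaggedArrayDemo {

    // Flatten the table into a space-separated string using table[i].length
    // (row i's own length) as the inner loop bound.
    static String flatten(int[][] table) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < table.length; i++) {
            for (int j = 0; j < table[i].length; j++) {
                sb.append(table[i][j]).append(' ');
            }
        }
        return sb.toString().trim();
    }

    public static void main(String[] args) {
        int[][] table = { {1, 2, 3}, {4, 5}, {6, 7, 8, 9} };
        System.out.println("2d table array: " + flatten(table));
        table[0][1] = 66;
        System.out.println("2d table array with the one value changed: " + flatten(table));
    }
}
```

With the corrected bound, every element of every row is visited exactly once and no index ever exceeds the current row's length.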
http://www.javaprogrammingforums.com/%20loops-control-statements/10173-out-bounds-error-2d-irregular-array-integers-printingthethread.html
Gesture recognition

This class allows you to easily create new gestures and compare them:

    from kivy.gesture import Gesture, GestureDatabase

    # Create a gesture
    g = Gesture()
    g.add_stroke(point_list=[(1, 1), (3, 4), (2, 1)])
    g.normalize()

    # Add it to the database
    gdb = GestureDatabase()
    gdb.add_gesture(g)

    # And for the next gesture, try to find it!
    g2 = Gesture()
    # ...
    gdb.find(g2)

Warning: you don't really want to do this: it's more of an example of how to construct gestures dynamically. Typically, you would need a lot more points, so it's better to record gestures in a file and reload them to compare later. Look in the examples/gestures directory for an example of how to do that.

class kivy.gesture.Gesture(tolerance=None)
    A Python implementation of a gesture recognition algorithm by Oleg Dopertchouk. Implemented by Jeiel Aranal (chemikhazi@gmail.com), released into the public domain.

    add_stroke(point_list=None)
        Adds a stroke to the gesture and returns the Stroke instance. The optional point_list argument is a list of the mouse points for the stroke.

    dot_product(comparison_gesture)
        Calculates the dot product of the gesture with another gesture.

    get_rigid_rotation(dstpts)
        Extracts the rotation to apply to a group of points to minimize the distance to a second group of points. The two groups of points are assumed to be centered. This is a simple version that just picks an angle based on the first point of the gesture.

    get_score(comparison_gesture, rotation_invariant=True)
        Returns the matching score of the gesture against another gesture.

class kivy.gesture.GestureDatabase
    Bases: object

    Class to handle a gesture database.

    find(gesture, minscore=0.9, rotation_invariant=True)
        Finds a matching gesture in the database.

class kivy.gesture.GestureStroke
    Gestures can be made up of multiple strokes.

    normalize_stroke(sample_points=32)
        Normalizes strokes so that every stroke has a standard number of points. Returns True if the stroke is normalized, False if it can't be normalized. sample_points controls the resolution of the stroke.

    points_distance(point1=GesturePoint, point2=GesturePoint)
        Returns the distance between two GesturePoints.
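GestureDatabase.find ranks candidates with get_score, which at heart is a dot product over normalized strokes. As a rough, self-contained illustration of that idea (a hand-rolled sketch with my own function names, not kivy's actual implementation, and without the rotation handling that get_rigid_rotation provides), a stroke can be resampled to a fixed number of points, centered on its centroid, scaled to unit magnitude, and compared:

```python
import math

def resample(points, n=32):
    """Walk the polyline and emit n points at equal arc-length steps
    (loosely mirroring normalize_stroke(sample_points=32))."""
    dists = [0.0]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dists.append(dists[-1] + math.hypot(x1 - x0, y1 - y0))
    total = dists[-1]
    if total == 0.0:
        return [points[0]] * n
    out = []
    i = 0
    for k in range(n):
        target = total * k / (n - 1)
        while i < len(points) - 2 and dists[i + 1] < target:
            i += 1
        seg = dists[i + 1] - dists[i]
        t = 0.0 if seg == 0.0 else (target - dists[i]) / seg
        x0, y0 = points[i]
        x1, y1 = points[i + 1]
        out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return out

def center_and_scale(points):
    """Translate the centroid to the origin and scale to unit magnitude,
    so the score ignores where the stroke was drawn and how big it is."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    pts = [(x - cx, y - cy) for x, y in points]
    norm = math.sqrt(sum(x * x + y * y for x, y in pts))
    if norm == 0.0:
        return pts
    return [(x / norm, y / norm) for x, y in pts]

def score(stroke_a, stroke_b, n=32):
    """Dot product of two normalized strokes: 1.0 for identical shapes."""
    a = center_and_scale(resample(stroke_a, n))
    b = center_and_scale(resample(stroke_b, n))
    return sum(ax * bx + ay * by for (ax, ay), (bx, by) in zip(a, b))

# Same triangle, translated and scaled down -> near-perfect match.
tri = [(0, 0), (2, 4), (4, 0)]
tri_moved = [(10, 10), (11, 12), (12, 10)]
print(round(score(tri, tri_moved), 4))   # -> 1.0

# A flat line is a different shape -> noticeably lower score.
line = [(0, 0), (4, 0)]
print(score(tri, line) < 0.9)            # -> True
```

Identical shapes score 1.0 regardless of position or size, while dissimilar shapes fall off, which is the kind of threshold the minscore=0.9 default in find() cuts on.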
http://kivy.org/docs/api-kivy.gesture.html
This is my code so far:

    double amount;
    int years;
    double ans;
    double extra = 400;

    // Ask for amount
    System.out.print("Please enter sales amount for the week: ");
    amount = MyInput.readDouble();   // Store amount

    // Ask for years
    System.out.print("Please enter the number of years the sales person has been with the company: ");
    years = MyInput.readInt();       // Store years

    double commission = commission(amount);
    double bonus = bonus(commission, years);   // pass the commission result into bonus
    System.out.println("Your weekly commission is $" + commission + " and your bonus is $" + bonus);
    }

    public static double commission(double amount) {
        if (amount >= 5000) {
            // Sales of $5000 or more earn 10% plus the $400 extra
            return (amount * 0.10) + 400;
        } else if (amount >= 1000) {
            return amount * 0.08;
        } else if (amount >= 500) {
            return amount * 0.05;
        } else {
            return 0.0;
        }
    }

    public static double bonus(double commission, int years) {
        if (years >= 2) {
            return commission * years / 100;
        } else {
            return 0.0;
        }
    }
    }

I need to write three methods: the first to calculate the commission, which I've done, and the second to calculate the bonus, which checks how many years the sales person has been with the company and uses that to work out their weekly pay. For instance:

    if (amount >= 5000 && years < 2) {
        ans = (amount * 0.10) + (baseSalary + 400);
    }

But since I've already calculated commission, which is half of this statement, how do I get commission and use it in my bonus method?
http://www.dreamincode.net/forums/topic/20231-java-methods/
ok, so my last posting on the subject wasn't exactly correct (big surprise). What I'd really like to know is whether the solution is complete. I figured that if anyone could break my code it'd be you guys. =)

The purpose of the library is of course to provide a compile-time dereferencing mechanism for pointers and specialized types. Below I've attached the header file and some minor examples. Any comments or suggestions are welcome.

Code:

    #include <iostream>
    #include "content.h"

    using namespace std;
    using namespace xtd;

    int main( void )
    {
        int i = 1024, * p = &i, ** pp = &p, *** ppp = &pp;
        extract( ppp )++;
        cout << extract( ppp ) << endl;
        return 0;
    }
http://cboard.cprogramming.com/cplusplus-programming/99108-recursive-dereferencing-template-revisited.html
- Printing in C# (Jan 26, 2000). An application showing all printing and print preview functionality using C#.
- Adding Menu Support to a Windows Form (Dec 18, 2000). This sample code shows you how to use the MainMenu and MenuItem classes to add a menu and click handler for a Windows Form.
- GDI+ Tutorial for Beginners (Dec 26, 2000). GDI+ is the next evolution of GDI. In Visual Studio .NET, Microsoft has taken care of most of the GDI problems and made it easy to use.
- Creating your own cool Volume Control using GDI+ (Jan 23, 2001). In this article, I'll give you an example of creating your own control.
- Handling Mouse Events in C# (Jan 25, 2001). This article explains how to handle mouse events in C# or VB. In C#, you write a delegate and then write an event handler.
- Reminder Program (Jun 08, 2001). The program allows you to set a running timer to remind you of an upcoming event.
- Creating a screen saver (Jun 21, 2001). Creating a screen saver is an interesting topic. A screen saver is a maximized form that contains no borders and no caption.
- Developing Windows Applications (Jul 23, 2001). This tutorial explains step by step how to create your Windows applications using Visual C#.
- Events and Delegates (Jul 26, 2001). Events in C# are based on delegates, with the originator defining one or more callback functions as delegates and the listening object then implementing them.
- Printing out your W2 Form using C# and .NET (Aug 07, 2001). This article covers a fairly practical aspect of using a computer: dealing with forms.
- Timer Control (Aug 22, 2001). The sample project attached with this article shows how to use the Timer control available in .NET and C#.
- Simple Windows Forms Events and Interfaces (Sep 03, 2001). This article contains C# code which makes use of the concepts of events and interfaces together.
- Mouse and Key Events (Sep 11, 2001). This article explains the usage of key and mouse events. The following code shows you how you can read the mouse position on mouse move.
- Visual Inheritance in C# - Part 1 (Sep 24, 2001). We all know that inheritance means extending a class with more features without worrying about the implementation of the features hidden inside the class being inherited.
- WinChat For .NET (Oct 10, 2001). WinChat For .NET is a simple peer-to-peer chatting program that functions very similarly to the WinChat program provided by Windows 2000. It provides all the functionality that the original WinChat program provides.
- Calculator in C# (Windows Application) (Oct 19, 2001). This is a simple calculator program that was written using Visual Studio .NET and C#.
- Exploring delegates in C# (Oct 30, 2001). Delegates are a kind of type-safe function pointer, actually declared as a class derived from System.MulticastDelegate.
- Working with Namespaces in C# (Nov 07, 2001). In C#, namespaces are used to logically arrange classes, structs, interfaces, enums and delegates. Namespaces in C# can be nested; that means one namespace can contain other namespaces.
- Event Handling in C# (Dec 10, 2001). This article shows you how to write control, mouse, and keyboard event handlers in C#.
- Observer and .NET event delegates (Dec 17, 2001). The purpose of this article is to introduce the observer pattern and compare it to .NET event delegate handling of notifications.
- Graphics Programming in C# (Dec 26, 2001). The new improved version of GDI is called GDI+. The .NET Framework provides a rich set of classes, methods and events for developing applications with graphical capabilities.
- An Animation Component using C# (Feb 08, 2002). Sometimes it's desirable to get those graphics moving a bit, and this article shows the control to implement it.
- Event Handling in .NET using C# (Mar 13, 2002). In this article I discuss the event handling model in .NET using C#. The discussion starts with an introduction to the concept of delegates and then extends that concept to events and event handling in .NET.
- Multithreading Part 4: The ThreadPool, Timer Classes and Asynchronous Programming (Apr 16, 2002). In this article, I discuss a few more .NET classes and what role they play in building multithreading applications.
- Ripple.NET: A Windows Forms Demo (Apr 25, 2002). Finally, in the OnPaint event handler I clear the form and step through the rippleList, drawing each of the RippleObjs in it that has a location Point that's not Empty.
- Expression Evaluator (Apr 30, 2002). This program uses the transformation from infix notation to postfix notation to evaluate most mathematical expressions.
- Knob Control using Windows Forms and GDI+ (May 13, 2002). Control creation for Windows Forms was never as easy as it is now with .NET, although it needs some math skills if you want to create a self-drawn control.
- Comparison of C# with Java: A Developer Perspective (May 29, 2002). .NET is a language-independent and (on Windows, as of now) operating-system-dependent platform pretty similar to Java.
- Editable GridView Control in C# and .NET - Part III: Printing the GridView (Jun 24, 2002). In our last two articles, we talked about how to create an editable GridView and how to make it persistent in XML.
- Implementing Drag and Drop in ListView Controls (Jul 08, 2002). Drag and drop operations in Windows can be achieved using three simple events: DragEnter, DragLeave, and DragDrop.
- Using Installer Classes to Ease Deployment in VS.NET (Aug 07, 2002). In this article I demonstrate how to incorporate installer classes with your Visual Studio .NET MSIs to handle any supporting tasks that your assemblies may need.
- DataGrid Customization Part IV: Exchanging DataGrid Columns using Drag and Drop (Aug 21, 2002). This article covers customized sorting and hiding a DataGrid column programmatically.
- Customize User Interfaces and Pass User Input to Installer Classes (Oct 29, 2002).
- Designing Application Oriented Server Controls (Oct 31, 2002). This article is part II of a three-part demo application dubbed Net-Worked Financial System, written in C# and the .NET Framework.
- Drag and Drop using C# (Nov 13, 2002). To allow your program to accept files using drag and drop, you must first pick a control that you wish to be able to accept them.
- Events in C# Advanced - Lesson 2 (Jan 02, 2003). In the previous lesson we created an event and consumed it. In doing so you probably noticed that our code would have been a little better if we could have determined whether or not the file actually existed.
- Events in C# Made Easy - Lesson 1 (Jan 02, 2003). Events are useful for updating a user interface with changed data, or causing a piece of code to run after another piece of code has completed. .NET has brought us a powerful model for programming events.
- Recording Sheet Music Using C# and .NET (Feb 07, 2003). This article allows you to record and replay the music you performed on the piano.
- Math Equation Editor in C# (Apr 07, 2003). The equation editor I created in C# allows you to create a few simple equations using the keyboard. With the editor you can open and save files of your equations.
- Implementing Custom Paging in ASP.NET DataGrid Control (Jun 01, 2003). This article shows you how to implement custom paging in the ASP.NET DataGrid control.
- EggTimer in C# (Aug 26, 2003). This simple timer app will count down from whatever value is set in the textbox.
- Creating Word Find Puzzles for the Web in C# and GDI+ Part II (Oct 06, 2003). This article shows you how to create a Word Find puzzle application for the Web using C#, GDI+, and ASP.NET.
- .NET Remoting - Events, Events? Events! (Nov 04, 2003). You want the server to fire an event that all of the clients, or only some specific ones, must receive. This article describes several approaches to the problem using the .NET Remoting events model.
- Events Programming in C# (Nov 24, 2003). In this article, the author discusses the events model in .NET and how to implement events in your applications using C#.
- Office11 Solution using .NET - A White Paper (Nov 25, 2003). This detailed white paper contains information about Office 11 support for Microsoft .NET. It also explains the Office 11 object model and how to access Word and Excel documents using Visual Studio .NET.
- A Simple Virtual Voltmeter Using GDI+ and the GP-3 Board (Dec 07, 2003). This is a less complex, nevertheless interesting, example of how to use the same board to create a simple voltmeter.
- ApplicationContext to Encapsulate Splash Screen Functionality (Jan 05, 2004). The enclosed article also gives a detailed explanation of what happens behind the scenes when a WinForms application is started.
- Hello World in different Styles (Jan 09, 2004). I've attempted to write the traditional 'Hello World' in different styles. This explores the different possibilities of addressing a problem with different features of the C# language and the .NET Framework.
- Real Life SQL and .NET: Using SQL with C#: Part VIII.
- Text Transformation using GDI+ and C# (Apr 27, 2004). This article shows you how to use GDI+ classes defined in the .NET Framework class library to apply transformations on text.
- Dynamic Database Creation - 2 (May 06, 2004). This article explains how we can display data using the Dataset and DataGrid controls after the database is created.
- Short Cuts for Toolbar Buttons (Jul 26, 2004). This tutorial tells the story of how to create shortcuts for toolbar buttons.
- ASP.NET Page Life Cycle (Mar 04, 2005). In this article, we will see the stages of execution of an ASP.NET page.
- Anonymous Method to Retrieve Data Reader Passed from DAL (May 03, 2005). Anonymous methods are a new feature in C# 2.0 that allows you to define an anonymous method called by a delegate.
- Space Invaders for C# and .NET (Jun 29, 2005). This is an update of the Space Invaders game posted on C# Corner three years ago for Visual Studio 2005. This version adds spiraling bombs and a ship lives indicator.
http://www.c-sharpcorner.com/tags/Delegates-And-Events