Note: This article was updated in February 2020.
NoCAN networks are managed by a Raspberry Pi, which can usually provide a fairly accurate time source thanks to NTP. In that situation, if some nodes need date/time information, they can subscribe to a channel called "rtc" (for Real-Time Clock), and you can push the date/time information to the nodes on a regular basis. On a Raspberry Pi, it can be as simple as typing the following command:
nocanc publish "rtc" `date -u -Iseconds`
The above example assumes that you are sending time in UTC using ISO 8601 format and that your nodes can process that information. You could, of course, use another format.
Oops no Internet
NTP is fine if you have Internet access, but it won't work if your NoCAN network is managed by a Raspberry Pi that has no such connection. Luckily, it turns out that you can configure any CANZERO node as an RTC. Simply set the initial time/date at the beginning, then let the node keep track of time for you and broadcast that information to your Raspberry Pi and the rest of your NoCAN network.
The CANZERO comes with a 32.768 kHz crystal that can be used to drive the internal RTC of the SAMD21G18 32-bit onboard MCU, and there is even an Arduino library for that. In the Arduino IDE, simply go to Sketch > Include Library > Manage Libraries... and search for RTCZero by Arduino. Install the library.
We will now create a sketch that uses two NoCAN channels:
- rtc: this channel will broadcast the time every second.
- rtc/set: this channel will be used to set up the current date and time, typically once at the start.
The code is a bit long because of the addition of functions to parse date/time strings in ISO 8601 format:
#include <nocan.h>
#include <RTCZero.h>

NocanChannelId cid;
NocanChannelId scid;
RTCZero rtc;
byte last_second = 0;

int string_UTC(unsigned char *dest, byte year, byte month, byte day, byte hour, byte minute, byte second);
int update_rtc_from_string_UTC(const unsigned char *src);

void setup() {
  // put your setup code here, to run once:
  Nocan.open();
  Nocan.registerChannel("rtc", &cid);
  Nocan.registerChannel("rtc/set", &scid);
  Nocan.subscribeChannel(scid);
  rtc.begin();
  // initialize RTC to defaults: 1:00 on Jan 1, 2001
  rtc.setTime(1, 0, 0);
  rtc.setDate(1, 1, 1);
}

void loop() {
  // put your main code here, to run repeatedly:
  NocanMessage msg;

  if (Nocan.receivePending()) {
    Nocan.receiveMessage(&msg);
    // We don't need to check the channel_id here since we only subscribed to one channel.
    if (msg.data_len == 20)
      update_rtc_from_string_UTC(msg.data);
  } else {
    if (last_second != rtc.getSeconds()) {
      last_second = rtc.getSeconds();
      msg.channel_id = cid;
      msg.data_len = 20;
      string_UTC(msg.data, rtc.getYear(), rtc.getMonth(), rtc.getDay(),
                 rtc.getHours(), rtc.getMinutes(), rtc.getSeconds());
      Nocan.publishMessage(msg);
    }
  }
}

#define DIGH(x) ((x)/10+'0')
#define DIGL(x) ((x)%10+'0')

int string_UTC(unsigned char *dest, byte year, byte month, byte day, byte hour, byte minute, byte second) {
  // ISO 8601 format is 2011-08-30T13:22:53Z
  dest[0] = '2';
  dest[1] = '0';
  dest[2] = DIGH(year);
  dest[3] = DIGL(year);
  dest[4] = '-';
  dest[5] = DIGH(month);
  dest[6] = DIGL(month);
  dest[7] = '-';
  dest[8] = DIGH(day);
  dest[9] = DIGL(day);
  dest[10] = 'T';
  dest[11] = DIGH(hour);
  dest[12] = DIGL(hour);
  dest[13] = ':';
  dest[14] = DIGH(minute);
  dest[15] = DIGL(minute);
  dest[16] = ':';
  dest[17] = DIGH(second);
  dest[18] = DIGL(second);
  dest[19] = 'Z';
  return 20;
}

int to_num(const unsigned char *src) {
  int r;
  if ((src[0] < '0') || (src[0] > '9')) return -1;
  r = (src[0] - '0') * 10;
  if ((src[1] < '0') || (src[1] > '9')) return -1;
  return r + (src[1] - '0');
}

int update_rtc_from_string_UTC(const unsigned char *src) {
  int year, month, day, hour, minute, second;

  if ((year = to_num(src + 2)) < 0) return -1;
  if ((month = to_num(src + 5)) < 0) return -1;
  if ((day = to_num(src + 8)) < 0) return -1;
  if ((hour = to_num(src + 11)) < 0) return -1;
  if ((minute = to_num(src + 14)) < 0) return -1;
  if ((second = to_num(src + 17)) < 0) return -1;
  rtc.setTime(hour, minute, second);
  rtc.setDate(day, month, year);
  return 0;
}
To keep things simple, we kept error checking to a minimum in the example above. The sketch is just an example: in your application, the node does not need to broadcast the time every second.
You can upload the Arduino sketch directly from the Arduino IDE as described here.
Once the node is up and running, you can set the RTC of the node with the following command:
nocanc publish "rtc/set" `date -u -Iseconds`
After that, you can read the RTC clock of your CANZERO node with the following command:
nocanc read-channel "rtc"
Alternative ways to upload the sketch
If you don't want to or can't use the Arduino sketch upload feature mentioned above, you can upload the sketch manually as follows:
- In the Arduino IDE, generate the firmware by selecting Sketch > Export Compiled Binary.
- Next, locate the compiled binary by selecting Sketch > Show Sketch Folder, and look for the file ending with .omzlo_canzero.hex.
- You have two options for the manual sketch upload:
- You can use the nocanc tool to upload this firmware to the target node, as detailed in our big NoCAN tutorial.
- If you don't like the command line, you can also use the new NoCAN web interface we announced in a previous post, and simply drag and drop the firmware file to upload it, as shown in the video below.
NoCAN is cool!
Spread the word, follow us on Twitter or on our Facebook page.
Priority: P3
Bug ID: 119563
Assignee: ooo-issues@incubator.apache.org
Summary: [From Symphony]the shadow color in aoo is black when
import the .doc with a customized color
Severity: normal
Issue Type: DEFECT
Classification: Application
OS: All
Reporter: louqingle@gmail.com
Hardware: PC
Status: CONFIRMED
Version: AOO340
Component: open-import
Product: word processor
Created attachment 77781
-->
FFC30FFCSW_BasicShapes_0045.doc
1. Open the FFC30FFCSW_BasicShapes_0045.doc in AOO 3.4.
2. Right-click on the shape and select Area from the context menu.
3. Check the shadow color: it is black.
4. Open the sample file in MS Word: the shadow color is Gray-50%.
--
You are receiving this mail because:
You are the assignee for the bug. | http://mail-archives.apache.org/mod_mbox/incubator-ooo-issues/201205.mbox/%3Cbug-119563-248469@https.issues.apache.org/ooo/%3E | CC-MAIN-2016-18 | refinedweb | 115 | 51.65 |
Message widget in Tkinter
By Bernd Klein. Last modified: 16 Dec 2021.
The Message widget can be used to display short text messages. It is similar in its functionality to the Label widget, but it is more flexible in displaying text: it provides a multiline object, i.e. the text may span more than one line, and the text is automatically broken into lines and justified. We were ambiguous when we said that the font of the Message widget can be changed. This means that we can arbitrarily choose a font for one widget, but the text of that widget will be rendered solely in this font; we can't change the font within a widget, so it is not possible to have text in more than one font. If you need to display text in multiple fonts, we suggest using a Text widget.
The syntax of a message widget:
w = Message ( master, option, ... )
Let's have a look at a simple example. The following script creates a message with a famous saying by Mahatma Gandhi:
import tkinter as tk

master = tk.Tk()
whatever_you_do = "Whatever you do will be insignificant, but it is very important that you do it.\n(Mahatma Gandhi)"
msg = tk.Message(master, text=whatever_you_do)
msg.config(bg='lightgreen', font=('times', 24, 'italic'))
msg.pack()
tk.mainloop()
The widget created by the script above looks like this:
| https://python-course.eu/tkinter/message-widget-in-tkinter.php | CC-MAIN-2022-05 | refinedweb | 255 | 67.76 |
Built-in Types for Interpreter Internals
A number of objects used by the internals of the interpreter are exposed to the user. These include traceback objects, code objects, frame objects, generator objects, slice objects, and the Ellipsis as shown in Table 3.10. It is relatively rare for programs to manipulate these objects directly, but they may be of practical use to tool-builders and framework designers.
Table 3.10 Built-in Python Types for Interpreter Internals
Code Objects
Code objects represent raw byte-compiled executable code, or bytecode, and are typically returned by the built-in compile() function. Code objects are similar to functions except that they don’t contain any context related to the namespace in which the code was defined, nor do code objects store information about default argument values. A code object, c, has the following read-only attributes:
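As a quick illustration (this sketch is ours, not from the original text), a code object can be produced with the built-in compile() function and its read-only attributes inspected directly; co_names and co_filename below are standard CPython code-object attributes:

```python
# Compile an expression into a code object and inspect it.
code = compile("x + y", "<example>", "eval")

print(code.co_names)     # names referenced by the code: ('x', 'y')
print(code.co_filename)  # '<example>'

# A code object carries no namespace of its own, so bindings
# are supplied at evaluation time.
print(eval(code, {"x": 2, "y": 3}))  # 5
```

Note that, as the text says, the same code object can be evaluated against any namespace you supply.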
Frame Objects
Frame objects are used to represent execution frames and most frequently occur in traceback objects (described next). A frame object, f, has the following read-only attributes:
The following attributes can be modified (and are used by debuggers and other tools):
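A small sketch of our own for poking at frame objects: CPython exposes the currently executing frame through sys._getframe() (an implementation-specific function), and the f_back and f_code attributes let you walk up the call stack:

```python
import sys

def inner():
    frame = sys._getframe()   # the frame currently executing inner()
    caller = frame.f_back     # the caller's frame
    return frame.f_code.co_name, caller.f_code.co_name

def outer():
    return inner()

print(outer())   # ('inner', 'outer')
```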
Traceback Objects
Traceback objects are created when an exception occurs and contain stack trace information. When an exception handler is entered, the stack trace can be retrieved using the sys.exc_info() function. The following read-only attributes are available in traceback objects:
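For example (our sketch, using only documented attributes), the traceback returned by sys.exc_info() records the chain of frames between the handler and the point where the exception was raised:

```python
import sys

def fail():
    return 1 / 0   # raises ZeroDivisionError here

try:
    fail()
except ZeroDivisionError:
    exc_type, exc_value, tb = sys.exc_info()
    # tb describes the frame containing the 'try' block;
    # tb.tb_next descends toward where the error was actually raised.
    print(exc_type.__name__)                   # 'ZeroDivisionError'
    print(tb.tb_next.tb_frame.f_code.co_name)  # 'fail'
```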
Generator Objects
Generator objects are created when a generator function is invoked (see Chapter 6, “Functions and Functional Programming”). A generator function is defined whenever a function makes use of the special yield keyword. The generator object serves as both an iterator and a container for information about the generator function itself. The following attributes and methods are available:
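A short sketch of our own showing that dual nature: the generator object acts as an iterator while also exposing its code object and suspended frame:

```python
def countdown(n):
    while n > 0:
        yield n
        n -= 1

g = countdown(3)
print(next(g))              # 3 -- the generator acts as an iterator
print(g.gi_code.co_name)    # 'countdown' -- and exposes its code object
print(g.gi_frame.f_locals)  # {'n': 3} -- plus the suspended frame's locals
```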
Slice Objects
Slice objects are used to represent slices given in extended slice syntax, such as a[i:j:stride], a[i:j, n:m], or a[..., i:j]. Slice objects are also created using the built-in slice([i,] j [,stride]) function. The following read-only attributes are available:
Slice objects also provide a single method, s.indices(length). This function takes a length and returns a tuple (start,stop,stride) that indicates how the slice would be applied to a sequence of that length. Here’s an example:
s = slice(10,20)   # Slice object represents [10:20]
s.indices(100)     # Returns (10,20,1) --> [10:20]
s.indices(15)      # Returns (10,15,1) --> [10:15]
Ellipsis Object
The Ellipsis object is used to indicate the presence of an ellipsis (...) in an index lookup []. There is a single object of this type, accessed through the built-in name Ellipsis. It has no attributes and evaluates as True. None of Python’s built-in types make use of Ellipsis, but it may be useful if you are trying to build advanced functionality into the indexing operator [] on your own objects. The following code shows how an Ellipsis gets created and passed into the indexing operator:
class Example(object):
    def __getitem__(self, index):
        print(index)

e = Example()
e[3, ..., 4]       # Calls e.__getitem__((3, Ellipsis, 4))
Deno is a simple, modern and secure runtime for JavaScript and TypeScript that uses V8 and is built in Rust.
It has built-in utilities like a dependency inspector (deno info) and a code formatter (deno fmt).
Deno ships as a single executable with no dependencies. You can install it using the installers below, or download a release binary from the releases page.
Shell (Mac, Linux):
$ curl -fsSL | sh
PowerShell (Windows):
$ iwr -useb | iex
Chocolatey (Windows):
$ choco install deno
See deno_install for more installation options.
Try running a simple program:
$ deno run
Or a more complex one:
import { serve } from "";
const s = serve({ port: 8000 });
console.log("");
for await (const req of s) {
  req.respond({ body: "Hello World\n" });
}
You can find a more in-depth introduction, examples, and environment setup guides in the manual.
The manual also contains information about the built-in tools that Deno provides.
Next to the Deno runtime, Deno also provides a list of audited standard modules that are reviewed by the Deno maintainers and are guaranteed to work with a specific Deno version. These live alongside the Deno source code in the denoland/deno repository.
These standard modules are hosted at deno.land/std and are distributed via URLs like all other ES modules that are compatible with Deno.
Deno can import modules from any location on the web, like GitHub, a personal webserver, or a CDN like Skypack, jspm.io, jsDelivr or esm.sh.
To make it easier to consume third-party modules, Deno provides some built-in tooling like deno info and deno doc. deno.land also provides a web UI for viewing module documentation; it is available at doc.deno.land.
deno.land also provides a simple public hosting service for ES modules that work with Deno. It can be found at deno.land/x. | https://deno.land/ | CC-MAIN-2021-17 | refinedweb | 295 | 57.27 |
Your application needs randomness, and you want it to be able to run on Unix-based platforms that lack the /dev/random and /dev/urandom devices discussed in Recipe 11.3?for example, machines that need to support legacy operating systems.
Use a third-party software package that gathers and outputs entropy, such as the Entropy Gathering and Distribution System (EGADS). Then use the Entropy Gathering Daemon (EGD) interface to read entropy. EGD is a tool for entropy harvesting and was the first tool to export this API.
When implementing our randomness API from Recipe 11.2, use entropy gathered over the EGD interface in places where entropy is needed; then, to implement the rest of the API, use data from that interface to seed an application-level cryptographic pseudo-random number generator (see Recipe 11.5).
A few entropy collection systems exist as processes outside the kernel and distribute entropy through the EGD socket interface. Such systems set up a server process, listening on a Unix domain socket. To read entropy, you communicate over that interface using a simple protocol.
One such system is EGADS (described in the next recipe and available from). Another system is EGD itself, which we do not recommend as of this writing for several reasons, primarily because we think its entropy estimates are too liberal.
Such entropy collection systems usually are slow to collect good entropy. If you can interactively collect input from a user, you might want to use one of the techniques in Recipe 11.19 instead to force the user to add entropy to the system herself. That approach will avoid arbitrary hangs as you wait for crucial entropy from an EGD-compatible system.
The EGD interface is more complex than the standard file interface you get when dealing with the /dev/random device. Traditionally, you would just read the data needed. With EGD, however, you must first write one of five commands to the socket. Each command is a single byte of data:
Query the amount of entropy believed to be available. This information is not at all useful, particularly because you cannot use it in any decision to read data without causing a race condition.
Read data if available. This command takes a single-byte argument specifying how many bytes of data should be read, if that much data is available. If not enough entropy is available, any available entropy may be immediately returned. The first byte of the result is the number of bytes being returned, so do not treat this information as entropy. Note that you can never request or receive more than 255 bytes of entropy at a time.
Read data when available. This command takes the same argument as the previous command. However, if not enough entropy is available, this command will block until the request can be fulfilled. In addition, the response for the command is simply the requested bytes; the initial byte is not the number of bytes being returned.
Write entropy to the internal collector. This command takes three arguments. The first is a two-byte value (most significant byte first) specifying how many bits of entropy are believed to be in the data. The second is a one-byte value specifying how many bytes of data are to be written. The third is the entropic data itself.
Get the process identifier of the EGD process. This returns a byte-long header that specifies how long the result is in bytes, followed by the actual process identifier, most significant byte first.
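To make the wire protocol concrete, here is a small Python sketch of an EGD client for two of the commands above (the blocking read, 0x02, and the PID query, 0x04). This is illustrative code of ours, not part of the recipe, and it assumes a server speaking the protocol exactly as described:

```python
import socket

def egd_read_blocking(sock, nbytes):
    """Command 0x02: block until nbytes of entropy arrive.
    The reply is simply the requested bytes, with no length header."""
    if not 0 < nbytes <= 255:
        raise ValueError("can only request 1-255 bytes at a time")
    sock.sendall(bytes([0x02, nbytes]))
    buf = b""
    while len(buf) < nbytes:
        chunk = sock.recv(nbytes - len(buf))
        if not chunk:
            raise ConnectionError("EGD server closed the connection")
        buf += chunk
    return buf

def egd_get_pid(sock):
    """Command 0x04: the reply is a length byte followed by the PID,
    most significant byte first."""
    sock.sendall(bytes([0x04]))
    n = sock.recv(1)[0]
    raw = b""
    while len(raw) < n:
        raw += sock.recv(n - len(raw))
    return int.from_bytes(raw, "big")
```

In practice the socket would be a Unix domain socket connected to the collector's advertised path; the framing above is all the protocol requires.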
In this recipe, we implement the randomness interface from Recipe 11.2. In addition, we provide a function called spc_rand_add_entropy(), which provides an interface to the command for providing the server with entropy. That function does not allow the caller to specify an entropy estimate. We believe that user-level processes should be allowed to contribute data to be put into the mix but shouldn't be trusted to estimate entropy, primarily because you may have just cause not to trust the estimates of other processes running on the same machine that might be adding entropy. That is, if you are using an entropy server that gathers entropy slowly, you do not want an attacker from another process adding a big known value to the entropy system and claiming that it has 1,000 bits of entropy.
In part because untrusted programs can add bad entropy to the mix, we recommend using a highly conservative solution where such an attack is not likely to be effective. That means staying away from EGD, which will use estimates from any untrusted process. While EGADS implements the EGD interface, it ignores the entropy estimate supplied by the user. It does mix the entropy into its state, but it assumes that it contains no entropy.
The following code implements the spc_entropy() and spc_keygen() functions from Recipe 11.2 using the EGD interface. We omit spc_rand() but assume that it exists (it is called by spc_keygen() when appropriate). To implement spc_rand(), see Recipe 11.5.
When implementing spc_entropy() and spc_keygen(), we do not cryptographically postprocess the entropy to thwart statistical analysis if we do not have as much entropy as estimated, as you can generally expect servers implementing the EGD interface to do this (EGADS certainly does). If you want to be absolutely sure, you can do your own cryptographic postprocessing, as shown in Recipe 11.16.
Note that the following code requires you to know in advance the file on the filesystem that implements the EGD interface. There is no standard place to look for EGD sockets, so you could either make the location of the socket something the user can configure, or require the user to run the collector in such a way that the socket lives in a particular place on the filesystem.
Of course, the socket should live in a "safe" directory, where only the user running the entropy system can write files (see Recipe 2.4). Clearly, any user who needs to be able to use the server must have read access to the socket.
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <sys/uio.h>
#include <unistd.h>
#include <string.h>
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

#define EGD_SOCKET_PATH "/home/egd/socket"

/* NOTE: this needs to be augmented with whatever you need to do in order to seed
 * your application-level generator.  Clearly, seed that generator after you've
 * initialized the connection with the entropy server.
 */
static int spc_egd_fd = -1;

void spc_rand_init(void) {
  struct sockaddr_un a;

  if ((spc_egd_fd = socket(PF_UNIX, SOCK_STREAM, 0)) == -1) {
    perror("Entropy server connection failed");
    exit(-1);
  }
  a.sun_len    = sizeof(a);
  a.sun_family = AF_UNIX;
  strncpy(a.sun_path, EGD_SOCKET_PATH, sizeof(a.sun_path));
  a.sun_path[sizeof(a.sun_path) - 1] = 0;

  if (connect(spc_egd_fd, (struct sockaddr *)&a, sizeof(a))) {
    perror("Entropy server connection failed");
    exit(-1);
  }
}

unsigned char *spc_keygen(unsigned char *buf, size_t l) {
  ssize_t       nb;
  unsigned char nbytes, *p, tbytes;
  static unsigned char cmd[2] = {0x01,};

  if (spc_egd_fd == -1) spc_rand_init();
  for (p = buf; l; l -= tbytes) {
    /* Build and send the request command to the EGD server */
    cmd[1] = (l > 255 ? 255 : l);
    do {
      if ((nb = write(spc_egd_fd, cmd, sizeof(cmd))) == -1 && errno != EINTR) {
        perror("Communication with entropy server failed");
        exit(-1);
      }
    } while (nb == -1);

    /* Get the number of bytes in the result */
    do {
      if ((nb = read(spc_egd_fd, &nbytes, 1)) == -1 && errno != EINTR) {
        perror("Communication with entropy server failed");
        exit(-1);
      }
    } while (nb == -1);
    tbytes = nbytes;

    /* Get all of the data from the result */
    while (nbytes) {
      do {
        if ((nb = read(spc_egd_fd, p, nbytes)) == -1) {
          if (errno == EINTR) continue;
          perror("Communication with entropy server failed");
          exit(-1);
        }
      } while (nb == -1);
      p      += nb;
      nbytes -= nb;
    }

    /* If we didn't get as much entropy as we asked for, the server has no more
     * left, so we must fall back on the application-level generator to avoid
     * blocking.
     */
    if (tbytes != cmd[1]) {
      spc_rand(p, l - tbytes);
      break;
    }
  }
  return buf;
}

unsigned char *spc_entropy(unsigned char *buf, size_t l) {
  ssize_t       nb;
  unsigned char *p;
  static unsigned char cmd = 0x02;

  if (spc_egd_fd == -1) spc_rand_init();

  /* Send the request command to the EGD server */
  do {
    if ((nb = write(spc_egd_fd, &cmd, sizeof(cmd))) == -1 && errno != EINTR) {
      perror("Communication with entropy server failed");
      exit(-1);
    }
  } while (nb == -1);

  for (p = buf; l; p += nb, l -= nb) {
    do {
      if ((nb = read(spc_egd_fd, p, l)) == -1) {
        if (errno == EINTR) continue;
        perror("Communication with entropy server failed");
        exit(-1);
      }
    } while (nb == -1);
  }
  return buf;
}

void spc_egd_write_entropy(unsigned char *data, size_t l) {
  ssize_t       nb;
  unsigned char *buf, nbytes, *p;
  static unsigned char cmd[4] = { 0x03, 0, 0, 0 };

  for (buf = data; l; l -= cmd[3]) {
    cmd[3] = (l > 255 ? 255 : l);
    for (nbytes = 0, p = cmd; nbytes < sizeof(cmd); nbytes += nb) {
      do {
        if ((nb = write(spc_egd_fd, p, sizeof(cmd) - nbytes)) == -1) {
          if (errno == EINTR) continue;
          perror("Communication with entropy server failed");
          exit(-1);
        }
      } while (nb == -1);
      p += nb;
    }
    for (nbytes = 0; nbytes < cmd[3]; nbytes += nb, buf += nb) {
      do {
        if ((nb = write(spc_egd_fd, buf, cmd[3] - nbytes)) == -1) {
          if (errno == EINTR) continue;
          perror("Communication with entropy server failed");
          exit(-1);
        }
      } while (nb == -1);
    }
  }
}
EGADS by Secure Software, Inc.:
Recipe 2.4, Recipe 11.2, Recipe 11.3, Recipe 11.5, Recipe 11.16, Recipe 11.19 | http://etutorials.org/Programming/secure+programming/Chapter+11.+Random+Numbers/11.7+Using+an+Entropy+Gathering+Daemon-Compatible+Solution/ | CC-MAIN-2017-04 | refinedweb | 1,533 | 62.48 |
I received several replies which all worked in their own way but
I wanted to automate it so that an aborted email download with
souper would still be imported into yarn regardless.
Well I seem to have found an answer.
I got/created an areas file and placed it in a directory on its own
and I included a couple of lines to my getmail.bat file
It now reads
@echo off
e:
cd \yarn\temp
del areas*.*
SET HOME=E:\HOME
SET YARN=E:\YARN
SET NNTPSERVER=news.brisnet.org.au
souper -n mail.dyson.brisnet.org.au mraiteri password
SET COPYCMD=/Y
copy e:\online\areas*.* e:\yarn\temp
import -u
Well it seems to work for me. No more forgetting that I have an
aborted download and starting again only to overwrite the original
one and lose it.
Cheers
Mike
-- Internet: mraiteri@dyson.brisnet.org.au <Michael Raiteri> Brisbane, Queensland, Australia | http://www.vex.net/yarn/list/199705/0039.html | crawl-001 | refinedweb | 154 | 67.96 |
I have a program which reads entries from a directory. These entries are stored in a vector called fdata.
Using vectors gives me an easy way to sort the entries.
The program copies files from one directory (the source directory) onto another directory (the destination directory). While the copying is performed the program will show a progress bar to indicate how long time the copy process will be using.
By using fork() I thought I could make a parent process performing the copying, and a child process which monitors the copying by reading the destination file size and using that data to display a progress bar.
But my problem is that when I run the code above I get the following error message when the first file copy operation is completed:
"terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc"
So I need a way to tell the parent process that the copy operation has terminated. How can I achieve that?
// == Definitions == //
struct paraminfo {
    std::string srcdir;
    std::string dstdir;
};

struct finfo {
    string filename;
    long fsize;
    unsigned char DirType;
    bool operator() (finfo i, finfo j) {
        if (i.fsize == j.fsize)
            return i.filename < j.filename;
        return (i.fsize > j.fsize);
    }
} fcont;

vector<finfo> fdata;
string srcfile, dstfile;
FILE *iFile, *oFile;
pid_t pPID;
int index1;
int SLEEPTIME = 2;

// == Code snippet == //
for (index1 = 0; index1 < fdata.size(); index1++) {
    srcfile = paramflags->srcdir + "/" + fdata[index1].filename;
    dstfile = paramflags->dstdir + "/" + fdata[index1].filename;
    iFile = openfile(srcfile, "read");
    oFile = openfile(dstfile, "write");
    if (fdata[index1].fsize == 0) {
        fclose(oFile);
        fclose(iFile);
    }
    else if (fdata[index1].fsize != 0) {
        pPID = fork();
        if (pPID > 0) {
            while (GetFileSize(dstfile) < fdata[index1].fsize) {
                // >> Progress bar displays here <<
                sleep(SLEEPTIME);
            }
        }
        else if (pPID == 0) {
            filecopy(iFile, oFile, buf);
            fclose(oFile);
            fclose(iFile);
        }
    }
}
Archives
System.Transactions.TransactionScope and IDbTransaction
I've enjoyed reading about the new System.Transactions namespace and the easy-to-use TransactionScope object. But there was always something that I wasn't sure about, and from questions I've received it's clear I wasn't the only one. The question is basically: what happens if you have a traditional ADO.NET IDbTransaction and you enclose it in a TransactionScope? I finally got around to testing out this scenario, and I'm pleased to report that everything works exactly as you would hope. In other words, even though you may have an IDbTransaction that you have committed, it will still get rolled back if it's enclosed in a TransactionScope that does not Complete. This means that you can continue to use all your existing code and libraries that work with IDbTransactions -- and still use the new TransactionScope when you need something more. It also means that I don't need to do anything special for my ORMapper -- I can continue to use IDbTransaction and my users can use the new TransactionScope also. By the way, I knew Access didn't support distributed transactions, but I was pleasantly surprised that I got an exception when I tried out of curiosity. Finally, for all my readers that think this behavior with IDbTransactions should be obvious, I agree, but I've also learned to test things instead of assuming.
Scott Guthrie and Web Projects in VS 2005
Scott Guthrie's promised post with more details on the VS 2005 web projects is now up. I very much appreciate the new capabilities that the ASP.NET team has worked very hard on. I'm still not sure why any of that required getting rid of the project file though. :) Anyhow, its obviously what we will live with, and while its a change, there are many good changes too.
Went to See March of the Penguins
Go see March of the Penguins if you have not -- it's really good -- more like a traditional movie than a documentary.
WilsonORMapper v4.1.1 Released -- With a Big Thanks to Paul Welter
Update: WilsonORMapper v4.1.1.0 (8/13/2005) includes the following:
- Added new InitialState.Updated enum value for the StartTracking method.
- This allows immediate PersistChanges -- useful for distributed systems.
- Now supports generic relation collections -- even for the lazy-load case.
- This feature was added by Paul Welter -- his templates are also updated.
Scott Guthrie Comments on Web Projects in VS 2005
Check out the comment on my previous post from the man that started this whole ASP.NET thing.
Is the Sky Falling? Do we need a Beta 3? Do I even care?
Why do I complain about the lack of project files in VS 2005? Do I think the sky is falling and that a Beta 3 is needed?
No, I don't think .NET v2 is going to be a failure, nor do I think there should be a Beta 3. But the reason I don't think there should be a Beta 3 is because I know how big a deal a beta is and I want v2 as soon as possible and not slowed down -- because I will be gladly using it.
But then why do I make my "negative" posts and complain about VS 2005? Because the whole point of the open process is to get our feedback, and I'm giving it -- sometimes loudly. Why? Because I want the RTM to be as good as it can be, because I don't plan on waiting until a SP.
Will I survive without a project file? Most assuredly YES. I can make multiple server forms possible, and make web apps stay alive, and do just about anything I want -- given the time. And that's the rub -- why should it take lots of my time to do something that is very easy today?
MS Changes Internal Implementation of Nullable Types
The MS .NET teams have heard some of our pain with nullables and have implemented some post Beta2 changes described here by Soma:
After looking at several different workarounds and options, it became clear to all that no amount of tweaking of the languages or framework code was ever going to get this type to work as expected. The only viable solution was one that needed the runtime to change. . . . The outcome is that the Nullable type is now a new basic runtime intrinsic.
Hopefully this also addresses a lot of the other issues with nullables not being integrated into the entire platform, but I'm not convinced yet.
MS Responded to Yesterday's Post with Another Work-Around
The MS ASP.NET team has apparently heard some of our pain with web projects no longer having project files in VS 2005 and created another work-around since my last post:
ASP.NET Simplicity -- When Is Too Much Simplicity a Bad Thing
I'm a big fan of simplicity -- but some of the things the MS ASP.NET team is doing in the name of simplicity are adding complexity -- and some don't even have realistic workarounds. I started thinking about this post while writing my last one about multiple server forms, but there are other examples where MS is making things harder in the name of simplicity.
1. A big cause of grief going around right now is the impact on real enterprise developers of the decision to have web projects not have project files. Yes, this was a decision made in the name of simplicity -- and for 99% of the web developers out there that ASP.NET targets it probably was a very good decision. But once again the problem is that this simplicity is being forced on the rest of us, and many of us enterprise developers really need many of the capabilities the project file gave us. Should MS have known better -- most definitely yes -- I for one tried to warn them about this very issue at the first Whidbey preview in October of 2002, and I think I was probably the one attendee that best represented real enterprise developers as opposed to authors and trainers.
2. Another "problem" that was introduced with ASP.NET v1 and is now getting worse is that MS gives us "rich" web controls with properties like backcolor, font, and width. Unfortunately, there are no such html attributes -- every one of these properties is instead actually part of the html element's style. Yes I realize this simplifies things greatly for newbies, but all it's done is "teach" more people that CSS isn't important -- and it's made real CSS stylesheets much more troublesome to use in ASP.NET. Think I'm off my rocker? Did you know that the Calendar's Title CssClass did not work at all in v1 since no one ever bothered to test stylesheets adequately? Want another example? Did you know that WebPartZones, yes that new wonderful portal technology in ASP.NET v2, will not work with the CssClass properties -- and according to MS that's "by design" and "postponed"? I was really hoping to use WebParts, but all I had to do was try to use it with CSS stylesheets (don't most portals work that way) and I realized they are pretty much worthless -- not to mention take a look at the ugly html they produce. And what's the work-around for this you ask -- Themes and Skins. I know that Themes and Skins are one of the cool new features of ASP.NET v2, but have you ever stopped to think that they wouldn't be needed if we just used CSS stylesheets!
3. My last example is harder to explain -- its not really an issue with "postbacks" so much as a problem with the way most controls handle the postback to determine what to actually do. I'll use the ever popular ASP.NET Forums as an example -- specifically the moderator tools to approve a post or unmoderate a user. I'm looking at a post right now where the Approve link-button has the following link: javascript:__doPostBack('ContentControl$_ctl0$PostRepeater$_ctl0$_ctl0$_ctl0$Approve',''). So what's the problem? This is all based on the relative position of controls, and the data they are assumed to contain, instead of using the actual data key. Sure you can make an argument that its more secure to not expose the data key, but the problem is that without it there is no good way to reliably know what data we're supposed to be talking about! If you use viewstate to move all the data with the postback then its not a problem -- but of course we all hopefully know that moving all of the data in viewstate (the default though in most cases in ASP.NET) is not a good thing. So instead the data is re-queried on postback to determine what data goes with which control, and therefore what post should be approved in this case. But the data can and does change on a real website with lots of activity (hmmm, like the ASP.NET Forums)! So what ends up happening is that the wrong posts are often approved and/or the wrong users are often unmoderated (or we just don't bother to unmoderate anyone).
Now to their credit, they have fixed a lot of these issues in the ASP.NET Forums over the years -- the delete post link now uses the actual key for instance -- but many of these very real bugs persist. The standard method of not using the key is also not very performant, since even if you forget the case of data changing you still have to re-query the database just to do something (and then query it again to get the final set of data to display after the change) -- unless of course you put all the data in viewstate. Note that I do not want to imply that postbacks, viewstate, controls, or any other feature is inherently wrong -- I only want to point out that the simplification has went too far and can actually affect data integrity and/or performance if you use a lot of the defaults. For me this means that I never use the datagrid for more than the simplest of pages or prototypes (if even then) -- and that's really not even a burden once you've used the repeater a few times, or written your own "grid". But the problem persists (and most people don't even realize it) -- and just as bad there are now thousands of articles out there to help you figure out how to get around all the various short-comings of these simple controls. Why? Because this simplicity is great when it works, but unfortunately its teaching many web developers to never do anything else -- I can't even begin to count the number of "huhs" I get when I respond to questions about the datagrid that I don't know because I don't use it.
OK, end of post -- I hope I didn't lose everyone -- or make you think I've lost it. I really do love ASP.NET, and I do appreciate the folks on the MS ASP.NET team, and I do understand the needs for some of this simplicity so that they can reach out to a much wider pool of web developers -- I do not want their job. I just think that sometimes what is simple in the short-term or small cases ends up being anything but simple in the long-term or bigger cases -- so there needs to be a better balance in some cases to not make things too easy just because someone wants it that way. I encounter the same types of things even with my ORMapper -- why can't you just do this thing that would make my code go from 3 lines to 1 line, even though it will make other situations far worse -- that's not simplicity to me.
Yes Virginia, You Can Have Multiple Server Forms in ASP.NET
A current article in MSDN Magazine by a respected author is just wrong -- you can have multiple server forms in ASP.NET, and they can post to other pages. For those of you that still aren't aware of this, my WilsonWebForm has been around for quite some time, and its been totally free and open source for a while now too:. The typical usage is to continue to use the regular server form for the main page content, with validators, while using the WilsonWebForm for small side forms for things like search, login, or preferences. The source code of this control is available in both C# and VB.NET, along with a simple demo application that fully show's how to use it.
I can probably even update it to support validators in ASP.NET v2, since the problem was only the fact that the ASP.NET model assumed all validators we're scoped to the whole page, but now with ASP.NET v2 there are validation groups -- so maybe I'll get around to looking at this, or someone else can since its open source. :)
Note: I do agree that the ASP.NET single-form approach simplifies development and it should not be discarded lightly just because you can. But the fact is that all web browsers do support multiple forms (can someone say "standard"), they do make some things easier (can someone say multiple default buttons without javascript), and they are sometimes even required (ever need to post to an external site). So while I applaud the ASP.NET team for trying to simplify things, I do not agree with the approach that completely eliminates the use of a wide-spread standard approach that is sometimes even required -- all in the name of simplicity. Its actually almost funny watching them now add back some work-arounds for cross-page posting, default buttons, and validation groups -- things that would all have automatically been supported in the first place. :)
Tim Haines is putting up $150 to get some AJAX brainstorming underway
Tim Haines is conducting a "blogversation" to get people brainstorming some ways to use AJAX with his e-commerce store. The interesting thing to me is the incentive -- he's putting up a $150 Amazon voucher to the "winner". See his blog post for the rules if you want to compete, and if nothing else leave a comment for one of the participants as your "vote". Sure I think AJAX is cool (just like it was 5-6 years ago when some of use were doing it without the fancy name), but I'm more intrigued by Tim's method here than anything else -- will this result in some novel ideas? | http://weblogs.asp.net/pwilson/archive/2005/08 | CC-MAIN-2015-11 | refinedweb | 2,476 | 67.18 |
Opened 4 years ago
Closed 4 years ago
#6027 closed feature request (fixed)
Allow changing fixity of new type operators
Description
Here is the problem:
{-# LANGUAGE TypeOperators #-} type (&) = () data (?) class (#) infixr 2 & infixr 2 ? infixr 2 #
testop.hs:5:10: The fixity signature for `&' lacks an accompanying binding testop.hs:6:10: The fixity signature for `?' lacks an accompanying binding testop.hs:7:10: The fixity signature for `#' lacks an accompanying binding
My solution is inspired by the 'type' keyword in the export list.
infixr 2 type & infixr 2 type ?, type #
Attachments (3)
Change History (10)
Changed 4 years ago by atnnn
comment:1 Changed 4 years ago by simonpj
- difficulty set to Unknown
- Milestone set to 7.6.1
- Owner set to pcapriotti
Currently in GHC changing the fixity of a term-level operator, such as (+) also changes the fixity of the corresponding type-level operator. We could have different fixities for the term-level (+) than the type level (+), but it would be confusing!
So rather than les tyou specify different fixities for each, as your patch implies, I think it'd be better simply to make infixr 2 ? work even when there is only a type-level (?) in scope.
Paolo might you look at this? (It in the renamer, not a big deal I think.)
Simon
comment:2 Changed 4 years ago by pcapriotti
- Status changed from new to patch
I believe the current behavior is just an unintended consequence of the new type operator syntax.
Fixity declarations are already resolved in the TcClsName namespace, but only if the reader name is in DataName.
The attached patch relaxes this constraints, and always looks up fixity declaration names in TcClsName, as well as the original namespace.
comment:3 Changed 4 years ago by simonpj
Yes that looks right. But it took me a little while to figure out how your change worked. In effect lookupFixSigNames and lookupLocalDataTcNames are identical except that
- The latter has a special case for Exact names; it would do no harm for this to be used in both
- lookupFixSigNames, used ony in fixity signatures, always adds a TcClsName
- lookupLocalDataTcNames, used only in warnings (rnSrcWarnDecls), adds a TcClsName to a DataName.
The difference is troubling, and I bet it should not exist. Can we just combine the two? And then you won't need to separate out lookupMultipleNames.
And add a Note that gives an example, both of giving fixity for a type constructor, and for a type constructor operator.
Thanks
Changed 4 years ago by pcapriotti
Changed 4 years ago by pcapriotti
comment:4 Changed 4 years ago by pcapriotti
Simon, you're right. The other places using lookupLocalDataTcNames also need the extended lookup now that type operators are not necessarily parsed as DataName.
I updated the patch, and added two test cases: one for the original issue, and one that shows the correct lookup behavior of :info in GHCi.
comment:5 Changed 4 years ago by simonpj
OK good. go ahead and push, thanks
comment:6 Changed 4 years ago by p.capriotti@…
commit 5bfd8933024cb2120c38e01346b1b47d6dde10cb
Author: Paolo Capriotti <p.capriotti@gmail.com> Date: Wed Apr 25 14:10:40 2012 +0100 Fix lookup of fixity signatures for type operators (#6027) Extend name lookup for fixity declaration to the TcClsName namespace for all reader names, instead of only those in DataName. compiler/rename/RnEnv.lhs | 58 +++++++++++++++++++++++++++++------------- compiler/rename/RnSource.lhs | 4 +- 2 files changed, 42 insertions(+), 20 deletions(-)
comment:7 Changed 4 years ago by pcapriotti
- Resolution set to fixed
- Status changed from patch to closed
The implementation of dataTcOccs in the above commit is slightly different from that of the patch, since we have to avoid generating two results when looking up something which is already in the TcClsName namespace, as can happen using :info in GHCi (testcase ghci020 would break).
this patch has arguments in the correct order | https://ghc.haskell.org/trac/ghc/ticket/6027 | CC-MAIN-2016-30 | refinedweb | 645 | 53.81 |
automate mails in lotus notes
(Outlook.Application)
Could someone please help us with this as soon as possible.
The from... they need to add in the email message body and then finally send the email please... need to open up the Lotus Notes client from a JSP page..
Currently in the JSP
Please Send - Java Beginners
Please Send Hi,
this is perfect ur sending code
I want java script coding
Steps:-If user click on refresh button then page iage will be refresh
Thanks Hi friend,
Thanks
vineet
code and specification u asked - Java Beginners
code and specification u asked you asked me to send the requirements... can build in java are extensive and have plenty of capability built in.
We....
BACK-Go to Back one page in the browser?s history.
REFRESH- Refresh the Page
JAVA S/W download
JAVA S/W download Hi
I'm new to JAVA, I want to download JAVA S/W from where I need to download. Please help me with the link.
Thank you,
Hi !
u can download JAVA from
CORE JAVA
CORE JAVA CORE JAVA PPT NEED WITH SOURCE CODE EXPLANATION CAN U ??
Core Java Tutorials
Core Java
Core Java Hi,
can any one please send me a code to count the dupicates charaters from a string.
Thanks a lot in advance!!
The given...(new InputStreamReader(System.in));
System.out.print("Please enter string
Core Java
Core Java Hi,
I have written a board program using Java Swing... to Right using mouse. I implemented MouseMotionListener but could not move further.
Please help me with some better logic in coding for buttom movements.
Please
Core Java
Core Java Hi,
I am trying to remove duplicated charater from a given... anyone please share the code for that???
Thanks a lot in advance!!
... removeDuplicates(String s) {
StringBuilder build = new
Thank U - Java Beginners
Thank U Thank U very Much Sir,Its Very Very Useful for Me.
From SUSHANT
core java
core java
class Arrayd
{
static int max(int x[]){
int...);
}
}
public static void main(String... s)
{
int k=0... in the terminal it gives me:
class,interface or enum expected.
Please help me
core java
core java i need core java material
Hello Friend,
Please visit the following link:
Core Java
Thanks
Core Java - Java Beginners
Core Java How can I take input? hai....
u can take input...
{
System.out.println("Whats the name,please?");
BufferedReader br=new BufferedReader(new... information :
Thanks
Core Java - Java Beginners
Core Java Hi Sir/Madam,
Can u please explain about the Double in java.
I have problem with Double datatype.
public class DoubleTesting {
public static void main(String[] args) {
Double amt=137.17*100
can u plz try this program - Java Beginners
can u plz try this program Write a small record management... Records.
Each Record contains: Name(max 100 char), Age, Notes(No Maximum Limit....
---------------------
<%@ page language="java
core java - Java Beginners
core java Hi Guys,
what is the difference between comparable... Comparable, we could write:
Collections.sort(aList); // This will do default... this will help u How can i write own compile time and runtime exceptions in java
Hello Friend,
Please visit the following links:
http
core java
core java please give me following output Hello sir,What is logic behinde the core java programms,How may programmas are there,for example,sorting of two numbers,grade... and descending any more programms are thier if any please tell
Core java
Core java How to use hyperlink that is href tag in core java without swing, frames etc.
My code is
StringBuffer oBodyStringBuffer = new... to html and send. Here if i add a hyperlink with text also it shows as plain can i use native keyword with abstract method ? if yes explain and if no please explain
Core Java
Core Java Hi,
Can any one please tell me the program to print the below matrix in a spiral order.
1 2 3 4
5 6 7 8
9 10 11 12
13 14 15 16
Thanks a lat in advance
core
core where an multythread using
Please go through the following link:
Java Multithreading
Java Core Code - Java Beginners
Java Core Code My question is that how can i calculate and display the true downloading speed from my download manager(I made it in java using core methods no struts or any adv technology used) using java methods
core java
; Please visit the following links: java how to write or update into excel file using
Core Java
Core Java Hi,
Can any one please share a code to print the below:
1
23
456
78910
thanks a lot in advance
Core Java
Core Java can any one please tell me the difference between Static and dynamic loading in java???
The static class loading is done through the new operator while dynamic class loading is achieved through Run time
core java - Java Beginners
core java how many keywords are in java? give with category? Hi Friend,
Please visit the following links: what does the term web container means exactly?please also give some examples
Hi,
In Java Platform, Enterprise Edition specification, servlet container comes into picture. It is also know as web container
Tomcat Quick Start Guide
Java and JSP concepts. Even though, If you want to learn the same, please visit our Core Java section and JSP Tutorial Section?
Now, get ready for Tomcat and JSP...
Tomcat Quick Start Guide
java please please help
java please please help Dear Friends plz help me to complete this program
import java.util.*;
public class StringDemo {
static...=a.replace("{","");
String s=st.replace("}","");
String[] b - Java Beginners
core java what is object serialization ?
with an example
Hi Friend,
Please visit the following link:
Thanks
core java
core java public class Sample{
public static void main(String args[]){
int a;
}
}
Q.why the above code is not compiled ?
Q.why the below... main(String args[]){
int a;
String s="hajju";
}
}
Core Java - Java Beginners
Core Java Can u give real life and real time examples of abstraction, Encapsulation,Polymarphism....? I guess you are new to java and new... ,, and u will find everything about java here for sure plese help me solve below problem.
I have 2 hash map where key is String.I want to store the value of both the hashmap...)
}
please help me to solve
core java - Java Beginners
core java what is thread ? i can't understand it's need? Hi Friend,
Please visit the following link:
Hope that it will be helpful for you.
Thanks
CORE JAVA Hi,
pelase share the code to calulate the length of string, to reverse the string,
to split the string without using buitin function... String reverseString(String s) {
char[] reverseStringArray = new char
please send me java script for html validation - Java Beginners
please send me java script for html validation please send me code for javascript validation .........please send me its urgent
a.first:link { color: green;text-decoration:none; }
a.first:visited{color:green;text
Core java - Java Beginners
Core java Hello sir/madam,
Can you please tell me why multiple inheritance from java is removed.. with any example..
Thank you... a link. This link will help you.
Please visit for more information.
http hi..
one probledm from my end
how we can perform the TaskSheduling in java ?
coming to my project: Its is NMS domain
My project... to genarate the alert message and snd this alert to another machine
i hope u may solve
core java - Java Beginners
core java hi,
what is the difference between method....
Please visit the following links to know more about Overloading and Overriding:
core java
core java how to display characters stored in array in core java
core java
core java basic java interview question
core java - Java Interview Questions
core java - Use of polymorphism in object oriented programming Hi.... Can anyone please help? Use of polymorphism in object orient... in OOP?s Strategy of programming emulated or being followed by high-level
core java - Java Beginners
core java 1. What are the Advantages of Java?
2. What are the Differences between c,c++ & java?
3. Where we need to Write Java Programs?
4...[],what happens if we dont mention? Hi Friend,
1)Please visit
core java - Java Beginners
core java sir why did u declare or intilize the variables in static main method()..
But non-static members are does't decalred in the static function??
we only declare the static members in static function only
core java - Java Beginners
core java how to create a login page using only corejava(not servlets,jsp,hibernate,springs,structs)and that created loginpage contains database(ms-access) the database contains data what ever u r enter and automatically date
core java - Java Beginners
core java Diff b/w Throws and Throw Hi Friend,
Please visit the following link:
Thanks throw is used for throwing exceptions
core java - Java Beginners
core java is Runnable is marker interface?is there any rule... it should not
be a Marker Interface but it contains a run() and when u r
extending the Thread Class it is making u to execute run()
Indirectly
java core collection - Java Interview Questions
java core collection why program in collection package throw two warnings(notes
core java - Java Beginners
change the fields in the caller?s objects they point to. In Java, you cannot...core java pl. tell me about call by value and call by reference... changes it makes to those values have no effect on the caller?s variables
Java Variables Help Please!
Java Variables Help Please! Hi, I just started with java and i need help with my school project, this is it so far:
import java.util.*;
public... answer?");
System.out.println("1: s");
System.out.println("2: u
Core Java - Java Beginners
Core Java How can we explain about an object to an interviewer ....
An object is a combination of messages and data. Objects can receive and send... to :
StringBuilder v/s StringBuffer - Java Beginners
StringBuilder v/s StringBuffer Hi, Thank you for your prompt... that if multiple threads are accessing it at the same time, there could be trouble...(new InputStreamReader(System.in)); System.out.println("Please enter String
Plz send - Java Beginners
Plz send Hi,
please send whole code i understood ur sending sql query
Thanks hai dear frnd........
look how can i write the code... without knowing ur table structure...from where should the search occur
Core JAva
Core JAva how to swap 2 variables without temp in java
core java
core java how can we justify java technology is robust i want to get java projects in core java
Core java - Java Interview Questions
Core java Dear Sir/Mam
why java support dynamic object creation?please discuss advatage and disadvantage
core java - Applet
core java Hellow sir,
how can canvert number to words.
Output like.... (........)In Words.
Please help me.
this is my college project. email me...] ) ;
show ( " " ) ;
show ( st4[q-2] ) ;
}
}
}
public void show(String s
please send me javascript validation code - Java Beginners
please send me javascript validation code hallo sir , please send me java script code for this html page.since i want to do validation.i am a new user in java ....please send me its urgent
minor project on service center management in core java
minor project on service center management in core java I need a minor project on service center management in core java...If u have then plz send me what is a class
minor project in core java
minor project in core java I am a student of BE IT branch 6sem ,i have to prepare a minor project in core java please sujest some topics other then management topics
Core Java Exceptions - Java Beginners
Core Java Exceptions HI........
This is sridhar... Error? How can u justify? Hi friend,
Read for more information.
Thanks
New to Java Please help
New to Java Please help Hi I need help, can some one help me.... Thanks!
If you are new in java, then you need to learn core java concepts.So go through the following link:
Core Java Tutorials
Here, you will find
...;java QuickSort
RoseIndia
Quick Sort... to sort integer values of an array using quick
sort.
Quick sort algorithm | http://www.roseindia.net/tutorialhelp/comment/13475 | CC-MAIN-2014-52 | refinedweb | 2,043 | 64.3 |
ConcurrentHashMap Examples
4 steps when accessing a cache implemented with java.util.ConcurrentHashMap (javadoc):
- get the value from the ConcurrentMap;
- if null, assume it's the first access, and create the value;
- call putIfAbsent on the concurrentMap to store the new value;
- if return value is not null (it's rare but happens), use the return value as the golden copy, and discard the newly-created object.
import java.util.*;To compile and run the SqrtTest (
import java.util.concurrent.*;
public class SqrtTest {
private static final String CONCURRENCY_LEVEL_DEFAULT = "50";
private static final String CONCURRENCY_KEY = "concurrency";
private ConcurrentMap<Double, Double> sqrtCache = new ConcurrentHashMap<Double, Double>();
public static void main(String args[]) {
final SqrtTest test = new SqrtTest();
final int concurrencyLevel = Integer.parseInt(System.getProperty(CONCURRENCY_KEY, CONCURRENCY_LEVEL_DEFAULT));
final ExecutorService executor = Executors.newCachedThreadPool();
try {
for(int i = 0; i < concurrencyLevel; i++) {
for(String s : args) {
final Double d = Double.valueOf(s);
executor.submit(new Runnable() {
@Override public void run() {
System.out.printf("sqrt of %s = %s in thread %s%n",
d, test.getSqrt(d), Thread.currentThread().getName());
}
});
}
}
} finally {
executor.shutdown();
}
}
// 4 steps as outlined above("discard calculated sqrt %s and use the cached sqrt %s", sqrt, existing);
sqrt = existing;
}
}
return sqrt;
}
}
-Dconcurrency=123can be used to adjust the concurrency level):
$ javac SqrtTest.javaFrom the above output, we can see at least one calculation is discarded since the value already exists in the cache. It had been added to the cache by another thread between step 1 and step 3.
$ java SqrtTest 0.5 11 999 0.1
calculated sqrt of 0.5 = 0.7071067811865476
sqrt of 0.5 = 0.7071067811865476 in thread pool-1-thread-1
calculated sqrt of 11.0 = 3.3166247903554
sqrt of 11.0 = 3.3166247903554 in thread pool-1-thread-2
calculated sqrt of 999.0 = 31.606961258558215
sqrt of 999.0 = 31.606961258558215 in thread pool-1-thread-1
sqrt of 11.0 = 3.3166247903554 in thread pool-1-thread-2
calculated sqrt of 0.1 = 0.31622776601683794
calculated sqrt of 0.1 = 0.31622776601683794
sqrt of 999.0 = 31.606961258558215 in thread pool-1-thread-1
sqrt of 11.0 = 3.3166247903554 in thread pool-1-thread-8
sqrt of 0.5 = 0.7071067811865476 in thread pool-1-thread-4
sqrt of 0.5 = 0.7071067811865476 in thread pool-1-thread-7 calculated sqrt of 0.1 = 0.31622776601683794
discard calculated sqrt 0.31622776601683794 and use the cached sqrt 0.31622776601683794sqrt of 0.1 = 0.31622776601683794 in thread pool-1-thread-6
...
Multiple input double numbers are used to increase thread contention. When testing with one single input number, I couldn't trigger the race condition as evidenced by the "discard calculated sqrt" log message. It is probably because it takes time for the thread pool to create the second thread, and by the time it kicks in, the result is already calculated by the first thread and well established in the cache.
11 comments:
Fantastic example. Its also good to know differences between HashMap and ConcurrentHashMap in Java , which helps to decide when to use CHM
See
Article describes in detail about the internals of HashMap and CocurrentHashMap
Very good article on ConcurrentHashMap. Your readers may be interested in performance of ConcurrentHashMap vs. HashMap as well as thread safety concerns.
Thanks.
P-H
Nice Article. visit more java hashmap examples
Great and Useful Article.
Online Java Training
Java Online Training India
Java Online Course
Java EE course
Java EE training
Best Recommended books for Spring framework
Java Interview Questions
Java Course in Chennai
Java Online Training India
I believe the DISCARD race condition is occurring because of the printf for the calculated sqrt above the exists test. Comment it out and see that there are no longer race conditions (at least for me on my Mac OS). Then replace that printf with a Thread.sleep and increase from 1 ms to 100ms and see the DISCARD race condition error increase along with the time delay. and Interesting article... Thanks for sharing your views and post....
Java Training in Chennai
blogpost
hack7
blogpost
hack7
blogpost
hack7 | https://javahowto.blogspot.com/2012/03/concurrenthashmap-examples.html | CC-MAIN-2020-24 | refinedweb | 678 | 59.19 |
Update permanent ARP entries for allowed_address_pair IPs in DVR Routers
Bug Description
We have a long-term issue with allowed_address_pairs in DVR routers.
The ARP entry for the allowed_address_pair IP does not get updated in the DVR router namespace.
Since DVR does the ARP table update through the control plane, and does not allow any ARP entry to get out of the node (to prevent the router IP/MAC from polluting the network), there has always been an issue with this.
A recent patch in master https:/
This patch helped in updating the ARP entry dynamically from the GARP message, but the entry has to be temporary (NUD state 'reachable'). Only with the entry set to 'reachable' were we able to update it on the fly from the GARP message, without using any external tools.
But the problem here is with VMs residing in two different subnets (Subnet A and Subnet B). When a VM in Subnet B, on a different isolated node, tries to ping the VRRP IP in Subnet A, the packet from the VM reaches the router namespace, where the ARP entry for the VRRP IP is present in the 'reachable' state. While the entry is reachable the VM is able to send a couple of pings, but within about 15 seconds the pings time out.
The reason is that the router in turn tries to verify whether the IP/MAC combination for the VRRP IP is still valid, since the entry in the ARP table is "REACHABLE" and not "PERMANENT".
When it tries to re-ARP for the IP, the ARP requests are blocked by the DVR flow rules in br-tun, so the ARP times out and the ARP entry in the router namespace becomes incomplete.
Option A:
So one way to address this situation is to make use of some GARP sniffer tool/utility running in the router namespace, sniffing for GARP packets with a specific IP as a filter. If that IP is seen in a GARP message, the tool/utility should in turn reset the ARP entry for the VRRP IP as permanent. (This is one option.) This is very performance intensive, so it may not be helpful; we should probably make it configurable, so that people can use it if required.
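To make Option A concrete, below is a minimal, hypothetical sketch (not from any patch) of the update such a tool would perform once the watched IP shows up in a GARP: building the iproute2 command that pins the neighbor entry as permanent inside the router namespace. The namespace, device, IP, and MAC values are illustrative assumptions only.

```python
def neigh_replace_cmd(namespace, ip, mac, dev, nud="permanent"):
    # iproute2 invocation the GARP handler would run, i.e.:
    #   ip netns exec <ns> ip neigh replace <ip> lladdr <mac> dev <dev> nud permanent
    return ["ip", "netns", "exec", namespace,
            "ip", "neigh", "replace", ip,
            "lladdr", mac, "dev", dev,
            "nud", nud]

# Illustrative names only -- a real tool would discover these at runtime.
cmd = neigh_replace_cmd("qrouter-uuid", "10.0.0.50",
                        "fa:16:3e:01:02:03", "qr-abc")
```

The handler would then execute the list with subprocess.check_call(cmd); a 'permanent' entry is never re-ARPed by the kernel, which sidesteps the blocked re-ARP described above.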
Option B:
The other option is, instead of running it on all nodes and in every router namespace, to run it only in the network node's router namespace (or on the network node host), have it notify Neutron that the IP/MAC mapping changed, and let Neutron then tell all the hosts to do an ARP update for the given IP/MAC. (Just an idea; not sure how simple it is compared to the former.)
Any ideas or thoughts would be helpful.
Swami, we are talking about this piece of code: https:/
Hi Brian, yes I can take a look at the keepalived_state_change monitor.
But I am not sure it can be used as such in our case, since keepalived in our case is running inside the VM and not in the namespace.
I don't think that we can use the IP-Monitor for our purpose.
We should definitely come up with a GARP-sniff tool similar to IP-Monitor and then use it for our purpose.
If we think that performance will be an issue, we should probably come up with a dedicated node that does the sniffing and reports back to the Neutron server.
That way the Neutron server can then do an RPC update to all agents to add a permanent entry.
Is there any other way to collect the same information that would be collected by garp-sniffing?
What info needs to be propagated?
Also, what about putting a flow rule into the OpenFlow tables that routes certain packets to all other compute nodes?
For example, a rule could be added to forward GARP packets to all other "members" of the DVR group.
And then you can hang a smarter process off each router namespace and process them however is required. Or make it part of the dvr code.
Also, on the performance aspect of sniffing GARP packets: the reason a system would suffer is if it is processing ALL packets, so what if we filter down to only GARP packets *before* the monitoring tool gets them?
It's basically the same thing a host must do already. All hosts have to listen to GARP packets, so I am not sure you would get any more traffic than you do today.
It's just that instead of dropping the packets on the floor, you pull them into user space for processing.
Perhaps the term "sniffing" is what we need to avoid.
Again, we could put a flow rule that pulls specifically GARP packets from an IP address/MAC combo and gives those packets to a process in user space.
One other simple solution would be to forward a GARP packet for the MACs that are configured for allowed_address_pairs and write an ARP responder entry for it.
Before writing the entry, we should probably check the current flows configured in the ARP responder table, and if there is a duplicate entry for the IP/MAC combination, delete it and rewrite the flows based on the GARP packet.
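The check-and-rewrite step above could be outlined as below. Two assumptions to flag: table 21 is taken to be the OVS agent's ARP responder table on br-tun, and the flow action is a simplified placeholder (a real ARP responder also rewrites the ARP opcode and sender/target fields before bouncing the reply out the in-port), so treat this as an outline rather than the agent's actual flow layout.

```python
def arp_responder_refresh_cmds(bridge, ip, mac, table=21):
    """Build ovs-ofctl calls: drop any stale responder flow for `ip`,
    then install one answering with the MAC learned from the GARP.
    Placeholder actions -- not the agent's exact flow layout."""
    match = "table=%d,arp,arp_tpa=%s" % (table, ip)
    delete = ["ovs-ofctl", "del-flows", bridge, match]
    add = ["ovs-ofctl", "add-flow", bridge,
           "%s,priority=1,actions=mod_dl_src:%s,IN_PORT" % (match, mac)]
    return delete, add
```

Deleting first mirrors the "check for a duplicate entry, then delete and rewrite" sequence described in the comment.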
Today we don't have a packet_in_handler for the Ryu controller app.
Similar to this.
from ryu.controller import handler, ofp_event
from ryu.lib.packet import packet

@handler.set_ev_cls(ofp_event.EventOFPPacketIn, handler.MAIN_DISPATCHER)
def packet_in_handler(self, ev):
    pkt = packet.Packet(ev.msg.data)
    for p in pkt:
        print p.protocol_name, p
My knowledge of L2 OpenFlow is pretty limited, so perhaps some L2 OpenFlow experts could comment on whether this would work or not.
Swami, I am not sure this last proposal is better in terms of performance than sniffing GARP. I don't have all the details and I am new to this problem, but have you ever tried, instead of setting the NUD to reachable, changing the timeout so that ARP entries become stale quickly and the GARP can update them? A combination of frequent GARP and entries that go stale quickly might fix this.
Rossella thanks for your feedback. The issue is we are seeing the garp updates the arp entry.But when there is ping to that IP, the router tries to re-arp to confirm the IP and that is were it fails.
Not sure if we can do something similar to ARP Responder to add a dynamic ARP Responder rule to send an ARP reply for the GARP'd MAC.
Adding a rule similar to ARP Responder may not be possible. So the best bet is to forward the GARP packets to an output port (tap port created for this purpose). Then a separate process can listen on the tap port and then parse the packet and provide info to the Network node ( neutron-server) to update the ARP entry on all nodes.
Please provide me your thoughts on this.
If we feel that this process running in the compute node will reduce the performance, then what we should do is probably have a new agent type running this process and then communicating to the Network node. That way we don't need to run this process on all compute nodes.
Adding a rule dynamically to the ARP responder is only possible if we can forward the GARP packet to the openflow controller where the openflow controller can process the IN_PACKETS and then make a decision on creating a flow in the ARP responder table.
Is this possible in neutron openflow native drivers today. Any l2 openflow experts can comment on it.
This would be the simplest approach otherwise, we need to forward it to the user space where we process the packet and then send it to Neutron server to take necessary action.
After talking with Miguel Ajo and Daniel Alvarez, plan is to intecept GARP packets and forward to local controller for processing
Swami had a WIP patch that inserted an OF rule to intercept, would need update to send to controller
https:/
For ryu look here:
https:/
https:/
ovn-controller handles incoming packets (packet-in) as controller here:
https:/
https:/
ODL is tracking this on https:/
Basically flows will be programmed dynamically when a GARP is recognized by the controller.
Change abandoned by Swaminathan Vasudevan (<email address hidden>) on branch: master
Review: https:/
Reason: Abandoning this patch since I have a better one to address this issue.
https:/
Also related to this bug: https:/
Reviewed: https:/
Committed: https:/
Submitter: Zuul
Branch: master
commit 52b537ca22b2d7d
Author: Swaminathan Vasudevan <email address hidden>
Date: Thu Apr 11 11:12:24 2019 -0700
DVR: Modify DVR flows to allow ARP requests to hit ARP Responder table
DVR does the ARP table update through the control plane, and does not
allow any ARP requests to get out of the node.
In order to address the allowed address pair VRRP IP issue with DVR,
we need to add an ARP entry into the ARP Responder table for the
allowed address pair IP ( which is taken care by the patch in [1])
This patch adds a rule in the br-int to redirect the packet
destinated to the router to the actual router-port and also moves
the arp filtering rule to the tunnel or the physical port based on the
configuration.
By adding the above rule it allows the ARP requests to reach the
ARP Responder table and filters the ARP requests before it reaches
the physical network or the tunnel.
[1] https:/
Related-Bug: #1774459
Change-Id: I3905ea56ca0ff3
I'm not a big fan of a process running that is snooping on traffic, it's most likely going to cause a performance issue.
Can doing this like the keepalived_
state_change code work? It uses "ip monitor" to watch for events and triggers action, and could be modified to look for "neigh" events. | https://bugs.launchpad.net/neutron/+bug/1774459 | CC-MAIN-2019-39 | refinedweb | 1,650 | 62.82 |
I was reviewing changes for indic-trans as part of GSoC 2016. The module is an improvisation for our original transliteration module which was doing its job by substitution.
This new module uses machine learning of some sort and utilizes Cython, numpy and scipy. Student had kept pre-compiled shared library in the git tree to make sure it builds and passes the test. But this was not correct way. I started looking at way to build these files and remove it from the code base.
There is a cython documentation for distutils but none for setuptools. Probably it is similar to other Python extension integration into setuptools, but this was first time for me so after a bit of searching and trial and error below is what I did.
We need to use Extensions class from setuptools and give it path to modules we want to build. In my case beamsearch and viterbi are 2 modules. So I added following lines to setup.py
from setuptools.extension import Extension from Cython.Build import cythonize extensions = [ Extension( "indictrans._decode.beamsearch", [ "indictrans/_decode/beamsearch.pyx" ], include_dirs=[numpy.get_include()] ), Extension( "indictrans._decode.viterbi", [ "indictrans/_decode/viterbi.pyx" ], include_dirs=[numpy.get_include()] ) ]
First argument to Extensions is the module name and second argument is a list of files to be used in building the module. The additional inculde_dirs argument is not normally necessary unless you are working in virtualenv. In my system the build used to work without this but it was failing in Travis CI, so added it to fix the CI builds. OTOH it did work without this on Circle CI.
Next is provide this extensions to ext_modules argument to setup as shown below
setup( setup_requires=['pbr'], pbr=True, ext_modules=cythonize(extensions) )
And for the reference here is full setup.py after modifications.
#!/usr/bin/env python from setuptools import setup from setuptools.extension import Extension from Cython.Build import cythonize import numpy extensions = [ Extension( "indictrans._decode.beamsearch", [ "indictrans/_decode/beamsearch.pyx" ], include_dirs=[numpy.get_include()] ), Extension( "indictrans._decode.viterbi", [ "indictrans/_decode/viterbi.pyx" ], include_dirs=[numpy.get_include()] ) ] setup( setup_requires=['pbr'], pbr=True, ext_modules=cythonize(extensions) )
So now we can build the extensions (shared library) using following command.
python setup.py build_ext
Another challenge I faced was missing extension when running test. We use pbr in above project and testrepository with subunit for running tests. Looks like it does not build extensions by default so I modified the Makefile to build the extension in place before running test. The travis target of my Makefile is as follows.
travis: [ ! -d .testrepository ] || \ find .testrepository -name "times.dbm*" -delete python setup.py build_ext -i python setup.py test --coverage \ --coverage-package-name=indictrans flake8 --max-complexity 10 indictrans
I had to build the extension in place using -i switch. This is because other wise the tests won't find the indictrans._decode.beamsearch and indictrans._decode.viterbi modules. What basically -i switch does is after building shared library symlinks it to the module directory, in ourcase indictrans._decode
The test for existence of .testrepository folder is over come this bug in testrepository which results in test failure when running tests using tox. | https://copyninja.info/blog/cython_setuptools.html | CC-MAIN-2018-13 | refinedweb | 525 | 50.94 |
Update: Corrected terminology – it’s “generic functions” not “generic methods”.
Aside from macros, another nice feature that Lisp supports is mixins. In fact, this is one of the (very, very) few things I miss about C++. Of course, Lisp does it in a much more powerful way using something called generic functions. See here and here for a great description of generic functions in Common Lisp – it’s one of the cooler parts of Lisp, in my opinion, especially for someone like me who has come from a C++/C#/Java OO background.
Although not a tool for every occasion, and somewhat against the spirit of Inherit to Be Reused, Not to Reuse, mixins are nonetheless handy at times. Since mixins are generally implemented via multiple inheritance, they haven’t really been an option in C#…until extension methods came along. Now you can do something like this:
using System; public interface TellNameMixin { string Name { get; } } public static class MixinImplementation { public static void TellName(this TellNameMixin subject, string prefix) { Console.WriteLine("My name is: {0}{1}", prefix, subject.Name); } }
using System;
public interface TellNameMixin { string Name { get; } }
public static class MixinImplementation { public static void TellName(this TellNameMixin subject, string prefix) { Console.WriteLine("My name is: {0}{1}", prefix, subject.Name); } }:
public class Craig : MarshalByRefObject, TellNameMixin { public string Name { get { return "Craig"; } } } public class Program { public static void Main() { Craig craig = new Craig(); craig.TellName("Mr. "); // Prints “Mr. Craig” } }
public class Craig : MarshalByRefObject, TellNameMixin { public string Name { get { return "Craig"; } } }
public class Program { public static void Main() { Craig craig = new Craig(); craig.TellName("Mr. "); // Prints “Mr. Cra.
Anyway, it’s not something you’ll use every day, but it’s worth knowing about.
Just a minor nit...in Common Lisp, they're called generic functions. Methods are something different (they're contained within the GFs.)
The funny thing is, as I was writing this, I kept typing "generic functions" and then "correcting" it. Guess I should have gone with my gut. Or looked it up. :)
I don't think mixins are against the spirit of "Inherit to be reused, not to reuse". Done correctly (ala Ruby et al), mixins are a very nice was to get reuse without actually changing your inheritance tree. The mixed in methods in a language like Ruby become YOUR methods and not something you get via inheritance.
Pingback from Reflective Perspective - Chris Alcock » The Morning Brew #133
@Peter: so in your opinion, in languages that support multiple inheritance, reuse is a reasonable use for inheritance? I would tend to agree...and it's yet another reason I've never liked the single inheritance restriction of .NET.
Mixins based on extension methods are cool, but they are also somewhat restricted. For example, storing state on a per-instance basis is difficult - you need to use a dictionary-based approach, but deal with WeakReferences and stuff to avoid keeping objects alive. Mixins based on extension methods cannot introduce additional interfaces to a class. In contrast to C++ mixins, they cannot override base class methods, and target classes cannot override mxin methods. And they can only access public (interface) members of the mixed type, not protected ones.
So, mixins based on extension methods are definitely nice, and especially the compiler support is awesome. However, there are quite a few features they are missing, which is why we are working on a library called re:motion mixins here at my company. We did a presentation about it at Lang.NET in January [1], and there is a homepage:. The latter is, however, terribly out of date, since we are working very hard on getting our first open source release of the framework out of the door. But at least one can download a preview version.
Fabian
[1] langnetsymposium.com/.../2-10%20-%20remotion%20Mixins%20-%20Stefan%20Wenig%20and%20Fabian%20Schmied%20-%20rubicon.html
You won't hear me argue that C# has every feature that I'd like. :)
Pingback from IList Performance Decrease « Tales from a Trading Desk
Bill Wagner suggested the same thing here msdn.microsoft.com/.../bb625996.aspx
So did John Rusk here dotnet.agilekiwi.com/.../extension-methods-solution.html.
Scala supports traits/mixins naturally | http://www.pluralsight.com/community/blogs/craig/archive/2008/07/09/c-mixins.aspx | crawl-002 | refinedweb | 697 | 56.15 |
The operator module exports a set of efficient functions and sequence operations.
The object comparison functions are useful for all objects, and are named after the rich comparison operators they support:
Perform “rich comparisons” between a and b. Specifically, lt(a, b) is equivalent to a < b, le(a, b) is equivalent to a <= b, eq(a, b) is equivalent to a == b, ne(a, b) is equivalent to a != b, gt(a, b) is equivalent to a > b and ge(a, b) is equivalent to a >= b. Note that these functions can return any value, which may or may not be interpretable as a Boolean value. See Comparisons for more information about rich comparisons.
The logical operations are also generally applicable to all objects, and support truth tests, identity tests, and boolean operations:
Return the outcome of not obj. (Note that there is no __not__() method for object instances; only the interpreter core defines this operation. The result is affected by the __bool__() and __len__() methods.)
Return True if obj is true, and False otherwise. This is equivalent to using the bool constructor.
Return a is b. Tests object identity.
Return a is not b. Tests object identity.
The mathematical and bitwise operations are the most numerous:
Return the absolute value of obj.
Return a + b, for a and b numbers.
Return the bitwise and of a and b.
Return a // b.
Return a converted to an integer. Equivalent to a.__index__().
Return the bitwise inverse of the number obj. This is equivalent to ~obj.
Return a shifted left by b.
Return a % b.
Return a * b, for a and b numbers.
Return obj negated (-obj).
Return the bitwise or of a and b.
Return obj positive (+obj).
Return a ** b, for a and b numbers.
Return a shifted right by b.
Return a - b.
Return a / b where 2/3 is .66 rather than 0. This is also known as “true” division.
Return the bitwise exclusive or of a and b.
Operations which work with sequences (some of them with mappings too) include:
Return a + b for a and b sequences.
Return the outcome of the test b in a. Note the reversed operands.
Return the number of occurrences of b in a.
Remove the value of a at index b.
Return the value of a at index b.
Return the index of the first of occurrence of b in a.
Set the value of a at index b to c.
Example: Build a dictionary that maps the ordinals from 0 to 255 to their character equivalents.
>>>. The attribute names can also contain dots. For example:
Equivalent to:
def attrgetter(*items): if any(not isinstance(item, str) for item in items): raise TypeError('attribute name must be a string') if len(items) == 1: attr = items[0] def g(obj): return resolve_attr(obj, attr) else: def g(obj): return tuple(resolve_attr(obj, attr) for attr in items) return g def resolve_attr(obj, attr): for name in attr.split("."): obj = getattr(obj, name) return obj
Return a callable object that fetches item from its operand using the operand’s __getitem__() method. If multiple items are specified, returns a tuple of lookup values. For example:) >>> list(map(getcount, inventory)) [3, 2, 5, 1] >>> sorted(inventory, key=getcount) [('orange', 1), ('banana', 2), ('apple', 3), ('pear', 5)]
Return a callable object that calls the method name on its operand. If additional arguments and/or keyword arguments are given, they will be given to the method as well. For example:
Equivalent to:
def methodcaller(name, *args, **kwargs): def caller(obj): return getattr(obj, name)(*args, **kwargs) return caller
This table shows how abstract operations correspond to operator symbols in the Python syntax and the functions in the operator module.
Many operations have an “in-place” version. Listed below are functions providing.
In those examples, note that when an in-place method is called, the computation and assignment are performed in two separate steps. The in-place functions listed below only do the first step, calling the in-place method. The second step, assignment, is not handled.
For immutable targets such as strings, numbers, and tuples, the updated value is computed, but not assigned back to the input variable:
>>>>> iadd(a, ' world') 'hello world' >>> a 'hello'
For mutable targets such as lists and dictionaries, the inplace method will perform the update, so no subsequent assignment is necessary:
>>> s = ['h', 'e', 'l', 'l', 'o'] >>> iadd(s, [' ', 'w', 'o', 'r', 'l', 'd']) ['h', 'e', 'l', 'l', 'o', ' ', 'w', 'o', 'r', 'l', 'd'] >>> s ['h', 'e', 'l', 'l', 'o', ' ', 'w', 'o', 'r', 'l', 'd']
a = iadd(a, b) is equivalent to a += b.
a = iand(a, b) is equivalent to a &= b.
a = iconcat(a, b) is equivalent to a += b for a and b sequences.
a = ifloordiv(a, b) is equivalent to a //= b.
a = ilshift(a, b) is equivalent to a <<= b.
a = imod(a, b) is equivalent to a %= b.
a = imul(a, b) is equivalent to a *= b.
a = ior(a, b) is equivalent to a |= b.
a = ipow(a, b) is equivalent to a **= b.
a = irshift(a, b) is equivalent to a >>= b.
a = isub(a, b) is equivalent to a -= b.
a = itruediv(a, b) is equivalent to a /= b.
a = ixor(a, b) is equivalent to a ^= b. | http://docs.python.org/3/library/operator.html | CC-MAIN-2014-10 | refinedweb | 888 | 67.76 |
Draft Move
Description
The Move tool moves or copies the selected objects from one point to another.
The Move tool can be used on 2D shapes created with the Draft Workbench or Sketcher Workbench, but can also be used on many types of 3D objects such as those created with the Part Workbench or Arch Workbench.
To produce various copies in different arrangements use Draft Array, Draft PathArray and Draft PointArray.
Moving an object from one point to another point
How to use
- Select the objects that you wish to move or copy.
- Press the
Draft Move button, or press M then V keys. If no object is selected, you will be invited to select one.
- Click a first point on the 3D view, or type a coordinate and press the
add point button. This serves as the base point of the operation.
- Click another point on the 3D view, or type a coordinate and press the
add point button. This is the new position of the base point.
Limitations
When moving Move tool will restart after you finish the operation, allowing you to move or copy the objects again without pressing the tool button again.
- Press P or click the checkbox to toggle copy mode. If copy mode is on, the Move tool will keep the original shape in its place but will make a copy at the second point.
- You can use both T and P to place several copies in sequence. In this case, the duplicated element is the last placed copy.
- Hold Alt after the first point to also toggle copy mode. Keeping Alt pressed after clicking on the second point will allow you to continue placing copies; release Alt to finish the operation and see all copies.
- Hold Ctrl while moving to force snapping your point to the nearest snap location, independently of the distance.
- Hold Shift while moving to constrain your next point horizontally or vertically in relation to the last one.
- Press Esc or the Close button to abort the current command; copies already placed will remain.
Scripting
See also: Draft API and FreeCAD Scripting Basics.
The Move tool can be used in macros and from the Python console by using the following function:
movedlist = move(objectslist, vector, copy=False)
- Moves the base point of the objects in
objectslistby the displacement and direction indicated by
vector.
objectslistis either a single object or a list of objects.
- The displacement vector is relative to the base point of the object, which means that if an object is moved 2 units, and then another 2 units, it will have moved 4 units in total from its original position.
- If
copyis
Truecopies are created instead of moving the original objects.
movedlistis returned with the original moved objects, or with the new copies.
movedlistis either a single object or a list of objects, depending on the input
objectslist.
Example:
import FreeCAD, Draft Polygon1 = Draft.makePolygon(5, radius=1000) Polygon2 = Draft.makePolygon(3, radius=500) Polygon3 = Draft.makePolygon(6, radius=220) Draft.move(Polygon1, FreeCAD.Vector(500, 500, 0)) Draft.move(Polygon1, FreeCAD.Vector(500, 500, 0)) Draft.move(Polygon2, FreeCAD.Vector(1000, -1000, 0)) Draft.move(Polygon3, FreeCAD.Vector(-500, -500, 0)) List1 = [Polygon1, Polygon2, Polygon3] vector = FreeCAD.Vector(-2000, -2000, 0) List2 = Draft.move(List1, vector, copy=True) List3 = Draft.move(List1, -2*vector, | https://wiki.freecadweb.org/index.php?title=Draft_Move/en&oldid=418659 | CC-MAIN-2020-45 | refinedweb | 557 | 65.62 |
Get an api up and running quickly
Project description
Quickest API builder in the West! Lovingly crafted for First Opinion.
5 Minute Getting Started
Installation
First, install endpoints with the following command.
$ pip install endpoints
If you want the latest and greatest you can also install from source:
$ pip install git+
Note: if you get the following error
$ pip: command not found
you will need to install pip using the following command.
$ sudo easy_install pip
Set Up Your Controller File
Create a controller file with the following command:
$ touch mycontroller.py
Add the following code to your new Controller file. These classes are examples of possible endpoints.
from endpoints import Controller class Default(Controller): def GET(self): return "boom" def POST(self, **kwargs): return 'hello {}'.format(kwargs['name']) class Foo(Controller): def GET(self): return "bang"
Start a Server
Now that you have your mycontroller.py, let’s use the built-in WSGI server to serve them:
$ endpoints-wsgiserver --prefix=mycontroller --host=localhost:8000
Test it out
Using curl:
$ curl "boom" $ curl "bang" $ curl -d "name=Awesome you" "hello Awesome you"
That’s it. Easy peasy!
How does it work?
Endpoints translates requests to python modules without any configuration.
It uses the following convention.
METHOD /module/class/args?kwargs
Endpoints will use the base module you set as a reference point to find the correct submodule using the path specified by the request. the class named Default will be used.
This makes it easy to bundle your controllers into something like a “Controllers” module.
Below are some examples of HTTP requests and how they would be interpreted using endpoints.
Note: prefix refers to the name of the base module that you set.
As shown above, we see that endpoints essentially travels the path from the base module down to the appropriate submodule according to the request given.
Example
Let’s say your site had the following setup:
site/controllers/__init__.py
and the file controllers/__init__.py contained:
from endpoints import Controller class Default(Controller): def GET(self): return "called /" class Foo(Controller): def GET(self): return "called /foo"
then your call requests would be translated like this:
Try it!
Run the following requests on the simple server you created. You should see the following output following each request.
$ curl "" boom $ curl "" bang $ curl -H "Content-Type: application/json" -d '{"name": "world"}' "" hello world
Can you figure out what path endpoints was following in each request?
We see in the *first request* that the Controller module was accessed, then the Default class, and then the GET method.
In the *second request*, the Controller module was accessed, then the Foo class as specified, and then the GET method.
Finally, in the *last request*, the Controller module was accessed, then the Default class, and finally the POST method with the passed in argument as JSON.
Fun with parameters, decorators, and more
If you have gotten to this point, congratulations. You understand the basics of endpoints. If you don’t understand endpoints then please go back and read from the top again before reading any further.
There are a few tricks and features of endpoints that are important to cover as they will add functionality to your program.
Handling path parameters and query vars
You can define your controller methods to accept certain path params and to accept query params:
class Foo(Controller): def GET(self, one, two=None, **params): pass def POST(self, **params): pass
your call requests would be translated like this:
Post requests are also merged with the **params on the controller method, with the POST params taking precedence:
For example, if the HTTP request is:
POST /foo?param1=GET1¶m2=GET2 body: param1=POST1¶m3=val3
The following path would be:
prefix.Foo.POST(param1="POST1", param2="GET2", param3="val3")
Handy decorators
The endpoints.decorators module gives you some handy decorators to make parameter handling and error checking easier:
For example, the param decorator can be used similarly to Python’s built-in argparse.add_argument() method as shown below.
from endpoints import Controller from endpoints.decorators import param class Foo(Controller): @param('param1', default="some val") @param('param2', choices=['one', 'two']) def GET(self, **params): pass
Other examples of decorators include get_param and post_param. The former checks that a query parameter exists, the latter is only concerned with POSTed parameters.
There is also a require_params decorator that provides a quick way to ensure certain parameters were provided.
from endpoints import Controller from endpoints.decorators import param class Foo(Controller): @require_params('param1', 'param2', 'param3') def GET(self, **params): pass
The require_params decorator as used above will make sure param1, param2, and param3 were all present in the **params dict.
Authentication
Endpoints tries to make user authentication easier, so it includes some handy authentication decorators in endpoints.decorators.auth.
Perform basic authentication:
from endpoints import Controller from endpoints.decorators.auth import basic_auth def target(request, username, password): return username == "foo" and password == "bar" class Foo(Controller): @auth(target) def GET(self, **params): pass
The auth decorators can also be subclassed and customized by just overriding the target() method.
Versioning requests
Endpoints has support for Accept header versioning, inspired by this series of blog posts.
You can activate versioning just by adding a new method to your controller using the format:
METHOD_VERSION
So, let’s say you have a controllers.py which contained:
# controllers.py from endpoints import Controller class Default(Controller): def GET(self): return "called version 1 /" def GET_v2(self): return "called version 2 /" class Foo(Controller): def GET(self): return "called version 1 /foo" def GET_v2(self): return "called version 2 /foo"
Then, your call requests would be translated like this:
Note: attaching the ;version=v2 to the Accept header changes the method that is called to handle the request.).
Built in servers
Endpoints comes with wsgi support and has a built-in python wsgi server:
$ endpoints-wsgiserver --help
Sample wsgi script for uWSGI
import os from endpoints.interface.wsgi import Application os.environ['ENDPOINTS_PREFIX'] = 'mycontroller' application = Application()
That’s all you need to set it up if you need it. Then you can start a uWSGI server to test it out:
$ uwsgi --http :9000 --wsgi-file YOUR_FILE_NAME.py --master --processes 1 --thunder-lock --chdir=/PATH/WITH/YOUR_FILE_NAME/FILE
Development
Unit Tests
After cloning the repo, cd into the repo’s directory and run:
$ python -m unittest endpoints_test
Check the tests_require parameter in the setup.py script to see what modules are needed to run the tests because there are dependencies that the tests need that the rest of the package does not.
License
MIT
Project details
Release history Release notifications
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/endpoints/1.1.7/ | CC-MAIN-2018-26 | refinedweb | 1,123 | 53.61 |
From: Beman Dawes (beman_at_[hidden])
Date: 1999-07-16 07:50:19
At 10:00 AM 7/16/99 +0100, Paul Baxter wrote:
>I certainly have references to several random number generator tests
>(diehard etc)
>
>The better generators usually have papers associated with them
detailing
>their performance in the various random number tests and how they
compare
>with the others.
Pick your favorite, do an implementation using the same interface as
min_rand, and send it to me, along with a short description including
those references.
This seems a case where implementation variations are best handled by
template specialization. So namespace boost will have:
random_number_generator<min_rand> rng1;
random_number_generator<twister> rng2;
>I also think its worth, say in the case of Mersenne twister, to also
compile
>the authors original code (C code), seed the two routines the same
and
>observe the same random (sic) numbers from each implementation.
Yes. Also that is the point of the ten_thousandth() function. Maybe
there should be a millionth() function, but the orignial "minimal
standard" people used 10,000 as their test point and I just picked
that up.
> ?
Ah, I was wondering when that would come up. The answer seems to me
that boost should have a boost/stdint.h[pp] header following the spec
in the C9X FDIS. If you don't know about this header, see (or the actual FDIS if you are
on the C or C++ committee.) I have the start of a boost
implementation. But more of this in a week or two when the dust
clears from other current work.
--Beman
------------------------------------------------------------------------
eGroups.com home: - Simplifying group communications
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/1999/07/0356.php | CC-MAIN-2022-05 | refinedweb | 289 | 62.98 |
Interfacing BME280 Sensor with Arduino
The BME280 is a widely used sensor that measures temperature, humidity, barometric pressure, dew point, and altitude. Today we are going to learn Interfacing of the BME280 Sensor with Arduino. It gives your Arduino project the ability to sense the surrounding environment with a BME280 sensor.
The BME280 sensor is relatively simple to use: it comes pre-calibrated and doesn't require any extra components, so you can start measuring relative humidity, temperature, barometric pressure, approximate altitude, and dew point right away with an Arduino and a BME280 sensor.
Components Required
The following components are required for interfacing the BME280 sensor with Arduino.
Introduction to BME280 Sensor
The BME280 module contains Bosch's next-generation digital temperature, humidity, and pressure sensor. It is the successor to sensors like the BMP180, BMP280, and BMP183.
BME280 Sensor Measures
As already mentioned above, the BME280 is a five-in-one environmental digital sensor that measures:
- Temperature
- Humidity
- Barometric pressure
- Altitude
- Dew Point
Accuracy and Operation Range of BME280 Sensor
The following table shows the accuracy and operating range of the temperature, humidity, pressure, and altitude measurements of the BME280 sensor:
Note: the dew point is not measured directly; it is calculated from the temperature and relative humidity readings. Through a short mathematical calculation, the program returns the dew point in Celsius.
BME280 sensor specifications
- Power requirement: 3.3V or 5V (the module comes with a built-in LM6206 3.3V regulator).
- Easy to interface with any microcontroller.
- The BME280 consumes less than 1mA during measurements.
- The module uses the I2C communication protocol, so it can be easily interfaced with any microcontroller of your choice.
- The default I2C address of the BME280 module can be changed with the solder jumper beside the chip.
BME280 Sensor Pinout
The following table shows the pinout of the BME280 sensor module:
Interfacing BME280 Sensor with Arduino UNO
Now, Let’s wire the BME
Programming Code
To program the BME280 sensor with Arduino, we need the following libraries: the Adafruit BME280 library and the Adafruit Unified Sensor library.

These two libraries make our program simple: with a single function call we can read the temperature, relative humidity, barometric pressure, and approximate altitude, and the dew point is then derived from the temperature and humidity readings.
Installing Adafruit BME280 and Adafruit Unified Sensor Library
To install the libraries, navigate to Sketch > Include Library > Manage Libraries… and wait a moment for the Arduino IDE to update the list of installed libraries.

Now, search for "Adafruit BME280" in the search field and install the library as shown in the image below.

Then search for "Adafruit Unified Sensor" and install that library as well.
Check the Default I2C address for BME280 Sensor
To check the default I2C address of your BME280 sensor with Arduino. Simply upload the I2C Address Scanner code provided below. Now, open the serial monitor to check your BME280 default I2C address. This code will also help youME280 sensor from below.
#include <Wire.h> #include <Adafruit_Sensor.h> #include <Adafruit_BME280.h> #define SEALEVELPRESSURE_HPA (1013.25) Adafruit_BME280 bme; // For I2C interface void setup() { Serial.begin(9600); Serial.println(F("BME280 Sensor event test"));("Humidity = "); Serial.print(bme.readHumidity()); Serial.println("%"); Serial.print("Approx. Altitude = "); Serial.print(bme.readAltitude(SEALEVELPRESSURE_HPA)); Serial.println("m"); double dewPoint = dewPointFast(bme.readTemperature(), bme.readHumidity()); Serial.print("Dew point = "); Serial.print(dewPoint); Serial.println(" *C"); Serial.println(); delay(1000); } double dewPointFast(double celsius, double humidity) { double a = 17.271; double b = 237.7; double temp = (a * celsius) / (b + celsius) + log(humidity * 0.01); double Td = (b * temp) / (a - temp); return Td; }
Then upload the code to the Arduino. Now open the serial monitor. You can see the BME280 Sensor test is running. If you see the error message like: “Could not find a valid BME280 sensor, check wiring” then you need to change your BME280 default I2C address in the Adafruit_BME280.h file.
The Adafruit_BME280.h file located at: Documents\Arduino\libraries\Adafruit_BME280_Library. For reference, check the image below.
Now, re-upload the code and check your Serial Monitor. Finally, you got the temperature, relative humidity, barometric pressure data, approx. Altitude, and Dew Point data from the BME280 sensor using Arduino IDE.
Video Tutorial
Conclusion
So, that’s all for Interfacing BME280 Sensor with Arduino. I hope you found the tutorial useful. Now, which project do you like to see using the BME280 sensor module? Let me know in the comments below. | https://theiotprojects.com/interfacing-bme280-sensor-with-arduino/ | CC-MAIN-2021-43 | refinedweb | 708 | 50.43 |
Plumbing Engineering Services Design Guide
The Institute of Plumbing
Compiled and published by
64 Station Lane, Hornchurch, Essex RM12 6NB.
Telephone: +44 (0)1708 472791 Fax: +44 (0)1708 448987
Project Co-ordinator
Stan Tildsley
Secretary & Project Manager
Dale Courtman IEng FIOP RP, IoP Technical Manager
Administration Support
Emma Tolley
Lorraine Courtman
Janice Grande
Jenni Cannavan
Technical Editors & Designers - Tarot Millbury
Printers - Saunders & Williams Printers Ltd
ISBN 1 871956 40 4
Published 2002
© The Institute of Plumbing
The Institute of Plumbing cannot accept responsibility for any errors and omissions in this publication
Design Guide Corrigendum

Page 5, Table 5: For 100,000 litres storage read 100 m2 for 2 metre height tank.
Page 13, column 3, Table 14: P = t/T = 300/1200 should read P = t/T = 30/1200 = 0.025. Basin, 15 mm separate taps (33/1200): usage ratio should read 0.028.
Page 15, Fig 15: Type of appliance, "slop hopper cistern only" should read "cistern".
Page 16, Table 16: Pipe sections 7 and 8 should be extended to the left to connect into pipe section 6 (cold water from cistern/tank).
Page 18, column 2: 9.
Page 82: Notes, 1 metre head of water should read -
Page 96, column 2, line 5: Equation should read kg/s = watts / (4.187 (shc of water) x 1000 x Δt).
Page 103, graph 6: Left hand note should read 1.5 m maximum for WC branch.
Page 108, Fig 7: Right hand note should read 2.5 m maximum for other appliance connections.
Page 140, Table 26: Category 2 should read 1.5 and Category 3 should read 4.5.
Issue 2, 04/03
Contents

Sources of water
Water supply companies
Water demand 3
Water storage
Water distribution 4
Hot water production 6
Hot water generators 8
Control of Legionella 9
Safe water temperatures 10
Water conservation
Water regulations 11
Distribution pipe sizing 12
Hard water treatment 24
Water supply installations
Disinfection 27
Water quality 28
Corrosion 29
Effects of corrosive environments 32
Prevention of corrosion 36
Hot and cold water supplies

Sources of water

The source of water varies depending on the area of the British Isles where a supply is required. The types are:

1. Upland catchment (reservoir)
2. Ground water (borehole/artesian)
3. River extraction.
These sources provide water for supply purposes, each with a wide range of physical and bacterial quality differences, i.e.

1. Hardness
2. Bacteria count
3. Minerals.
The quality of water supplied for distribution to, and for use by, persons and properties is controlled by an Act of Parliament, the Water Supply (Water Quality) Regulations 1989, and subsequent amendments. The enforcement of the Act is undertaken by the Drinking Water Inspectorate (DWI), who regulate a wide range of key elements to attain and maintain the water supply quality. See Table 1.

The standards cover colour, alkalinity, taste, odour, undesirable and toxic substances, and micro-organisms to specified parameters. The standards are called 'Prescribed Concentration Values' (PCVs), set to either maximum, minimum, average or percentage levels.

These standards are imposed on all water supply companies, with relaxation only considered under emergency situations, i.e. extreme drought or flooding, but under no circumstances if there is a risk to public health.
Water supply companies

The water supply companies operate under the requirements of the Water Industry Act 1991, enforced by the Office of Water Services (OFWAT).

Water supply companies are responsible for the catchment or abstraction of raw water, and its conditioning, treatment and distribution to consumers within their region. The characteristics of the water supplied vary from region to region, and within regions, dependent upon the actual source, single or multiple, and the level of treatment provided in order to attain the prescribed quality at the point of connection to the customer's supply.
Table 1 Drinking water standards

Temperature: 25°C
pH: 5.5 - 9.5
Colour: 20 Hazen units
Turbidity: 4 Formazin units
Qualitative odour: all odour invest.
Qualitative taste: all taste investg.
Dilution odour: dilution no. 3 at 25°C
Dilution taste: 3 at 25°C
Conductivity: 1500 µS/cm at 20°C
Total hardness: applies only if softened
Alkalinity: applies only if softened
Clostridia: 1/20 ml
Colony count, 2 day: comparison against average
Colony count, 3 day: comparison against average
Oxidisability: 5 mg/l
Ammonia: 0.5 mg/l
Nitrite: 0.1 mg/l
Nitrate: 50 mg/l
Chloride: 400 mg/l
Fluoride: 1500 µg/l
Phosphorus: 2200 µg/l
Sulphate: 250 mg/l
Magnesium:
Iron: 200 µg/l
Manganese: 50 µg/l
Benzo(1,12)perylene:
From this connection, which generally incorporates a water company meter, the consumer is responsible for all aspects of the supply and distribution of water.

Consumers' rights

Every consumer has the right to be supplied with water for domestic purposes from the water supply company's distribution network. New or modified existing connections can incur a charge by the water company for the following services:

1. New or replacement supply
2. Meter installation
3. Supply network reinforcement
4. Infrastructure charge.
All these charges relate to the anticipated daily demand (m3), peak flow rate (l/s), and number of draw-off fittings being served from the supply.
Consumers' water supply installations are required to comply with the Water Supply (Water Fittings) Regulations 1999 and the Water Supply (Water Fittings) (Amendment) Regulations 1999 (the Water Byelaws 2000 in Scotland). These regulations are enforced by the water company that supplies water to the consumer.

The regulations govern the whole of the consumer's installation from the connection to the water company's communication pipe and meter termination, to all the draw-off fittings, inclusive of any alterations.
The regulations require that no water fitting shall be installed, connected, arranged or used in such a manner, or by reason of being damaged, worn or otherwise faulty that it causes, or is likely to cause:
1. Waste
2. Misuse
3. Undue consumption
4. Contamination
5. Erroneous measurement.
The water supply companies are required to be notified of certain proposed installations, which may be subject to inspection and acceptance prior to receiving the water company's supply connection.

The regulations are a statutory instrument, which is supported by an interpretation of the regulations. The water supply companies have the authority to apply to the Regulator for relaxation of any part of the regulations considered inappropriate to a particular case.
Water regulations guide

The Water Regulations Advisory Scheme (WRAS) publishes a guide which provides formal guidance and recommendations on how the regulations should be applied to actual water installations, and includes the Water Byelaws 2000 (Scotland).
Water demand

The water demand for a building is dependent on a number of factors:

1. Type of building and its function
2. Number of occupants, permanent or transitional
3. Requirement for fire protection systems
4. Landscape and water features.
In dwellings the resident's water consumption is divided between the many appliances. A typical percentage breakdown provided by the Environment Agency is:

1. WC suite 32%
2. Washing machine 12%
3. Kitchen sink 15%
4. Bath
5. Basin 9%
6. Shower
7. Outside supply 3%
8. Miscellaneous
Overall consumption increases by around 10% during warmer months, when outdoor usage increases to over 25%. In general, consumption per person decreases with an increase in dwelling size, given the shared facilities.
For guidance on the total water demand for typical types of buildings refer to Table 2 for daily water demand. The figures stated have been assembled from a number of sources, including BS 6700, the Chartered Institution of Building Services Engineers (CIBSE) and Environment Agency studies, and can be used as a basis for good practice.
Water storage

The storing of water has a number of purposes:

1. Providing for an interruption of supply
2. Accommodating peak demand
3. Providing a pressure (head) for gravity supplies.

Design Codes recommend that storage is provided to cover an interruption of the incoming mains supply, in order to maintain a water supply to the building.

Water supply companies are empowered to insist on specific terms, including the volume or period of storage, within the terms of their supply agreement with a consumer. However many water supply companies only recommend that storage be provided in accordance with BS 6700, placing the responsibility and decision firmly on the consumers.
Table 2 provides guidance on typical water usage within buildings over a 24 hour period.

In designing storage capacities, account needs to be taken of the building and its location:

1. Period and hours of occupation
2. Pattern of water usage
3. Potential for an interruption of supply
4. Available mains pressure, and any inadequacies during the hours of building use
5. Health & Safety, prevention of bacteria, including legionella.
If a building is occupied 24 hours a day, then an interruption of supply will have a greater impact than for, say, an office, which may only be occupied for eight to ten hours. Where a building is occupied by elderly or infirm people then avoiding any disruption of the water supply is an important consideration, as they would be unable to easily leave the building should water become unavailable.

Clients, such as the National Health Service, require their buildings to be provided with storage to safeguard against an interruption of the mains supply. Industrial clients may well require storage to ensure their business and/or production is not interrupted. If water ceases to be available within a building then the occupiers will eventually leave, as toilet facilities will become unusable. It is likely that when an interruption of supply occurs the water available would be conserved as much as possible, thereby extending the time of occupancy beyond that anticipated under normal usage rates.
Table 2 Daily water demand (litres per day)

Dwellings (per bedroom):
- 1 bedroom: 210
- 2 bedrooms: 130
- 3+ bedrooms: 100
Student, en-suite (per bedroom):
Student, communal (per bed space): 90
Nurses' home: 120
Children's home: 135
Elderly, sheltered:
Elderly care home:
Prison (per inmate): 150
Art gallery (per person):
Library:
Museum (per person):
Theatre:
Cinema:

SUPPORTING INFORMATION
If the number of building occupants is not accurately known then as a guide the following criteria can be used. Offices, one person per 14m2 of floor space, or 1.0m2 of the dining area. Bars, one person per 0.8m2 of the public bar/eating area.
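The floor-area ratios above can be combined with a per-person daily allowance to estimate total demand when occupant numbers are unknown. A minimal sketch in Python; the 45 litres/person/day office allowance is an assumed figure for illustration only, not a value taken from Table 2:

```python
import math

def occupants_from_area(floor_area_m2, m2_per_person=14.0):
    """Offices: one person per 14 m2 of floor space (rounded up)."""
    return math.ceil(floor_area_m2 / m2_per_person)

def daily_demand_litres(occupants, litres_per_person):
    """Daily demand = occupants x per-person daily allowance."""
    return occupants * litres_per_person

people = occupants_from_area(700.0)         # 700 m2 office -> 50 people
demand = daily_demand_litres(people, 45.0)  # assumed 45 l/person/day
print(people, demand)                       # -> 50 2250.0
```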
When the water supply companies, regulations, or client requirements do not specifically dictate the period to cover an interruption of a mains supply, Table 3 provides recommendations for reasonable periods of storage, expressed as a percentage of the daily water demand.

Table 3 Period of storage (percentage of daily demand)

Hospitals
Nursing homes
Dwellings
Hotels, hostels
Offices
Shops
Library, museum, art galleries
Cinema, theatre
Bars, night-club
Sports facilities
Schools, colleges, universities
Boarding schools

Typical recommended periods range from 25% to 50% of the daily demand.
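The sizing rule above reduces to a product of the daily demand and the period-of-storage percentage. A sketch with assumed percentages (take actual figures from Table 3 or the water supply company's requirements):

```python
# Period-of-storage fractions as a share of daily demand.
# The values below are illustrative assumptions only.
STORAGE_FRACTION = {
    "office": 0.25,
    "dwelling": 0.50,
}

def storage_litres(daily_demand_l, building_type):
    """Recommended cold water storage for the given building type."""
    return daily_demand_l * STORAGE_FRACTION[building_type]

print(storage_litres(2250.0, "office"))  # -> 562.5 litres
```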
Water distribution

The water distribution installation requires to be able to deliver the correct flow and volume of hot and cold water when and where it is needed. The mains pressure can provide the initial means of delivering water into the building. The water supply companies are required to deliver their water to the boundary with a minimum pressure of 1.0 bar. Often their delivery pressure can be higher; however at times of high demand, the pressure will be closer to the minimum provision.
Type of system
The type and style of water distribution needed for a particular building will depend mainly on the building height and its use.
a. The building height will determine whether pumping will be required to deliver water to the highest level
b. The building use will determine the amount of storage that will be required.
The type of water system will need to be one or a combination of the following:
a. Direct mains fed
b. High level storage with gravity down feed
c. Pumped from a break cistern or storage provision.
For a one or two storey building in a locality where an interruption of the water supply is very infrequent and causes little inconvenience, there is an option for the water supply to be taken direct from the mains without storage being provided. If the provision of storage is possible at high level then the system could be enhanced to provide storage, coupled with it becoming a gravity down feed system. See Figure 1.
Figure 1 Supply to a two storey building (diagram: option of gravity feed tank at high level; rising main and drop to draw-off points; utility company mains below ground level)
Storage tanks

A building requiring a large water storage provision may not be able to accommodate it at high level, in which case a low level location will be needed, in conjunction with a pumped distribution system.

A combination of high and low storage can be considered if a gravity distribution is preferred for all or part of the building. This has an advantage of providing some storage in the event of an interruption of the water supply, or power supply to the pumps. A storage ratio of 2:1 low/high level is a typical arrangement.

Storage can comprise two compartments or cisterns/tanks in order that maintenance can be carried out without interrupting distribution.

For small storage quantities one piece cisterns can be used, which generally are of a low height construction. For storage of 2500 litres or more, sectional panel tanks with a centre divide may be considered more appropriate. Above 4000 litres storage twin cisterns/tanks may be considered appropriate. See Figure 2.
Figure 2 Storage cistern/tank layout (rectangular cisterns in parallel; outlets from opposite corners to inlets, balanced in length and configuration; incoming supply, balanced pipes not critical)

NOTE: Valves to be provided to enable one cistern/tank to be isolated whilst the other remains open.
Sectional tanks commonly have flanges, either internal or external. External flanges permit tightening without needing to enter the tank, and on the base permit the tank to be self draining through a single drain point, without further draining of any entrapped water between flanges. Such a feature reduces maintenance and assists the prevention of water stagnation, which can lead to harmful bacteria growth, including legionella.

In calculating the storage capacity a free board allowance is necessary to accommodate the float valve, overflow installations and any expansion from the hot water system. Depending on pipe sizes, commonly a 250 - 300 mm free board depth is required on cisterns/tanks having a capacity greater than 2500 litres. Raised ball (float) valve housings in conjunction with a weir overflow can provide an increased depth of water stored over the main area of the cistern/tank(s).
The location of the inlet and outlet connections is important. A cross flow through the cistern/tank needs to be achieved to assist the complete regular turn over of water throughout the storage period.
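The free board allowance reduces the usable depth, so the net capacity of a rectangular cistern/tank is the plan area multiplied by the height less the free board. A sketch using the 300 mm figure from the text:

```python
def net_storage_litres(length_m, width_m, height_m, freeboard_m=0.3):
    """Net stored volume after the free board allowance (1 m3 = 1000 l)."""
    water_depth = height_m - freeboard_m
    if water_depth <= 0:
        raise ValueError("free board exceeds tank height")
    return length_m * width_m * water_depth * 1000.0

# 2 m x 2 m x 2 m sectional tank with a 300 mm free board:
print(round(net_storage_litres(2.0, 2.0, 2.0), 1))  # -> 6800.0 litres
```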
Sub-divided, twin and multiple cisterns/tanks ideally should be installed in parallel to each other. The inlets require to be positioned at the same level to ensure they supply the cisterns/tanks in unison, and as far as possible at the same flow rate to assist a balanced throughput. The outlet connections and manifold pipe work need to be arranged with symmetrical and equal lengths, also to provide, as far as is possible, a balanced flow from the tanks.

The use of a delayed action float valve may also be considered to ensure a greater turn over of water.
Access to storage cisterns/tanks

Access for installation and maintenance is required. Table 4 is a guide.

For large buildings, accommodation for water storage has a significant impact. Table 5 provides an outline guide to the space that may be required.
Table 4 Access to storage cisterns/tanks

Location: dimension (mm)
Around: 750
Between tanks:
Above, allowing beams to intrude: 1000
Below, between supports: 600
For outlet pipe work, incl. access:
Tank construction thickness:
Insulation (may form part of tank): 25
Raised float valve housing: 300
Entry to tank: 800 dia
Table 5 Water storage plant room area

Storage (litres): plan area for tank heights of 1.5 m / 2 m / 3 m
5,000: 18 m2 / - / -
10,000: 31 m2 / 23 m2 / -
20,000: 50 m2 / 40 m2 / -
40,000: 72 m2 / 60 m2 / 50 m2
60,000: - / 80 m2 / -
100,000: - / 100 m2 / -
Gravity supplies

For gravity supplies to be effective, the storage requires to be at a sufficient height to deliver the water to the draw-off point at the required flow rate and pressure. The available head is the dimension between the bottom of the storage cistern/tank(s) and the highest draw-off point, or the draw-off point with the greatest head/pressure loss. See Figure 3.

The advantages of gravity supplies are:

a. Availability of water in the event of water mains or power failure
Figure 3 Gravity supplies available head (head of water = pressure available, in metres)
b. No pump running costs

c. Potentially less noise due to lower pipe flow velocities.

The disadvantages are:

a. Greater structural support

b. Larger pipe sizes due to limited available head, when compared to pumps

c. Lower delivery pressures.
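The available head converts to static pressure by p = ρgh; one metre head of cold water is roughly 9.81 kPa (about 0.1 bar). A sketch:

```python
RHO = 1000.0  # density of cold water, kg/m3
G = 9.81      # gravitational acceleration, m/s2

def head_to_bar(head_m):
    """Static pressure (bar) from a head of water (1 bar = 100 kPa)."""
    return RHO * G * head_m / 1.0e5

print(round(head_to_bar(1.0), 4))   # 1 m head  -> 0.0981 bar
print(round(head_to_bar(10.0), 3))  # 10 m head -> 0.981 bar
```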
Pumped supplies

The delivery of water by pumping will provide flexibility in the positioning of the storage cisterns/tanks. The delivery flow rate and pressure demanded by the system are met entirely by selecting the correct duty for the pumps. The pump set is required to deliver a constantly varying flow rate as draw-off points are randomly used by the occupants. The use of multi-stage variable duty and/or inverters is an advantage. See Figure 4.

Generally a minimum of two pumps are used, each having 100% system duty and controlled to enable them to be a stand-by to each other. To prevent high pressure overrun when demand is less than the design demand, a pressure limiting or variable control flow device needs to be fitted on the outlet from the pumps.
For high buildings a combination of pumped and gravity may be appropriate. The advantage of this is to provide a proportion of the daily water usage in cisterns/tank(s) at roof level, which would provide a gravity down feed service, and continue to provide water in the event of a failure of the pump. See Figure 5. Such a system would comprise of:

a. An incoming main

b. Low level break or storage cistern/tank

c. Pump set

d. High level cistern/tank(s)
Figure 4 Pumped supply layout (duty of pumps = static lift plus distribution loss and delivery pressure, in metres)
e. Cold water and hot water cold feed gravity distribution.
The low level pump set can be sized to provide a low volume, more frequent operation and high head to deliver the water to the tanks at roof level.
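The pump duty noted on Figure 4 is the sum of the static lift, the distribution (friction) losses and the residual pressure required at the outlet, all expressed in metres head. A sketch with assumed illustrative figures:

```python
def pump_duty_head(static_lift_m, friction_loss_m, delivery_head_m):
    """Total pump duty head in metres, per the Figure 4 note."""
    return static_lift_m + friction_loss_m + delivery_head_m

duty = pump_duty_head(
    static_lift_m=24.0,    # roof tank 24 m above the break tank (assumed)
    friction_loss_m=6.0,   # assumed distribution losses
    delivery_head_m=10.0,  # roughly 1 bar residual at the outlet
)
print(duty)  # -> 40.0 m head
```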
If a 'mains' water supply is required to be provided specifically for drinking water points or drink making equipment, then either of these can be supplied from the incoming main up to the number of floors that the available mains pressure will reach, and from the pumped rising main above that level; or entirely from the pumped rising main. See Figure 6.
Whilst all water supplied for domestic uses has to be suitable for drinking purposes, supplying drinking water points direct from incoming mains or pumped mains provides a cooler, more oxygenated supply for taste purposes.
Figure 5 Combined pump and gravity (cisterns/tank(s) on roof or roof plant room level; low level break or storage tank; incoming main; pump set)

Figure 6 'Mains' water for drinking (pumped 'mains' to drinking points and drinks machines; utility 'mains' to drinking points and drinks machines; low level cistern/tank; incoming main; pump set)
Hot water production

Hot water can be generated by a number of different methods, and the selection will depend mainly on the quantities of hot water required and the types of energy readily available.
The demand for hot water will vary considerably between types of buildings, governed by their occupants and the activities taking place. For example:
Office buildingswill require small quantities frequently and regularly throughout the ‘normal’ working day, and availability at other times as and when occupant’s ‘overtime’ working hours demand.
A factory with a production line will require sufficient hot water to meet the demand at breaks in the shift when the work force may all wish to wash hands etc.
A sports pavilion will need to be able to provide large quantities of hot water for team’s showering needs over a short period of time following games, whenever they occur.
Selection of hot water production

In the selection of the type of hot water production, the time available for re-heating is an important consideration. If a high volume or rapid re-heat rate is required then it is necessary to ensure that a sufficient energy capacity is available. If the energy capacity needed is not available then a greater volume of water storage would have to be provided to ensure hot water is available during the slower re-heat period.

Hot water production and storage temperatures are required to comply with the Health & Safety requirements for the minimisation of legionella bacteria. This demands a minimum storage temperature of 60°C to be attained, with a minimum secondary return (if provided) temperature of 50°C. See Figure 7.

Therefore in calculating the hot water demand for a building it is necessary to ensure that the output water temperature from the hot water production plant is never less than 60°C, and never less than 50°C throughout the distribution system.

The HSC 'Control of Legionella' Code L8 states that 50°C should be achieved within 60 seconds at all outlets.
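The energy needed to bring a stored volume up to temperature follows from the specific heat of water (4.187 kJ/kg·K, with 1 litre of water taken as 1 kg), consistent with the kg/s = watts / (4.187 x 1000 x Δt) relation used elsewhere in the guide. A sketch:

```python
SHC = 4.187  # specific heat capacity of water, kJ/(kg.K)

def reheat_power_kw(volume_l, t_cold_c, t_stored_c, reheat_hours):
    """Average power input to re-heat a vessel in the given period."""
    energy_kj = volume_l * SHC * (t_stored_c - t_cold_c)  # 1 litre ~ 1 kg
    return energy_kj / (reheat_hours * 3600.0)

# 1750 litres raised from 10 C to 65 C over a two hour re-heat period:
print(round(reheat_power_kw(1750.0, 10.0, 65.0, 2.0), 1))  # -> 56.0 kW
```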
Table 6 Hot water demand

Type of building: daily (litres) / stored (litres), per unit

Hotels:
Offices and general work places (per person):
- with canteen: 15 / 5
- without canteen:
Factory (per person):
Secondary school, 6th form college (per pupil):
Boarding school (per pupil): 114 / 55
Sports facilities (per person):
- Sports hall: 20 / 20
- Swimming pool: 20
- Field sports: 35
- All weather pitch:
Places of assembly, excl. staff:
- Bars:
- Night club:
- Restaurant (per cover):

SUPPORTING INFORMATION
The storage figures stated are based on a re-heat period of two hours, an inlet temperature of 10°C and a stored temperature of 65°C.

If the number of building occupants is not accurately known then as a guide the following criteria can be used. Offices, one person per 14m2 of floor space, or 1.0m2 of the dining area. Bars, one person per 0.8m2 of the public bar/eating area.
Figure 7 Hot water temperature protocol (store ~65°C; HWS distribution 60°C; HWS secondary circulation >50°C; hot water cold feed <20°C)
When a conventional bulk hot water vessel is used it is necessary to ensure that the contents of the whole vessel achieves the correct stored water temperature as stratification can occur. To overcome this situation the storage vessel should incorporate the following features:
a. Base inlet hot water cold feed supply
b. Top outlet hot water outlet flow
c. Convex ends to vessel
d. Provide a ‘shunt’ pump to move the hot water from the top of the vessel to the base to avoid stratification.
Hot water demand

When assessing the hot water production requirements for a building it is necessary to determine the peak demand. The peak demand is the volume of hot water required during the building's period of greatest usage. This may be over an hour, or a shorter period, dependent on the occupants and activities taking place.

Having determined the peak demand, the volume of hot water needing to be stored can be selected, and the rate of recovery and the associated energy input needed can be established.

The building's total daily hot water usage is relevant to the assessment of the peak demand. Once the daily usage is determined then the more critical peak demand can be assessed.

Traditionally hot water peak usage was based on a two hour storage re-heat period and this has generally proved to be a satisfactory benchmark for peak demands for that period.

Table 6 schedules a compilation of figures currently recommended by the water industry's design codes, with additional categories added as considered useful. The recommended storage volumes are based on a 65°C storage temperature and a two hour re-heat period, i.e. a bulk storage vessel. This data should be considered as representative of capacities which have not given rise to complaints of inadequacy.
Two hour re-heat

The two hour re-heat storage volume figures can provide a guide to the peak water volume used during a peak two hour usage period. The same hot water output could also be achieved by the use of low volume/rapid reheat 'semi-storage' types of hot water generators, if the energy input capacity is available.

The 'semi-storage' type of hot water heaters can meet shorter peak demand periods, i.e. 1 hour or less, although detailed secure information about peak period demands during periods of less than 1 hour is not sufficiently available, and therefore a design risk margin will be required.

The established two hour peak usage figures cannot simply be evenly sub-divided into shorter periods without the risk of seriously under estimating the actual hot water volume that will be required during that shorter period. The shorter the period, the greater the dis-proportion of the two hour peak storage figure will be required.
For example, the recommended two hour re-heat period storage volume for a budget hotel is 35 litres per bedroom. For a 50 bedroom hotel the stored volume would need to be 1750 litres, which when supplemented by the re-heated water during the envisaged peak two hour draw-off period, less the loss (25%) of hot water due to the mixing effect of the incoming cold water feed, is capable of providing a notional 2625 litres, should that demand occur. This is because 1750 litres of 65°C hot water is notionally available at the start of the notional peak draw-off period, and whilst the stored hot water is being drawn off it is also being re-heated at a rate of 1750 litres per two hours, less the loss through the mixing of incoming cold water and the stored hot water (25%).
Therefore it can be seen that the stored water is there to provide for a peak 1750 litre draw-off occurring over any period from, say, ten minutes upwards.
For consideration purposes 1750 litres equates to 35 baths, each using 50 litres
of 60°C stored hot water. Dependent on
the bath usage ratio of either 1200, 2400,
or 4800 seconds frequency of use (see simultaneous demand data) the hot water stored could be used up after a 63 minute period. Alternatively 1750 litres could provide for 73 persons having a shower, each lasting 5 minutes using 24 litres of 60°C stored hot water (mixed with cold). Dependent on the shower usage rate of 900, 1800, or 2700 seconds frequency of use, the hot water stored could be used up after a 45 minute period. These two examples are based on a peak statistical usage which would likely not reoccur during the remaining time of the two hour re-heat period.
A 'semi-storage' hot water generator required to meet the same demand for baths would need to be capable of providing approximately a 3.3 litre per second flow rate of 65°C continuous hot water output, assuming an initial stored volume capacity of 500 litres.
These potential peak demands could be considered as being extreme examples. However they clearly demonstrate the demands capable of being put on hot water generation, when taking account of the maximum simultaneous usage that is imposed on draw-off fittings by the building occupants, and accordingly has to be considered for design purposes.
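The hotel figures above can be checked directly: the stored volume plus an equal re-heated volume over the two hours, less the 25% mixing loss, gives the notional output. A sketch reproducing the worked example:

```python
def notional_two_hour_output(stored_l, mixing_loss=0.25):
    """Stored water plus an equal re-heated volume, less mixing loss."""
    return (stored_l + stored_l) * (1.0 - mixing_loss)

stored = 50 * 35  # 50 bedrooms x 35 litres per bedroom = 1750 litres

print(notional_two_hour_output(stored))  # -> 2625.0 litres
print(stored // 50)  # baths at 50 litres of stored hot water -> 35
print(stored // 24)  # 5 minute showers at 24 litres -> 72 (the text rounds to 73)
```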
Figure 8 Typical demand pattern histogram

Whatever the building, the likely pattern of hot water usage should be assessed and considered. The hot water usage will be directly related to the building function, its occupancy and the type of
activity that is likely to take place. In determining the pattern of usage, it is important to differentiate between a maximum daily demand and an average daily demand, so that the implications of the system not meeting the building's hot water requirements can be recognised, and the maximum requirements designed for where necessary.

Measured quantities of hot water consumption should not stand alone as a sizing guide. The rate at which these amounts are drawn off must also be considered. To project the demand pattern over the operating period of the building, an hour by hour analysis of likely hot water usage should be made, taking into account the number of occupants, the type and level of activity and any other factors that may affect hot water demand. The projected pattern of demand should be recorded in the form of a histogram profile.

Typical examples of daily demand in various types of buildings are illustrated in Figures 8 and 9.
By establishing a hot water
demand
histogram a representative peak demand volume can be established. Typically the peak hour is between 15-20% of the day’s total usage.
When selecting a ‘semi-storage’hot water production unit(s) it needs to be recognisedthat the small stored volume is there to meet the short period peak draw-offs that occur in any water supply system. The shortest of these peak draw- offs is the ‘maximum’simultaneous demand litre per second flow rate figure calculated from the sum of the draw off ‘demand’ or ‘loading’ units used for pipe sizing. However, periods of time that these flow rates occur are very short, and are based on the period of individual draw- off, i.e. length of time to fill a basin,
Period
18
VI z!1001
Time (hours)
VI
.-
2 .E 2000
Hotel
r
C e 1000-
c
ii
c.
2ot0
rj
“0
Time (
urs)
Figure 9 Examples of daily demand patterns for commercial premises
Reproduced from CIBSE Guide G: Public Health Engineering,by permission of the Chartered Institution of Building Services Engineers.
sink, or bath, have a shower, and the number of times the draw-off is used during the peak demand period, i.e. every 5, 10, or 20 minutes, or more. The 'maximum simultaneous demand' must not be applied to periods greater than the period and frequency of 'maximum simultaneous demand'.
Hot water generators
The production of hot water can be achieved by a varied number of energy sources:
1. Electric, generally with direct immersed elements
2. Gas, either direct, or indirect by a dedicated circulator
3. Low Temperature Hot Water (LTHW) boiler plant, dedicated or more likely forming part of the space heating plant
4. Steam, when available from a central plant facility.
Energy forms which provide a direct means of heating hot water, i.e. electric and gas in particular, are the most effective in terms of efficiency because of the least loss of heat during the heat transfer process. Sharing hot water generation with space heating plant can decrease the energy efficiency through the additional transfer process and less efficient operation when space heating is not needed.
Solar heating, when available and viable
is an excellent supplementary heat
source and effective in reducing annual energy tariffs.
Commonly used forms of hot water heating are:
Dwellings and small buildings:
Electric, or gas combination (HWS & Heating) boilers.
Offices:
Electric, local or ‘point of use’ water heaters.
Larger premises and sports facilities:
Gas direct fired water heaters.
Local or central plant
The adoption of local or central plant is generally dependent on the type of building, where hot water is needed and the volume required. For toilet wash basin 'hand rinse' purposes only, where relatively little hot water is required, a local heater positioned adjacent to the draw-off fittings would be appropriate. This may be considered particularly suitable for office and school toilets. The advantages of this type of installation can be low installation, energy consumption, and maintenance costs, plus alleviating the need for secondary circulation pipework and pump to maintain distribution temperatures.
Vented or unvented generators
A vented hot water generator is supplied by a gravity hot water down feed and expansion pipe, and has an open vent pipe over the feed cistern/tank to provide for pressure relief, in addition to
[Figure 10: Vented hot water generator, showing the hot water generator with the option of a combined cold feed and open vent.]
[Figure 11: Unvented hot water generator, showing the HWS distribution with temperature relief valve, expansion vessel, check valve and pressure relief valve.]
expansion. These units generally are storage type units rather than semi-storage units. As an open vessel the maximum pressure is the static head from the cold water feed cistern/tank. Individual vessels should be provided with their own open vent. A pressure and/or temperature relief valve can be considered in place of a separate open vent, subject to the vent being combined with the cold feed/expansion pipe, and there being no means of closing off the vent.
Unvented hot water generators are generally supplied from the Utility Company's mains, or pumped distribution systems. Provision for expansion and pressure/temperature relief is provided by mechanical fittings to provide a safe system. Unvented units are commonly semi-storage types. The pressures that they are subjected to are the operational head of the 'mains' or 'pumped' system, inclusive of any 'closed head' situation. For unvented units with a capacity of 112 litres or less, the Building Regulations require that the unit is provided complete with all its safety fittings. For larger unvented units the 'Designer' is required to specify the safety fittings in accordance with the Water Regulations.
Multiple hot water generators
[Figure 12: Multiple hot water heaters, showing hot water generators in parallel with pipework connections strictly balanced in length and configuration; hot water flow distribution, secondary return and hot water cold feed.]
Where multiple hot water generators become necessary for capacity/output, and/or standby/back-up purposes, care must be taken to ensure that the interconnecting pipework configuration provides a balanced use and flow through the two or more hot water generators.
Secondary circulation and trace heating
Secondary circulation or trace heating needs to be provided when the length of hot water pipework, and the volume of water the pipework contains, becomes such that it would take an unreasonable length of time to draw off the cool water. The Water Regulations Guide recommends that un-circulated hot water distribution pipes should be kept as short as possible and, if uninsulated, should not exceed the maximum length stated.
Table 7 Water Regulations Guide: maximum lengths of uninsulated pipes
Pipe OD | Length | Draw-off time (seconds)
[table values not reproduced]
The 'seconds' column illustrates the approximate length of time it would take to draw off the cool water, based on the draw-off rate of a wash basin tap with a 0.15 l/s flow rate. The Health & Safety legionella Code L8 states that the maximum draw-off period for hot water to reach its correct temperature shall be 60 seconds.
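The 'seconds' figures follow directly from the volume of water standing in the dead-leg divided by the draw-off rate. A sketch of that calculation in Python; the 10mm bore and 12m length are illustrative assumptions, not values from the Regulations:

```python
import math

def draw_off_seconds(bore_mm, length_m, flow_l_per_s=0.15):
    """Time (s) to draw off the standing (cooled) water in a dead-leg:
    pipe volume in litres divided by the tap flow rate, here the
    0.15 l/s wash basin rate cited in the guide."""
    volume_litres = math.pi * (bore_mm / 2000) ** 2 * length_m * 1000
    return volume_litres / flow_l_per_s

# Illustrative 10mm bore dead-leg, 12m long
t = draw_off_seconds(10, 12)
print(f"{t:.1f} s")  # compare against the 60-second Code L8 limit
```

The same routine can be used in reverse to find the longest dead-leg that still meets the 60-second criterion for a given bore.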
Insulating the pipes does not stop the hot water cooling; it only slows down the cooling rate. Once the temperature has dropped below 50°C, the Health & Safety '60 seconds' maximum length of time criterion applies. The insulating of pipes is desirable as it delays the cooling rate of the hot water, enabling it to be 'useful' for longer, and by that means saves energy and the associated costs.
Once it becomes necessary to provide secondary circulation or trace heating, it should be extended to serve the whole of the hot water distribution system, making un-circulated or trace heated sections of pipework as short as practicably possible, and not the maximum lengths stated in the table.
Local heaters
These generally comprise small self-contained electrically heated hot water units individually placed near to the position that hot water is required, serving either a single draw-off or a number of draw-offs which are adjacent to each other. Gas heaters are available, but not commonly used due to the need to make provision for flues. The purpose of such units is to provide hot water in a simple manner, in particular where the draw-off is remote and only low volumes of hot water are required, such as for hand rinsing. Office toilet accommodation and single showers are particularly suited to these types of units.
A number of different types of local heaters are available. Most common are 'unvented units' supplied directly from the incoming mains or from the main cold water distribution system within the building. Usually a minimum inlet pressure is required, often being 1.0 bar or above, subject to the manufacturer's instructions.
The Water Regulations govern the requirements for unvented water heaters. Heaters with a capacity of less than 15 litres are classed as instantaneous and need no temperature or pressure relief valves, or expansion valves or vessels. Units above 15 litres capacity require such devices.
Control of legionella
The means of controlling legionella bacteria is determined by the Health & Safety Approved Code of Guidance L8.
Figure 13 Design temperature and associated risks (CIBSE TM13)
[Chart: temperature scale from 100°C down to 40°C, against the following applications:]
A. Steam humidification
B. LTHW heating
C. Hot water storage
D. Hot water tap outlets
E. Cold water storage, sprinklers
F. Spray humidification
G. Mains cold water and air cooling coil condensate
The Code identifies specific practical guidance on how this is to be achieved in water supply systems. The key aims are:
1. Maintain cold water below 25°C
2. Maintain stored hot water between 60-65°C
3. Maintain hot water distribution above 50°C, and preferably at 55°C
4. Insulate all cold and hot water storage vessels and distribution pipework
5. Minimise the length of un-circulated and non trace heated hot water pipes
6. Avoid supplies to little used or unused draw-off fittings
7. Maintain balanced use and flows through multiple cold water cisterns/tanks and hot water vessels. See Figure 13.
NOTE:
For further details please see 'legionella section', contained within this guide.
Safe water temperatures
Appliance | Application | Max temp
Bidet | All | 38°C
Shower | All | 41°C
Wash basin | Hand rinse only, under running water, i.e. toilets | 41°C
Wash basin | For an integral part of ablution, i.e. bathroom | 43°C
Bath | All | 44°C
Bath | Where difficult to attain an adequate bathing temperature | 46°C
Graph 1 Temperature and duration of exposure, sufficient to cause burns in thin areas of skin
[Graph: temperature (°C) against time in seconds, up to 10,000 seconds.]
Design Codes for Health buildings require that all draw-off points that can be used by patients have the temperature of the hot water limited to a safe temperature.
The Health Codes also extend to elderly care homes and sheltered dwellings, which are under the responsibility or licence of the Local Authority. Other buildings that require consideration are nurseries, schools, and anywhere where there is a 'duty of care' by the building owner, landlord, and/or management.
The temperature control is achieved by the deployment of single control mixer taps or valves. The type of valve can vary dependent on its application.
The types of mixing valve, as defined by the Health Guidance Note, are:
Type 1 - a mechanical mixing valve, or tap including those complying with BS 1415 part 1, or 885779 incorporating a maximum temperature control stop device.
Table 9 Recommended application for mixing valves
Wash basins for persons not being at risk | Type [not legible]
Wash basins for persons at risk, i.e. elderly, infirm, young, etc. | Type 3
Type 2 - a thermostatic mixing valve, complying with BS 1415 part 2.
Type 3 - a thermostatic mixing valve with enhanced thermal performance complying with the NHS Estates Model Engineering Specification D08.
The efficient management of water usage and supplies is necessary to comply with National and International environmental conservation 'best practice' aims, and is covered in detail within the Resource Efficient Design section. The Water Regulations incorporate these requirements under their prevention of 'waste, misuse, and undue consumption', and also the specification for reduced capacity of WC flushing cisterns and limits to automatic urinal flushing cisterns.
The key water conservation areas within a water supply installation are:
1. Low volume flush WCs
2. Urinal controls
3. Draw-off tap controls
4. Clothes and dish washing machines
5. Leak detection
6. Limited need for garden and landscape watering
7. Rainwater reuse
8. Grey water recycling.
The DETR (now DEFRA) Water Conservation in Business document (2000) provides proposals and examples for water savings. An example the document proposes is the potential water savings that could possibly be made in an office building.
Table 10 Office water consumption
Activity | % water used | Anticipated % saving
WC flushing | 43% | 30%-60%
Washing | 27% | 50%-60%
Urinal flushing | 20% | 50%-80%
Miscellaneous | 10% | -
Significant additional water reductions can be made by incorporating leak detection systems, grey water recycling, rain water collection and water efficient garden and landscape watering.
The Building Research Establishment provides an assessment method called 'BREEAM', which provides a range of performance criteria to assess water economy in buildings. For an office building the BREEAM design and procurement performance scoring gives a 'pass' for 200 points, and an 'excellent' for 490 points. Table 11 shows the importance of water conservation design and management, which overall represents 62 points out of the total BREEAM score.
Table 11 BREEAM 98 for offices water assessment prediction check list
Item
Where predicted water consumption is 10-20m3 per person per year
Where predicted water consumption is 5-9m3 per person per year
Where predicted water consumption is <5m3 per person per year
Where a water meter is installed to all supplies in the building
Where a leak detection system is installed covering all mains supplies
Where proximity detection shut off is provided to water supplies in WC areas
Where there are established and operational maintenance procedures covering all water systems, taps, sanitary fittings and major water consuming plant
Where water consumption monitoring is carried out at least every quarter using historical data
Where storm water run off is controlled at source
[Score values for each item are not reproduced.]
The other assessment criteria for the building are building performance, design procurement assessments, and management and operational assessments.
The Water Regulations Guide is published by the Water Regulations Advisory Scheme (WRAS), incorporating the Department of the Environment, Transport, and the Regions (DETR, now DEFRA) Guidance and Water Industry recommendations. The Guide interprets the Regulations and identifies how water supply systems shall be installed to comply with the Statutory Regulations.
The prevention of contamination is the overall main aim of the Water Regulations, and the identification of risks is one of the main changes between the previous Water Byelaws and the Water Regulations. The risks are categorised into five Fluid Category definitions. Refer to Table 13.
The risk of contamination arises through back pressure and/or back syphonage, termed 'backflow', which is the source of risk into the water distribution system.
To protect against backflow there is a range of mechanical and non-mechanical devices. Reference to the Water Regulations is required for the selection of the appropriate device to match the fluid category risk.
Air gaps are the most effective means of protecting against backflow and the resulting risk of contamination, and a correctly provided air gap protects against all fluid categories from 1 up to 5. All other means of protection, will protect between Fluid Categories 1-4.
Notification
The Water Regulations requires that notice shall be given to the water undertaker (company) of work intending to be carried out, which shall not begin without the consent, and shall comply with any conditions set by the water undertaker (company).
Notice of the work shall include details of:
Who requires the work
Who is to carry the work out
Location of premises
A description of the work
Name of the approved contractor, if an approved contractor is to carry out the works.
Table 12 Notifiable installations
The erection of a building or other structure, not being a pond or swimming pool.
The extension or alteration of a water system on any premises other than a house.
A material change of use of any premises.
The installation of:
(a) a bath having a capacity, as measured to the centre line of overflow, of more than 230 litres
(b) a bidet with an ascending spray or flexible hose
(c) a single shower unit (which may consist of one or more shower heads within a single unit), not being a drench shower installed for reasons of safety or health, connected directly or indirectly to a supply pipe, which is of a type specified by the regulator
(d) a pump or booster drawing more than 12 litres per minute, connected directly or indirectly to a supply pipe
(e) a unit which incorporates reverse osmosis
(f) a water treatment unit which produces a waste water discharge, or which requires the use of water for regeneration or cleaning
(g) a reduced pressure zone valve assembly or other mechanical device for protection against a fluid which is in fluid category 4 or 5
(h) a garden watering system, unless designed to be operated by hand, or
(i) any water system laid outside a building and either less than 750mm or more than 1350mm below ground level.
The construction of a pond or a swimming pool with a capacity of more than 10,000 litres which is designed to be replenished by automatic means and is to be filled with water supplied by a water undertaker.
Crown copyright 1999, with the permission of the Controller of Her Majesty's Stationery Office.
Backflow prevention
It is necessary to protect against the likelihood of the backflow of contaminated water back into the water supply installation. The contaminated water is any water that has been delivered to the draw-off point and has left the water supply system. The degree of contamination is as defined by the Water Regulations Guide, categorised as Fluids 1 to 5. Refer to Table 13.
Table 13 Water regulation fluid categories and examples
Fluid Category 1: Wholesome water supplied by a water undertaker and complying with the requirements of regulations made under section 67 of the Water Industry Act 1991. (The incoming water supply.)
Fluid Category 2: Water in fluid category 1 whose aesthetic quality is impaired owing to a change in temperature, or the presence of substances or organisms causing a change in its taste, odour or appearance.
Fluid Category 3: Fluid which represents a slight health hazard because of the concentration of substances of low toxicity, including any fluid which contains:
a. ethylene glycol, copper sulphate solution or similar chemical additives, or
b. sodium hypochlorite (chloros and common disinfectants).
Fluid Category 4: Fluid which represents a significant health hazard because of the concentration of toxic substances, including any fluid which contains:
a. Chemicals, carcinogenic substances or pesticides (including insecticides and herbicides), or
b. environmental organisms of potential health significance.
Fluid Category 5: Fluid which represents a serious health hazard because of the concentration of pathogenic organisms, radioactive or very toxic substances, including any fluid which contains:
a. Faecal matter or other human waste;
b. butchery or other animal waste; or
c. pathogens from any other source.
Water supplied directly from a water undertaker’s main.
Mixing of hot and cold water supplies.
Domestic softening plant. Drink vending machines having no ingredients injected into the distribution pipe. Fire sprinkler systems without anti-freeze. Ice making machines. Water cooled air conditioning units (without additives). Water in primary circuits and heating systems in a house. Domestic wash basins, baths and showers. Domestic clothes and dishwashing machines. Home dialysing machines. Drink vending machines having ingredients injected. Commercial softening plant. Domestic hand held hoses. Hand held fertilizer sprays. Irrigation systems.
General: Primary circuits and central heating systems in other than a house. Fire sprinkler systems using anti-freeze solutions. House and gardens: Mini-irrigation systems without fertilizer or insecticide application such as pop-up sprinklers or permeable hoses. Food processing: Food preparation, dairies, bottle washing apparatus. Catering: Commercial dishwashing machines, bottle washing apparatus, refrigeration equipment. Industrial and commercial installation: Dyeing equipment. Industrial disinfecting equipment. Printing and photographic equipment. Car washing and degreasing plant. Commercial clothes washing plants. Brewery and distillation plant. Water treatment plant or softeners using other than salt. Pressurised fire fighting systems. Pan washers. Mortuary and embalming equipment. Hospital dialysing machines. Water for fire fighting purposes.
Commercial agricultural: Commercial irrigation outlets below ground or at ground level and/or permeable pipes, with or without chemical additives. Insecticides or fertiliser applications. Commercial hydroponic systems.
The list of examples of applications shown above for each fluid category is not exhaustive, others will present themselves and require to be matched to a Fluid Category; possibly by seeking guidance from the Water Regulations Advisory Scheme.
The Categories distinguish between domestic use, meaning dwellings, and non-domestic use, meaning commercial buildings.
3. The Fluid Categories define that the water within sinks, baths, basins and showers in domestic premises is a lesser Fluid Category risk than the water within sinks, baths, basins and showers in medical premises, i.e. hospitals.
Crown copyright 1999, with the permission of the Controller of Her Majesty's Stationery Office.
Distribution pipe sizing
The sizing of a water distribution pipe system is achieved by establishing the anticipated flow rates, in litres per second (l/s), taking account of the diversity of use of all the various types and numbers of appliances and equipment requiring a water supply connection.
In practical terms all the water draw-off points are not in use at the same time. The actual number in use, in relation to the total number capable of being used, varies dependent on the occupational use in the various types of building.
Probability theory
The use of probability theory in assessing simultaneous demand is only fully applicable where large numbers of appliances are involved, as probability theory, as the name implies, is based on the likelihood of situations occurring and therefore its predictions may on occasions be at variance with the actual demand.
The criterion for this occurrence is deemed to be reasonable if it is taken as 1%. This has been established to be reliable in that it has not led to an under-assessment of the simultaneous demand calculation.
The probability of a particular number of draw-offs occurring at any one time is determined by dividing the time for the appliance to be filled by the time between successive usages of the appliance, to arrive at the probability factor.
P = t / T
where:
t = time in seconds of appliance filling
T = time in seconds between successive usage of the appliance
Graph 2 Probability graph
[Graph: probability of discharge factor against number of appliances.]
Table 14 Simultaneous demand - base data
[Table listing, for each type of appliance (e.g. bucket sink with 15mm taps; slop hopper, cistern only; slop hopper, cistern/taps; domestic clothes washing machine), the fill time t, use frequency T and resulting probability factor P. Values are not fully legible; legible figures include t = 75 s, T = 3600 s and probability factors of 0.030, 0.017, 0.125, 0.100 and 0.042.]
An example of this application, which utilises the probability graph, is if 100 appliances each take 30 seconds to be filled, and are used at 1200 second (20 minute) frequency intervals, then:
P = t / T = 30 / 1200 = 0.025 probability
Using the probability graph, and the probability factor in this example, then out of the 100 appliances being supplied only 7 would be in use at any one time.
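The probability graph's answer can be reproduced numerically: with P = 0.025 per appliance, find the smallest number of simultaneous draw-offs that is exceeded no more than 1% of the time (the criterion stated earlier). A sketch in Python using the binomial distribution:

```python
from math import comb

def appliances_in_use(n, p, exceedance=0.01):
    """Smallest m such that more than m simultaneous draw-offs occurs
    with probability <= exceedance (the guide's 1% criterion),
    assuming n independent appliances each in use with probability p."""
    cumulative = 0.0
    for m in range(n + 1):
        cumulative += comb(n, m) * p**m * (1 - p) ** (n - m)
        if cumulative >= 1 - exceedance:
            return m
    return n

# 100 appliances, 30 s fill time, used every 1200 s -> P = 30/1200 = 0.025
print(appliances_in_use(100, 30 / 1200))  # -> 7, matching the worked example
```

This agrees with the figure of 7 read from the probability graph for 100 appliances at a factor of 0.025.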
Simultaneous demand
The number of draw-off points that may
be used at any one time can be estimated by the application of probability
theory.
The factors, which have to be taken into account, are:
a. Capacity of appliance in litres
b. Draw-off flow rate in litres per second
c. Draw-off period in seconds, i.e. time taken to fill appliance
d. Use frequency in seconds, i.e. time between each use of the appliance.
All of these factors can vary.
The capacity of wash basins, sinks and other appliances all vary. Draw-off tap sizes and flow rates differ between appliances. The frequency of use of the appliances is different in varying locations, both within a building, and within different buildings.
Frequency of use
This is the time between each use of the appliance. Refer to Tables 14 and 15.
Low use is deemed to have 1200 seconds (20 minutes) between each use, and is appropriate for dwellings, and in other buildings where appliances are dedicated for use by a single person, or a small group of people, as a private facility.
Medium use is deemed to have 600 seconds (10 minutes) between use, being appliances that are available to be used by a larger group of people, as and when they require on a random basis with no set time constraint, typically associated with 'public use' toilets.
High use is deemed to have 300 seconds (5 minutes) between each use, for appliances to be used by large numbers of persons over a short period, as would be the case within buildings such as theatres, concert halls and fixed period sports events.
These figures are considered as representative of flow rates which have not given rise to complaints of inadequacy.
Care is required with the 'loading unit' method of calculation where usage may be intensive. This is particularly applicable to field sports showers, theatre toilets, and factory wash rooms, etc., where it is necessary to establish the likely period of constant usage and provide the flow rate to suit.
Flow rates
To determine the design maximum simultaneous flow rate for a specific water distribution system the following process is necessary:
a. Identify the type and position of all the appliances and equipment requiring a water supply.
b. Determine the pipe routes and location for the incoming mains, cold & hot water distribution, and the locations of storage cisterns/tanks and hot water generators.
c. Sketch a scaled plan and a schematic or an isometric of the pipework distribution and plant layout.
d. Identify the type and position of all fittings, i.e. couplings, elbows, tees; all valves (isolation, service, check, double check, pressure reducing); all cisterns/tanks and vessel entry and exit arrangements.
e. Identify all types of draw-off fitting attached to appliances and equipment.
f. Establish the mains pressure available, in metres, and the cistern/tank head available, in metres.
g. Identify the index run, i.e. the furthest and/or highest outlet, and greatest draw-off volume.
Having established items a-g, proceed to add the sanitary and appliance loading units, loading each section of pipe with the number of loading units that it is required to carry.
Table 15 Loading units
[Table listing loading units for Low, Medium and High use for each type of appliance: basin, 15mm separate taps; bath, 15mm sep/mix tap; WC suite, 6-litre cistern; shower, 15mm head; urinal, single bowl/stall; bidet, 15mm mix tap; hand spray, 15mm; bucket sink, 15mm taps; slop hopper, cistern only; slop hopper, cistern/taps; clothes washing m/c, domestic; dishwasher m/c, domestic. Values not fully legible.]
This is best achieved on either a plan or isometric of the system. A useful technique is to use a four-quarter frame. See Figure 14.
The pipe size at this initial stage is provisional, in order to enable the calculation to proceed. The provisional pipe size can be established by calculating the available head or pressure, in metres head, and dividing it by the overall length of the index circuit, i.e. the longest pipe route with the greatest duty and least head or pressure, plus a 30% factor for, at this stage, an assumed loss through fittings. The result of the provisional calculation is a 'head loss in metres, per metre run of pipe'. This figure can be used with the pipe sizing charts to establish the assumed or provisional pipe size. As the loading unit for each pipe section is established, enter the figures into the calculation sheet. See Figure 15.
Figure 14 Pipe section loading
[Four-quarter frame recording loading units (LU), flow rate (l/s), pipe size and velocity for each pipe section.]
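The provisional figure described above reduces to a one-line calculation. A sketch in Python; the 5 m head and 40 m index run are hypothetical inputs:

```python
def provisional_head_loss_per_m(available_head_m, index_length_m,
                                fittings_allowance=0.30):
    """Head loss (metres head per metre run) available for pipe sizing:
    the available head divided by the index circuit length increased
    by a 30% allowance for loss through fittings."""
    effective_length = index_length_m * (1 + fittings_allowance)
    return available_head_m / effective_length

# Hypothetical: 5 m head available, 40 m index run
rate = provisional_head_loss_per_m(5, 40)
print(f"{rate:.3f} m head per m run")  # enter the pipe sizing charts with this
```

The resulting figure is then read against the pipe sizing charts, with the design flow rate, to pick the provisional diameter.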
Pipe sizing chart definitions
Pipe reference
Numbered or lettered sections of the system identifying the start and finish.
Loading units (LU)
Simultaneous maximum demand figure being carried by that section of pipe.
Flow rate (l/s)
Litres per second derived from the loading unit figure.
Assumed pipe diameter (mm)
Nominal internal diameter established from the available head divided by the index circuit length plus 30% for loss through fittings.
Length (m)
Length of pipe, in metres of the pipe section being sized, measuring its total route length.
Pipe losses (mh/m)
In metres head per metre of pipe, taken from the pipe sizing charts.
Velocity (m/s)
Velocity, in metres per second of the water flowing through the pipe being sized, taken from the pipe sizing charts.
Pipe loss (mh)
In metres head, being the multiplication of the pipe length and the metres head loss per metre run of pipe length.
Figure 15 Pipework isometric and calculation sheet
[Isometric showing sink, bucket sink and other draw-offs, with the note 'Continue for the remainder of the system'.]
Fittings head loss (mh)
In metres head, for each pipe fitting and valve on the section of pipe being sized.
Total head loss (mh)
In metres head, being the total sum of the pipe head loss and the fittings head loss.
System head loss (mh)
In metres head, being the total sum of all the sections of pipe relevant to the source of head available.
Total head available (mh)
In metres head, being either the mains or pump pressure and/or the height of the gravity feed cistern/tank. See Table 16.
Final pipe size (mm)
In mm, nominal internal diameter, confirming the pipe size for that section of pipe.
Loss of head through fittings
Refer to Table 19: loss of head, in metres, through various pipeline fittings and terminal outlets, against a range of flow rates and sizes.
Loss of head through Tees should be assumed to occur at the changes of direction only.
For fittings not identified, reference shall be made to the respective manufacturer's literature.
Where the flow rate falls between the stated figures, the proportional flow rate difference between the higher and lower figure shall be equally applied to the higher and lower head loss figure.
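That proportional adjustment is ordinary linear interpolation between chart entries. A sketch in Python; the chart values used below are made up for illustration, not taken from Table 19:

```python
def interpolate_head_loss(flow, flow_lo, loss_lo, flow_hi, loss_hi):
    """Linearly interpolate head loss between two tabulated flow rates:
    apply the proportional flow rate difference to the head loss figures."""
    fraction = (flow - flow_lo) / (flow_hi - flow_lo)
    return loss_lo + fraction * (loss_hi - loss_lo)

# Made-up chart entries: 0.10 l/s -> 0.040 mh/m and 0.20 l/s -> 0.080 mh/m
loss = interpolate_head_loss(0.15, 0.10, 0.040, 0.20, 0.080)
print(f"{loss:.3f} mh/m")  # approximately 0.060 mh/m
```

The same interpolation applies to the fittings head loss figures in Table 19.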
Table 16 Comparative pressure figures (extract)
Metres head | kN/m² | bar
16 | 156.91 | 1.57
16.32 | 160 | 1.6
17 | 166.71 | 1.67
17.34 | 170 | 1.7
18 | 176.52 | 1.77
18.36 | 180 | 1.8
The use of various units to describe pressure can cause confusion. The calculation of 'head loss' in this Guide Section is declared in 'metres head' as a readily usable means of measurement. Metres head can be easily converted to bar, kN/m² and/or Pascal pressure figures, as there is a close correlation between all of them. The table above provides the comparative figures for ease of reference.
1 litre of water weighs 1 kilogram, or 1000 grams.
1 cubic metre of water = 1000 litres.
1 metre head of water = 9810 Pa, or 9.81 kPa (kN/m²), or approximately 0.098 bar.
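These conversions can be wrapped in a small helper, using g = 9.81 m/s² and a water density of 1000 kg/m³:

```python
def metres_head_to(m_head):
    """Convert metres head of water to Pa, kN/m2 (kPa) and bar,
    taking g = 9.81 m/s2 and water density = 1000 kg/m3."""
    pa = m_head * 1000 * 9.81
    return {"Pa": pa, "kN/m2": pa / 1000, "bar": pa / 100_000}

print(metres_head_to(1))   # 9810 Pa, 9.81 kN/m2, ~0.098 bar
print(metres_head_to(16))  # ~157 kN/m2, ~1.57 bar (compare Table 16)
```

Small differences from the table figures reflect the value of g used (9.81 here versus approximately 9.807 in the tabulated data).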
Table 17 Pipework velocities
Location (noise rating, NR) | Velocity (m/s)
Service duct, riser, shaft, plant room | 2.0-2.5
Service enclosure, ceiling void | 1.5
Circulation area, entrance corridor | 1.25
Seating area, lecture/meeting room (NR 30) | 1.0
Theatre, cinema | 0.75
Recording studio (NR <20) | 0.5
Pipe sizing by velocity
Where there is ample head available, or the water supply is by a pump or pump set, then pipe sizing can best be achieved by using an optimum pipe velocity.
In a gravity down feed system where the head available is a limiting factor, pipe velocities are generally low, often in the range of 0.4 to 0.8 metres/second. Where delivery is to be a pumped supply, then the pipe velocities can be allowed to increase to 1.0 to 1.5 metres/second, and possibly higher where pipes are routed in non-occupied areas. See Table 17.
Pipe velocities, ultimately are limited by either of the following:
a. Noise
b. Erosion/corrosion
c. Cavitation.
Noise is a major consideration, and velocities above 1.5 metres/second in pipework passing through occupied areas, in particular bedrooms, should be avoided.
Erosion and corrosion are less of an issue. If velocities are being set to limit noise, then erosion and corrosion will not generally be a problem. Where velocities exceed 2.5 metres/second, erosion and/or corrosion can result from the abrasive action of particles in the water. This type of water would normally be associated with a 'raw' water, rather than a water supply for domestic use purposes where filtration has taken place as part of the treatment process by the Water Companies, who have a duty to provide a 'wholesome' supply for domestic purposes.
Cavitation caused by velocity is not considered an issue with water supply systems as velocities should always be below the 7.0 to 10.0 metres/second where velocity cavitation can occur.
Having determined the appropriate velocity for the location of the distribution pipes, using the pipe sizing charts you can determine the pipe size by cross reference to the design flow rate, and record the pipe head loss per metre. From thereon the pipe sizing schedule for the whole system can be completed.
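Sizing by velocity reduces to finding the bore whose cross-sectional area passes the design flow at the chosen velocity. A sketch in Python; the 0.8 l/s flow and 1.5 m/s velocity are illustrative assumptions:

```python
import math

def min_bore_mm(flow_l_per_s, velocity_m_per_s):
    """Minimum internal diameter (mm) to carry the design flow at the
    target velocity, from Q = A*v, i.e. d = sqrt(4Q / (pi * v))."""
    q = flow_l_per_s / 1000  # convert l/s to m3/s
    d = math.sqrt(4 * q / (math.pi * velocity_m_per_s))
    return d * 1000

# Illustrative: 0.8 l/s pumped supply at 1.5 m/s
print(f"{min_bore_mm(0.8, 1.5):.1f} mm")  # round up to the next standard bore
```

In practice the next standard pipe size up is selected, and the resulting actual velocity and head loss per metre are then read from the pipe sizing charts.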
Table 18a Heat emission from insulated pipes (40°C temperature difference)
[Table of heat emission against pipe size and insulation thermal conductivity (W/mK), for copper/stainless steel, plastic and steel pipes; for plastic pipes, refer to manufacturer's data. Legible values include 43, 51, 53, 62, 64, 84, 93, 102, 112, 125 and 143, and a 65mm pipe size.]
Pipework taken to be shiny surfaced, individual, with zero air movement, and a 40°C temperature difference between the pipe content and surrounding air temperature.
Figure 16 Secondary circulation pipework isometric and calculation sheet
[Calculation sheet with columns for pipe reference (1-2 flow, 2-3 flow, 3-4 flow, 4-5 flow, 5-6 flow, 6-1 return, 7-1 return), heat loss (W) and load; the isometric shows the pump, vent, HW generator, balancing valve and bath.]
Hot water secondary circulation
In order to maintain the correct temperature of hot water within the hot water distribution system, provision of a 'return' pipe, to enable the water to be circulated back to the hot water generator, is required.
Hot water circulation can be achieved by gravity or pump circulation means, although in nearly all instances a pumped system is provided.
Secondary circulation pipe sizing
The formal method of sizing the secondary circulation pipework is to calculate the heat loss from all of the ‘flow’ and ‘return’ pipe circuits throughout the system. Calculating the heat loss allows a comparable flow rate to be established; thereafter the head loss throughout the system and the duty of the circulating pump are determined.
The total heat loss from each section of pipe is converted to a flow rate necessary to replace the lost heat:

kg/s = Watts / (4.187 (shc of water) x 1000)
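As a quick numerical sketch of that conversion (a few lines of Python, using the 200 W section from the Figure 16 calculation sheet):

```python
# Convert a pipe section's heat loss (W) into the secondary circulation
# flow rate (kg/s) needed to replace it, per the formula above:
#   kg/s = Watts / (4.187 (shc of water) x 1000)
def heat_loss_to_flow_rate(watts):
    return watts / (4.187 * 1000)

# e.g. a 200 W section, as in the Figure 16 calculation sheet:
print(round(heat_loss_to_flow_rate(200), 4))  # -> 0.0478
```

The same per-section flow rates are then used with the pipe sizing charts to establish head losses and, ultimately, the circulating pump duty.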
The pipework heat loss is that which is emitted through the pipe wall and insulation material. See Tables 18a and 18b for pipes with and without insulation.
A ‘Rule of Thumb’ method of sizing pumped HWS secondary circuits is to initially select a return pipe size two sizes lower than the flow. As a guide, favour smaller pipe sizes over larger ones, and maintain a check on the HWS return pipe velocities.
Pipe circuit balancing valves will be needed where the HWS return has a number of branches and loops to serve the various parts of the circulation system. These valves restrict the flow to the circuits nearest the pump, where there is greater pump pressure, forcing the HWS return to circulate to the furthest circuit. Commonly the circuit valves are of the double regulating type.
I want to build a date widget for a form, which has select lists of months, days, and years. Since the list is different based on the month and year, I can't hard-code it to 31 days (e.g. February has 28 days, not 30 or 31, and in some years 29 days).
How can I use the Calendar or Joda object to build these lists for me?
I strongly recommend that you avoid the built-in date and time APIs in Java.
Instead, use Joda Time. This library is similar to the one which will (hopefully!) make it into Java 7, and is much more pleasant to use than the built-in API.
Now, is the basic problem that you want to know the number of days in a particular month?
EDIT: Here's the code (with a sample):
import org.joda.time.*;
import org.joda.time.chrono.*;

public class Test {
    public static void main(String[] args) {
        System.out.println(getDaysInMonth(2009, 2));
    }

    public static int getDaysInMonth(int year, int month) {
        // If you want to use a different calendar system (e.g. Coptic)
        // this is the code to change.
        Chronology chrono = ISOChronology.getInstance();
        DateTimeField dayField = chrono.dayOfMonth();
        LocalDate monthDate = new LocalDate(year, month, 1);
        return dayField.getMaximumValue(monthDate);
    }
}
Apple Mail Objc_Util
How can I send an email with Apple mail through an app that I make?
I assume this will use the objc_util module...
- Webmaster4o
You can always use a mailto: URL; see the documentation.
Presumably you could also construct the email and send it using smtplib without ever invoking the mail app.
- alijnclarke
If you could expand a little on what it is you want to do then we could probably come up with some good suggestions :)
This is from the objc_util docs. When I ran it, the 'cancel' didn't work! But anyway...
from objc_util import *

# - DeDelegate_(delegate)

# Present the mail sheet:
root_vc = UIApplication.sharedApplication().keyWindow().rootViewController()
root_vc.presentViewController_animated_completion_(mail_composer, True, None)

if __name__ == '__main__':
    show_mail_sheet()
Another thing that would need to be fixed:
The mail composer view doesn't close after 'send'
See the documentation, and scroll down a little to where MFMailComposeViewController is covered. The user always has to click send (your app cannot send an email in the user's name without their approval). You can pre-fill parts of the message, add attachments, see here.
If you use
import smtplib  # 'me' (your address) and 'msg' (an email.message object) are assumed defined

s = smtplib.SMTP('SMTP server ip')
s.sendmail(me, me, msg.as_string())
You don't need to confirm the sending
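A fuller, self-contained sketch of that approach; the SMTP host and addresses below are placeholders, so fill in your own:

```python
import smtplib
from email.mime.text import MIMEText

def build_message(sender, recipient, subject, body):
    # Plain-text message; use MIMEMultipart instead if you need attachments.
    msg = MIMEText(body)
    msg['From'] = sender
    msg['To'] = recipient
    msg['Subject'] = subject
    return msg

def send_message(host, msg):
    # Unlike the mail sheet, this sends with no user confirmation.
    s = smtplib.SMTP(host)
    try:
        s.sendmail(msg['From'], [msg['To']], msg.as_string())
    finally:
        s.quit()

# msg = build_message('me@example.com', 'me@example.com', 'Hi', 'Sent from Pythonista')
# send_message('smtp.example.com', msg)
```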
Many ISPs now block port 25 as a way to help curtail spam -- in that case you can only smtp through the isp's servers, using authentication.
I gathered the OP was interested in how to present and show the standard iOS mail sheet.
You're right, but I only wanted to tell him there is another way! Sorry.
Wikiversity:Naming conventions
These naming conventions describe useful rules for naming pages. Naming conventions help Wikiversity participants quickly locate and understand the topic of each learning resource. This policy describes Wikiversity's page and page-section naming conventions. For information on how to organize pages see Wikiversity:Namespaces.
Keep names simple and concise
Educators often know more than learners do about a topic. For example, although an educator might know what "Vulpes vulpes" means, "red fox" is more likely to be immediately understood and found by the majority of English learners who wish to learn about red foxes.
By using simple and concise page and section names that avoid undue expectations, learners should be able to quickly locate learning resources and understand what to expect.
It is suggested to title pages with the subject name followed by the descriptor, either after a comma or in parentheses:
Subject, descriptor
or
Subject (descriptor)
Another common temptation is to include course and lesson numbers ("Biology 101", "Biology/Lesson 2", etc.) in resource names. However, learners may either have preexisting expectations about the content of numbered resources from their experiences at specific brick and mortar schools, or may be completely unfamiliar with their meaning. For newer projects, try to keep chapter or lesson numbers out of names for sub-pages; however, it is acceptable to title the link with a chapter or lesson number.
For older pages it is easier and acceptable to leave them as they are, as trying to fix them might create broken links.
Casing
You can use title or sentence casing for pages and section names:
- Title Case: Advanced Multivariable Calculus
- Sentence case: Advanced multivariable calculus
Please be consistent with all pages that are part of the same course or curriculum. Note that Wikipedia uses sentence casing.
Acronyms and abbreviations
Spell out abbreviations and acronyms in page names. Many acronyms have more than one possible meaning and are not universally understood. For example USA could be "United States of America" or "Union of South Africa". By avoiding acronyms and abbreviations in page names, learners are more likely to find exactly what they expect.
English dialects
You can use whatever dialect of English you want for page and section names (e.g. American English or British English), but be consistent. For example, if you use American English spelling for a page name, you should also use American English throughout the page.
Summary
- Keep it simple
- Use descriptive titles for the learning resources not lesson codes
- For resources in the main namespace, don't capitalize, so that pages can be linked naturally from running text on other pages
Namespaces
- Wikiversity: is the project namespace
- Help: are for helps with Mediawiki techniques
- Portal: A portal is a door to Wikiversity; it helps participants explore according to their interests and organize the resources they want. Portals are also equivalent to Wikipedia's WikiProjects.
- School: is for cross-department Wikiversity community projects. | https://en.wikiversity.org/wiki/Wikiversity:Naming_conventions | CC-MAIN-2019-04 | refinedweb | 496 | 51.18 |
cc [ flag... ] file... -L/usr/lib/fm -lfmevent -lnvpair [ library... ]

#include <fm/libfmevent.h>
#include <libnvpair.h>

typedef enum fmev_err_t;
extern fmev_err_t fmev_errno;

const char *fmev_strerror(fmev_err_t err);

typedef struct fmev_shdl *fmev_shdl_t;
typedef void fmev_cbfunc_t(fmev_t, const char *, nvlist_t *, void *);
fmev_shdl_t fmev_shdl_init(uint32_t api_version, void *(*alloc)(size_t), void *(*zalloc)(size_t), void (*free)(void *, size_t));
fmev_err_t fmev_shdl_fini(fmev_shdl_t hdl);
fmev_err_t fmev_shdl_subscribe(fmev_shdl_t hdl, const char *classpat, fmev_cbfunc_t callback, void *cookie);
fmev_err_t fmev_shdl_unsubscribe(fmev_shdl_t hdl, const char *classpat);
fmev_err_t fmev_shdl_getauthority(fmev_shdl_t hdl, nvlist_t **authp);
fmev_err_t fmev_shdlctl_serialize(fmev_shdl_t hdl);
fmev_err_t fmev_shdlctl_thrattr(fmev_shdl_t hdl, pthread_attr_t *attr);
fmev_err_t fmev_shdlctl_sigmask(fmev_shdl_t hdl, sigset_t *set);
fmev_err_t fmev_shdlctl_thrsetup(fmev_shdl_t hdl, door_xcreate_thrsetup_func_t *setupfunc, void *cookie);
fmev_err_t fmev_shdlctl_thrcreate(fmev_shdl_t hdl, door_xcreate_server_func_t *createfunc, void *cookie);
typedef struct fmev *fmev_t;
nvlist_t *fmev_attr_list(fmev_t ev);
const char *fmev_class(fmev_t ev);
fmev_err_t fmev_timespec(fmev_t ev, struct timespec *res);
uint64_t fmev_time_sec(fmev_t ev);
uint64_t fmev_time_nsec(fmev_t ev);
struct tm *fmev_localtime(fmev_t ev, struct tm *res);
hrtime_t fmev_hrtime(fmev_t ev);
void fmev_hold(fmev_t ev);
void fmev_rele(fmev_t ev);
fmev_t fmev_dup(fmev_t ev);
fmev_shdl_t fmev_ev2shdl(fmev_t ev);
void *fmev_shdl_alloc(fmev_shdl_t hdl, size_t sz);
void *fmev_shdl_zalloc(fmev_shdl_t hdl, size_t sz);
void fmev_shdl_free(fmev_shdl_t hdl, void *buf, size_t sz);
char *fmev_shdl_strdup(fmev_shdl_t hdl, char *str);
void fmev_shdl_strfree(fmev_shdl_t hdl, char *str);
char *fmev_shdl_nvl2str(fmev_shdl_t hdl, nvlist_t *fmri);
The Solaris fault management daemon (fmd) is the central point in Solaris for fault management. It receives observations from various sources and delivers them to subscribing diagnosis engines; if those diagnosis engines diagnose a problem, the fault manager publishes additional protocol events to track the problem lifecycle from initial diagnosis through repair and final problem resolution. The event protocol is specified in the Sun Fault Management Event Protocol Specification. The interfaces described here allow an external process to subscribe to protocol events. See the Fault Management Daemon Programmer's Reference Guide for additional information on fmd.
The fmd module API (not a Committed interface) allows plugin modules to load within the fmd process, subscribe to events of interest, and participate in various diagnosis and response activities. Of those modules, some are notification agents and will subscribe to events describing diagnoses and their subsequent lifecycle and render these to console/syslog (for the syslog-msgs agent) and via SNMP trap and browsable MIB (for the snmp-trapgen module and the corresponding dlmod for the SNMP daemon). It has not been possible to subscribe to protocol events outside of the context of an fmd plugin. The libfmevent interface provides this external subscription mechanism. External subscribers may receive protocol events as fmd modules do, but they cannot participate in other aspects of the fmd module API such as diagnosis. External subscribers are therefore suitable as notification agents and for transporting fault management events.
This protocol is defined in the Sun Fault Management Event Protocol Specification. Note that while the API described on this manual page are Committed, the protocol events themselves (in class names and all event payload) are not Committed along with this API. The protocol specification document describes the commitment level of individual event classes and their payload content. In broad terms, the list.* events are Committed in most of their content and semantics while events of other classes are generally Uncommitted with a few exceptions.
All protocol events include an identifying class string, with the hierarchies defined in the protocol document and individual events registered in the Events Registry. The libfmevent mechanism will permit subscription to events with Category 1 class of “list” and “swevent”, that is, to classes matching patterns “list.*” and “swevent.*”.
All protocol events consist of a number of (name, datatype, value) tuples (“nvpairs”). Depending on the event class various nvpairs are required and have well-defined meanings. In Solaris fmd protocol events are represented as name-value lists using the libnvpair(3LIB) interfaces.
The API is simple to use in the common case (see Examples), but provides substantial control to cater for more-complex scenarios.
We obtain an opaque subscription handle using fmev_shdl_init(), quoting the ABI version and optionally nominating alloc(), zalloc() and free() functions (the defaults use the umem family). More than one handle may be opened if desired. Each handle opened establishes a communication channel with fmd, the implementation of which is opaque to the libfmevent mechanism.
On a handle we may establish one or more subscriptions using fmev_shdl_subscribe(). Events of interest are specified using a simple wildcarded pattern which is matched against the event class of incoming events. For each match that is made a callback is performed to a function we associate with the subscription, passing a nominated cookie to that function. Subscriptions may be dropped using fmev_shdl_unsubscribe() quoting exactly the same class or class pattern as was used to establish the subscription.
Each call to fmev_shdl_subscribe() creates a single thread dedicated to serving callback requests arising from this subscription.
An event callback handler has as arguments an opaque event handle, the event class, the event nvlist, and the cookie it was registered with in fmev_shdl_subscribe(). The timestamp for when the event was generated (not when it was received) is available as a struct timespec with fmev_timespec(), or more directly with fmev_time_sec() and fmev_time_nsec(); an event handle and struct tm can also be passed to fmev_localtime() to fill the struct tm. A high-resolution timestamp for an event may be retrieved using fmev_hrtime(); this value has the semantics described in gethrtime(3C).
The event handle, class string pointer, and nvlist_t pointer passed as arguments to a callback are valid for the duration of the callback. If the application wants to continue to process the event beyond the duration of the callback then it can hold the event with fmev_hold(), and later release it with fmev_rele(). When the reference count drops to zero the event is freed.
In libfmevent.h an enumeration fmev_err_t of error types is defined. To render an error message string from an fmev_err_t use fmev_strerror(). An fmev_errno is defined which returns the error number for the last failed libfmevent API call made by the current thread. You may not assign to fmev_errno.
If a function returns type fmev_err_t, then success is indicated by FMEV_SUCCESS (or FMEV_OK as an alias); on failure a FMEVERR_* value is returned (see fm/libfmevent.h).
If a function returns a pointer type then failure is indicated by a NULL return, and fmev_errno will record the error type.
A subscription handle is required in order to establish and manage subscriptions. This handle represents the abstract communication mechanism between the application and the fault management daemon running in the current zone.
A subscription handle is represented by the opaque fmev_shdl_t datatype. A handle is initialized with fmev_shdl_init() and quoted to subsequent API members.
To simplify usage of the API, subscription attributes for all subscriptions established on a handle are a property of the handle itself; they cannot be varied per-subscription. Where different attributes are required, multiple handles will need to be used.
The first argument to fmev_shdl_init() indicates the libfmevent ABI version with which the handle is being opened. Specify either LIBFMEVENT_VERSION_LATEST to indicate the most recent version available at compile time or LIBFMEVENT_VERSION_1 (_2, etc. as the interface evolves) for an explicit choice.
Interfaces present in an earlier version of the interface will continue to be present with the same or compatible semantics in all subsequent versions. When additional interfaces and functionality are introduced the ABI version will be incremented. When an ABI version is chosen in fmev_shdl_init(), only interfaces introduced in or before that version will be available to the application via that handle. Attempts to use later API members will fail with FMEVERR_VERSION_MISMATCH.
This manual page describes LIBFMEVENT_VERSION_1.
The libfmevent API is not least-privilege aware; you need to have all privileges to call fmev_shdl_init(). Once a handle has been initialized with fmev_shdl_init() a process can drop privileges down to the basic set and continue to use fmev_shdl_subscribe() and other libfmevent interfaces on that handle.
The implementation of the event transport by which events are published from the fault manager and multiplexed out to libfmevent consumers is strictly private. It is subject to change at any time, and you should not encode any dependency on the underlying mechanism into your application. Use only the API described on this manual page and in libfmevent.h.
The underlying transport mechanism is guaranteed to have the property that a subscriber may attach to it even before the fault manager is running. If the fault manager starts first then any events published before the first consumer subscribes will wait in the transport until a consumer appears.
The underlying transport will also have some maximum depth to the queue of events pending delivery. This may be hit if there are no consumers, or if consumers are not processing events quickly enough. In practice the rate of events is small. When this maximum depth is reached additional events will be dropped.
The underlying transport has no concept of priority delivery; all events are treated equally.
Obtain a new subscription handle with fmev_shdl_init(). The first argument is the libfmevent ABI version to be used (see above). The remaining three arguments should be all NULL to leave the library to use its default allocator functions (the libumem family), or all non-NULL to appoint wrappers to custom allocation functions if required.
The library does not support the version requested.
An error occurred in trying to allocate data structures.
The alloc(), zalloc(), or free() arguments must either be all NULL or all non-NULL.
Insufficient privilege to perform operation. In version 1 root privilege is required.
Internal library error.
Once a subscription handle has been initialized, authority information for the fault manager to which the client is connected may be retrieved with fmev_shdl_getauthority(). The caller is responsible for freeing the returned nvlist using nvlist_free(3NVPAIR).
Close a subscription handle with fmev_shdl_fini(). This call must not be performed from within the context of an event callback handler, else it will fail with FMEVERR_API.
The fmev_shdl_fini() call will remove all active subscriptions on the handle and free resources used in managing the handle.
May not be called from event delivery context for a subscription on the same handle.
To establish a new subscription on a handle, use fmev_shdl_subscribe(). Besides the handle argument you provide the class or class pattern to subscribe to (the latter permitting simple wildcarding using '*'), a callback function pointer for a function to be called for all matching events, and a cookie to pass to that callback function.
The class pattern must match events per the fault management protocol specification, such as “list.suspect” or “list.*”. Patterns that do not map onto existing events will not be rejected - they just won't result in any callbacks.
A callback function has type fmev_cbfunc_t. The first argument is an opaque event handle for use in event access functions described below. The second argument is the event class string, and the third argument is the event nvlist; these could be retrieved using fmev_class() and fmev_attr_list() on the event handle, but they are supplied as arguments for convenience. The final argument is the cookie requested when the subscription was established in fmev_shdl_subscribe().
Each call to fmev_shdl_subscribe() opens a new door into the process that the kernel uses for event delivery. Each subscription therefore uses one file descriptor in the process.
See below for more detail on event callback context.
Class pattern is NULL or callback function is NULL.
Class pattern is the empty string, or exceeds the maximum length of FMEV_MAX_CLASS.
An attempt to fmev_shdl_zalloc() additional memory failed.
Duplicate subscription request. Only one subscription for a given class pattern may exist on a handle.
A system-imposed limit on the maximum number of subscribers to the underlying transport mechanism has been reached.
An unknown error occurred in trying to establish the subscription.
An unsubscribe request using fmev_shdl_unsubscribe() must exactly match a previous subscription request or it will fail with FMEVERR_NOMATCH. The request stops further callbacks for this subscription, waits for any existing active callbacks to complete, and drops the subscription.
Do not call fmev_shdl_unsubscribe() from event callback context, else it will fail with FMEVERR_API.
A NULL pattern was specified, or the call was attempted from callback context.
The pattern provided does not match any open subscription. The pattern must be an exact match.
The class pattern is the empty string or exceeds FMEV_MAX_CLASS.
Event callback context is defined as the duration of a callback event, from the moment we enter the registered callback function to the moment it returns. There are a few restrictions on actions that may be performed from callback context:
You can perform long-running actions, but this thread will not be available to service other event deliveries until you return.
You must not cause the current thread to exit.
You must not call either fmev_shdl_unsubscribe() or fmev_shdl_fini() for the subscription handle on which this callback has been made.
You can invoke fork(), popen(), etc.
A callback receives an fmev_t as a handle on the associated event. The callback may use the access functions described below to retrieve various event attributes.
By default, an event handle fmev_t is valid for the duration of the callback context. You cannot access the event outside of callback context.
If you need to continue to work with an event beyond the initial callback context in which it is received, you may place a “hold” on the event with fmev_hold(). When finished with the event, release it with fmev_rele(). These calls increment and decrement a reference count on the event; when it drops to zero the event is freed. On initial entry to a callback the reference count is 1, and this is always decremented when the callback returns.
An alternative to fmev_hold() is fmev_dup(), which duplicates the event and returns a new event handle with a reference count of 1. When fmev_rele() is applied to the new handle and reduces the reference count to 0, the event is freed. The advantage of fmev_dup() is that it allocates new memory to hold the event rather than continuing to hold a buffer provided by the underlying delivery mechanism. If your operation is going to be long-running, you may want to use fmev_dup() to avoid starving the underlying mechanism of event buffers.
Given an fmev_t, a callback function can use fmev_ev2shdl() to retrieve the subscription handle on which the subscription was made that resulted in this event delivery.
The fmev_hold() and fmev_rele() functions always succeed.
The fmev_dup() function may fail and return NULL with fmev_errno of:
A NULL event handle was passed.
The fmev_shdl_alloc() call failed.
A delivery callback already receives the event class as an argument, so fmev_class() will only be of use outside of callback context (that is, for an event that was held or duped in callback context and is now being processed in an asynchronous handler). This is a convenience function that returns the same result as accessing the event attributes with fmev_attr_list() and using nvlist_lookup_string(3NVPAIR) to lookup a string member of name “class”.
The string returned by fmev_class() is valid for as long as the event handle itself.
The fmev_class() function may fail and return NULL with fmev_errno of:
A NULL event handle was passed.
The event appears corrupted.
All events are defined as a series of (name, type) pairs. An instance of an event is therefore a series of tuples (name, type, value). Allowed types are defined in the protocol specification. In Solaris, and in libfmevent, an event is represented as an nvlist_t using the libnvpair(3LIB) library.
The nvlist of event attributes can be accessed using fmev_attr_list(). The resulting nvlist_t pointer is valid for the same duration as the underlying event handle. Do not use nvlist_free() to free the nvlist. You may then lookup members, iterate over members, and so on using the libnvpair interfaces.
The fmev_attr_list() function may fail and return NULL with fmev_errno of:
A NULL event handle was passed.
The event appears corrupted.
These functions refer to the time at which the event was originally produced, not the time at which it was forwarded to libfmevent or delivered to the callback.
Use fmev_timespec() to fill a struct timespec with the event time in seconds since the Epoch (tv_sec, signed integer) and nanoseconds past that second (tv_nsec, a signed long). This call can fail and return FMEVERR_OVERFLOW if the seconds value will not fit in a signed 32-bit integer (as used in struct timespec tv_sec).
You can use fmev_time_sec() and fmev_time_nsec() to retrieve the same second and nanosecond values as uint64_t quantities.
The fmev_localtime function takes an event handle and a struct tm pointer and fills that structure according to the timestamp. The result is suitable for use with strftime(3C). This call will return NULL and fmev_errno of FMEVERR_OVERFLOW under the same conditions as above.
The fmev_timespec() function cannot fit the seconds value into the signed long integer tv_sec member of a struct timespec.
A string can be duplicated using fmev_shdl_strdup(); this will allocate memory for the copy using the allocator nominated in fmev_shdl_init(). The caller is responsible for freeing the buffer using fmev_shdl_strfree(); the caller can modify the duplicated string but must not change the string length.
An FMRI retrieved from a received event as an nvlist_t may be rendered as a string using fmev_shdl_nvl2str(). The nvlist must be a legal FMRI (recognized class, version and payload), or NULL is returned with fmev_errno of FMEVERR_INVALIDARG. The formatted string is rendered into a buffer allocated using the memory allocation functions nominated in fmev_shdl_init(), and the caller is responsible for freeing that buffer using fmev_shdl_strfree().
The fmev_shdl_alloc(), fmev_shdl_zalloc(), and fmev_shdl_free() functions allocate and free memory using the choices made for the given handle when it was initialized, typically the libumem(3LIB) family if all were specified NULL.
The fmev_shdlctl_*() interfaces offer control over various properties of the subscription handle, allowing fine-tuning for particular applications. In the common case the default handle properties will suffice.
These properties apply to the handle and uniformly to all subscriptions made on that handle. The properties may only be changed when there are no subscriptions in place on the handle, otherwise FMEVERR_BUSY is returned.
Event delivery is performed through invocations of a private door. A new door is opened for each fmev_shdl_subscribe() call. These invocations occur in the context of a single private thread associated with the door for a subscription. Many of the fmev_shdlctl_*() interfaces are concerned with controlling various aspects of this delivery thread.
If you have applied fmev_shdlctl_thrcreate(), “custom thread creation semantics” apply on the handle; otherwise “default thread creation semantics” are in force. Some fmev_shdlctl_*() interfaces apply only to default thread creation semantics.
The fmev_shdlctl_serialize() control requests that all deliveries on a handle, regardless of which subscription request they are for, be serialized - no concurrent deliveries on this handle. Without this control applied deliveries arising from each subscription established with fmev_shdl_subscribe() are individually single-threaded, but if multiple subscriptions have been established then deliveries arising from separate subscriptions may be concurrent. This control applies to both custom and default thread creation semantics.
The fmev_shdlctl_thrattr() control applies only to default thread creation semantics. Threads that are created to service subscriptions will be created with pthread_create (3C) using the pthread_attr_t provided by this interface. The attribute structure is not copied and so must persist for as long as it is in force on the handle.
The default thread attributes are also the minimum requirement: threads must be created PTHREAD_CREATE_DETACHED and PTHREAD_SCOPE_SYSTEM. A NULL pointer for the pthread_attr_t will reinstate these default attributes.
The fmev_shdlctl_sigmask() control applies only to default thread creation semantics. Threads that are created to service subscriptions will be created with the requested signal set masked - a pthread_sigmask(3C) request to SIG_SETMASK to this mask prior to pthread_create(). The default is to mask all signals except SIGABRT.
See door_xcreate(3C) for a detailed description of thread setup and creation functions for door server threads.
The fmev_shdlctl_thrsetup() function runs in the context of the newly-created thread before it binds to the door created to service the subscription. It is therefore a suitable place to perform any thread-specific operations the application may require. This control applies to both custom and default thread creation semantics.
Using fmev_shdlctl_thrcreate() forfeits the default thread creation semantics described above. The function appointed is responsible for all of the tasks required of a door_xcreate_server_func_t in door_xcreate().
The fmev_shdlctl_*() functions may fail and return an error with fmev_errno of:
Subscriptions are in place on this handle.
The following example subscribes to list.suspect events and prints out a simple message for each one that is received. It foregoes most error checking for the sake of clarity.
#include <fm/libfmevent.h>
#include <libnvpair.h>

/*
 * Callback to receive list.suspect events
 */
void
mycb(fmev_t ev, const char *class, nvlist_t *attr, void *cookie)
{
        struct tm tm;
        char buf[64];
        char *evcode;

        if (strcmp(class, "list.suspect") != 0)
                return; /* only happens if this code has a bug! */

        (void) strftime(buf, sizeof (buf), NULL, fmev_localtime(ev, &tm));
        (void) nvlist_lookup_string(attr, "code", &evcode);
        (void) fprintf(stderr, "Event class %s published at %s, "
            "event code %s\n", class, buf, evcode);
}

int
main(int argc, char *argv[])
{
        fmev_shdl_t hdl;
        sigset_t set;

        hdl = fmev_shdl_init(LIBFMEVENT_VERSION_LATEST, NULL, NULL, NULL);
        (void) fmev_shdl_subscribe(hdl, "list.suspect", mycb, NULL);

        /* Wait here until signalled with SIGTERM to finish */
        (void) sigemptyset(&set);
        (void) sigaddset(&set, SIGTERM);
        (void) sigwait(&set);

        /* fmev_shdl_fini would do this for us if we skipped it */
        (void) fmev_shdl_unsubscribe(hdl, "list.suspect");
        (void) fmev_shdl_fini(hdl);

        return (0);
}
See attributes(5) for descriptions of the following attributes:
door_xcreate(3C), gethrtime(3C) , libnvpair(3LIB), libumem (3LIB), nvlist_lookup_string(3NVPAIR), pthread_create(3C), pthread_sigmask(3C) , strftime(3C), attributes (5), privileges(5) | http://docs.oracle.com/cd/E36784_01/html/E36876/fmev-errno-3fm.html | CC-MAIN-2014-52 | refinedweb | 3,596 | 53.71 |
[SOLVED]Turn between SplineData and BaseContainer?
On 29/11/2017 at 04:13, xxxxxxxx wrote:
[Original Topic] Convert between SplineData and base container?
[Original Post]
Hello, again!
In a previous topic, I learned how to create a default spline user data.
But some other problems have come up.
<1> If I want to use c4d.SplineData to set some values first, then turn it into a BaseContainer to add it to the user data,
<2> or if I want to use c4d.SplineData to change some values in a predefined "SplineData" BaseContainer,
how can I do that?
Thanks again~
On 29/11/2017 at 05:41, xxxxxxxx wrote:
import c4d

def main():
    obj = doc.GetActiveObject()
    if not obj:
        return

    # Create Spline Data
    bc = c4d.GetCustomDatatypeDefault(c4d.CUSTOMDATATYPE_SPLINE)
    bc[c4d.DESC_NAME] = "Spline"
    splineID = obj.AddUserData(bc)

    # Assign default value
    spline = c4d.SplineData()
    spline.SetKnot(1, c4d.Vector(1, 1, 0))
    obj[splineID] = spline

    c4d.EventAdd()

if __name__ == '__main__':
    main()
Basically, a SplineData does not have a BaseContainer of its own; it's the host object that has one. You copy or set that container entry, which links to a SplineData.
On 29/11/2017 at 08:53, xxxxxxxx wrote:
Thank you for your patient and detailed answer!
I learned a lot!
Thank you again! | https://plugincafe.maxon.net/topic/10484/13934_solvedturn-between-splinedata-and-basecontainer | CC-MAIN-2019-18 | refinedweb | 205 | 61.73 |
Since Internet Explorer 4, the shell namespace contains an "Internet Explorer" item.
This Internet Explorer item forms a special junction point where you can include your own namespace extensions.
Microsoft uses this for their FTP folders. You can add your own by registering a URL prefix in the registry. Copy the entries that are used in HKEY_CLASSES_ROOT\ftp.
However, something strange is going on. You don't see the root item of this namespace extension displayed in explorer. There is no item named "ftp folders" that is the root of all ftp folders. Instead, the "Internet Explorer" item functions as the root.
This has a very strange implication: the Internet Explorer root item has to understand the pidls of all underlying namespaces.
The solution: Internet Explorer will embed the pidls of all namespace extensions in its own pidls. The mechanism used to accomplish this is IDelegateFolder.
Using IDelegateFolder, your namespace extension will receive an IMalloc interface. The Alloc function of this IMalloc will allocate an Internet Explorer pidl that points to your namespace extension and has empty room to put your own pidl in.
This is the IID of IDelegateFolder:
// {ADD8BA80-002B-11D0-8F0F-00C04FD7D062}
DEFINE_GUID(IID_IDelegateFolder,
0xADD8BA80L, 0x002B, 0x11D0, 0x8F, 0x0F, 0x00, 0xC0, 0x4F, 0xD7, 0xD0, 0x62);
This is the interface definition:
DECLARE_INTERFACE_(IDelegateFolder, IUnknown)
{
// IUnknown methods
STDMETHOD(QueryInterface)(THIS_ REFIID riid, LPVOID FAR* ppvObj) PURE;
STDMETHOD_(ULONG,AddRef)(THIS) PURE;
STDMETHOD_(ULONG,Release)(THIS) PURE;
// IDelegateFolder methods
STDMETHOD(SetItemAlloc)(THIS_ IMalloc *pMalloc) PURE;
};
After instantiating your namespace extension, Internet Explorer will query for the IDelegateFolder interface and call SetItemAlloc, passing an IMalloc interface. You have to store this interface.
From this moment on, whenever you have to create a pidl, you have to follow these steps:
The returned buffer will already be a pidl, starting with the size (2 bytes) and then a 2-byte signature (0x61 0x03).
All the pidls that will be passed to your namespace extension will also have this format. This means you will find your own pidl at offset 4.
The pidls are still freed the normal way, using the shell allocator.
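To make the buffer layout concrete, here is a small self-contained sketch that models the delegate pidl described above using plain byte vectors. It deliberately avoids the real `ITEMIDLIST`/`IMalloc` COM types, and the exact semantics of the 2-byte size field (e.g. whether it counts a terminator) are an assumption here — check the shell headers for the authoritative definitions:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Builds a delegate pidl in the layout described above:
// bytes [0..1] = size (little-endian), bytes [2..3] = signature 0x61 0x03,
// bytes [4..]  = the namespace extension's own pidl data.
std::vector<uint8_t> wrap_inner_pidl(const std::vector<uint8_t>& inner) {
    const uint16_t size = static_cast<uint16_t>(4 + inner.size());
    std::vector<uint8_t> pidl;
    pidl.push_back(static_cast<uint8_t>(size & 0xFF));   // size, low byte
    pidl.push_back(static_cast<uint8_t>(size >> 8));     // size, high byte
    pidl.push_back(0x61);                                // IE signature byte 1
    pidl.push_back(0x03);                                // IE signature byte 2
    pidl.insert(pidl.end(), inner.begin(), inner.end()); // own pidl at offset 4
    return pidl;
}

// Recovers the embedded pidl from offset 4, as a namespace
// extension would when the shell hands it a delegate pidl.
std::vector<uint8_t> extract_inner_pidl(const std::vector<uint8_t>& pidl) {
    assert(pidl.size() >= 4 && pidl[2] == 0x61 && pidl[3] == 0x03);
    return std::vector<uint8_t>(pidl.begin() + 4, pidl.end());
}
```

In a real namespace extension, the outer buffer would come from the IMalloc received via SetItemAlloc, and the inner bytes would be your own item ID data.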
If your namespace extension has subfolders, then these subfolders follow the normal system. The first id in the list will be the special Internet Explorer pidl; all the others that follow are your own normal pidls.
The solution Microsoft chose is not a clean one. It would have been much easier if the Internet Explorer root node did the insertion and extraction of the embedded pidl itself, eliminating the need for all this.
Hi, On Fri, Oct 21, 2005 at 05:58:30PM +1000, Mike Barnes wrote: > Pasi Pirhonen wrote: > :) > > This is ironic. I started recompiling Fedora Core 2 for the Alpha about > a year and a half ago, mainly because I'd just picked up a PCI Radeon > 7000, and wanted to get DRI going. I've now got everything I wanted, > except that. :) > > What's been changed to disable it? I'll grab the SRPMs and start doing > some tests over the weekend. I only enabled that previous (disabled in spec file) 'disable-dri-patch' and disabled the new one. Only few lines in .spec (i do believe i wrote something %changelog too as i usually do). ###%if %{build_rhel4} ####%patch1215 -p0 -b .ati-radeon-disable-dri ###%endif ###%patch1216 -p0 -b .ati-radeon-7000-disable-dri ####CentOS-4/alpha %ifarch alpha %patch1215 -p0 -b .ati-radeon-disable-dri %else %patch1216 -p0 -b .ati-radeon-7000-disable-dri %endif So i enabled the %patch1215 again and disabled the later 7000-patch (which does have something else in it too). Quick hack which made me confident releasing the DVD/CDs as it should not render boxes unusable after firstboot. Making updates later on would be much easier than explaining to every installer 'why his/her radeon just isn't working'. This is on my TODO-list too. If my now running sparc-patch holds up now w/o hanging the box on _high_ load after few hours of hammering, i'll be much more open to work on these alpha issues :) -- Pasi Pirhonen - upi iki fi -
I am presently breaking a program down into separate source files and writing some small header files too. Everything has gone swimmingly, but now I am hitting problems with the below. I am sure other problems of this sort will come up as I continue, so I'd appreciate any advice...
This opening entry of my Graphics.h file works fine included in main, and everything in there seems to use it fine. But when I include it in other source files I get a list of errors saying each element is being redefined ("first defined here", etc.), even when the source file does not yet use anything from the header. On the other hand, other headers don't give me any problem when included in more than one source file... but then they only contain class definitions.
Code:
#ifndef GRAPHICS_H_INCLUDED
#define GRAPHICS_H_INCLUDED
SDL_Surface *screen;
SDL_Surface *temp;
SDL_Surface *PegArea;
SDL_Surface *Buttonarea;
SDL_Surface *Gameboard;
#endif // GRAPHICS_H_INCLUDED | http://cboard.cprogramming.com/cplusplus-programming/120841-header-file-module-issues-printable-thread.html | CC-MAIN-2014-15 | refinedweb | 153 | 52.6 |
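The errors come from the header defining the surface pointers rather than just declaring them: every source file that includes Graphics.h gets its own copy of each variable, and the linker then complains about multiple definitions (the include guard only prevents double inclusion within one file, not across files). The usual fix is to declare the variables extern in the header and define them exactly once in a single source file. A minimal sketch of the pattern, using a stand-in SDL_Surface so it compiles without SDL:

```cpp
#include <cassert>

struct SDL_Surface {};  // stand-in so this sketch builds without SDL

// --- What Graphics.h should contain: declarations only ---
extern SDL_Surface *screen;   // "this variable exists somewhere"
extern SDL_Surface *temp;
extern SDL_Surface *PegArea;

// --- What exactly ONE source file (e.g. Graphics.cpp) should contain ---
SDL_Surface *screen  = nullptr;  // the single definition of each pointer
SDL_Surface *temp    = nullptr;
SDL_Surface *PegArea = nullptr;

// Every other .cpp can include Graphics.h and use the pointers freely;
// the linker now sees exactly one definition of each.
bool graphics_globals_start_null() {
    return screen == nullptr && temp == nullptr && PegArea == nullptr;
}
```

(In C++17 and later you could alternatively mark the variables `inline` in the header, which also yields a single definition across translation units.)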
Backup and restore
The last I heard, backup and restore hasn't been given much interest. Any reason for that? has some progress.
Jason Robinson January 8th, 2016 21:20
The latest automatic back up / restore spec proposal includes the import parts in this issue. Would be fancy to get more comments on it, split it into smaller chunks and dump a lot of bounty money on the issues - maybe get something to happen :)
Anyone wanting to give their insight, please check the proposal and comment in issue #908 . If no comments are received after some time I could just create a proposal to accept the proposal as a guideline for development.
Please don't start working on any import thingy before at least considering that proposal ;)
Pavithran S January 9th, 2016 13:50
It's a very detailed proposal. Isn't there a simpler way, like a MySQL dump to CSV/JSON and an import from CSV/JSON? Pod-to-pod communication sounds very complex.
Jason Robinson January 9th, 2016 18:45
Dumping the database would not work very well for ordinary users - and it certainly would not work in the situation where the pod just disappears. And importing raw SQL dumps into other pods would probably not be a very nice thing to do :)
Regarding JSON, yes we already allow exporting data. The proposal details how this data could be used to not only restore the data to a new pod (profile, etc.), but also migrate the identity, and thus take control of previously shared posts etc.
Pavithran S January 9th, 2016 20:59
Regarding JSON, yes we already allow exporting data.
Yes, now we need an import function, but your proposal looks too big and a lot of stuff needs to be implemented. Agreed, it's the optimal way to go forward.
but also migrate the identity, and thus take control of previously shared posts etc.
Can't we have something just for exporting all the posts and their comments, which are imported back under a new account? "Taking control" sounds complex.
Jason Robinson January 9th, 2016 21:02
Can't we have something just for exporting all the posts and their comments, which are imported back under a new account? "Taking control" sounds complex.
That would create duplicates and there would be no interaction possibility. No comments to the old ones would arrive to your new identity, for example.
Sure, you can have (almost) anything (sane) - just find a coder to do it :) The issue has been there for a while waiting for someone to grab it. This proposal aims to solve the disappearing pod problem and clean migration. That doesn't stop anyone doing partial or full imports before a "proper" migrate solution is done.
Steffen van Bergerem January 9th, 2016 21:04
Can't we have something just for exporting all the posts and their comments, which are imported back under a new account? "Taking control" sounds complex.
So you would like to share all your previous posts again from your new profile?
Comrade Senya January 22nd, 2016 14:33
So we have User and Person models. On migration we must create a new user but do we preserve the same Person model which was known for that person? I suppose we should.
Comrade Senya January 22nd, 2016 14:39
Profile data export package contains also posts and comments which can be ignored during the restore process.
I don't think posts and comments can be ignored. It is possible that the new pod doesn't contain some posts (especially private ones). I believe they must not be dropped, and we should also fetch them on restore.
Comrade Senya January 22nd, 2016 14:42
"backup" is signed and encrypted using the user private key
Where do we store the signature? Don't we have to have a separate field in the schema for that?
Comrade Senya January 22nd, 2016 15:08
Pods should schedule backups of user data once a week to the backup pods..
Comrade Senya January 22nd, 2016 15:19
I also believe we must support some kind of limitation on pod backup capacity. For example, a pod wants to store backups, but for no more than 100 people. Then it advertises that it doesn't receive backups anymore, but the previously stored ones still work.
Jason Robinson January 22nd, 2016 19:30
So we have User and Person models. On migration we must create a new user but do we preserve the same Person model which was known for that person? I suppose we should.
Yes, we should keep the same Person as it is the same identity, just moved local instead of remote.
I don’t think posts and comments can be ignored. It is possible that the new pod doesn’t contain some posts (especially private). I believe it must not be dropped and we should fetch them on restore also.
Well, it's not feasible to import posts or comments imho. What purpose would that even solve? Your contacts would not see any of the uploaded private posts unless you also push them out to them - which would be bad because they already have them from before.
The main point is to save the identity, not every crumb of data related to it.
Where do we store signature? Don’t we have to have a separate field in the schema for that?
This would be the same signing method we use for delivering content - i.e. the receiver only accepts it after verifying the signature in the payload against the public key of the person. Whether this should live in the diaspora federation gem, where the code is readily available, is open to question. I'd say no, but of course it would be easier to implement that way. I don't consider this part of the protocol as such, but then the protocol is not in any way under specification anyway..
I believe this can be left as detail when the thing actually works. If we leave out posts and comments, which I think is the only real way to do it, then there is absolutely no sense in sending backups daily.
I also beleive we must support some kind of limitation for pods backup capacity. For example a pod wants to store backups, but no more than for 100 people. Then it shows that it doesn’t receive backups anymore, but previous still work.
Again, if posts and comments are not part of the archive, it will always be very small. Additionally, backup pods would be randomized, guaranteeing some balancing of load between pods. Of course we can introduce a setting like this, but personally it doesn't sound super useful, considering how much data pods store normally compared to how much this would add.
One interesting case is what happens if the pod has turned off sign-ups in the event that someone wants to restore? I think we should bypass the sign-up not being on and let the user restore. This would speak towards the setting you mention, so that pods don't gather too many possible identities that could activate suddenly.
Great comments, thanks!
Comrade Senya January 24th, 2016 19:30
Well, private posts may contain some valuable information for the user herself. I don't have extra copies of the texts I posted, and I wouldn't like to lose them. Moreover, it would definitely be nice to preserve the possibility of continuing a conversation on some private post that was going on before the move. I don't see why we should push them again. The guid of the post is preserved, so all new comments will federate to the right place.
Not to mention public posts, which one might want to share with some new contacts.
So I think posts - the content - are an extremely important part of the network. That's why people are in it. And a restore feature without post restore would look unfinished.
Comrade Senya January 24th, 2016 19:38
Maybe pushing posts to the restore pod, as if there were a subscriber on it, could make restore easier, since we then wouldn't need frequent backups or any special post restore feature. At least we could do that for public posts; for private posts that would imply trusting the restore server even before a password was entered. That is not really acceptable.
Comrade Senya January 24th, 2016 19:46
“backup” is signed and encrypted using the user private key. Once opened, it should contain the following schema...
The structure of the backup field before we "open" it is not very clear. It must be some data structure containing the signature and the encrypted data, right?
Comrade Senya January 25th, 2016 11:59
BTW, does any other software in the federation (friendica, redmatrix) do some sort of backup restore?
Comrade Senya January 26th, 2016 18:33
So here are my changes to the spec according to what we've discussed
Jason Robinson January 26th, 2016 20:26
@comradesenya unfortunately I'm unlikely to have much time for going through these before Friday - I'll reply then. Until then, it would be nice if the wiki could be left alone before changes are accepted here in Loomio. I still don't agree with some things, like including posts and comments in the backup archive, and I'm not prepared to change my opinion unless some others join in to support that idea and also give valid technical ways to do it sanely, which at the moment are missing.
Anyway, lets continue discussion on that, for a few days I'll have to pass on comments. I'll clean up the wiki page to reflect those items that have been mutually agreed upon then.
Once we have a clear understanding we can do a proposal.
Comrade Senya January 26th, 2016 21:10
TBH, I don't see any technical problems with post restore. Everything seems pretty straightforward to me. We just add the posts to the database of the backup pod if they aren't there yet. Maybe I'm missing something?
On the contrary, not restoring posts would lead to weird situations, like when, after you've moved, one of your contacts comments on an old post of yours. The comment gets federated to your new pod, but the parent post is not there - because we haven't merged it!
Jason Robinson February 7th, 2016 20:57
So, I've now cleaned and published the spec for final discussion, cleanup and voting.
@comradesenya I left in your good idea to retry moved messages (with a small change to use status codes instead of sending more messages around). But I removed the pointer regarding posts and comments and filed it as an issue instead.
There are also quite a few TODO's in the spec (will log issues out of these) - and I'm pretty sure there are in general things that need fixing.
All in all, no one has given strong opposition to the idea itself, so hoping we can lock down a spec 1.0, approve it for implementing in diaspora* and then it can be implemented (hopefully by @comradesenya ;)).
So basically;
- The spec is written as markdown and should be worked on via preferably issues and pull requests
- I chose Gitlab because not everybody likes github and because it is nice and because TheFederation was not available in Github ;) You can create an account using your Github login easily. For those who want to participate but don't like Gitlab, please feel free to use other ways to communicate or even send patch files if you want.
- The spec is "owned" by The Federation, or more like namespaced. I would like to keep editorship until 1.0 at least though.
- The spec was generalized into diaspora* like federated social networks, not just diaspora*. I like the idea of being able to move across platforms too. Platform specific stuff doesn't belong to the spec but should be left to implementation.
The spec is live as a working draft version 0.1.0 here:
The git repository is here:
Sorry this took a little while. Ping also @jhass as you've commented quite a bit on this before in github.
Comrade Senya February 8th, 2016 16:54
There is a thing we haven't discussed before.
If the pod from which a person has moved to some new place is still alive, shouldn't we block registration for that name on an old pod at least for a while?
If a user has moved, and some new user registers on the old pod with the same name the previous user had, then it is possible that somebody will try to discover the old person by the handle, or even send a message.
At first, after the feature is introduced in the diaspora* source code, it will be common for many pods to not be up-to-date, so they won't get the "I've moved" message in time. So it would probably be nice to block registrations with the same handles that moved users had, to reduce inconsistencies on the network.
At first I thought about introducing a blocking period for a handle before it could become available again. However, I think we could consider user handles an inexhaustible resource, so we probably don't need to unblock them automatically some period after someone has moved. Maybe we could introduce an action on the podmin page to free an account that was previously occupied by someone who moved, with statistics of the discovery requests for this old account shown. Then the podmin could free this account on a user's request, like "Hi! Could you please make the handle of a previously moved person available for registration again?".
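The kind of registration guard being proposed here could be sketched like this (all names are hypothetical illustration, not actual diaspora* code):

```python
# Hypothetical sketch of blocking re-registration of moved handles.
# The names (close_account, can_register) are made up for illustration.

closed_handles = set()  # local usernames of accounts that migrated away


def close_account(username):
    """Record a migrated/closed account so its handle stays reserved."""
    closed_handles.add(username.lower())


def can_register(username):
    """Reject handles that previously belonged to a moved user."""
    return username.lower() not in closed_handles
```

A podmin "free this handle" action would then simply remove the entry from the reserved set.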
Comrade Senya February 8th, 2016 17:00
TODO: To avoid an extra endpoint, should we use a version of NodeInfo instead?
@jhass had strong objection against it, so probably his opinion is to be considered.
Comrade Senya February 9th, 2016 04:57
Jason Robinson February 9th, 2016 20:39
If a user has moved, and some new user registers on the old pod with the same name as the previous user had, then it could be possible, that somebody will try to discover the old person by the handle or even send a message
We could add a recommendation to the spec that old local usernames are reserved permanently - that would make sense since that is what happens when an account is deleted in diaspora afaik (though would have to check to make sure). I've made an issue.
But really, the keys will have been regenerated so there isn't really a possibility of hijacking an identity like this..
You mean store the whole "I've moved" message payload on the old pod and respond to webfinger queries made towards the old closed identity. That sounds good otherwise but to make it work webfinger would have to pick up the message as a response. I think that might be a bit outside spec?
3 and 4 relate to 1 - where I think we should just recommend reserving forever.
TODO: To avoid an extra endpoint, should we use a version of NodeInfo instead?
@jhass had strong objection against it, so probably his opinion is to be considered.
Well, I didn't mean the NodeInfo but a NodeInfo :P But, I think it is out of scope of this spec anyway, better keep it simple with a dedicated clean endpoint.
I guess it’s fine if I base the content schema on this?
Yes, but generalized for the spec, so we shouldn't include all the keys but a generic set. diaspora* can of course implement a larger set containing the full amount of data used in our profiles.
Dennis Schubert February 10th, 2016 09:08
Yes, but generalized for the spec, so we shouldn't include all the keys but a generic set.
If you truly want to support "all" networks, you have to define a minimal set of keys. Otherwise, diaspora may use `username` where other networks might use `user`, and everything becomes a complete mess.
Comrade Senya February 10th, 2016 19:27
I believe we may define a minimal set of keys as required and everything else as optional, so other networks pick what they would like to support.
Jason Robinson February 10th, 2016 19:36
Defining a single all-around schema with strict keys is hard, especially as we ourselves use pretty unique names like "aspects". Making our export not use our own key names just to support this would be odd.
What about aliases? The spec defines the basic expected keys using the most common key names that would be expected in a generic spec, and then we define a set of aliases; for example, `aspects` could map to something more generic like `contact_groups`.
These aliases can be expanded as seen fit in future spec versions.
Comrade Senya February 10th, 2016 19:44
In the PR I introduced the format by generating a schema based on our exported archive JSON document using
Then I manually edited it to fix some issues, and I removed most of the keys from the "required" section, leaving "name", "private_key", "profile" and "contacts".
The latter two - "profile" and "contacts" - being objects themselves, don't have required fields though. So we can either make them optional as well, or make some of their contents required.
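To make that concrete, an archive carrying only the required keys discussed above might look roughly like this (the structure and values here are illustrative, not the actual diaspora* export format):

```json
{
  "user": {
    "name": "alice",
    "private_key": "-----BEGIN RSA PRIVATE KEY-----\n...",
    "profile": {
      "first_name": "Alice"
    },
    "contacts": [
      { "account_id": "bob@pod.example" }
    ]
  }
}
```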
Dennis Schubert February 10th, 2016 19:44
Defining a single all-around schema with strict keys is hard, especially as we ourselves use pretty unique names like "aspects".
Ah c'mon. What do you want to achieve? Do you want something that looks fancy but nobody is able to use between implementations or do you want to have a spec that defines a standard way to exchange account data?
If you want a spec, then define keys. Define `username`, `contacts`, `groups`, `birthday` and stuff. No alias. One key has exactly one descriptor, no exceptions. How implementations call these things internally is completely irrelevant to your spec, since you should define a data format, not a set of rules on how social networks should behave internally. Diaspora may put `diaspora_id` in the `username` field, Friendica may call the same database column `lookhowfancytheuseriscalled`. The spec does not care. And the spec should not care.
Jason Robinson February 10th, 2016 19:48
@dennisschubert so how do we use our JSON exports to restore then as they would be incompatible - or are you saying the diaspora JSON exports would be renamed?
Dennis Schubert February 10th, 2016 19:50
If you write a spec that is well-written and suitable for diaspora's needs (that is, we can somehow export all fields), I'm sure it's an easy task adapting the spec...
Dennis Schubert February 10th, 2016 19:53
Think about the other side, the people implementing specs. What would you do if you see a spec that literally says "`username` contains the user's name. Be aware that this field may also be called `profile_name`, `name` or `fancyfancy`"? How are you supposed to implement that? Rolling dice while implementing specs is surely not a good thing.
Comrade Senya February 10th, 2016 20:20
Do we follow the federation protocol for the "Delivery package" and for the "I've moved" messages? If so, we must use XML instead of JSON.
Comrade Senya February 10th, 2016 20:25
Also, if we reuse federation protocol for these messages, we don't have to define signing methods in our spec, since signing is already implemented on the salmon level.
Comrade Senya February 10th, 2016 20:30
But there is a problem: if we want to rely on the federation protocol in the spec, we must refer to the federation protocol spec to stay formal, but AFAIK there is no formal specification document for the federation protocol.
Comrade Senya February 10th, 2016 20:48
And in this case "Backup server receive route" is the usual public receive route for the server.
Jason Robinson February 10th, 2016 20:50
I don't think we should mix this and diaspora federation together. At least the spec can't be written like that.
More comments tomorrow, zzzz...
Comrade Senya February 10th, 2016 21:02
Well, I don't know; that is exactly how we can also reuse the endpoints and the signing code. I feel it is optimal to reuse the federation protocol rather than invent one more way to exchange messages. To write the spec we'll then need some formal definition of the federation protocol, at least a short version.
@dennisschubert, what do you think?
Dennis Schubert February 10th, 2016 21:05
but AFAIK there is no formal specification document for the federation protocol.
True, and at this point, there is no point in writing one. After the federation-gem efforts are done, I intend to formalize what we have at that point so we have a reference documentation for further usage and other networks as well.
I'd like to add something for Jason here. You are trying to write a spec, which is much appreciated. "Spec" is short for specification, but I get the feeling that you are actually trying to be as vague as possible. I see why you're trying to do this, but if you want to come out with something that people might actually use, you have to do the opposite. Be as specific as possible. Otherwise, people will never be able to adopt the spec, everyone is going to maintain their own set of interpretation errors and nothing will ever be compatible.
Example: Defining terminology.
Bad: `username` contains a string.
Good: `username` contains an identifier that uniquely identifies a user across the network.
Example: Defining formats.
Bad: `groups` contains an array of contact groups.
Good: `groups` contains an array of objects specifying groups. Group objects should contain a `title` attribute as well as a `members` array. `title` should be the human-readable identifier. `members` should be an array of `username`s.
Do you understand what I'm trying to say?
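As an illustration, data conforming to the "Good" `groups` definition above might look like this (example values only):

```json
{
  "groups": [
    {
      "title": "Family",
      "members": ["alice@pod.example", "bob@pod.example"]
    }
  ]
}
```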
Jason Robinson February 11th, 2016 20:09
Regarding the diaspora protocol: yes, the initial version in the wiki that I wrote in October had the idea that the whole backup/restore would be implemented with minimal changes by hooking up to the current federation endpoints, signing, etc. This is definitely the easy route, and to be honest I wouldn't have imagined any other way.
However, it is clear that there is heavy work going into the protocol itself, and I'm not at all sure it makes sense to smash in a concept like this, which has absolutely nothing to do with social content. This would not only weaken the protocol but also make the backup/restore feature unstable due to breaking changes in the protocol itself.
As such, I began looking at it from outside of diaspora. As a separate feature, not conflicting or requiring the diaspora, or any other, protocol, it would become much more stable to implement. The downsides are of course clear; more time spent on defining the spec (or specification, thanks Dennis) and more time spent on development (signing methods, endpoints, etc). The end result however would be cleaner and more stable not just for the backup/restore feature, but also the diaspora federation protocol.
I know it is hard for people here to see the federated web outside the context of diaspora, but that is exactly what I would like to do here.
So, there are two ways forward:
1) Implement using the diaspora protocol. Some of the code would then have to be implemented in the diaspora federation gem, making it a permanent feature there.
2) Continue writing a more generic specification and implement using that, with no changes at all to the diaspora federation gem.
I'm not too interested in 1) to be honest, but if that is chosen, then imho implementation can start whenever a basic flow is agreed upon, and it's already there written waiting for comments. There is no need to write fancy specifications because implementing it would require understanding something that has no specification and that needs reverse engineering. If anyone is able to do that they should be able to follow this half done spec already.
I'm really interested in 2) and that doesn't really even depend on diaspora implementing it. For that, there is much work to do and many comments needed, from outside of diaspora developers. I'll do what I can to get some comments.
Btw, regarding 2), I'd like to make use of as much of existing or upcoming standards as possible.
Using ActivityStreams2 for the JSON messages for example would enable using existing and upcoming parser libraries - the spec is currently a W3C working draft and is moving on nicely. Backup/restore would use extensions to it where needed as per AS2 spec.
For signatures, there are a few that have been mentioned within the W3C SocialWG group. The most important thing would be to use something that has support on implementation or standardization level. It is possible signatures will be a part of ActivityPub, the federation spec from the SocialWG, but this is not guaranteed unfortunately.
So, how should it be for diaspora - backup/restore on top of the diaspora protocol or backup/restore implemented as a feature nothing to do with the diaspora protocol?
Should I create a proposal now, seems a good thing to decide before moving on with specifics?
Dennis Schubert February 11th, 2016 20:12
I still don't see the connection between a spec describing how data should be exported and a spec describing how servers should communicate with each other.
Jason Robinson February 11th, 2016 20:16
I still don’t see the connection between a spec describing how data should be exported and a spec describing how servers should communicate with each other.
You're looking at only the export/import part - which is manual. I only added that by request from @jhass . The main idea of the spec is to make backups automatic, using the whole network as a way to protect user identities (= one pod goes down, only some backups are lost temporarily).
To make this kind of automated system, servers need to understand each other. I'm not sure how that can be done without defining how servers talk to each other in the spec.
Dennis Schubert February 11th, 2016 20:27
I see. As usual, I like well-defined standards more than self-hacked stuff. If you feel like AS is your way of exchanging stuff, sure, go ahead.
But how does ActivityStreams solve your issue? Sure, you got a way to exchange json objects, but ActivityStreams is rather incomplete at defining the fields (which, obviously, is done on purpose to keep AS extendable). You still have to very carefully design the actual fields to export? Isn't that the main issue?
Jason Robinson February 11th, 2016 20:51
You still have to very carefully design the actual fields to export? Isn’t that the main issue?
Yes that is one large issue. It's certainly not the main issue imho, I'd say the signing is very critical.
Anyway, assuming diaspora as a project would want a backup/restore feature, and let's assume for the sake of this discussion that the answer is yes, a decision needs to be made I think whether to implement it on top of the current federation layer or implement it as a separate feature not related to federation layer.
And if you want to discuss content further, I'd say implementing on top of the current diaspora protocol we should just go with the current fields as per export schema (more basic set for the automated backup archives, but from the same set). A clearly defined schema is needed however for 2).
Btw, it's nice you didn't even read the document before commenting on it ;)
Dennis Schubert February 11th, 2016 20:53
Oh, don't worry, I have read it. I stopped at the point you tried to specify how the user has to sign in and what buttons he has to click as well as the point where you started to assume rules about how implementations should store their stuff in the database, which is why I wrote the initial comment. Nice try, though. Maybe it'll work next time.
Feel free to open a proposal.
Jason Robinson started a proposal February 11th, 2016 21:30
If diaspora* implements the backup/restore specification, should it base it on the diaspora federation protocol? Closed 10:07pm - Sunday 28 Feb 2016
Assuming diaspora* might want to implement a backup/restore process (which enables migrating across pods), should it build this support into the diaspora protocol or base it on a separate specification, not modifying the diaspora protocol to support this feature?
Note! The backup/restore spec COULD be one like this, but this proposal is NOT about whether to implement this specification or not.
Votes:
* YES - any backup/export type specification should be built on top of the diaspora federation protocol
* NO - any backup/export type specification should not be implemented on top of the diaspora federation protocol.
For clarifications, see for example comment
Comrade Senya
February 11th, 2016 22:26
Well, I like both ideas. Can't decide yet. See my comment.
Comrade Senya February 11th, 2016 22:40
Well, I like both ideas. The first option I like for its simplicity of implementation. The second is good as it is a completely separate piece of functionality, and the generalization would make it possible to use the spec even in some applications where we don't have social media at all. Like integrating backup/restore in some completely different platform, like, for example, Loomio. It is a big, complex and interesting task. The main question is whether it makes much sense to implement this spec somewhere outside the diaspora* federation world? If one could name a few decent applications of the spec outside the world of the software which supports the diaspora* federation protocol, I would probably support the latter proposition.
I believe that the issues with diaspora* federation protocol instability are not a big deal for the topic, because there will be transition periods which will guarantee backward compatibility for sensible periods of time, so it'll be completely transparent for the feature because we operate on the upper level and the transport level is provided to us.
Renato Zippert
February 12th, 2016 00:20
Looks safer to me to avoid allowing user information traveling on the network in case of any kind of vulnerability that could trigger the process, entirely or partially, without user consent, to a malicious destination or with a man-in-the-middle.
Dennis Schubert February 12th, 2016 00:23
@renatozippert "disagree" does not mean the data will not be sent over the network. Apparently, that's not even a point open for discussion here. The question is if the spec should be based on diasporas protocol or something different like ActivityStreams.
Renato Zippert February 12th, 2016 00:32
Thanks @dennisschubert... I'll abstain for now then and read more about it. I need to understand this better.
Jason Robinson February 12th, 2016 18:00
If it helps, we can first vote whether to work on this feature at all - if that is required to continue. I feel it would be better to decide how to do it first and then vote to approve it. It would not be fair to vote on approving something that isn't finished as a plan.
@renatozippert the whole backup/restore idea here is about automatic encrypted backups over the network. If you are wary of this please vote disagree when there is a vote to approve the feature for implementation - as said the current proposal is not approving the feature for implementation.
@dennisschubert - ActivityStreams2 is not a protocol, it is a content serialization specification. I doubt it can be used to federate anything...
Renato Zippert
February 12th, 2016 20:48
Can't see clearly the benefits or issues of both options.
Renato Zippert February 12th, 2016 20:54
The problem of basing this on the protocol would be having to change the protocol that is related to other projects?
Right now I tend to agree with basing on the protocol, as creating the backups and moving profiles would be much more elegantly implemented with protocol support, not as something "external".
Florian Staudacher February 13th, 2016 12:19
Ultimately I would love to see our federation protocol getting used for all kinds of stuff, but right now I think it wouldn't make sense to take it much further than "social media" content.
For me, an account backup-replication protocol is out of scope of the original federation. I'd imagine in a far away future you could sync your account backup to some not-even-related-to-diaspora service (e.g. owncloud?) and that shouldn't depend on implementing anything else other than the backup endpoint, imho.
Comrade Senya February 14th, 2016 23:30
Ok, how about the following idea.
We agree on the backup/restore message set and implement it with both a federation-based and a non-federation-based transport protocol? This is redundant work, but not that big, and it will make everyone happy. After a while we'll see which version got integrated into third-party services (that's why we standardize it at all) and drop the one that is redundant. I read that this is how decisions were sometimes made in the Linux kernel - by including two competing technologies.
So, as for the specification, it must then be made agnostic to the transport protocol used.
P.S. Does XMPP have anything on the topic?
Comrade Senya
February 23rd, 2016 11:10
I believe reusability is valuable, and the acc. b/r spec may be written in a neutral way (no links to federation, so it may be used outside) while staying compatible with the federation protocol.
Comrade Senya started a proposal February 29th, 2016 14:50
Shall we encrypt the backup archive when a user exports it manually? Closed 3:07am - Tuesday 8 Mar 2016
Currently we have the archive export feature and it is being exported unencrypted. It is convenient since a user can browse the contents of the archive himself. On the other hand, if the user by his mistake (or some malicious software) passes the archive to a third party, it could lead to bad consequences. We might encrypt the archive with a password, as was proposed for the automatic backup feature. (Here is a possible rough implementation of the archive encryption.) This way it's going to be safer, but unreadable without some extra tools (here is a snippet for decrypting archives produced by the modified export feature).
Brad Koehn February 29th, 2016 15:07
I would recommend not encrypting the data; it adds additional complexity to the system that the developer and user ultimately need to keep track of and introduces additional failure points, while not providing much in the way of actual benefit to the user.
Any user can encrypt the data him/herself using a variety of tools once it's downloaded.
Dennis Schubert
February 29th, 2016 19:46
If the user is not able to keep the manual export safe, he is also not able to keep the key/passphrase used for decryption safe, so there is no loss and no win.
[deactivated account] February 29th, 2016 21:48
I would also recommend not encrypting the data; however, it would be good to tell users that the data is unencrypted and should be kept in a safe place before downloading.
Dennis Schubert March 1st, 2016 18:52
@bradkoehn Blocking proposals is only valid if the proposal itself is invalid, which it is not. Please don't block, but disagree.
I'm going to abstain from voting because I've had next to zero participation in the whole backup/restore conversation. And I feel like there are people here who know better than me what should happen. But I have this to say:
Isn't this why we use https? So that files are transferred over encrypted lines? I think it would just introduce another layer of complexity, for not much benefit.
And correct me if I'm wrong, but if you were going to send an encrypted archive, it would have to be encrypted on the server side...using keys generated....somewhere; either locally or on the server and they'd have to be sent over the same (again, already encrypted) lines we're worried about someone tapping. I'd rather just do all of it locally.
I've just repeated what other people have already said, and probably poorly. So, my current recommendation is no, but I'm not an expert so I'm not voting.
Maybe encryption should be optional?
Blocking proposals is only valid if the proposal itself is invalid, which it is not. Please don’t block, but disagree.
When was that decided, Dennis? My memory is that the decision was that a 'block' was considered a 'strong disagreement' and no more. At the moment the only proposal regarding this that I can find is this one. I'd be grateful if you'd point me to the proposal which changed this so that I know the current situation. Thanks.
Dennis Schubert March 4th, 2016 08:16
When was that decided, Dennis?
Actually, good question. That's how we handled it in the past. Do you feel like we should write official documentation?
Comrade Senya March 4th, 2016 08:45
Pavithran S
March 5th, 2016 13:21
Please don't overdo things and increase complexity. It's the user's duty to encrypt or keep his data safe. Since Chris mentioned that we use HTTPS for transfers anyway, shouldn't that be enough? :open_mouth:
Mathijs de Bruin
March 7th, 2016 08:33
Voting behaviour of community members should be private at all times. When using OpenSSL as in the snippet, decryption is simple enough. When in doubt, make it opt-out.
Frode Lindeijer
March 7th, 2016 16:19
How about making it optional through an "Encrypt exported data" checkbox? :monkey:
SuperTux88
March 8th, 2016 00:47
This doesn't add security, because the password/key would be sent over the same connection as the backup-archive. It only adds complexity on the server side and on the user-side, if the user wants to open the archive.
Comrade Senya started a proposal May 10th, 2016 22:20
Include comments of other people to the user's data archive Closed 1:02am - Monday 23 May 2016
Proposal passes, comments will be included in the user's archive.
We have the feature request for this. The user's data archive which we export now includes only the user's own comments. But if we want to import posts with comments from the archive, the comments won't look consistent unless we also include other people's comments on those posts in the archive. However, it may be viewed as a data-safety violation. Should we add other people's comments to the user data archive?
Comrade Senya
May 10th, 2016 22:26
The user who exports his data already has access to the other people's comments on his posts. So the export feature will just represent the data which is already available in a different manner, it won't expose anything new, so it isn't unsafe.
Dennis Schubert
May 10th, 2016 22:29
When a user comments under a limited post, the commenter expects that the comment will not leave the scope of the given post. During exporting, control over the data can be lost (for example, people not caring about their stuff, uploading it to public sources...)
SuperTux88
May 11th, 2016 01:10
We can't control it anyway (everybody can copy/paste). Also, if the post-author moves to another pod, the new pod needs to know all comments, so that comment-authors can still delete their comments (deletions are relayed via the post-author).
To comment on a limited post means you trust your friend and his podmin. With this proposition, a user can change his pod, and you will be forced to trust the new podmin for the old data you posted. On the other hand, you don't have access to your friend's contacts anyway, so you don't know with which pods the comment is shared initially, and the whole model of diaspora* is based on the assumption that you trust your friend, and your friend trusts his podmin, so the situation seems no worse here than before. The only case left is how the user will store the archive. I'm wondering what Facebook does? Does it include friends' comments in the export?
I'm wondering what Facebook does? Does it include friends comments in the export?
It may have changed since I did it (more than 1 year ago), but it didn't save any comments (neither your comments on others' messages nor friends' comments on your messages).
Note that Facebook export was not intended to import the profile elsewhere later.
Comrade Senya · January 8th, 2016 14:27
I hope I can participate in it in a while, after some present jobs are done. | https://www.loomio.org/d/qsVJ2K1t/backup-and-restore | CC-MAIN-2020-10 | refinedweb | 6,960 | 61.46 |
This post is a super simple introduction to CUDA, the popular parallel computing platform and programming model from NVIDIA.
So, you've heard about CUDA and you are interested in learning how to use it in your own applications. If you are a C or C++ programmer, this blog post should give you a good start. To follow along, you'll need a computer with a CUDA-capable GPU and the CUDA Toolkit installed.
Let’s get started!
Starting Simple
We’ll start with a simple C++ program that adds the elements of two arrays with a million elements each.
#include <iostream>
#include <math.h>

// function to add the elements of two arrays
void add(int n, float *x, float *y)
{
  for (int i = 0; i < n; i++)
      y[i] = x[i] + y[i];
}

int main(void)
{
  int N = 1<<20; // 1M elements

  float *x = new float[N];
  float *y = new float[N];

  // initialize x and y arrays on the host
  for (int i = 0; i < N; i++) {
    x[i] = 1.0f;
    y[i] = 2.0f;
  }

  // Run kernel on 1M elements on the CPU
  add(N, x, y);

  // Check for errors (all values should be 3.0f)
  float maxError = 0.0f;
  for (int i = 0; i < N; i++)
    maxError = fmax(maxError, fabs(y[i]-3.0f));
  std::cout << "Max error: " << maxError << std::endl;

  // Free memory
  delete [] x;
  delete [] y;

  return 0;
}
First, compile and run this C++ program. Put the code above in a file and save it as
add.cpp, and then compile it with your C++ compiler. I’m on a Mac so I’m using
clang++, but you can use
g++ on Linux or MSVC on Windows.
> clang++ add.cpp -o add
Then run it:
> ./add
Max error: 0.000000
(On Windows you may want to name the executable add.exe and run it with
.\add.)
As expected, it prints that there was no error in the summation and then exits. Now I want to get this computation running (in parallel) on the many cores of a GPU. It’s actually pretty easy to take the first steps.
First, I just have to turn our
add function into a function that the GPU can run, called a kernel in CUDA. To do this, all I have to do is add the specifier
__global__ to the function, which tells the CUDA C++ compiler that this is a function that runs on the GPU and can be called from CPU code.
// CUDA Kernel function to add the elements of two arrays on the GPU
__global__
void add(int n, float *x, float *y)
{
  for (int i = 0; i < n; i++)
      y[i] = x[i] + y[i];
}
These
__global__ functions are known as kernels, and code that runs on the GPU is often called device code, while code that runs on the CPU is host code.
Memory Allocation in CUDA
To compute on the GPU, I need to allocate memory accessible by the GPU. Unified Memory in CUDA makes this easy by providing a single memory space accessible by all GPUs and CPUs in your system. To allocate data in unified memory, call
cudaMallocManaged(), which returns a pointer that you can access from host (CPU) code or device (GPU) code. To free the data, just pass the pointer to
cudaFree().
I just need to replace the calls to
new in the code above with calls to
cudaMallocManaged(), and replace calls to
delete [] with calls to
cudaFree.
// Allocate Unified Memory -- accessible from CPU or GPU
float *x, *y;
cudaMallocManaged(&x, N*sizeof(float));
cudaMallocManaged(&y, N*sizeof(float));

...

// Free memory
cudaFree(x);
cudaFree(y);
Finally, I need to launch the
add() kernel, which invokes it on the GPU. CUDA kernel launches are specified using the triple angle bracket syntax <<< >>>. I just have to add it to the call to
add before the parameter list.
add<<<1, 1>>>(N, x, y);
Easy! I’ll get into the details of what goes inside the angle brackets soon; for now all you need to know is that this line launches one GPU thread to run
add().
Just one more thing: I need the CPU to wait until the kernel is done before it accesses the results (because CUDA kernel launches don’t block the calling CPU thread). To do this I just call
cudaDeviceSynchronize() before doing the final error checking on the CPU.
Here’s the complete code:
#include <iostream>
#include <math.h>

// Kernel function to add the elements of two arrays
__global__
void add(int n, float *x, float *y)
{
  for (int i = 0; i < n; i++)
    y[i] = x[i] + y[i];
}

int main(void)
{
  int N = 1<<20;

  // Allocate Unified Memory -- accessible from CPU or GPU
  float *x, *y;
  cudaMallocManaged(&x, N*sizeof(float));
  cudaMallocManaged(&y, N*sizeof(float));

  // initialize x and y arrays on the host
  for (int i = 0; i < N; i++) {
    x[i] = 1.0f;
    y[i] = 2.0f;
  }

  // Run kernel on 1M elements on the GPU
  add<<<1, 1>>>(N, x, y);

  // Wait for GPU to finish before accessing on host
  cudaDeviceSynchronize();

  // Check for errors (all values should be 3.0f)
  float maxError = 0.0f;
  for (int i = 0; i < N; i++)
    maxError = fmax(maxError, fabs(y[i]-3.0f));
  std::cout << "Max error: " << maxError << std::endl;

  // Free memory
  cudaFree(x);
  cudaFree(y);

  return 0;
}
CUDA files have the file extension
.cu. So save this code in a file called
add.cu and compile it with
nvcc, the CUDA C++ compiler.
> nvcc add.cu -o add_cuda
> ./add_cuda
Max error: 0.000000
This is only a first step, because as written, this kernel is only correct for a single thread, since every thread that runs it will perform the add on the whole array. Moreover, there is a race condition since multiple parallel threads would both read and write the same locations.
Note: on Windows, you need to make sure you set Platform to x64 in the Configuration Properties for your project in Microsoft Visual Studio.
Profile it!
I think the simplest way to find out how long the kernel takes to run is to run it with
nvprof, the command line GPU profiler that comes with the CUDA Toolkit. Just type
nvprof ./add_cuda on the command line:
$ nvprof ./add_cuda
==3355== NVPROF is profiling process 3355, command: ./add_cuda
Max error: 0
==3355== Profiling application: ./add_cuda
==3355== Profiling result:
Time(%)      Time     Calls       Avg       Min       Max  Name
100.00%  463.25ms         1  463.25ms  463.25ms  463.25ms  add(int, float*, float*)
...
Above is the truncated output from
nvprof, showing a single call to
add. It takes about half a second on an NVIDIA Tesla K80 accelerator, and about the same time on an NVIDIA GeForce GT 740M in my 3-year-old Macbook Pro.
Let’s make it faster with parallelism.
Picking up the Threads
Now that you’ve run a kernel with one thread that does some computation, how do you make it parallel? The key is in CUDA’s
<<<1, 1>>> syntax. This is called the execution configuration, and it tells the CUDA runtime how many parallel threads to use for the launch on the GPU. There are two parameters here, but let's start by changing the second one: the number of threads in a thread block. CUDA GPUs run kernels using blocks of threads that are a multiple of 32 in size, so 256 threads is a reasonable size to choose.
add<<<1, 256>>>(N, x, y);
If I run the code with only this change, it will do the computation once per thread, rather than spreading the computation across the parallel threads. To do it properly, I need to modify the kernel. CUDA C++ provides keywords that let kernels get the indices of the running threads. Specifically,
threadIdx.x contains the index of the current thread within its block, and
blockDim.x contains the number of threads in the block. I’ll just modify the loop to stride through the array with parallel threads.
__global__
void add(int n, float *x, float *y)
{
  int index = threadIdx.x;
  int stride = blockDim.x;
  for (int i = index; i < n; i += stride)
      y[i] = x[i] + y[i];
}
The
add function hasn’t changed that much. In fact, setting
index to 0 and
stride to 1 makes it semantically identical to the first version.
Save the file as
add_block.cu and compile and run it in
nvprof again. For the remainder of the post I’ll just show the relevant line from the output.
Time(%)      Time     Calls       Avg       Min       Max  Name
100.00%  2.7107ms         1  2.7107ms  2.7107ms  2.7107ms  add(int, float*, float*)
That’s a big speedup (463ms down to 2.7ms), but not surprising since I went from 1 thread to 256 threads. The K80 is faster than my little Macbook Pro GPU (at 3.2ms). Let’s keep going to get even more performance.
Out of the Blocks
CUDA GPUs have many parallel processors grouped into Streaming Multiprocessors, or SMs. Each SM can run multiple concurrent thread blocks. As an example, a Tesla P100 GPU based on the Pascal GPU Architecture has 56 SMs, each capable of supporting up to 2048 active threads. To take full advantage of all these threads, I should launch the kernel with multiple thread blocks.
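To put that capacity in concrete numbers, here is the simple arithmetic implied by the figures above (an illustration, not from the original post):

```python
sms = 56                   # streaming multiprocessors on a Tesla P100
max_threads_per_sm = 2048  # maximum resident threads per SM
# Total threads the GPU can keep in flight at once:
print(sms * max_threads_per_sm)  # -> 114688
```

With only 256 threads launched so far, the vast majority of that capacity sits idle.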
By now you may have guessed that the first parameter of the execution configuration specifies the number of thread blocks. Together, the blocks of parallel threads make up what is known as the grid. Since I have
N elements to process, and 256 threads per block, I just need to calculate the number of blocks to get at least N threads. I simply divide
N by the block size (being careful to round up in case
N is not a multiple of
blockSize).
int blockSize = 256;
int numBlocks = (N + blockSize - 1) / blockSize;
add<<<numBlocks, blockSize>>>(N, x, y);
I also need to update the kernel code to take into account the entire grid of thread blocks. CUDA provides
gridDim.x, which contains the number of blocks in the grid, and
blockIdx.x, which contains the index of the current thread block in the grid. Figure 1 illustrates the approach to indexing into an array (one-dimensional) in CUDA using
blockDim.x,
gridDim.x, and
threadIdx.x. The idea is that each thread gets its index by computing the offset to the beginning of its block (the block index times the block size:
blockIdx.x * blockDim.x) and adding the thread’s index within the block (
threadIdx.x). The code
blockIdx.x * blockDim.x + threadIdx.x is idiomatic CUDA.
__global__
void add(int n, float *x, float *y)
{
  int index = blockIdx.x * blockDim.x + threadIdx.x;
  int stride = blockDim.x * gridDim.x;
  for (int i = index; i < n; i += stride)
    y[i] = x[i] + y[i];
}
The updated kernel also sets
stride to the total number of threads in the grid (
blockDim.x * gridDim.x). This type of loop in a CUDA kernel is often called a grid-stride loop.
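The grid-stride indexing scheme is easy to sanity-check outside of CUDA. Here is a small Python simulation (my own, not from the post) verifying that the loop visits every array element exactly once, regardless of how N relates to the total thread count:

```python
def grid_stride_indices(n, block_dim, grid_dim):
    """Collect every index a grid-stride loop visits, across all threads."""
    visited = []
    for block_idx in range(grid_dim):          # blockIdx.x
        for thread_idx in range(block_dim):    # threadIdx.x
            index = block_idx * block_dim + thread_idx
            stride = block_dim * grid_dim
            for i in range(index, n, stride):
                visited.append(i)
    return visited

# 4 blocks of 256 threads covering 1000 elements: each element exactly once.
v = grid_stride_indices(1000, 256, 4)
print(sorted(v) == list(range(1000)))  # -> True
```

The same property holds when the grid is smaller than the array, which is the whole point of the stride.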
Save the file as
add_grid.cu and compile and run it in
nvprof again.
Time(%)      Time     Calls       Avg       Min       Max  Name
100.00%  94.015us         1  94.015us  94.015us  94.015us  add(int, float*, float*)
That’s another 28x speedup, from running multiple blocks on all the SMs of a K80! We’re only using one of the 2 GPUs on the K80, but each GPU has 13 SMs. Note the GeForce in my laptop has 2 (weaker) SMs and it takes 680us to run the kernel.
Summing Up
Here’s a rundown of the performance of the three versions of the
add() kernel on the Tesla K80 and the GeForce GT 750M.
As you can see, we can achieve very high bandwidth on GPUs. The computation in this post is very bandwidth-bound, but GPUs also excel at heavily compute-bound computations such as dense matrix linear algebra, deep learning, image and signal processing, physical simulations, and more.
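Here is the back-of-the-envelope arithmetic behind that bandwidth claim (my illustration, not from the post): the kernel reads x and y and writes y, so it moves three arrays' worth of data in the measured kernel time.

```python
N = 1 << 20                 # 1M float elements
bytes_moved = 3 * N * 4     # read x, read y, write y; 4 bytes per float
seconds = 94.015e-6         # kernel time from the nvprof run above (K80)
bandwidth_gbs = bytes_moved / seconds / 1e9
print(round(bandwidth_gbs, 1))  # -> 133.8 (effective GB/s)
```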
Exercises
To keep you going, here are a few things to try on your own. Please post about your experience in the comments section below.
- Browse the CUDA Toolkit documentation. If you haven't installed CUDA yet, check out the Quick Start Guide and the installation guides. Then browse the Programming Guide and the Best Practices Guide. There are also tuning guides for various architectures.
- Experiment with printf() inside the kernel. Try printing out the values of threadIdx.x and blockIdx.x for some or all of the threads. Do they print in sequential order? Why or why not?
- Print the value of threadIdx.y or threadIdx.z (or blockIdx.y) in the kernel. (Likewise for blockDim and gridDim). Why do these exist? How do you get them to take on values other than 0 (1 for the dims)?
- If you have access to a Pascal-based GPU, try running add_grid.cu on it. Is performance better or worse than the K80 results? Why? (Hint: read about Pascal's Page Migration Engine and the CUDA 8 Unified Memory API.) For a detailed answer to this question, see the post Unified Memory for CUDA Beginners.
Where To From Here?
I hope that this post has whet your appetite for CUDA and that you are interested in learning more and applying CUDA C++ in your own computations. If you have questions or comments, don’t hesitate to reach out using the comments section below.
I plan to follow up this post with further CUDA programming material, but to keep you busy for now, there is a whole series of older introductory posts that you can continue with (and that I plan on updating / replacing in the future as needed):
- Accelerated Ray Tracing in One Weekend with CUDA! | https://devblogs.nvidia.com/even-easier-introduction-cuda/ | CC-MAIN-2020-24 | refinedweb | 2,098 | 73.78 |
The Auth API provides a low-level REST API for adding strong two-factor authentication to your website or application. It is used, for example, as the backend for Duo Unix. This API may be appropriate for use (instead of Duo Web) if your application cannot directly display rich web content, or requires complete control over the appearance and functionality of the authentication prompt. However, it is more complicated to integrate than Duo Web.
This version of the Auth API will continue to be supported, but if you're trying Auth API for the first time you should check out the current version. The Auth API was formerly named Duo REST API.
When a user (bob) wishes to authenticate, your application usually would proceed roughly as follows:

1. Call the /preauth method to determine whether bob is authorized to log in and, if so, which authentication factors are available to him.
2. Present the available factors to bob and collect his choice (or a passcode).
3. Call the /auth method with the chosen factor to perform the second-factor authentication, and act on the result.
Normally, the auth method will not return a response until the authentication process has completed. However, it permits an optional parameter,
async. If the application provides a value of '1' for the
async argument, then the auth method will instead return a unique identifier which can be used to poll the status of the authentication attempt.
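The async flow just described amounts to a start-then-poll loop against /auth and /status. A minimal sketch follows (my own, assuming Python; `call_api` is a hypothetical stand-in for a helper that signs and sends a request and returns the parsed JSON response — it is not part of Duo's API):

```python
def authenticate_async(call_api, user, factor):
    # Start the authentication; with async=1 the /auth call returns
    # immediately with a transaction ID instead of blocking.
    r = call_api("POST", "/rest/v1/auth",
                 {"user": user, "factor": factor, "async": "1"})
    txid = r["response"]["txid"]

    # Long-poll /status until a final result ("allow" or "deny") arrives.
    while True:
        s = call_api("GET", "/rest/v1/status", {"txid": txid})["response"]
        if s.get("result") in ("allow", "deny"):
            return s["result"] == "allow"

# Demo with a canned fake transport standing in for real signed HTTP calls.
_responses = iter([
    {"response": {"txid": "45f7c92b-f45f-4862-8545-e0f58e78075a"}},
    {"response": {"status": "Pushed a login request to your phone."}},
    {"response": {"status": "Success. Logging you in...", "result": "allow"}},
])
def fake_api(method, path, params):
    return next(_responses)

print(authenticate_async(fake_api, "bob", "push1"))  # -> True
```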
Review the API Details below to see how to construct your first API request.
Click Protect an Application and locate the entry for Auth API in the applications list. Click Protect to the far-right to configure the application and get your integration key, secret key, and API hostname. You'll need this information to complete your setup. See Protecting Applications for more information about protecting applications in Duo and additional application options.
The security of your Duo application is tied to the security of your secret key (skey). Secure it as you would any sensitive credential. Don't share it with unauthorized individuals or email it to anyone under any circumstances!
/ping
The
/ping method acts as a "liveness check" that can be called to verify that Duo is up before trying to call other methods. Unlike the other API methods, this one does not have to be signed with the Authorization header and may be sent over HTTP for speed.
GET /rest/v1/ping
A successful response will be returned in the standard container format described above and will contain
pong in the
response field.
Example successful response (JSON):
{ "stat": "OK", "response": "pong" }
/check
The
/check method can be called to verify that the integration and secret keys are valid, and that the signature is being generated properly.
GET /rest/v1/check
A successful response will be returned in the standard container format described above and will contain
valid in the
response field.
Available parameters:
None.
Response codes:
Example successful response (JSON):
{ "stat": "OK", "response": "valid" }
The
/logo endpoint provides a programmatic way to retrieve your stored logo.
GET /rest/v1/logo
Parameters
None required.
Response Codes
Response Format
On success, the response body is Content-Type image/png, containing the logo.
On failure, the response is the standard error JSON.
The
/preauth method determines whether a user is authorized to log in, and (if so) returns the user's available authentication factors.
POST /rest/v1/preauth
Available parameters:
The response will be returned in the container format described above. For successful responses, the payload will contain the following key/value pairs:
Example successful response:
{
  "stat": "OK",
  "response": {
    "result": "auth",
    "factors": {
      "default": "push1",
      "1": "push1",
      "2": "push2",
      "3": "phone1",
      "4": "phone2",
      "5": "sms1",
      "6": "sms2"
    },
    "prompt": "Duo login for bob\n\n 1. Duo Push to XXX-XXX-1234\n 2. Duo Push to XXX-XXX-5678\n 3. Phone call to XXX-XXX-1234\n 4. Phone call to XXX-XXX-5678\n 5. SMS passcodes to XXX-XXX-1234 (next code starts with: B)\n 6. SMS passcodes to XXX-XXX-5678\n\nPasscode or option (1-6): "
  }
}
The
/auth method performs second-factor authentication for a user by verifying a passcode, placing a phone call, or sending a push notification to the user's smartphone app.
POST /rest/v1/auth
Parameters:
Additionally, you will need to pass some factor-specific parameters:
The response will be returned in the container format described above. For successful responses, the payload will contain the following key/value pairs:
If "async" was not enabled:
Example successful response (JSON):
{ "stat": "OK", "response": { "status": "Success. Logging you in...", "result": "allow" } }
If "async" was enabled:
Example successful response (JSON):
{ "stat": "OK", "response": { "txid": "45f7c92b-f45f-4862-8545-e0f58e78075a" } }
/status
The
/status method "long-polls" for the next status update from the authentication process for a given transaction. That is to say, if no status update is available at the time the request is sent, it will wait until there is an update before returning a response.
GET /rest/v1/status
Available parameters:
The response will be returned in the container format described above. For successful responses, the payload will contain the following key/value pairs:
Example successful response (JSON):
{ "stat": "OK", "response": { "status": "Success. Logging you in...", "result": "allow" } }
All API methods use your API hostname (e.g. api-XXXXXXXX.duosecurity.com).
Methods always use HTTPS. Unsecured HTTP is not supported.
All requests must have "Authorization" and "Date" headers.
If the request method is GET or DELETE, URL-encode parameters and send them in the URL query string like this:
/rest/v1/check?realname=First%20Last&username=root. If the request method is POST, URL-encode the parameters and send them in the request body instead (with Content-Type: application/x-www-form-urlencoded, as in the example below).
The API uses HTTP Basic Authentication to authenticate requests. Use your Duo application's integration key as the HTTP Username.
Generate the HTTP Password as an HMAC signature of the request. This will be different for each request and must be re-generated each time.
To construct the signature, first build an ASCII string from your request, using the following components:

- The current date and time (the same value sent in the Date header)
- The HTTP method (uppercase)
- Your API hostname (lowercase)
- The request path
- The URL-encoded request parameters, sorted by parameter name

Then concatenate these components with line feed (\n) newlines. For example:
Tue, 21 Aug 2012 17:29:18 -0000 POST api-xxxxxxxx.duosecurity.com /rest/v1/check realname=First%20Last&username=root
GET requests also use this five-line format:
Tue, 21 Aug 2012 17:29:18 -0000 GET api-xxxxxxxx.duosecurity.com /rest/v1/check username=root
Lastly, compute the HMAC-SHA1 of this canonical representation, using your Duo application's secret key as the HMAC key. Send this signature as hexadecimal ASCII (i.e. not raw binary data). Use HTTP Basic Authentication for the request, using your integration key as the username and the HMAC-SHA1 signature as the password.
For example, here are the headers for the above POST request to
api-XXXXXXXX.duosecurity.com/rest/v1/check, using
DIWJ8X6AEYOR5OMC6TQ1 as the integration key and
Zh5eGmUq9zpfQnyUIu5OL9iWoMMv5ZNmk3zLJ4Ep as the secret key:
Date: Tue, 21 Aug 2012 17:29:18 -0000 Authorization: Basic RElXSjhYNkFFWU9SNU9NQzZUUTE6MmQ5N2Q2MTY2MzE5NzgxYjVhM2EwN2FmMzlkMzY2ZjQ5MTIzNGVkYw== Host: api-XXXXXXXX.duosecurity.com Content-Length: 35 Content-Type: application/x-www-form-urlencoded
Separate HTTP request header lines CRLF newlines.
The following Python function can be used to construct the "Authorization" and "Date" headers:
import base64, email, hmac, hashlib, urllib def sign(method, host, path, params, skey, ikey): """ Return HTTP Basic Authentication ("Authorization" and "Date") headers. method, host, path: strings from request params: dict of request parameters skey: secret key ikey: integration key """ # create canonical string now = email.Utils.formatdate() canon = [now, method.upper(), host.lower(), path] args = [] for key in sorted(params.keys()): val = params[key] if isinstance(val, unicode): val = val.encode("utf-8") args.append( '%s=%s' % (urllib.quote(key, '~'), urllib.quote(val, '~'))) canon.append('&'.join(args)) canon = '\n'.join(canon) # sign canonical string sig = hmac.new(skey, canon, hashlib.sha1) auth = '%s:%s' % (ikey, sig.hexdigest()) # return headers return {'Date': now, 'Authorization': 'Basic %s' % base64.b64encode(auth)}
Need some help? Take a look at our Auth API Knowledge Base articles or Community discussions. For further assistance, contact Support. | https://duo.com/docs/authapi-v1 | CC-MAIN-2021-17 | refinedweb | 1,260 | 56.66 |
#include <UT_Options.h>
Definition at line 985 of file UT_Options.h.
UT_OptionsHolder can be constructed with UT_OptionsHolder::REFERENCE to create a shallow reference to the const char *.
Definition at line 990 of file UT_Options.h.
Definition at line 993 of file UT_Options.h.
Will make a copy of the provided options.
Definition at line 1000 of file UT_Options.h.
Will make a shallow reference.
Definition at line 1011 of file UT_Options.h.
Makes a shallow reference to the contents of the UT_OptionsRef.
Definition at line 1018 of file UT_Options.h.
Makes a deep copy of the provided UT_OptionsRef. This constructor is not marked explicit since we often want this conversion (e.g. when inserting a UT_OptionsRef into a UT_OptionsMap, as with the const char* constructor).
Construct as a sentinel value.
Definition at line 1031 of file UT_Options.h.
Makes a copy of the provided options.
Definition at line 1038 of file UT_Options.h.
Move constructor. Steals the working data from the original.
Definition at line 1045 of file UT_Options.h.
Returns a writeable UT_Options which will modify the contents of this options holder. When this is copied or deleted, the returned pointer must not be used any more. (Ideally we could erase ourself and return a UniquePtr for correct life time, but we can't steal from a shared pointer)
Use update() whereever possible instead as it is much safer.
Definition at line 1091 of file UT_Options.h.
Makes a bit-wise copy of the options and adjust the reference count.
Definition at line 1052 of file UT_Options.h.
Move the contents of about-to-be-destructed options s to this options.
Definition at line 1061 of file UT_Options.h.
Definition at line 1068 of file UT_Options.h.
Definition at line 1074 of file UT_Options.h.
Updates the contents of this option, first making sure it is unique. The provided operator should take a reference to a UT_Options that it will update. UT_OptionsHolder value; value.update([](UT_Options &opt) { opt.setOptionS("test", "bar"); });
Definition at line 1113 of file UT_Options.h.
Friend specialization of std::swap() to use UT_OptionsHolder::swap()
Definition at line 1121 of file UT_Options.h.
Friend specialization of std::swap() to use UT_OptionsHolder::swap()
Definition at line 1122 of file UT_Options.h.
In some functions it's nice to be able to return a const-reference to a UT_OptionsHolder. However, in error cases, you likely want to return an empty options. This would mean that you'd have to return a real UT_OptionsHolder (not a const reference). This static lets you return a reference to an empty options.
Definition at line 1130 of file UT_Options.h.
Definition at line 1132 of file UT_Options.h. | https://www.sidefx.com/docs/hdk/class_u_t___options_holder.html | CC-MAIN-2020-50 | refinedweb | 446 | 53.17 |
Lots of external libraries contain state, but one that really contains a *lot* of state is the OpenGL libraries, since OpenGL is specified as a statemachine. This means that when you're writing structured code you quite often want to save and restore chunks of state 'automatically'. For the very most common case (coordinate transformations) Sven gives us 'preservingMatrix' which is extremely handy. Unless I've missed something there's no similar API for saving/restoring arbitrary state variables. It's not hard to write: > {-# OPTIONS -fglasgow-exts #-} > import Graphics.Rendering.OpenGL > import Graphics.UI.GLUT > > preserving :: (HasSetter g, HasGetter g) => g a -> IO t -> IO t > preserving var act = do old <- get var > ret <- act > var $= old > return ret This enables us to write preserving lighting $ do ..... Note that, since IORef is an instance of HasGetter and HasSetter, you can do 'preserving' on any old IORef, not just an openGL StateVar. Also note that the 'makeStateVar' interface that Graphics.Rendering.OpenGL.GL.StateVar exports allows you to make a statevar out of any appropriate action pair (not entirely unrelated to) Sometimes you don't only want to preserve a value, but set a specific temporary value, so: > with :: (HasSetter g, HasGetter g) => g a -> a -> IO t -> IO t > with var val act = do old <- get var > var $= val > ret <- act > var $= old > return ret with lighting Enabled $ do .... (of course, with could be written as with var val act = preserving var $ var $= val >> act ) But this gets really clumsy if you have multiple variables to save/restore, which is really what lead me to write this message in the first place. A cute syntax for doing multiple save/restores at once is given by an existential: > data TemporaryValue = forall a g. 
> (HasGetter g,HasSetter g) => > g a := a > > with' :: [TemporaryValue] -> IO t -> IO t > with' tvs act = do olds <- mapM (\(a := b) -> do old <- get a > return (a := old)) > tvs > ret <- act > mapM_ (\(a := b) -> a $= b) tvs > return ret so we can then write: with' [lighting := Enabled, currentColor := Color4 1 0 1 0] $ do ... and have a type safe list of temporary assignments passed as an argument. And, amazingly, you get decent error messages too: *Main> :t with' [lighting := Enabled, currentColor := Color4 1 0 1 0] with' [lighting := Enabled, currentColor := Color4 1 0 1 0] :: IO t -> IO t *Main> :t with' [lighting := Enabled, currentColor := "Foo"] <interactive>:1:44: Couldn't match expected type `Color4 GLfloat' against inferred type `[Char]' In the second argument of `(:=)', namely `"Foo"' In the expression: currentColor := "Foo" In the first argument of `with'', namely `[lighting := Enabled, currentColor := "Foo"]' Hope someone else finds that useful, Jules | http://www.haskell.org/pipermail/haskell-cafe/2007-October/032568.html | CC-MAIN-2014-42 | refinedweb | 443 | 51.41 |
envz_add, envz_entry, envz_get, envz_merge, envz_remove, envz_strip — environment string support
#include <envz.h>
These functions are glibc-specific.
An argz vector is a pointer to a character buffer together with a length, see argz_add(3). An envz vector is a special argz vector, namely one where the strings have the form "name=value". Everything after the first '=' is considered to be the value. If there is no '=', the value is taken to be NULL. (While the value in case of a trailing '=' is the empty string "".)
These functions are for handling envz vectors.
envz_add() adds the string
"
name=
value" (in case
value is non-NULL) or
"
name" (in case
value is NULL) to the
envz vector (*
envz,
*
envz_len) and
updates *
envz and
*
envz_len. If an
entry with the same
name existed, it is
removed.
envz_entry() looks for
name in the envz
vector (
envz,
envz_len) and returns
the entry if found, or NULL if not.
envz_get() looks for
name in the envz
vector (
envz,
envz_len) and returns
the value if found, or NULL if not. (Note that the value can
also be NULL, namely when there is an entry for
name without '=' sign.)
envz_merge() adds each entry
in
envz2 to
*
envz, as if with
envz_add(). If
override is true, then values
in
envz2 will
supersede those with the same name in *
envz, otherwise not.
envz_remove() removes the
entry for
name from
(*
envz, *
envz_len) if there was one.
envz_strip() removes all
entries with value NULL.
All envz functions that do memory allocation have a return type of error_t, and return 0 for success, and ENOMEM if an allocation error occurs.
#include <stdio.h> #include <stdlib.h> #include <envz.h> int main(int argc, char *argv[], char *envp[]) { int i, e_len = 0; char *str; for (i = 0; envp[i] != NULL; i++) e_len += strlen(envp[i]) + 1; str = envz_entry(*envp, e_len, "HOME"); printf("%s\n", str); str = envz_get(*envp, e_len, "HOME"); printf("%s\n", str); exit(EXIT_SUCCESS); } | http://man.linuxexplore.com/htmlman3/envz_add.3.html | CC-MAIN-2021-21 | refinedweb | 322 | 65.93 |
#include <FXMenuButton.h>
Inheritance diagram for FX::FXMenuButton:
There are many ways to control the placement where the popup will appear; first, the popup may be placed on either of the four sides relative to the menu button; this is controlled by the flags MENUBUTTON_DOWN, etc. Next, there are several attachment modes; the popup's left/bottom edge may attach to the menu button's left/top edge, or the popup's right/top edge may attach to the menu button's right/bottom edge, or both. Also, the popup may apear centered relative to the menu button. Finally, a small offset may be specified to displace the location of the popup by a few pixels so as to account for borders and so on. Normally, the menu button shows an arrow pointing to the direction where the popup is set to appear; this can be turned off by passing the option MENUBUTTON_NOARROWS.
See also: | http://www.fox-toolkit.org/ref16/classFX_1_1FXMenuButton.html | crawl-003 | refinedweb | 155 | 50.5 |
> [tim, believing everything he reads again <wink>]
> ...
> I believed that remaining DELETE_NAMEs could be changed to
> DELETE_GLOBALs, but didn't do that ...
Well, this is interesting! I believe that when you put in compile.c's
"optimize" function, it made a subtle change to language semantics, but
that nobody has noticed. BTW, the LOAD_GLOBAL patch doesn't make it any
worse.
The question is what should this module do?
a = 12
def f():
del a
f()
It actually raises "NameError: undefined local variable" today. But the
Reference Manual, section 4.1 (Code blocks, execution frames, and name
spaces) sez that a name is local to a block if and only if it's _bound_
(anywhere) in the block, and explicitly says "a target occurring in a del
statement does not bind a name". So the docs clearly say that "a" is
global in f, while the implementation doesn't agree.
Now when I hacked in the LOAD_GLOBAL patch, the comments in optimize said
that the local-variable-finding phase did _not_ look at DELETE_NAME
instructions, but the code actually _did_ look at them. And I believe
that's the cause of the discrepancy ("a" is an argument to DELETE_NAME,
so "a" gets stuffed onto the list of locals, so the DELETE_NAME on "a"
gets optimized to DELETE_FAST, and the global "a" is inaccessible).
Note 1: The patch I posted changed the comments to match the code.
Note 2: If you want to keep it this way, I take back the quoted comment
about optimizing DELETE_NAME -> DELETE_GLOBAL above (since all
DELETE_NAMEs are optimized to DELETE_FASTs now, there's nothing
else to be done).
Opinion: Leave the implementation alone and change the reference manual
to match it (-> a name's local iff it's bound or appears as the target of
del):
+ The doc's decision about del targets predates the introduction of the
"global" statement, back to the days when there was no non-excruciating
way for a function to alter a global in any way. If a function
_really_ wants to del a global, it can declare it "global" now.
+ It's more consistent the way it is (since every other attempt to mangle
a global now requires a "global" statement to make the global mangle-
able).
+ If you change the implementation instead of the docs, then depending on
the value of "string", this variation will either delete the global "a"
or won't touch it (e.g., if string=='a=3\n'):
a = 12
def f():
exec(string)
del a
f()
The way things are now, we know for a fact, by static inspection, that
the global "a" won't get clobbered (assuming a non-perverse exec
argument). I think that's Good.
+ If you changed the implementation now, millions of lines of code would
break <snort>.
when-you've-got-a-good-point-beat-it-to-death<grin>-ly y'rs - tim
Tim Peters tim@ksr.com
not speaking for Kendall Square Research Corp | https://legacy.python.org/search/hypermail/python-1994q2/0203.html | CC-MAIN-2021-43 | refinedweb | 496 | 60.65 |
- Author:
- nstrite
- Posted:
- March 23, 2007
- Language:
- Python
- Version:
- .96
- templatetag ifnotequal ifequal template if conditional tag
- Score:
- 12 (after 12 ratings).
More like this
- Showell markup--DRY up your templates by showell 5 years, 9 months ago
- testdata tag for templates by showell 6 years, 3 months ago
- Page numbers with ... like in Digg by Ciantic 6 years, 4 months ago
- Tags & filters for rendering search results by exogen 7 years, 5 months ago
- Easy Conditional Template Tags by fragsworth 6 years, 3 months ago
For the sake of clarity in my templates, I've replaced the two instances of
'endif'with
'end'+TAGNAME. So for now it's
endpyifinstead of
endif. Didn't want to confuse it with any regular
{% endif %}in my templates.
#
Please login first before commenting. | https://djangosnippets.org/snippets/130/ | CC-MAIN-2015-35 | refinedweb | 131 | 70.23 |
Details
Description
Calling unwrap(oracle.jdbc.OraclePreparedStatement.class) on a Bitronix prepared statement wrapper doesn't work while it should, the current code only allows ((OraclePreparedStatement)pstmt.unwrap(PreparedStatement.class)) calls.
The same is true for all wrappers implementing java.sql.Wrapper.
See:
Activity
Is the code in 2.1.3 a proper fix for this? The Javadoc for java.sql.Wrapper.isWrapperFor() reads (emphasis my own):
Returns true if this either implements the interface argument or is directly or indirectly a wrapper for an object that does.
That suggests that unwrap() should check if the delegate is itself a wrapper and call unwrap() on it if necessary, i.e.:
public <T> T unwrap(Class<T> iface) throws SQLException { if (iface.isInstance(delegate)) return (T) delegate; if (delegate instanceof Wrapper) return ((Wrapper)delegate).unwrap(iface); throw new SQLException(getClass().getName() + " is not a wrapper for " + iface.getName()); }
Same for isWrapperFor().
Indeed, this was overlooked.
I'll fix this for the next release, thanks!
I committed a slightly different (but apparently equivalent) patch to the master branch:;a=commit;h=25e15a125985ce25ff22213414d92e83521634fd. I'd be happy if you could review it and report back the outcome.
Thanks!
Sorry for the late feedback. I missed the notice that this issue had changed.
I had a look at the changeset and noticed two points:
First, the change to PoolingDataSource mangles the two cases that the xaDataSource may implement the requested interface directly or that it itself may be a wrapper.
Second, is it possible that a wrapped delegate might only implement JDBC 3 interfaces and thus lack the methods from interface Wrapper? I think JLS-13.4.16 applies here which means a call to isWrapperFor() or unwrap() may fail with an AbstractMethodError.
Fixed and deployed a 2.1.3-SNAPSHOT version in the codehaus snapshot repository containing the fix. | http://jira.codehaus.org/browse/BTM-114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel | CC-MAIN-2013-48 | refinedweb | 306 | 52.15 |
0
Im trying to do a exercise in one of my books and it wants me to use a function to get the factorial of a number entered by the user. Ive tried several different ways and they either get me numbers that make no sense or say if I entered 5, I would just get 25(which is what the code below does). Any help in the right direction would be great, thanks.
#include <iostream> using namespace std; int factorial(int n); int main () { int n; cout << "Enter a number then press ENTER: "; cin >> n; cout << factorial(n) << " "; cin.get(); return 0; } int factorial(int n) { for (int i = n; i >= 1; i--) { return n = n * i; } } | https://www.daniweb.com/programming/software-development/threads/367195/factorial-for-loop-function | CC-MAIN-2017-30 | refinedweb | 118 | 73.92 |
AI with Python – Heuristic Search
Heuristic search plays a key role in artificial intelligence. In this chapter, you will learn in detail about it.
Concept of Heuristic Search in AI
Heuristic is a rule of thumb which leads us to the probable solution. Most problems in artificial intelligence are of exponential nature and have many possible solutions. You do not know exactly which solutions are correct and checking all the solutions would be very expensive.
Thus, the use of heuristic narrows down the search for solution and eliminates the wrong options. The method of using heuristic to lead the search in search space is called Heuristic Search. Heuristic techniques are very useful because the search can be boosted when you use them.
Difference between Uninformed and Informed Search
There are two types of control strategies or search techniques: uninformed and informed. They are explained in detail as given here −
Uninformed Search
It is also called blind search or blind control strategy. It is named so because there is information only about the problem definition, and no other extra information is available about the states. This kind of search techniques would search the whole state space for getting the solution. Breadth First Search (BFS) and Depth First Search (DFS) are the examples of uninformed search.
Informed Search
It is also called heuristic search or heuristic control strategy. It is named so because there is some extra information about the states. This extra information is useful to compute the preference among the child nodes to explore and expand. There would be a heuristic function associated with each node. Best First Search (BFS), A*, Mean and Analysis are the examples of informed search.
Constraint Satisfaction Problems (CSPs)
Constraint means restriction or limitation. In AI, constraint satisfaction problems are the problems which must be solved under some constraints. The focus must be on not to violate the constraint while solving such problems. Finally, when we reach the final solution, CSP must obey the restriction.
Real World Problem Solved by Constraint Satisfaction
The previous sections dealt with creating constraint satisfaction problems. Now, let us apply this to real world problems too. Some examples of real world problems solved by constraint satisfaction are as follows −
Solving algebraic relation
With the help of constraint satisfaction problem, we can solve algebraic relations. In this example, we will try to solve a simple algebraic relation a*2 = b. It will return the value of a and b within the range that we would define.
After completing this Python program, you would be able to understand the basics of solving problems with constraint satisfaction.
Note that before writing the program, we need to install Python package called python-constraint. You can install it with the help of the following command −
>pip install python-constraint
The following steps show you a Python program for solving algebraic relation using constraint satisfaction −
Import the constraint package using the following command −
>from constraint import *
Now, create an object of module named problem() as shown below −
problem = Problem()
Now, define variables. Note that here we have two variables a and b, and we are defining 10 as their range, which means we got the solution within first 10 numbers.
problem.addVariable('a', range(10)) problem.addVariable('b', range(10))
Next, define the particular constraint that we want to apply on this problem. Observe that here we are using the constraint a*2 = b.
problem.addConstraint(lambda a, b: a * 2 == b)
Now, create the object of getSolution() module using the following command −
>solutions = problem.getSolutions()
Lastly, print the output using the following command −
>print (solutions)
You can observe the output of the above program as follows −
>[{'a': 4, 'b': 8}, {'a': 3, 'b': 6}, {'a': 2, 'b': 4}, {'a': 1, 'b': 2}, {'a': 0, 'b': 0}]
Magic Square
A magic square is an arrangement of distinct numbers, generally integers, in a square grid, where the numbers in each row , and in each column , and the numbers in the diagonal, all add up to the same number called the “magic constant”.
The following is a stepwise execution of simple Python code for generating magic squares −
Define a function named magic_square, as shown below −
def magic_square(matrix_ms): iSize = len(matrix_ms[0]) sum_list = []
The following code shows the code for vertical of squares −
for col in range(iSize): sum_list.append(sum(row[col] for row in matrix_ms))
The following code shows the code for horizantal of squares −
sum_list.extend([sum (lines) for lines in matrix_ms])
The following code shows the code for horizontal of squares −
dlResult = 0 for i in range(0,iSize): dlResult +=matrix_ms[i][i] sum_list.append(dlResult) drResult = 0 for i in range(iSize-1,-1,-1): drResult +=matrix_ms[i][i] sum_list.append(drResult) if len(set(sum_list))>1: return False return True
Now, give the value of the matrix and check the output −
>print(magic_square([[1,2,3], [4,5,6], [7,8,9]]))
You can observe that the output would be False as the sum is not up to the same number.
>print(magic_square([[3,9,2], [3,5,7], [9,1,6]]))
You can observe that the output would be True as the sum is the same number, that is 15 here. | https://scanftree.com/tutorial/python/artificial-intelligence-with-python/ai-python-heuristic-search/ | CC-MAIN-2022-40 | refinedweb | 870 | 60.24 |
By default, each FlexChart has two axes and a single Plot Area.
You may create additional plot areas and stack them vertically or horizontally. Vertically stacked plot areas usually have their own Y axis and a shared X axis. The legend is shared by all plot areas.
To create a chart with 2 series on 2 separate plot areas, follow these steps:
For example, the snippet below creates two plot areas. The first contains two series and show amounts on the Y axis. The second contains a single series and shows quantities on the Y axis.
import * as chart from '@grapecity/wijmo.chart'; // create the chart var myChart = new chart.FlexChart('#myChart', { itemsSource: getData(), bindingX: 'country', series: [ { binding: 'sales', name: 'Sales' }, { binding: 'expenses', name: 'Expenses' }, { binding: 'downloads', name: 'Downloads', chartType: 'LineSymbols' } ] }); // define first plot area, add to chart var p = new chart.PlotArea(); p.row = 0; p.name = 'amounts'; p.height = '2*'; myChart.plotAreas.push(p) // define second plot area p = new chart.PlotArea(); p.row = 1; p.name = 'quantities'; p.height = '*'; // define second y axis var axisY2 = new chart.Axis(wijmo.chart.Position.Left); // assign 2nd Y axis to 2nd plot area axisY2.plotArea = p; // assign 3rd series 'downloads' to the 2nd Y axis myChart.series[2].axisY = axisY2; myChart.plotAreas.push(p);
Note that the plot area in which a series is plotted is determined by which X or Y axis the series is plotted against. So if you move an axis to a different plot area, the series will move with them.
The plot area layout is based on a grid layout of rows and columns. For example, to create 3 vertically stacked plot areas, set the row property for each to 0, 1 and 2 respectively.
To create 3 horizontally stacked plot areas, set the column property for each to 0, 1 and 2 respectively. You can even use both rows and columns to create a 2 x 2 layout.
You can control the size of each plot area by setting its width and height properties. If you want to add some extra space between plot areas, add an empty one to act as a spacer.
// create a spacer plot area p = new chart.PlotArea(); p.row = theChart.plotAreas.length; p.name = 'spacer'; p.height = 25; theChart.plotAreas.push(p)
Stacking chart controls is an alternative to creating a single chart with multiple plot areas. Use the plotMargin property to ensure the charts line up properly.
For example, these two charts stacked vertically have their Y axes aligned by setting both plotMargin properties to have '120' as the left value.
myChart1.plotMargin = 'NaN 120 10 60'; // top, right, bottom, left myChart2.plotMargin = '10 120 NaN 60'; // top, right, bottom, left
Submit and view feedback for | https://www.grapecity.com/wijmo/docs/master/Topics/Chart/Advanced/Plot-Areas | CC-MAIN-2022-05 | refinedweb | 459 | 68.06 |
Created on 2008-03-24 20:59 by jjcogliati, last changed 2010-08-06 19:52 by dandrzejewski. This issue is now closed.
I was trying to use subprocess to run multiple processes, and then wait
until one was finished. I was using poll() to do this and created the
following test case:
#BEGIN
import subprocess,os
procs = [subprocess.Popen(["sleep",str(x)]) for x in range(1,11)]
while len(procs) > 0:
os.wait()
print [(p.pid,p.poll()) for p in procs]
procs = [p for p in procs if p.poll() == None]
#END
I would have expected that as this program was run, it would remove the
processes that finished from the procs list, but instead, they stay in
it and I got the following output:
#Output
[(7426, None), (7427, None), (7428, None), (7429, None), (7430, None),
(7431, None), (7432, None), (7433, None), (7434, None), (7435, None)]
#above line repeats 8 more times
[(7426, None), (7427, None), (7428, None), (7429, None), (7430, None),
(7431, None), (7432, None), (7433, None), (7434, None), (7435, None)]
Traceback (most recent call last):
File "./test_poll.py", line 9, in <module>
os.wait()
OSError: [Errno 10] No child processes
#End output
Basically, even for finished processes, poll returns None.
Version of python used:
Python 2.5.1 (r251:54863, Oct 30 2007, 13:45:26)
[GCC 4.1.2 20070925 (Red Hat 4.1.2-33)] on linux2
Relevant documentation in Library reference manual 17.1.2
poll( ) ... Returns returncode attribute.
... A None value indicates that the process hasn't terminated yet.
The problem is that os.wait() does not play nicely with subprocess.py.
Popen.poll() and Popen.wait() use os.waitpid(pid, ...) which will
raise OSError if pid has already been reported by os.wait().
Popen.poll() swallows OSError and by default returns None.
You can (sort of) fix your program by using
"p.popen(_deadstate='dead')" in place of "p.popen()". This will make
poll() return 'dead' instead of None if OSError gets caught, but this
is undocumented.
Maybe a subprocess.wait() function could be added which would return a
tuple
(pid, exitcode, popen_object)
where popen_object is None if the process is "foreign" (i.e. it was
not created by the subprocess module).
It would not be hard to implement this on unix if you don't care about
thread safety. (On unix Popen.wait() is not thread-safe either, so
maybe thread safety does not matter.)
To implement something similar on windows you would probably need to
use WaitForMultipleObjects() to check whether any process handles are
signalled, but that would involve patching _subprocess.c or using
ctypes or pywin32.
Hm. Well, after filing the bug, I created a thread for each subprocess,
and had that thread do an wait on the process, and that worked fine.
So, I guess at minimum it sounds like the documentation for poll could
be improved to mention that it will not catch the state if something
else does. I think a better fix would be for poll to return some kind
of UnknownError instead of None if the process was finished, but python
did not catch it for some reason (like using os.wait() :)
Isn't this a critical problem. The .poll() function serves as a means to
check the status of the process started. When it continues to report
'None' to a process which has already terminated, it creates a false
positive of a hung process. Dealing with recovery from an actual hung
process is difficult enough. Having to deal with a bad detection that
the process ran to completion on top of this, makes the use of
subprocess difficult.
Maybe I'm miss applying the .poll() function. I'm trying to detect that
a process has hung, prior to calling .stdout.readlines(). The
readlines() will hang my python script if the process is hung. Is there
another way I should be doing this?
Thanks,
Mike
I have also run into this problem. If you only use p.poll() and never
p.wait(), returncode will always remain None.
roudkerk's workaround doesn't seem to work with the new Popen objects,
at least in python 2.4. ("unexpected keyword argument '_deadstate'")
Does anyone have a workaround for subprocess.Popen? Do I have to switch
to the deprecated popen function(s)?
Should this be closed or is this still a problem in 2.7 (release candidate out now, final soon) or 3.1?
Terry,
I had long since coded around the problem. At this point, I no longer have
the test environment to cause the intermittent conditions of the process
hang. I could code something up, but; your question is the first response to
this bug report since "reiko <j.k.langridge@gmail.com>" on 7/28/2008. I
suggest it can be closed, and would be reopened when encountered again. I
appears nobody is working in that area of code.
On Wed, Jun 16, 2010 at 8:13 PM, Terry J. Reedy <report@bugs.python.org>wrote:
>
> Terry J. Reedy <tjreedy@udel.edu> added the comment:
>
> Should this be closed or is this still a problem in 2.7 (release candidate
> out now, final soon) or 3.1?
>
> ----------
> nosy: +tjreedy
>
> _______________________________________
> Python tracker <report@bugs.python.org>
> <>
> _______________________________________
> | https://bugs.python.org/issue2475 | CC-MAIN-2018-30 | refinedweb | 875 | 76.01 |
Paul Hammant's Blog: A forgotten aspect of the Facade Pattern
Use of the Facade Pattern in software is about apparent elegance on the ‘user’ side of the facade and the hiding of inelegance on the ‘implementation’ side. The implementation can change or be replaced easily over time, but that applies to many patterns. Thus it is the elegance of the facade that makes it attractive to use in a layered design. There is a second aspect, though, that is more subtle and often forgotten: reducing the number of calls needed through it. Hopefully to one.
Dishonored Facade Implementations
Just pretend for a second that an object that straddles a divide has gettable and settable things that are representative of more meaningful business things:
public interface Broken {
    void setGivenName(String gn);
    void setFamilyName(String fn);
    String getGivenName();
    String getFamilyName();
}
What we have in the above is something where two methods are likely to be called in series for typical interactions. We could see that getGivenName and getFamilyName could often be called together when populating a UI's fields. Similarly setGivenName and setFamilyName could be called together when pushing the UI's fields back towards persistence:
// populate UI:
gn = b.getGivenName();
fn = b.getFamilyName();

// (elsewhere) persist UI fields:
b.setGivenName(gn);
b.setFamilyName(fn);
If the facade happens to be hiding the inelegance of a lower-level implementation of things that go over a wire, we will now observe two calls over the RPC or RESTful divide. There are performance consequences that add up quickly.
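To make the cost concrete, here is a back-of-envelope sketch. The 40ms round-trip figure is an assumption for illustration, not a measured number; the point is simply that latency through a chatty facade grows linearly with the number of calls over the wire.

```java
// Back-of-envelope only: the 40 ms round-trip time is an assumed figure.
class ChattyFacadeCost {

    // Total wire latency for one user interaction.
    static long latencyMs(int wireCalls, long roundTripMs) {
        return wireCalls * roundTripMs;
    }

    public static void main(String[] args) {
        // Per-field facade: two gets to populate the UI, two sets to persist it.
        System.out.println("per-field facade: " + latencyMs(4, 40) + " ms");
        // Coarse-grained facade: one get, one set.
        System.out.println("coarse facade:    " + latencyMs(2, 40) + " ms");
    }
}
```

The gap widens with every extra field a screen needs.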
Honoring Facade Patterns
When the facade pattern is implemented correctly, more data are channelled through fewer invocations (again these are representative of more meaningful business operations):
public class Name { // imagine constructor, getter, setter that // are characteristic of Immutable or ValueObject String firstName; String givenName; } public interface BetterFacade { void changeName(Name name) Name getName() }
What we have now is a single operation through the facade that takes or gives something that could be serialized quite easily. Again it does not matter at this stage whether the underlying implementation is a Remote Procedure Call, or something RESTful.
At this level of object granularity, it should be easy to get right.
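As an illustrative sketch (the FacadeDemo and InMemoryFacade classes below are hypothetical, standing in for whatever would really sit behind an RPC or RESTful divide), both fields travel through the facade in a single invocation each way:

```java
// Immutable value object carried across the divide in one call.
final class Name {
    private final String givenName;
    private final String familyName;

    Name(String givenName, String familyName) {
        this.givenName = givenName;
        this.familyName = familyName;
    }

    String getGivenName() { return givenName; }
    String getFamilyName() { return familyName; }
}

interface BetterFacade {
    void changeName(Name name);
    Name getName();
}

// Hypothetical in-memory stand-in for the implementation side of the facade.
final class InMemoryFacade implements BetterFacade {
    private Name current = new Name("", "");

    public void changeName(Name name) { current = name; }
    public Name getName() { return current; }
}

public class FacadeDemo {
    public static void main(String[] args) {
        BetterFacade facade = new InMemoryFacade();
        facade.changeName(new Name("Ada", "Lovelace")); // one invocation, not two setters
        Name n = facade.getName();                      // one invocation, not two getters
        System.out.println(n.getGivenName() + " " + n.getFamilyName());
    }
}
```

If InMemoryFacade were later swapped for a remote implementation, callers would still pay for exactly one round trip per business operation.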
The Dark Ages of Enterprise Computing
Sun's late-90's J2EE is the largest effort to bring bad practice to enterprise computing. Particularly the EJB stuff in its early incarnations. Sun promoted Session Facade as one of the constructs you should code in to your stack. In so doing, they forced a ton of inelegance on the developer. The page I link to shows the J2EE definition from the 2001-2002 era (Oracle could nix it at any time). It is also in the EJB 2.0 style, before EJB 3.0 injected some small amount of sanity in 2005. From 1998 to 2004, Sun told folks to make directed graphs of components while constructing applications, which required a ton of boiler-plate code. In the effort to just get it working, many teams forgot the desire to reduce invocations across the divide. Java teams are perhaps still forgetful in the years since the progressive abandonment of EJB in particular, and J2EE generally, in the mid noughties.
Clues that you are doing it wrong.
While the above diagram does not categorically show an incorrect architecture, even if there's a TCP/IP separation between each of the boxes, it does show the start of a fan-out of connections. What starts with one I/O from the browser to the web-tier results in three I/Os to lower-level services. That could be nine more if each of those three did the same to layers below. We might have a constraint in that these have to be separate boxes, and we have to do I/O in series to them as part of the buyCartContents operation. Incidentally, the Catalog node in the above diagram could have some caching attached to reduce some of our worries, but that is a different thing.
What I am trying to get across at this level of architecture is the fact that you could quickly engineer something with a lot of downstream invocations that add up while you are otherwise busy with developing functionality. As you're making your stack, look at the number of invocations across divides that could otherwise be facade-like, and worry if their growth is uncontrolled. Look at facade methods that are similar to each other, or always invoked in series, and could be rolled into one.
One technique for doing it right from the start.
Simply put, you should rigidly stick to top-down design as you build your application stack over time. This is true for the run up to your initial go-live, and in the incremental releases that follow that first push. Do not let Database Administrators (DBAs) drive the design of your web-app. Nor engineers of lower-level services. Both of these are ‘bottom-up’ techniques and classic false economies for enterprise development teams. Instead think about the facade-like operations from the UI, and change downstream code as appropriate to support that. If that means that your UI teams are more instructional to service teams, so be it.
While adhering to the above, keep busy with your refactoring agenda, even if you are using a language that does not have an IDE as good as IntelliJ (for Java) is. With a decent Agile perspective the right design will emerge at all times.
Hey, I'm trying to add a new crafting recipe into my Minecraft (1.1). I'm using ModLoader and Minecraft Coder Pack. Based on a tutorial I watched, my code currently is:

package net.minecraft.src;

import java.util.Random;

public class mod_Recipe extends BaseMod
{
    public void load()
    {
        ModLoader.AddRecipe(new ItemStack(Item.diamond, 9), new Object[]{
            "***", "***", "***", Character.valueOf('*'), Block.wood
        });
    }

    public String getVersion()
    {
        return "1.1";
    }
}

This always leads to the following error though:

class mod_Recipe is public, should be declared in a file named mod_Recipe.java
public class mod_Recipe extends BaseMod

So then I obviously changed my third line to:

public class mod_Recipe.java extends BaseMod

This though leads to the following new error:

src/minecraft/net/minecraft/src/Mod_Recipe.java:3: '{' expected
public class mod_Recipe.java extends BaseMod
                             ^

This is intended to have 9 blocks of wood make a block of diamond in Minecraft, and I'm using the most up-to-date versions of all the applications as of yesterday. Please help, dunno what to do! :dontknow:
Splash Screen
last updated: 2017-08
An Android app takes some time to start up, especially when the app is first launched on a device. A splash screen may display startup progress to the user or indicate branding.
Contents
- Overview
- Requirements
- Implementing A Splash Screen
- Summary
Overview
An Android app takes some time to start up, especially during the first time the app is run on a device (sometimes this is referred to as a cold start). The splash screen may display startup progress to the user, or it may display branding information to identify and promote the application.
This guide discusses one technique to implement a splash screen in an Android application. It targets Android API level 15 (Android 4.0.3) or higher. The application must also have the Xamarin.Android.Support.v4 and Xamarin.Android.Support.v7.AppCompat NuGet packages added to the project.
All of the code and XML in this guide may be found in the SplashScreen sample project for this guide.
Implementing A Splash Screen
The quickest way to render and display the splash screen is to create a custom theme and apply it to an Activity that exhibits the splash screen. When the Activity is rendered, it loads the theme and applies the drawable resource (referenced by the theme) to the background of the activity. This approach avoids the need for creating a layout file.
The splash screen is implemented as an Activity that displays the branded drawable, performs any initializations, and starts up any tasks. Once the app has bootstrapped, the splash screen Activity starts the main Activity and removes itself from the application back stack.
Creating a Drawable for the Splash Screen
The splash screen will display an XML drawable in the background of the splash screen Activity. It is necessary to use a bitmapped image (such as a PNG or JPG) for the image to display.
In this guide, we use a Layer List to center the splash screen image in the application. The following snippet is an example of a drawable resource using a layer-list:
<?xml version="1.0" encoding="utf-8"?>
<layer-list xmlns:android="http://schemas.android.com/apk/res/android">
  <item>
    <color android:color="@color/splash_background"/>
  </item>
  <item>
    <bitmap android:src="@drawable/splash"
            android:tileMode="disabled"
            android:gravity="center"/>
  </item>
</layer-list>
This layer-list will center the splash screen image splash.png on a background specified by the @color/splash_background resource.
After the splash screen drawable has been created, the next step is to create a theme for the splash screen.
Implementing a Theme
To create a custom theme for the splash screen Activity, edit (or add) the file values/styles.xml and create a new style element for the splash screen. A sample values/styles.xml file is shown below with a style named MyTheme.Splash:
<resources>
  <style name="MyTheme.Base" parent="Theme.AppCompat.Light">
  </style>

  <style name="MyTheme" parent="MyTheme.Base">
  </style>

  <style name="MyTheme.Splash" parent="Theme.AppCompat.Light.NoActionBar">
    <item name="android:windowBackground">@drawable/splash_screen</item>
    <item name="android:windowNoTitle">true</item>
    <item name="android:windowFullscreen">true</item>
  </style>
</resources>
MyTheme.Splash is very spartan – it declares the window background, explicitly removes the title bar from the window, and declares that it is full-screen. If you want to create a splash screen that emulates the UI of your app before the activity inflates the first layout, you can use windowContentOverlay rather than windowBackground in your style definition. In this case, you must also modify the splash_screen.xml drawable so that it displays an emulation of your UI.
Create a Splash Activity
Now we need a new Activity for Android to launch that has our splash image and performs any startup tasks. The following code is an example of a complete splash screen implementation:
[Activity(Theme = "@style/MyTheme.Splash", MainLauncher = true, NoHistory = true)]
public class SplashActivity : AppCompatActivity
{
    // Launches the startup task asynchronously; the splash drawable is
    // supplied by the theme, so no layout is loaded in OnCreate.
    protected override void OnResume()
    {
        base.OnResume();
        Task startupWork = new Task(() => { SimulateStartup(); });
        startupWork.Start();
    }

    // Simulates background work that happens behind the splash screen.
    async void SimulateStartup()
    {
        await Task.Delay(8000); // Simulate a bit of startup work.
        StartActivity(new Intent(Application.Context, typeof (MainActivity)));
    }
}
SplashActivity explicitly uses the theme that was created in the previous section, overriding the default theme of the application. There is no need to load a layout in OnCreate as the theme declares a drawable as the background.
It is important to set the NoHistory=true attribute so that the Activity is removed from the back stack. To prevent the back button from canceling the startup process, you can also override OnBackPressed and have it do nothing:
public override void OnBackPressed() { }
The startup work is performed asynchronously in OnResume. This is necessary so that the startup work does not slow down or delay the appearance of the launch screen. When the work has completed, SplashActivity will launch MainActivity and the user may begin interacting with the app.
This new SplashActivity is set as the launcher activity for the application by setting its MainLauncher = true attribute. Because SplashActivity is now the launcher activity, you must edit MainActivity.cs, and remove the MainLauncher attribute from MainActivity:
[Activity(Label = "@string/ApplicationName")]
public class MainActivity : AppCompatActivity
{
    // Code omitted for brevity
}
Summary
This guide discussed one way to implement a splash screen in a Xamarin.Android application; namely, applying a custom theme to the launch Activity.
Member Since 3 Years Ago
d3xt3r left a reply on How To Upgrade From 5.4 To 5.5?
How can I upgrade laravel V 5.4 to 5.5?
Please wait till official documentations are in place ...
d3xt3r left a reply on Service Provider Register() And Provides()
I've seen it done a few ways, and I don't fundmentally understand the differences:
You need to read between the lines and figure this out ... Just, for this particular case, let me elaborate ...

1. You are registering a singleton service with the IoC which can be located/resolved by the name tabs. When any such request is received by the IoC, an instance of the Manager class is instantiated and stored for future calls. Using $app->make() allows for any further DI in the Manager's construction.

2. Similar to 1, except that the name/key by which the IoC resolves it is Manager::class.

3. Similar to 2, except you are instantiating the Manager class yourself. If the constructor requires any further DI, you will have to resolve it yourself.
Regarding the error with Facade, show some more code and the error backtrace.
d3xt3r left a reply on Push Notification To IOS And Android
Do I need to buy push notification service or can this be achieved through the laravel backend?
Push notifications involve a backend which can deliver the notifications and a frontend which is configured to receive them. If you have taken care of the latter, the former is just a matter of making an HTTP request. No special package required ...
d3xt3r left a reply on Like Operator In Laravel Blade
What if k != update ???
d3xt3r left a reply on Eager Loading Many To Many Relationships But Limiting The Results
Definitely. However, i am AFK. This question has been answered several times on this forum... try searching ... common cases displaying top N comments per post ... Also would help if you try writing native sql for your use case ...
d3xt3r left a reply on Where Should Timezone Processing Be Done, On Server Or On Client?
Depends what it does. If just displaying time to client, client side is fine. If taking certain decision based on client timezone then server side ... remember client is free to modify its clock so you shouldnt rely heavily on it for critical decisions ...
d3xt3r left a reply on Eager Loading Many To Many Relationships But Limiting The Results
It is not possible with eager loading ... what you want needs a group by which isn't a part of relations ...
d3xt3r left a reply on Base Table Or View Not Found: 1051 Unknown Table After Php Artisan Migrate:refresh
Short term solution: use drop if exists ...
d3xt3r left a reply on Axios To Catch Custom Validation Rule Error Message?
You have marked toDate as sometimes. Validation will only trigger if present...
d3xt3r left a reply on Code Structure Behind Controller
Don't fry your brain over this .... If and only if your functions (so called logic) will be re-used from multiple locations, you can move them to services/facades or static util classes ...

It's perfectly OK to have a few helper functions within controllers, if they are only to be used by the controller in question ...
d3xt3r left a reply on Filtering Nested Collections
Try ... just a hunch ... as filtering is not in place ... (perhaps)
return $clients->each(function ($client) {
    $client->schedules = $client->schedules->filter(function ($schedule) {
        return $schedule->some_custom_attribute == 'something';
    });
});
or create an associative array for client id and filtered schedules ... and reference it later, just not in most laravel'ly way ...
$filtered_schedules = [];
return $clients->each(function ($client) {
    $filtered_schedules[$client->id] = $client->schedules->filter(function ($schedule) {
        return $schedule->some_custom_attribute == 'something';
    });
});
d3xt3r left a reply on How To Get Value Of Select Option
<select> <option value="1">Bar</option> </select>
This is the only way. If you are not getting 1, there might be another field in your form with name=foo and value=bar messing up your select option...
d3xt3r left a reply on How To Get Value Of Select Option
Ya, and I want a beer. HTML 101.
d3xt3r left a reply on Subdomain Map To Subdirectory Route In Existing Laravel Project?
Use proxy_pass to redirect forum.example.com/* to app.example.com/forum/$1 ...
d3xt3r left a reply on Filtering Nested Collections
I'm wondering if I can do this with just collection methods, as I'd rather not use db joins in this instance.
Think twice: with proper indexes, a database modification/query will be faster than loading all the data and filtering them in PHP. Also, executing each() will lead to N+1 queries, which you definitely don't want.
d3xt3r left a reply on Execution Order In Controller's Constructor With Middleware (Laravel 5.4)
$this->middleware(MyStupidMiddleware::class);
It does not actually execute the middleware code but adds it to the list to be executed later ...
d3xt3r left a reply on How To Get A Route From URI
If you already know the uri, what do you need the route for?
d3xt3r left a reply on Why Ajax Not Post Method Not Working
So there it is, the form is being posted. Now figure out why your server made the boo boo with 500 all over the place.
d3xt3r left a reply on Send Mail After Returning To A View With Data
The problem is the function dies after the return view so I never get to send the email.
Sure, it will. That's the definition of it. Use queues.
d3xt3r left a reply on Why Ajax Not Post Method Not Working
Check the debug console of your browser for any javascript error.
d3xt3r left a reply on Using Route.php Code, If Button Is Clicked
Heard of AJAX? If yes use it, if not try
d3xt3r left a reply on Migration Fails Due To Not Finding Class
Class 'AddTimestampsToTables' not found
You might have previously migrated something by this name. Check your database and clear all the previous migration if any.
d3xt3r left a reply on CSS Files Not Updating
How are you generating the CSS files ? Gulp ?? or plain vanilla css files?? If former, you will need to compile it.
d3xt3r left a reply on How To Show The Latest Post
I am still not sure, what exactly does not work. Do you see any error? If yes, what is it?
d3xt3r left a reply on Managing Users Files
If authentication is involved, use storage as it cannot be directly accessed by web.
Edit: Also explore S3 and sorts.
d3xt3r left a reply on Cache::remember - Forever?
I remember there used to be a caching strategy for query builder. Is it no longer the case ?
d3xt3r left a reply on I Have Problems Updating A File Type Input That Sends An Image
Are you re-uploading the img while updating?
d3xt3r left a reply on PHP 7.1 "A Non-numeric Value Encountered" On DB::raw
It may not be related to query.... can you show the exact error message and lines which fail ....
d3xt3r left a reply on Local Scope Pivot Table - L5.2
wherePivot() is a member of the ManyToMany relationship and not the general Query builder, hence the scope would not work as expected in this case. It is building a where query on the column pivot.

Create a new relationship unreadConversations():
public function unreadConversations()
{
    return $this->belongsToMany('whatever')->wherePivot('whatever');
}
d3xt3r left a reply on I Have Problems Updating A File Type Input That Sends An Image
Move $articles->img_dest = $name within the if block.
d3xt3r left a reply on PHP 7.1 "A Non-numeric Value Encountered" On DB::raw
Not sure why it would work on one version. The placement of " does not seem correct in the first query... either escape the double quotes or use single quotes ....
d3xt3r left a reply on Retrieving Only Id Values From DB Table
Unable to understand sync() and then create() ? Sorry you may have to be verbose.
d3xt3r left a reply on Retrieving Only Id Values From DB Table
still not clear? What do you expect? When you dd() the array, this is what is expected. It's not associative ...
d3xt3r left a reply on Retrieving Only Id Values From DB Table
And this is the output that you get ???
d3xt3r left a reply on Retrieving Only Id Values From DB Table
Show the output and the one that you need ...
d3xt3r left a reply on When / How Do Expired Cache Items Get Deleted
Most caches are LRU based. Items get deleted (evicted) as and when required, for example, to free memory for new items. I don't think Laravel's disk (file) based cache has an eviction policy.
If using any other driver (apc/memcached/redis) they will have a built in eviction system, so you need not worry. For disk based cache, you can rely on commands/packages etc...
d3xt3r left a reply on Extending Eloquent's Builder To Eager Load Collections W/ Limits
Eager loaded relationships do not care about groups; it all boils down to `Select ... from ... where id in (...)`. So it's next to impossible to achieve this using built-in eager loading.
d3xt3r left a reply on "weighted" Newsfeed
Fairly trivial... Include a signed integer field to store the 'weight' value
I didn't realise you were just looking for this. I thought it was more about how you calculate the weights.
d3xt3r left a reply on "weighted" Newsfeed
This isn't trivial. There are 100s and 1000s of variables that go into weight calculation, including but not limited to other users' interaction with a certain feed.

Plus, the weight calculated for a certain event will be different for you and me.

Start by studying the variables that define your event.
d3xt3r left a reply on Request Class Not Returning Validation Errors
double check your routes definition ...
d3xt3r left a reply on All Routes Directly From Root?!
What's wrong with
d3xt3r left a reply on Htmlentities() Expects Parameter 1 To Be String, Array Given - Laravel Name Array Inputs
{!! Form::text('name[]', null, ['class' => 'form-control eighty-percent']) !!}
This is what I meant; you will have to treat arrays differently.
d3xt3r left a reply on Htmlentities() Expects Parameter 1 To Be String, Array Given - Laravel Name Array Inputs
Check or show your view
C:\xampp\htdocs\projectbuilder\resources\views\invoices\create.blade.php,
If passing an array, treat them as an array in your view.
d3xt3r left a reply on [5.3] Multi Auth Configuration Problem: Guards Use Same Session
Yes, it would, as it shares the same cookie and hence session. Guards are not meant for different sessions, but different authentication logic. The hack won't be trivial and it would require manipulating cookies and sessions.
d3xt3r left a reply on Remembe Me Is Not Working For Me
But after logout I can't see checkbox checked and username password filled automatically.
This is not what is expected of this feature. It's meant for signing in automatically when the session expires, so that on session expiry the user is not logged out.
d3xt3r left a reply on Use Custom Helper Functions For +readability
Just an opinion: adding more and more functions to the global namespace makes me uncomfortable. What if (less probable, but still) PHP introduces a function by the same name?
Not against helper functions, but prefer name-spacing them by making them static functions of a helper class.
d3xt3r left a reply on Service Providers???? When Where Why?
what service providers are supposed to do besides registering things
Exactly that: register the services in a sense that they can be Plugged and Played...
even then you can still use classes by creating static functions etc
Well of course, if the classes are tightly coupled with your app (no abstraction, no interface), you don't need the Service Providers ...
Has anyone come across a nice short, good and complete blog or something that explains real life examples of when they can be used?
Well, to me they are a way to interact/talk with/to the IoC, so that's where I would start.
d3xt3r left a reply on Use Custom Helper Functions For +readability
Ouch.... :)
Well, rightly posted under TIPS. Feel free to use it or let it go ...
#include <Xm/IconBox.h>
The Icon Box widget lays out its children on a grid with each child forced to be the same size and with the location of each child specified as an X and Y location on the grid.
The size of the Icon Box, its children, and the number of cells displayed are calculated as described below. The general idea is that all children should always be shown and should be given their desired size whenever possible. The user may add or delete cells by resizing this window using the window manager.
The preferred size is calculated by using the maximum desired child height or width and making sure that these are no smaller than the minimum sizes. This size is multiplied by the number of cells along the axis and properly padded to come up with a preferred size. The number of cells is the maximum of the largest cellX or cellY value and the minimum number of horizontal or vertical cells.
If the Icon Box is forced larger than its preferred size, more cells are added at the bottom-right of the widget while the children all remain at their preferred sizes.
If the Icon Box is forced smaller than its preferred size, each cell is forced to be smaller in order to allow all children to fit within the Icon Box. All children will be forced to the same smaller size.
Icon Box inherits behavior, resources, and traits from Core, Composite, Constraint, and XmManager.
The class pointer is xmIconBoxWidgetClass.
The class name is XmIconBox.
Icon Box inherits behavior and resources from the superclasses described in the following tables. For a complete description of each resource, refer to the reference page for that superclass.
XmIconBox inherits translations from XmManager.
Composite(3), Constraint(3), Core(3), XmCreateIconBox(3), XmIconBoxIsCellEmpty(3), XmManager(3), XmVaCreateIconBox(3), and XmVaCreateManagedIconBox(3). | http://www.makelinux.net/man/3/X/XmIconBox | CC-MAIN-2015-35 | refinedweb | 312 | 61.97 |
Python – Reading RSS feed
RSS (Rich Site Summary) is a format for delivering regularly changing web content. Many news-related sites, weblogs and other online publishers syndicate their content as an RSS feed to whoever wants it. In Python, we use the package below to read and process these feeds.
pip install feedparser
Feed Structure
In the below example, we get the structure of the feed so that we can further analyse which parts of the feed we want to process.
import feedparser

NewsFeed = feedparser.parse("")
entry = NewsFeed.entries[1]
print entry.keys()
When we run the above program, we get the following output −
['summary_detail', 'published_parsed', 'links', 'title', 'summary', 'guidislink', 'title_detail', 'link', 'published', 'id']
Feed Title and Posts
In the below example, we read the title and posts of the RSS feed.
import feedparser

NewsFeed = feedparser.parse("")
print 'Number of RSS posts :', len(NewsFeed.entries)

entry = NewsFeed.entries[1]
print 'Post Title :', entry.title
When we run the above program we get the following output −
Number of RSS posts : 5
Post Title : Cong-JD(S) in SC over choice of pro tem speaker
Feed Details
Based on the above entry structure, we can derive the necessary details from the feed using a Python program as shown below. As entry is a dictionary, we utilise its keys to produce the values needed.
import feedparser

NewsFeed = feedparser.parse("")
entry = NewsFeed.entries[1]

print entry.published
print "******"
print entry.summary
print "------News Link--------"
print entry.link
When we run the above program we get the following output −
Fri, 18 May 2018 20:13:13 GMT
******
Controversy erupted on Friday over the appointment of BJP MLA K G Bopaiah as pro tem speaker for the assembly, with Congress and JD(S) claiming the move went against convention that the post should go to the most senior member of the House. The combine approached the SC to challenge the appointment. Hearing is scheduled for 10:30 am today.
------News Link--------
You'll need to install a library to make the Arduino IDE support the sensor. The Turta_BuzzerButton_Module library is responsible for configuring the interrupt and PWM pins, reading the button state, and generating a frequency for the piezo sounder.
To use the library on Arduino IDE, add the following #include statement to the top of your sketch.
#include <Turta_BuzzerButton_Module.h>
Then, create an instance of the Turta_BuzzerButton_Module class.
Turta_BuzzerButton_Module bb;
Now you're ready to access the library by calling the bb instance.
To initialize the sensor, call the begin method.
begin()
This method configures the interrupt and PWM pins.
Returns the button pressed state.
bool readButton()
Parameters
None
Returns
Bool: Button state
Plays a tone on the buzzer.
void buzzerTone(int frequency, short dutyCycle)
Parameters
Int: frequency
Short: dutyCycle
Returns
None
Stops the tone.
void buzzerStop()
Parameters
None
Returns
None
Plays a tone on the buzzer for a duration.
void buzzerTonePeriod(int frequency, short dutyCycle, int durationMs)
Parameters
Int: frequency
Short: dutyCycle
Int: durationMs
Returns
None
You can open the example from Arduino IDE > File > Examples > Examples from Custom Libraries > Turta Buzzer Button Module. There is one example of this sensor.
If you're experiencing difficulties while working with your device, please try the following steps.
Problem: You're pressing the button, but the application does not recognize it. Cause: There are two press levels of the button. You probably stop pushing it after the first level. Solution: Please push the button harder.
Problem: The buzzer does not produce sound or produces incorrect tones. Cause: The PWM pin that drives the buzzer uses PWM channel 15. You probably use this channel with another component. Solution: Please do not assign PWM channel 15 to any other pin.
Handling Anomalies: Errors and Exceptions
Note that the code in red is in place to re-prompt the user for valid input. Also presented in red, note the required anomaly detection. The anomaly is detected so that no division takes place with a zero denominator. Rather than terminate, the application can continue. In theory, the operating system does not come into play, as seen in Diagram 4.
Diagram 4: Generating an Error—Catching the problem and handling it.
In this case, the user is kept informed by an error message indicating what the problem is and then is asked to re-enter a valid value, as seen in Figure 3. It is important to make sure that the messages to the user are specific and clear.
Figure 3: Generating an Error—Catching the problem and handling it.
Despite the fact that this type of error handling is not necessarily object-oriented in nature, I believe that it has a valid place in OO design. Throwing an exception (discussed in the next section) can be expensive in terms of overhead (you will learn about the cost of exception handling in a later article). Thus, although exceptions are a great design choice, you will still want to consider other error handling techniques, depending on your design and performance needs.
Although this means of error checking is preferable to the previous solutions, it still has a few potentially limiting problems. It is not always easy to determine where a problem first appears. And, it might take a while for the problem to be detected. It is always important to design error handling into the class right from the start.
Throwing an Exception
Most OO languages provide a feature called exceptions. In the most basic sense, exceptions are unexpected events that occur within a system. Exceptions provide a way to detect problems and then handle them. In Java, C#, and C++, exceptions are handled by the keywords catch and throw. This might look like the following:

try {
    // Business/program logic
} catch (Exception e) {
    // Code executed when exception occurs
}
If an exception is thrown within the try block, the catch block will handle it. When an exception is thrown while the code in the try block is executing, the following occurs:
- The execution of the try block is terminated.
- The catch clauses are checked to determine whether an appropriate catch block for the offending exception was included. (There might be more than one catch clause per try block.)
- If none of the catch clauses handle the offending exception, it is passed to the next higher-level try block. (If the exception is not caught in the code, the system ultimately catches it and the results are unpredictable.)
- If a catch clause is matched (the first match encountered), the statements in the catch clause are executed.
- Execution then resumes with the statement following the try block.
Listing 4 shows an example of how an exception is caught by using the code from the previous examples:
import java.io.*;

// Class ErrorHandling
public class ErrorHandling {

    public static void main(String[] args) throws Exception {
        int a = 9;
        int b = 0;
        int c = 0;

        b = getInput();

        try {
            c = a / b;
        } catch (Exception e) {
            System.out.println("\n*** Exception Caught");
            System.out.print("*** System Message : ");
            System.out.println(e.getMessage());
            System.out.println("*** Exiting application ...");
            // ...
        }
        // ...
    }

    static int getInput() throws Exception {
        int x = 0;
        // ... reads x from the user ...
        return (x);
    }
}
Listing 4: Generating an Error—Catching the exception.
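Tying this back to the earlier re-prompting idea, a hypothetical variant (the RetryDivision class below is illustrative, not from the article) can catch the ArithmeticException inside a loop and simply ask again, so the application recovers instead of exiting. User input is simulated with a string-backed Scanner here:

```java
import java.util.Scanner;

public class RetryDivision {

    // Keeps asking until the division succeeds; the catch clause turns the
    // exception into an error message and a re-prompt instead of a crash.
    static int divideWithRetry(Scanner in, int numerator) {
        while (true) {
            System.out.print("Enter a non-zero denominator: ");
            int denominator = in.nextInt();
            try {
                return numerator / denominator; // throws ArithmeticException when zero
            } catch (ArithmeticException e) {
                System.out.println("\n*** Exception Caught: " + e.getMessage());
                System.out.println("*** Please try again.");
            }
        }
    }

    public static void main(String[] args) {
        // Simulated user input: 0 is rejected, then 3 is accepted.
        Scanner in = new Scanner("0\n3\n");
        System.out.println("Result: " + divideWithRetry(in, 9));
    }
}
```

The loop keeps the user informed with a specific message and retries, which is the same goal the earlier re-prompt code in red pursued without exceptions.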
| http://www.developer.com/lang/article.php/10924_3692751_4/Handling-Anomalies-Errors-and-Exceptions.htm | CC-MAIN-2014-52 | refinedweb | 567 | 55.74 |
This section describes the basic terminology used throughout this book.
checksum

A 256-bit hash of the data in a file system block. The checksum capability can range from the simple and fast fletcher4 (the default) to cryptographically strong hashes such as SHA256.
A file system whose initial contents are identical to the contents of a snapshot.
For information about clones, see Overview of ZFS Clones.
A generic name for the following ZFS components: clones, file systems, snapshots, and volumes.
Each dataset is identified by a unique name in the ZFS namespace. Datasets are identified using the following format:
pool/path[@snapshot]
Identifies the name of the storage pool that contains the dataset
Is a slash-delimited path name for the dataset component
Is an optional component that identifies a snapshot of a dataset
For more information about datasets, see Chapter 6, Managing Oracle Solaris ZFS File Systems.
A ZFS dataset of type filesystem that is mounted within the standard system namespace and behaves like other file systems.
For more information about file systems, see Chapter 6, Managing Oracle Solaris ZFS File Systems.
A virtual device that stores identical copies of data on two or more disks. If any disk in a mirror fails, any other disk in that mirror can provide the same data.
A logical group of devices describing the layout and physical characteristics of the available storage. Disk space for datasets is allocated from a pool.
For more information about storage pools,.
A virtual device that stores data and parity on multiple disks. For more information about RAID-Z, see RAID-Z Storage Pool Configuration.
The process of copying data from one device to another device is known as resilvering. For example, if a mirror device is replaced or taken offline, the data from an up-to-date mirror device is copied to the newly restored mirror device. This process is referred to as mirror resynchronization in traditional volume management products.
For more information about ZFS resilvering, see Viewing Resilvering Status.
A read-only copy of a file system or volume at a given point in time.
For more information about snapshots, see Overview of ZFS Snapshots.
A logical device in a pool, which can be a physical device, a file, or a collection of devices.
For more information about virtual devices, see Displaying Storage Pool Virtual Device Information.
A dataset that represents a block device. For example, you can create a ZFS volume as a swap device.
For more information about ZFS volumes, see ZFS Volumes. | http://docs.oracle.com/cd/E19253-01/819-5461/ftyue/index.html | CC-MAIN-2015-48 | refinedweb | 412 | 56.86 |
SharePoint lists are stored in a SQL Server database, so you would think that connecting SQL Reporting Services to a SharePoint 2007 list would be trivial, but it’s not. There are a number of pitfalls to be avoided that are not entirely clear and do not provide clear error messages. In this white paper, I’ll outline one approach to attach a report to a list. I’ll include common mistakes and ways to avoid them as well as tips for determining causes of problems you might find along the way.
There were a couple sites that helped me figure out how to get things wired up.
On the one hand, it’s silly to spend a lot of time thinking of reporting on a list that doesn’t exist, and SharePoint does make list creation a simple process. Maybe the site in question has had an existing list for a long time, already. Or, maybe this is a new site that you’re currently building.
The trick is that you not only need an existing list to build a report, you need the list ID. The protocol we will be using in this example allows the use of a name to identify the list, but it only seems to recognize the names of the built-in lists. For all other lists, it requires the list ID. This will be a GUID that identifies the list in the site.
There are a few ways you can try to get the list GUID from a SharePoint site. Sometimes, opening the list page in a new window will work. Sometimes, hovering the mouse over the link works. In this example, hovering the mouse shows a JavaScript command that includes the GUID ID for the list.
Unfortunately, there are multiple GUIDs in this list. By trial and error, I determined it was the second GUID listed in this example (starting with “9f2c2…”).
If you have access to Site Settings and can see the list in question through Site Libraries and Lists, the Customize link includes a single GUID that appears to be the correct one.
If you’re not familiar with Reporting Services development, you’ll want to make sure you have the proper tools in your environment stack. If you don’t have this option in your new project dialog, you probably need to make sure you have the developer edition of SQL Server installed on your box.
There are a few options to set at this point.
http://[server name]/[optional site name]/_vti_bin/lists.asmx
as an address in a browser and verifying the address is correct. If the browser can’t find the web services, Report Services won’t be able to find it, either.
http://[server name]/[optional site name]/_vti_bin/lists.asmx
as an address in a browser and verifying the address is correct. If the browser can’t find the web services, Report Services won’t be able to find it, either.
At this point, we might have a connection to the SharePoint server, but there could be a number of things wrong with it. The Reporting Services credentials might not be working or the address could still be wrong (especially if you didn’t check it in a browser, first).
For these and other reasons, I recommend making a simple dummy report first and verifying that it works before moving on to the special sauce. The blogs show a GetListCollection query that you can make for a small report that needs no parameters to work. Skipping this step can make any problems you find much harder to track down. The query given by the RockStarGuys blog works out of the box for this simple case.
The blogs also make note that the namespace does not have a trailing slash. This is important to remember, since something along the line is too picky about trailing slashes.
At this point, you should be able to test the query from the data tab, and get some rows back.
If anything has gone wrong, you’ll get a generic error message. Fortunately, there are details that might be helpful hidden behind a button that looks like a small icon.
In my troubleshooting, I found the faultstring of the last message to generally be more helpful than the rest. At this point, you’re most likely looking at an addressing or security problem.
If you see the rows of data, you’re ready to get some real reporting done. The primary trick for this step is the query string. This is where the second blog was more helpful, except that it did not tell you that the list name field really wants to use a GUID for user-defined lists. You will want to use the GUID you found in the first step (or try the ones you find until you see the results you like) in place of the list name as shown in the blog. In this example, the following query worked:
<Query>
<SoapAction></SoapAction>
<Method Namespace="" Name="GetListItems">
<Parameters>
<Parameter Name="listName">
<DefaultValue>{9f2c2d37-eef7-43f0-85e6-91af7524d775}</DefaultValue>
</Parameter>
</Parameters>
</Method>
<ElementPath IgnoreNamespaces="True">*</ElementPath>
</Query>
As you can see, the GUID goes in the DefaultValue tag in the listName parameter tag. One option would be to leave this default value blank, set the query parameter to a report parameter, and allow access to more than one list. The caveat here is that the lists must have the same definition.
Assuming everything has gone well, you should be able to see your list. Chances are the column names are decorated with “ows_” or something similar. You can change the header text easily enough to fix that. Also, the column names will not match the names in the list. In this example, the original “Title” column was renamed to “Address,” but the underlying table still uses “Title.”
After that, it’s just a matter of details. Dates come across as strings, and need to be reconverted back to dates using CDate before they can be formatted and sorted properly. The same will apply for numbers as well.
Security concerns multiply when it comes to actually getting your report on a server where it can be used. The documentation clearly states that only Integrated Windows Authentication and anonymous authentication are supported. This is not entirely correct. In fact, anonymous doesn’t work at all, and Integrated Windows Authentication only seems to work from a browser on the Reporting Services server itself. The Prompt User for Authentication option (called “Credentials supplied by the user running the report” in the Report Manager) does work. This might be acceptable if you are showing the report as a kind of dashboard for occasional access, but it won’t do for scheduled reports or for Internet-facing public reports. Fortunately, you can also use the option to store credentials locally, but this only appears to work as long as you have the “Use as Windows Credentials…” check box checked.
While not completely painless, the process was actually easier than I had anticipated. In this case, most of the limitations were on the Reporting Services side. Web services are ubiquitous enough that SRS should be able to make that connection method more painless. For example, connecting to the service could use a location dialog like the one used in Visual Studio would be much more helpful than a simple text box. How hard can this be? The editor is already in Visual Studio…
From there, the WSDL is available to provide a list of methods to query as well as the parameters required. The error message dialog could make it a little clearer that the icons in the lower left corner are actually buttons without requiring you to move your mouse over them to see the borders. One can only hope that future versions of SRS and SP will make more of the espoused integration they. | https://www.codeproject.com/Articles/24469/SQL-Reporting-Services-data-from-SharePoint-lists?fid=1123033&df=90&mpp=25&sort=Position&spc=Relaxed&tid=3391068 | CC-MAIN-2017-34 | refinedweb | 1,323 | 70.13 |
Transfer Learning approach In Keras | Deep Learning | Python
Hello everyone, In this post, I am going to explain to you the transfer learning approach to deal with your problem statement in deep learning. In our last blogs, we have solved some of the classification and regression problems using a deep neural network. We have walked through many problems, while we were creating the model for our problem statement. I tried to explain it to you there. But we didn’t build any model for the image dataset. Suppose you have an image dataset instead of a numeric dataset so how you will find a way to approach this problem?
You must be thinking to create an ANN Model taking that dataset, yes you are right you can surely create your model with that approach.
But I will tell you another approach to build your new model for the images. Generally, when you are dealing with an image dataset then CNN(Convolutional Neural Network) performs better than normal ANN(Artificial Neural Network). Why?
I have an answer for this one, Go and google the structure of CNN then you will find that there will be a hierarchical structure of CNN. One concept is general in deep learning more the dense layers more the accuracy and more the training data, best your model will be trained.
CNN uses extra layers on the top of ANN as convolutional layers, pooling layers, and then all go together in fully connected layers, and then finally you can get your desired output from the output layers.
Go and Just read about CNN and come back to this article because here we are going to use the approach of transfer learning to train our model. Before moving forward let me tell you, what is transfer learning and why I am interested to explain to you about that?
Transfer learning is the method/process to use the pre-trained model on a new problem. So must have a question that why we will use a pre-trained model, If I can build our own model right?
Yes, you can surely build your own CNN model for your problem statement, but as I have told you for better training of your deep learning model, you need a large number of the dataset, and suppose you have collected the dataset, your next task is to choose the right parameters for this problem as weights, learning rate, etc.
So the overall solution for the above problem is to walk through the approach of transfer learning because transfer learning can train a deep neural network with comparatively little data.
So now, you must have an idea, what is transfer learning. Even in transfer learning, there are many approaches and for that, you can use various pre-trained models such as VGG-16, ResNet-50, etc.
In this tutorial, I am going to use VGG-16 and I will also tell you, how you can use different pre-trained models in the same code by some modifications.
Transfer learning gives us the ability to re-use the pre-trained model in our problem statement. For example, you have a problem to classify images so for this, instead of creating your new model from scratch, you can use a pre-trained model that was trained on the huge number of datasets. Basically, you can transfer the weights of the previous trained model to your problem statement.
Transfer Learning Approach in Computer Vision Applications
So I have introduced the concept of transfer learning and now it’s time to do some practical regarding this. For this tutorial, I have collected some of the images from open data sources and created my folder and two subfolder test and train. these folders are having some class of folders for the different images. With this image dataset, we will use the transfer learning approach to build our first image classification model.
The above block will show you the structure of my folders. You can use any dataset of images. Here I am only taking this dataset to apply the transfer learning approach.
Link to the dataset Dataset
NOTE: To follow this tutorial you can use google colab notebook or you can install all the dependencies in your system.
Now, let’s move forward to build our model to classify these images. keras.models import Sequential
In the above block of code, I am loading the required library to use in our model. You are already familiar with Keras’ deep learning API.
If you will see the above block of code then you will found that I am importing VGG-16 from Keras.applications.I have already told you at the beginning that I am going to use VGG-16 in our problem statement. I am also importing one module named ImageDataGenerator, which is useful to generate random images by modifying the already present images in our dataset (modification means generate images by rotation, flip, shift, and zoom).
The advantage of the image generator is to add more data to our training sets, this technique is also called image augmentation. Image augmentation allows us to create more number of copies of already present images by doing some transformation as flip, rotate, shift, and zoom.
import numpy as np from glob import glob import matplotlib.pyplot as plt
IMAGE_SIZE = [224, 224]
train_path = '/content/drive/My Drive/Dataset/train' valid_path = '/content/drive/My Drive/Dataset/test'
vgg = VGG16(input_shape=IMAGE_SIZE + [3], weights='imagenet', include_top=False)
for layer in vgg.layers: layer.trainable = False
folder = glob('/content/drive/My Drive/Dataset/train/*')
x = Flatten()(vgg.output)
prediction = Dense(len(folder), activation='softmax')(x)
model = Model(inputs=vgg.input, outputs=prediction)
model.summary()
Let me explain to you one by one that what I am doing in the above block of code. Look at the first block that I am importing some of the important libraries. You are already familiar with plot and NumPy but I am importing here one new library named glob and it will be useful to retrieve files matching specific patterns.
In the second block, I am using image_size [224, 224] to scale all the images to this size only. And the reason behind this size is that when VGG-16 was trained, Its used image size was [224,224].
In further block, I am storing the path of the train and test.
In the next block of code, I am storing the VGG-16 model into a vgg variable with the imagenet weight. We want to cut the last layers of VGG-16 because the VGG-16 model was used to categorize thousand of images but in our problem statement, we are having only four categories.
If you will clearly see then you will find that I am using input_shape as Image_size+[3] and the reason is that the image is having three channels(RGB), so we need to add that also. Suppose if you are having black and white images then you don’t need to do that because It is having only one channel.
If you will notice one thing, you will find that I am using include_top=False because I don’t want to add the last layer. If you will use it as true It means that you are adding the last layers.
In a further block of code, I am using a for loop in vgg, and the aim to use this loop is that I don’t want to train the existing weights. You can say that I don’t want to train the VGG-16 layers because It has been already trained. Don’t try to put that as true otherwise, your model will start training itself and you will not get better accuracy.
Next, I am storing the image path into a variable named folder with the help of the glob module and this will result in showing us all the subfolders present in our folder.
In the next code block, I am flattening the last layers of VGG-16 and after that, I am appending my folder as dense layers with an activation function softmax.
In the next block of code, I am converting everything as a model by giving vgg. input and prediction which will be combined, and after we can see the model summary.
In a further block of code, I am looking at the model structure with the help of model.summery.
You can see the output below.
You can see in the above block that we have a total of 14,815,044 parameters in which there are 100,356 trainable and 14,714,688 non-trainable parameters.
You can see in the above output, we have 4 output because we are having four categories in our folders.
model.compile( loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'] )
from keras.preprocessing.image import ImageDataGenerator
train_gen = ImageDataGenerator(rescale = 1./255, shear_range = 0.2, zoom_range = 0.2, horizontal_flip = True)
test_gen = ImageDataGenerator(rescale = 1./255)
train_set=train_gen.flow_from_directory('/content/drive/My Drive/Dataset/train',target_size = (224, 224),batch_size = 32, class_mode = 'categorical')
test_set=test_gen.flow_from_directory('/content/drive/My Drive/Dataset/test',target_size = (224, 224),batch_size = 32, class_mode = 'categorical')
p=model.fit_generator(train_set,validation_data=test_set,epochs=5,steps_per_epoch=len(train_set),validation_steps=len(test_set))
In the above block of code, I am compiling my model passing loss function as categorical_crossentropy and I am using adam optimizer and also I am using accuracy metrics. The overall intention to use this complete is that I am telling my model what kind of optimization and cost function I have to use.
I am also generating images using Image data generator. I have already told you in the beginning that why we have to use an Image data generator.
After performing all the preprocessing, I am storing all the parameters into the train and test set. After storing the data into the train and test set, I am using fit_generator to fit this model.
Why I am using fit_generator() instead of fit()?
The answer is that .fit_generator is used either we have a huge number of datasets in our memory or we have applied data augmentation in our model.
I am using 5 epochs, You can also try to run for more epochs and you can see the change in the accuracy.
You can see the output of each epoch and you can see the loss and accuracy.
Moving forward to plot the loss of this model.
plt.plot(p.history['loss'], label='train loss') plt.plot(p.history['val_loss'], label='val loss') plt.legend() plt.show()
In the above block of code, I am plotting the loss function. The output can be seen below.
Now finally we have built our model with the help of the transfer learning concept and for that, we have used VGG-16. You can also use another pre-trained model as ResNet50 and you don’t need to change all the code just add ResNet50 In place of VGG-16.
I mean that just import ResNet50 from the Keras library instead of VGG-16.
congratulations to reach here, now you are ready with your new model. Take this example and apply it to your upcoming project, It will really help you.
CONCLUSION:
So finally we have completed our tutorial, transfer learning approach in deep learning. We have encountered a new concept that was image augmentation and this concept is very useful in computer vision applications. By using this you can create a large number of image dataset with your existing image data and you will be ready to train your new transfer learning model.
I have also told you, how you can change different pre-trained models without changing the whole code.
Thanks for your time.
code can be found at the link code (2)
Well, I don’t have any technical knowledge in this coding field… Still, I came here just to appreciate your work. And yeah, the day I gather some knowledge, I’ll come here for sure😉😄 | https://valueml.com/transfer-learning-approach-in-keras-deep-learning-python/ | CC-MAIN-2021-25 | refinedweb | 1,999 | 61.56 |
Introduction.
Vivado Build
The first step is to install the board definition files, which enables Vivado to understand the configuration of Cora Z7. You can download the board definition files from here.
However, we need to first make a little modification to the files before we use them, if we wish to use them with the MTDS.
This stems from the pin out of the SPI connection between the Cora Z7 and the MTDS.
The board definition files for the Cora Z7 route all of the SPI signals to the six pin SPI connector J7.
However, the MTDS uses the 6-pin SPI connector for the data signals and clock but the select signal is routed to shield pin 10.
Some development boards, e.g. the Arty S7, route the SPI pins to the SPI header and also to the shield connector IO10 through IO13. This is not the case with the Cora Z7.
As the Cora Z7 does not do this, we have two options:
- Use a separate constraints file for the SS signal
- Update the board definition file for the SS position suitable for use with the MTDS
For this application, I chose to update the board definition file, as it is the simplest approach and it means we can keep using the board interface in Vivado. This allows us to drag and drop board interfaces on to ports as we desire on to the block diagram.
To update the board definition file, under the Cora Z7 board files open part0_pins.xml in a text editor like notepad++ and edit line 72 to use pin U15 in place of pin F16.
To ensure there are no issues, set the pin currently defined to use U15 to F16 you can find this on line 47.
For Vivado to take notice of these changes, if it is currently open you must close and reopen the project.
With these completed the next step is to create a project and add in the elements we require.
From the Digilent IP library we need the following
- PmodMAXSONAR
- PmodMTDS
We also need the Zynq Processing system, as the application will run from the ARM A9 cores.
Creating the Vivado project is simple and involves the steps below:
When the final step has been completed the design can be implemented and the bit file exported to SDK.
Software Build
The software for this build needs to be able to read the PmodMAXSONAR and display the results on the MTDS.
We can of course output the distance over the UART however, that is only really good for testing on the bench.
To drive the PmodMAXSONAR we can use the drivers provided.
To start using the PmodMAXSONAR it is as simple as initializing the Pmod and telling it the clock frequency (100 MHz in this case).
We also have to define the sonar itself.
#include "PmodMAXSONAR.h " #define PMOD_MAXSONAR_BASEADDR XPAR_PMODMAXSONAR_0_AXI_LITE_GPIO_BASEADDR #define CLK_FREQ 100000000 PmodMAXSONAR sonar;
To initialize the function we call the MAXSONAR_begin function from within PmodMAXSONAR.h
MAXSONAR_begin(&sonar, PMOD_MAXSONAR_BASEADDR, CLK_FREQ);
The distance measured can then be provided by calling the MAXSONAR_getDistance function.
A gentle reminder for those in Europe this returns the distance in inches not SI units, to convert to SI we need to multiply by 2.54.
dist = MAXSONAR_getDistance(&sonar);
To use the MTDS and draw images on the display, we use the my display library. This provides a range of functions which ease use of the display.
If we want to use logos, we can load BMP images onto the MTDS SD card and access them via the display library when we wish to use them.
#include <MyDisp.h>
For this application to demonstrate the measurement and writing to the display, the main body of the code is pretty simple:
char c[50]; MAXSONAR_begin(&sonar, PMOD_MAXSONAR_BASEADDR, CLK_FREQ); printf(" Presents\n\r"); printf("Sonar Distance Measuring Example\n\r"); mydisp.begin(); mydisp.clearDisplay(clrBlack); mydisp.setForeground(clrWhite); mydisp.setPen(penSolid); mydisp.setForeground(clrBlue); mydisp.drawImage((char*) "Images/logo_large.BMP", 70, 20); mydisp.drawText((char*) "", 20, 160); mydisp.drawText((char*) "Hexapod Robot Controller", 40, 170); mydisp.drawText((char*) "Distance To Object", 60, 190); print("\n\r"); u32 dist; while (1) { dist = MAXSONAR_getDistance(&sonar); xil_printf("dist (in) = %3d\r", dist); sprintf(c, "%d", dist); EraseImageBox(true, 120, 200, 240, 220); mydisp.drawText(c, 120, 200); usleep(200000); }
The position of text and images on the display is references using a X and Y location references. The MTDS display has offers 240 pixels by 320 lines, measured as standard from the top left of the display.
One point to remember is that each time the displayed distance is output, the text must first be cleared to ensure there are no redundant characters from the previous distance.
The function EraseImageBox is used to erase the line the distance is reported on before a new value is displayed.
The final stage is to then generate a first stage boot loader and generate the bit file such that we can run the Cora Z7 board independently.
Testing
The final stage is to test the accuracy by measuring the distance to an object and then measuring with the Sonar System.
The first step was to place an object away from the sensor and measure the distance between the two.
You can also see a short video of the measurement updating in real time below, note for distances below 6 inches the sensor outputs 6 inches.
This gives us the starting point for the hexpod robot and the ability to measure distance to objects and avoid them.
As a side note, it also implements a very handy standalone measurement system.
You can find the files associated with this project here:
See previous projects here.
Additional Information on Xilinx FPGA / SoC Development can be found weekly on MicroZed Chronicles. | https://www.hackster.io/adam-taylor/building-a-hexapod-robot-sonar-measurement-with-fpga-dc0e47 | CC-MAIN-2019-43 | refinedweb | 967 | 59.64 |
CodePlex Project Hosting for Open Source Software
Can anyone please let me know how to define a region in a ChildWindow and navigate/inject a view into it to display a modal popup? This should also compose the view using MEF to import its corresponding ViewModel and other imports like IEventAggregator.
A prompt response with sample application will be greatly appreciated.
Thanks,
Milind
Reference the SL version of the StockTrader RI. At the top of the shell you will notice XAML code for the "SecondaryRegion"; this is directly related to the ChildWindow control in Silverlight. In WPF I use standard region methodology to get to it. To understand what is going on, the code that makes all this happen is in the Infrastructure project under Behaviors.
regionManager.Regions["SecondaryRegion"].Add(someView);
This is done in your bound button/link with commanding.
regionManager.Regions["SecondaryRegion"].Activate(someView);
or
regionManager.RequestNavigate("SecondaryRegion", new Uri("/theviewinquestion", UriKind.Relative));
When I want to indicate the window is closing, I use regionManager.Regions["SecondaryRegion"].Deactivate(someView); which is called via an ICommand bound to a button.
xmlns:infBehaviors="clr-namespace:StockTraderRI.Infrastructure.Behaviors;assembly=StockTraderRI.Infrastructure"
infBehaviors:RegionPopupBehaviors.CreatePopupRegionWithName="SecondaryRegion"
infBehaviors:RegionPopupBehaviors.
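To make the lifecycle concrete, here is a self-contained sketch of the Add/Activate/Deactivate sequence. Note that SimpleRegion below is a made-up stand-in used only for illustration; in a real application you would use Prism's IRegionManager.Regions["SecondaryRegion"] and let the popup behavior react to the activation:

```csharp
using System;
using System.Collections.Generic;

// SimpleRegion is a hypothetical stand-in for a Prism region, used only to
// illustrate the Add / Activate / Deactivate lifecycle described above.
class SimpleRegion
{
    private readonly List<object> views = new List<object>();
    private readonly List<object> activeViews = new List<object>();

    public IReadOnlyList<object> ActiveViews => activeViews;

    public void Add(object view) => views.Add(view);

    public void Activate(object view)
    {
        if (!views.Contains(view))
            throw new InvalidOperationException("View was never added to the region.");
        if (!activeViews.Contains(view))
            activeViews.Add(view); // the popup behavior would show the window here
    }

    public void Deactivate(object view) => activeViews.Remove(view); // behavior hides the window

    static void Main()
    {
        var region = new SimpleRegion();
        var someView = new object();

        region.Add(someView);      // register the view with the popup region
        region.Activate(someView); // behavior reacts: the Window/ChildWindow is shown
        Console.WriteLine(region.ActiveViews.Count); // 1

        region.Deactivate(someView); // behavior reacts: the window is closed
        Console.WriteLine(region.ActiveViews.Count); // 0
    }
}
```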
Thanks for the reply; can you please clarify the following?
When you say "at the top of the shell there is XAML code for the 'SecondaryRegion', which is directly related to the ChildWindow control in Silverlight", do you mean it is derived from the ChildWindow class? Besides this, how do you get the instances of the view to Add, Remove, or Activate using regionManager.Regions["SecondaryRegion"].Add/Remove/Activate(someView)?
I have one more query: how do I create object instances using MEF explicitly so that the object being created will import all its dependencies? How can I get hold of the MEF container from within code? I know we can get it in the bootstrapper, but I am looking for it in the modules so that I can compose my view using MEF (so that all dependencies like the ViewModel, IEventAggregator, etc. get imported). Waiting for your response.
It would be great if someone could help me get the ChildWindow scenario working with an example or sample app.
At the bottom of my post there is the code fragment that is facilitating the popup window. In WPF I am using this feature now. The WPF code base uses Window and the SL code base uses the ChildWindow class (which is part of Silverlight, but not in the WPF BCL). Ferret through that XAML markup, then also reference the Infrastructure project and look for the Behaviors.

As for activating, use the newer regionManager.RequestNavigate("SecondaryRegion", new Uri("/theviewinquestion", UriKind.Relative));
then I use
var viewinquestion = regionManager.Regions["SecondaryRegion"].GetView("viewinquestion"); <<-- since Deactivate takes an object parameter.
regionManager.Regions["SecondaryRegion"].Deactivate(viewinquestion);
for closing the popup window.
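Since the show/hide calls are triggered from commands, a small DelegateCommand is all the view model needs. The sketch below is self-contained and purely illustrative: Prism ships its own DelegateCommand, and the PopupViewModel/IsOpen names are invented here; in the real view model the lambda would call regionManager.Regions["SecondaryRegion"].Deactivate(view) instead of flipping a flag:

```csharp
using System;
using System.Windows.Input;

// Minimal DelegateCommand; Prism provides a richer one, this inline version
// just keeps the sketch self-contained.
public class DelegateCommand : ICommand
{
    private readonly Action execute;
    public DelegateCommand(Action execute) { this.execute = execute; }
    public bool CanExecute(object parameter) => true;
    public void Execute(object parameter) => execute();
    public event EventHandler CanExecuteChanged { add { } remove { } }
}

// Invented view model: the XAML would bind the popup's close button to CloseCommand.
public class PopupViewModel
{
    public bool IsOpen { get; private set; } = true;
    public ICommand CloseCommand { get; }

    public PopupViewModel()
    {
        // In the real view model this lambda would call
        // regionManager.Regions["SecondaryRegion"].Deactivate(view);
        CloseCommand = new DelegateCommand(() => IsOpen = false);
    }
}

public static class Demo
{
    public static void Main()
    {
        var vm = new PopupViewModel();
        vm.CloseCommand.Execute(null);
        Console.WriteLine(vm.IsOpen); // False
    }
}
```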
I am also very interested in this solution,
but I could not understand the code you posted.
I took a look at the StockTraderRI and could not understand that either, and it's not a popup.
Is there another simple example?
Can you please send me the working sample? I know StockTraderRI is there, but it has too many things... I just want to clarify the concepts of modal popup display using the Prism mechanism: how easy it is for everyone to start adding a popup, sending data back to the parent screen, hiding the popup, sharing the popup between different modules, etc.

I hope you will come up with satisfactory answers as you have already...

The most important one is the behavior concept: what are behaviors?
OK guys, I will post a solution; it uses a region for the popup (aka a modal dialog). I have modified the behavior that StockTraderRI uses so that it shows as a modal dialog rather than a "non-modal dialog". The code should work for Silverlight, but since I don't use SL at the moment, I haven't tested it under SL. N.B. I haven't modified the code for SL.
Give me a few and I will post a very functionally slim example for you to look through... As for bringing data in and out, I will leave that as an exercise since it isn't that hard.
Morgan.
Thanks Morgan, really appreciated... waiting for the source code. I hope you will be uploading it to some shareware site for downloading. I also don't mind the non-modal version of the popup behavior, just to have a look at it. But the most important one is the modal popup behavior with the MVVM pattern (where MEF composes all of the view and viewmodel parts). Thanks again, it will really help me in understanding behaviors and regions.
Waiting for the source code too.
I have found a solution to use ChildWindows, but I think it's not an elegant solution:
My App:
So, in the Lib project I declared an interface:
public interface IChildWindowA
{
void Show(/* pass ViewModel and current record to update */);
}
In ChildWindowA, implement the interface:
[Export(typeof(IChildWindowA))]
[PartCreationPolicy(CreationPolicy.NonShared)]
public partial class EditTemp : ChildWindow, IChildWindowA
{
public void Show(...)
{
this.Show();
}
}
ModuleB depends on ModuleA. In a button click handler inside ChildWindowB in ModuleB:
private void OnButtonClick(object sender, RoutedEventArgs e)
{
_editWindowA = (IChildWindowA)ServiceLocator.Current.GetInstance(typeof(IChildWindowA));
_editWindowA.Show(...);
}
Is that a bad solution, or can I go with it?
Copy/paste that link to my SkyDrive and then download the .zip with the solution in it... Don't forget to "unblock" it if you have anti-virus software in place. This guidance is based on the StockTrader implementation. It's rough
and could probably be refined to allow for a better solution for editing, but the way I do it in my other project is to check whether an object is null: if it is, it's a new entry, and vice versa, if it's not null then it's an edit. I am merely setting an object
of type x (where x is the record) to the most current selection and going from there: edit/delete, etc... That object is in the viewmodel.
The contracts, for simplicity's sake, are not concrete; they can be changed to interfaces for extensibility purposes.
In the Behaviors folder are all the files that StockTrader uses to make an "SL Popup" or a WPF Window.
Thanks a lot mvermef,
but I have one concern:
in the module initialization, you imported the view and added it to the region (popup). That's a problem in my app, because my app has a lot of popups, and I think it's a problem to create them all and add them to the region in the initialization of the module.
I made some changes in your example to try to solve this potential problem.
1 - In the module initialization I removed the code that adds the view to the region:
[ModuleExport(typeof(ExampleModule))]
public class ExampleModule : IModule
{
private IRegionManager regionManager;
//import the view and create a new instance.
//[Import]
//private DemoView view { get; set; }
[ImportingConstructor]
public ExampleModule(IRegionManager regionManager)
{
this.regionManager = regionManager;
}
public void Initialize()
{
//register the view with the region.
//regionManager.Regions["SecondaryRegion"].Add(view, "DemoAppView");
}
}
2 - Created an Interface to export the view:
namespace DemoApp.Infrastructure
{
public interface IDemoView
{
}
}
[Export(typeof(IDemoView))]
[PartCreationPolicy(CreationPolicy.NonShared)]
[RegionMemberLifetime(KeepAlive = false)]
public partial class DemoView : UserControl, IDemoView
{ ... }
3 - The command that shows the modal window:
public void ShowModalWindow()
{
regionManager.Regions["SecondaryRegion"].Add(ServiceLocator.Current.GetInstance<IDemoView>(), "DemoAppView");
regionManager.RequestNavigate("SecondaryRegion", new Uri("/DemoView", UriKind.Relative));
}
4 - I have a problem with the Close button of the view. The view has the attribute "[RegionMemberLifetime(KeepAlive =
false)]", but I think it's not working, because when I deactivate the view it's still alive and not destroyed. So I remove the view instead:
private void Close()
{
//Get the View
var view = regionManager.Regions["SecondaryRegion"].GetView("DemoAppView");
//Not Found fail gracefully, otherwise closes the view/region
if (view != null)
{
regionManager.Regions["SecondaryRegion"].Deactivate(view);
regionManager.Regions["SecondaryRegion"].Remove(view);
}
}
What do you think about the solution? I just need to declare and implement an interface for each of the modals in my application.
Here's the updated demo app:
All are probably good suggestions. Like I said, it was a rough app and there is always room for improvement. When I said I would post a solution I was at a conference and then later waiting to catch a flight out of DC; I threw the
bulk of the demo together then and finished it today. But I think you get the general idea, I hope it helped. I hadn't ever thought of using the ServiceLocator that way, since I was using MEF to do the majority of the work. I might have to use the
locator more to get stuff into the regions like you had there.
But isnt that a bug?
If my view has "[RegionMemberLifetime(KeepAlive =
false)]", then "regionManager.Regions["SecondaryRegion"].Deactivate(view);" should destroy the view, but the view is not destroyed, so I need to Remove it from the region as well.
It's either that, or we aren't disposing of the view correctly on close.
Thanks guys for the good suggestions and solutions... but can anyone confirm the approach for opening a modal popup in Module B from a modal popup in Module A, launched from a view within Module A? It would be great to have a demo solution as above. There will
often be scenarios where common modal popups, such as search screens, live in a common module and get called from other modules. Besides this, when we use WCF RIA Services it generates entities on the client side which can be shared
and used by other modules. My question here is, from a deployment perspective, how will I be able to achieve deployment of only Module A when it results in changes to common entities used by other modules, say Modules B and C? Do I have to build those modules
as well and deploy them, since they are using entities changed due to changes in Module A? I want a strategy where I will be able to deploy only the parts that got changed.
Good guidance on achieving modularity from a deployment perspective, in the production environment and with version management, is all I need here.
milind_yande,
yes, this will do what we are suggesting. As long as the module is initialized and in good working order, this can be achieved. I would even go as far as saying it would be a good idea to make the modules dependent on one another.
I gather that Module B depends on Module A in some fashion, so there you go. The example above would work. It might even be necessary to use an IEventAggregator for cross-module event calls, e.g. showing the dialog.
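A minimal sketch of the IEventAggregator idea (the event class and string payload are hypothetical; CompositePresentationEvent and IEventAggregator come from Prism):

```csharp
// In a shared infrastructure assembly: the cross-module event.
public class ShowDialogEvent : CompositePresentationEvent<string> { }

// Module A: request the dialog (payload here is just a view name).
eventAggregator.GetEvent<ShowDialogEvent>().Publish("SearchView");

// Module B: subscribe during initialization and show the popup.
eventAggregator.GetEvent<ShowDialogEvent>()
               .Subscribe(viewName => ShowModalWindow(viewName));
```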
Thanks Morgan, it would be good if you could post a sample demonstrating cross-module calls for launching a popup from within a popup. I hope there is already a sample I can refer to.
Waiting for your response... this will save us a lot of time as there is a deadline... hoping for the best.
OK, so I can get the simple example demo app provided earlier in this post to work. However, when I apply this to my solution, the window that pops up is not modal, and I really can't figure out why. With the popup window open, I can still
interact with the parent window.
Any pointers as to where I need to look to ensure that my app opens the window as modal?
Never mind, I found it. It turns out I had to change the implementation of the Show method in the WindowWrapper class.
Yeah, I had to modify the underlying code in that class to make it modal.
With lucianotcorreia's example, what do you do if the user closes the window some way other than the 'Close' button?
Example.
Click the button to open the popup, then press Alt+F4 or the X button, then click to open the popup again. An exception occurs when trying to add 'DemoAppView' to the region.
I can add validation code to ShowModalWindow, but I'd rather not do that for every viewmodel popup.
Any suggestions?
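One defensive variant of ShowModalWindow along the lines xiquon mentions (an untested sketch against the demo code above):

```csharp
public void ShowModalWindow()
{
    var region = regionManager.Regions["SecondaryRegion"];

    // An Alt+F4 / X-button close bypasses Close(), so the old view
    // may still be registered; only add it if it isn't there yet.
    if (region.GetView("DemoAppView") == null)
    {
        region.Add(ServiceLocator.Current.GetInstance<IDemoView>(), "DemoAppView");
    }

    regionManager.RequestNavigate("SecondaryRegion",
        new Uri("/DemoView", UriKind.Relative));
}
```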
I can confirm xiquon is correct: if you close the modal using Alt+F4, an exception occurs. Are there any suggestions?
KeyboardLayout QML Type
Properties
- inputMethod : var
- inputMode : int
- keyWeight : real
- sharedLayouts : var
- smallTextVisible : bool
Methods
Detailed Description
This type is the root element of the keyboard layout. Use this element to build a new keyboard layout.
Example:
import QtQuick 2.0
import QtQuick.Layouts 1.0
import QtQuick.VirtualKeyboard 2.1

// file: layouts/en_GB/main.qml
KeyboardLayout {
    KeyboardRow {
        Key {
            key: Qt.Key_Q
            text: "q"
        }
        Key {
            key: Qt.Key_W
            text: "w"
        }
        Key {
            key: Qt.Key_E
            text: "e"
        }
        Key {
            key: Qt.Key_R
            text: "r"
        }
        Key {
            key: Qt.Key_T
            text: "t"
        }
        Key {
            key: Qt.Key_Y
            text: "y"
        }
    }
}
Property Documentation
Sets the input method to be used in this layout.
This property allows a custom input method to be used in this layout.
Sets the input mode to be used in this layout.
By default, the virtual keyboard attempts to preserve the current input mode when switching to a different keyboard layout.
If the current input mode is not valid in the current context, the default input mode is specified by the input method.
Sets the key weight for all children keys.
The default value is inherited from the parent element in the layout hierarchy.
List of layout names which share the input method created by the createInputMethod() function.
If the list is empty (the default) the input method is not shared with any other layout and will be destroyed when the layout changes.
The list should contain only the name of the layout type, e.g., ['symbols']. The current layout does not have to be included in the list.
Sets the smallTextVisible property for all children keys.
The default value is inherited from the parent element in the layout hierarchy.
This property was introduced in QtQuick.VirtualKeyboard 2.0.
Method Documentation
This function may be overridden by the keyboard layout to create the input method object dynamically. The default implementation returns null.
The input method object created by this function can outlive keyboard layout transitions in certain cases. In particular, this applies to the transitions between the layouts listed in the sharedLayouts. | https://doc.qt.io/archives/qt-5.11/qml-qtquick-virtualkeyboard-keyboardlayout.html | CC-MAIN-2021-10 | refinedweb | 338 | 60.61 |
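For illustration, a layout that both overrides createInputMethod() and shares the resulting input method with its symbols layout might look roughly like this (a sketch, not taken from the Qt documentation; the InputMethod body is a placeholder):

```qml
import QtQuick 2.0
import QtQuick.VirtualKeyboard 2.1

KeyboardLayout {
    // Keep the created input method alive when switching to the
    // 'symbols' layout instead of destroying and recreating it.
    sharedLayouts: ['symbols']

    function createInputMethod() {
        // Placeholder input method; a real layout would create a
        // concrete InputMethod implementation here.
        return Qt.createQmlObject(
            'import QtQuick.VirtualKeyboard 2.1; InputMethod {}',
            parent)
    }

    KeyboardRow {
        Key { key: Qt.Key_A; text: "a" }
    }
}
```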
CocoaHeads: Objective-C 2.0Michael Jurewitz will give a presentation tonight on Objective-C 2.0 at CocoaHeads Silicon Valley. The meeting will be in Town Hall at 7:30pm. Objective-C 2.0 is a major language upgrade coming in Leopard.
Thursday, September 13.
We really appreciate Mike coming out during an incredibly busy schedule. Please come show your support.
CocoaHeads: Objective-C 2.0
Posted Sep 13, 2007 — 31 comments below
Majd Abdelahad — Sep 13, 07 4589
d chalmers — Sep 13, 07 4590
Andrew — Sep 14, 07 4594
Ulai — Sep 14, 07 4599
Let me provide an example: how do you guys feel about the fact that Objective-C 2.0 will not even bring support for operator overloading? This means that a scientist who is working on complicated models involving, say, complex numbers will never move towards Cocoa development. He will want to create a class called ComplexNumber, let z and w be two instances of that class, and he will want to add those complex numbers with z + w, instead of ComplexNumber.Add(z,w). You can imagine the latter notation being way too cumbersome as the models get more complicated. Therefore, scientists working with complex numbers at least (and there are many of them, I might add) will never move to Cocoa. Yet.
Scott Stevenson — Sep 14, 07 4600
This is a design decision, I believe. Objective-C is intended to have a small set of syntax and code should look relatively similar from project to project. One opinion (and it would seem, the prevailing one on the Objective-C team) is that operator overloading opens the door for code that is too hard to read.
Developers that want to use operator overloading could use C++ for the internal logic processing, and Cocoa/Objective-C++ for the UI level.
The main standout points of Objective-C 2.0 are:
- Garbage collection
- Properties with synthensized accessors
- Property metadata
- Foreach-style loops
- Some other fancy stuff that may still be NDA'd
David Wareing — Sep 14, 07 4601
Russell Finn — Sep 14, 07 4602
How would your hypothetical scientist feel about using actual Objective-C syntax, which would look more like [z plus: w]? I admit it's not the same as z + w, but it's more readable than what you posted. (Or he could use Objective-C++ to combine his matrix class library with a Cocoa UI, and have the best of both worlds.)
It would be easier to take your criticism more seriously if it looked like you'd actually investigated the language, instead of running down a bullet list looking for your favorite missing features.
Scott Stevenson — Sep 14, 07 4603
How would your hypothetical scientist feel about using actual Objective-C syntax, which would look more like [z plus: w]
This is a good point because the Objective-C 2.0 dot syntax does not support methods with arguments like object.add(1,2) or even object.print(1). It's designed for property setters and getters:
object.name = @"Leopard"; NSLog(@"name: %@", object.name);
Messages that do not involve the setting and getting of properties still look like this in Objective-C 2.0:
NSString *newString = [string stringByAppendingString:@"New String"];
Pieter Omvlee — Sep 15, 07 4604
I agree with the majority here in that I think that overloading should be left out (luckily they did leave it out!)
I should really move to the US, why do you get all that free cool stuff, I would love to be there...
Ulai — Sep 15, 07 4605
Yes, I did realize just after having posted the thought that I had written the C++ style method instead of the Objective-C one. I happen to know a thing or two in Objective-C. However, changing the style of methods to the Objective-C ones does not change my arguments one bit.
Russell talks about [z plus: w] being more readable than .Add(z,w). Sure, I agree with that. But this is one example, and probably among the simplest you could find. What about the example of z+w+v? This would translate into [[z plus: w] plus: v]. Or some string formatting stuff. Simply not as good as z+w+v. This only gets worse as we move to the scientists' real-world applications.
What is it about this belief that overloading operators will make code less readable? To me it is like saying that every word in the English dictionary should mean one and only one thing. That is of course not so; what words mean depends on the context. When you read code that says z+w you will have read the surrounding code, and in your study of the code you will long have known that z and w are complex numbers.
Don't get me wrong though. I happen to like Cocoa and Objective-C a lot. However, I do for sure miss overloading operators. I happen to use C# daily at work and overloading has not caused me any confusion at all. It has just served to make my life easier. After all, the meaning of '+' is just determined by the context, which will be obvious when the code in question has been looked at.
Blain — Sep 15, 07 4606
The reason dot property syntax isn't as bad as I feared is because all NSObjects are pointers, so fooObject.property can't be mistaken for a c construct.
Scott Stevenson — Sep 16, 07 4607
My opinion is that for general-purpose desktop application programming, operator overloading causes more problems than it solves, by developers inventing their own metalanguages that someone else has to learn later.
At the end of the day, Objective-C's main priority seems to be general-purpose programming. That said, you can mix Objective-C and C++ if it suits your needs.
I happen to use C# daily at work and overloading has not caused me any confusion at all
Which makes sense since if it did, I doubt you'd be requesting it. :) Of course, the challenge is that Objective-C has a lot of users to take into consideration. Something that you consider a weakness I consider a strength, but that's the whole point of having multiple languages to choose from.
None of this stops you from writing a Mac app, of course. The UI logic and internal calculation logic can be written in different languages.
Blain — Sep 16, 07 4608
Correct me if I'm wrong, but C# frowns on the use of pointers, and makes them mostly inaccessible. The problem is that, to play nicely with C, pointers are vital. C++ muddles through, and here's where the overloading falls apart.
All Obj-C objects are actually pointers to mostly-opaque structs. To get to the value pointed at foo, you use *foo. If foo is an array, you can point to the first element as bar = foo, and then talk about *bar, and the next element is pointed at by bar+1.
So suppose we have an C array of ComplexNumber.
ourNumber = numberArray[0];
No problem. The value of the complex is *ourNumber.
So if we want to add two complex numbers, we'd want
*c = *a + *b;
But we want to use constants as well.
*c = *a + 1;
which is a mess and much worse than brackets, especially when you get *c=*a * *b;!
So let's overload so adding pointers returns a pointer to the sum of the values.
c = a + b;
But now
c = a + 1;
is ambiguous; is that the index of the object after a in the array, or one added to the value of a?
Either we get confused scientists, or we lose backward language compatibility. Again, this is moot with a lang that doesn't allow for pointer math. But it leads to lots of bugs if it's not called a new language.
The thing is that complex numbers are an edge case, and Obj-C is meant more towards doing 80% really well instead of 100% mostly okay.
What would I suggest to the wayward scientist? Simple. If you know C#, use it, and the mono to cocoa bridge.
Or use a language tailored to the physics, and use Obj-C to display, but not compute, the results. Model, view, controller and all.
Eric Wing — Sep 17, 07 4610
#include <complex.h>
#include <stdio.h>

int main(int argc, char* argv[])
{
    float complex my_result;
    float _Complex my_complex1 = 3.0 + 4.0*I;
    float complex my_complex2 = 1.0 + 2.0*_Complex_I;

    my_result = my_complex1 + my_complex2;
    printf("Add: %f + %fi\n", creal(my_result), cimag(my_result));

    my_result = my_complex1 * my_complex2;
    printf("Multiply: %f + %fi\n", creal(my_result), cimag(my_result));

    return 0;
}
Obj-C is a pure superset of C; no need to write your own class in this case.
Nicko — Sep 21, 07 4615Timer = [[[NSDate alloc] initWithTimeInterval: delay sinceDate: startTime] autorelease]]
Can you honestly say that the former is harder to read than the latter? The exact implementation of the addition process is hidden from sight but this is a feature, not a bug. It's called abstraction!
It's interesting to note that if the language grew the capacity to handle operator overloading through mapping of operators to appropriately named messages it would be relatively easy to retrofit support for this into existing libraries; all you have to do it write categories to support the new operators. This also means that you don't have to add complex support for trying to map to "left" and "right" forms of operator since if I come up with a new class and I want to be able to have it as the right-hand operand dealing with an existing class then I just add a category to the existing class.
Scott Stevenson — Sep 21, 07 4616
It's not an empirical fact. It's an opinion based on the experiences of the language designers. You can certainly provide supporting evidence for either side, but ultimately the point is moot because Objective-C does not have operator overloading.
Can you honestly say that the former is harder to read than the later?
No, but I think it's beside the point. In any case, it's a very contained, tiny example which I don't think illustrates the thinking behind the decision.
In the example you give, you talk about the ease of writing code with operator overloading. In my opinion, ease of reading code is more important in most cases. The more layers you have, the more you have to dig to find out what's going on.
There's certainly less code to read in that one snippet, but shorter code doesn't necessarily mean more clear or easier. Cocoa and Objective-C discourages fancy language tricks and instead favors the shallow, low-tech solution. Something like Ruby/Rails takes a different approach. It's a choice: there's no way to prove one is better than the other.
It's called abstraction!
Hmmm. I'm not sure I agree with that. Redefining the basic meaning of a C operator is much different from creating a shim method over several smaller ones. But I could be convinced otherwise.
When you tackle a big chunk of code, it's already quite a lot of work to figure out the naming schemes and how all the classes fit together. It can become quite a bit more complicated if you have to figure out how the original author decided to recast operators. Again, this is just my opinion.
Nicko — Sep 21, 07 4618
You can certainly provide supporting evidence for either side, but ultimately the point is moot because Objective-C does not have operator overloading.
Well, the topic of the thread is a future release of the language which benefits from newly added features, has changes which make currently erroneous code legal and is still only in beta, so discussion of what Objective C either might or should have seems appropriate. Furthermore, I'm actively considering building a patch to gcc to add support for it!
In the example you give, you talk about the ease of writing code with operator overloading. In my opinion, ease of reading code is more important in most cases.
I talked about both, and I think that both are important. The question I asked of you was which was more readable, but the deeper question is "Which makes the programmer's intent clearer to the reader?" My most common frustration with coding complex applications in Objective C is that the intent of the code is all too often obscured by the detail.
The more layers you have, the more you have to dig to find out what's going on.
That is certainly true in some cases, but in many others, if the programmer is doing his or her job, there should be no need to dig. If I've got three variables called "startTime", "delay" and "endTime" then when the heck do you think that the "+" operator is going to do in this context? The exact same thing could be said for adding two NSString objects, ORing one NSSet with another or even adding an NSView subclasss to an NSTabView. Unless the programmer is actively trying to deceive the reader it is almost always going to be the case that a succinct representation of the intent is going to be more meaningful to others, precisely because you don't need to dig. That's one of the main benefits of abstraction.
Redefining the basic meaning of a C operator is much different that creating a shim method over several smaller ones.
True, but there is also a huge difference between redefining an operator and overloading it. I'm not supporting trying to redefine the basic meaning of an operator; I'm arguing that concepts like "addition" and "OR" have a natural meaning in contexts other than int or double, that programmers know what it means to subtract, for instance, an object from a set, and that allowing the extension of these natural meanings into other types of object makes it easier to represent your programming goals.
Scott Stevenson — Sep 21, 07 4622
I see the value in what you're describing and I'm not totally closed to having my mind changed, but this particular case of the plus symbol, I think, highlights one important area.
If you consider 64-bit versus 32-bit, and whether the variable is a pointer or a simple value type, plus can already mean a number of things. Adding "whatever the programmer thinks plus means" is an additional burden.
Not that I'm saying it's a deal breaker, I just don't think it's trivial. For what it's worth, I think it would be really nice to have some string conveniences in the syntax since that's much less likely to be ambiguous. I'm not as sold on mathematical operations.
If I've got three variables called "startTime", "delay" and "endTime" then when the heck do you think that the "+" operator is going to do in this context?
I can imagine what it's supposed to do, but trying to find the difference between my intent and what the computer thinks is happening has left me with countless hours in front of the debugger. I'm not saying operator overloading doesn't have value, I'm just curious if the value is enough to overpower the drawbacks.
Nicko — Sep 22, 07 4631
You are right that C's implicit casting of numerical types can be confusing, and that there is already a meaning for adding integer types to pointers. That said, part of the issue here is a mind-set in which object handles are being thought of as structure pointers rather than opaque references. The ObjC syntax certainly allows one to use the form (objHandle + 42)->member, but I can't say that I've ever seen that used. As such (since we're talking about a fundamental language change) I'd rather see that form deprecated for use with object pointers, and attempts to use it would just cause the compiler to warn "objClass may not respond to '__add__:'" or something of that ilk.
In the end I think this debate ultimately comes down to the question "Do I trust the authors of code I have to read to use overloaded operators in a manner that is intuitive and meaningful?" If I don't trust them to do this then overloaded operators will mean I have to look all over the place to find the meaning of their code. If I do trust them then I'll be able to skim through the code and know what the programmer meant much more quickly than if the code was all in line. Perhaps I just have more faith in other programmers than you do :-)
huxley — Sep 22, 07 4632
You'd probably change your mind if you had to debug a complex conversation.
Blain — Sep 23, 07 46.
The other problem becomes the heavy use of @selector and similar items. How do you differentiate between -[add:] taking an int and -[add:] taking a float? How would it be backwards compatible without any special hints?
I also stand by holding the compiler-defined operators as sacred to Obj-C, but I can see a few compromises. Namely, not only should Obj-C extend itself while keeping true to the strict C/Smalltalk separation, but Obj-C++, the fusion of C++ and Obj-C that we already have, should improve and evolve. That is, no operator overloading in .m files, but .mm is fair game.
Not only that, but I can see setting aside a playpen for compiler-defined overrides. Namely, endTime = startTime + delay still plays with pointer math, and isn't the same as endTime = [startTime addTimeInterval: delay]; (Which, I might add, is quite readable). But I could see a use for something like:
endTime = [startTime + delay];
That way, the brackets let the coder and compiler know it's an object message instead of pointer math, but with a twist: like the dot notation, it's syntactic sugar at the compiler level, not the language level. In other words, which message gets called would be dependent on the following variable.:].
Does that sound like a promising idea for Obj-C++ 3.0?
Nicko — Sep 23, 07 46.
I'm not sure I agree with your completely different language assertion but irrespective of that, we don't need to worry in this case. Due to the way that ObjC works, by the time the compiler has to worry about what foo + bar means it already knows the type of "foo" (and "bar", see below) and it knows if foo is a base type (behave normally), a pointer (perform pointer addition), a structure (throw an error) or, since this is ObjC, an object reference. Thus correct pure C code would not need to have its behaviour modified in any way.
But I could see a use for something like: endTime = [startTime + delay];
I can see that working in some ways, and it helps in dealing with questions about operator precedence between different classes, but I see a couple of big problems. Firstly, you're heading towards context-sensitive grammars, which are bad in a number of ways. Secondly, unless you deprecate pointer addition on object references you still have confusion over what [startTime + offset + delay] means.
The exact behaviour for this was something I had been thinking about for a while. For base types as the right-hand operand I'm inclined to convert it into an NSNumber (and an NSValue for structs) and pass it as an object; that way the programmer never has to implement more than one call if she does not want to. Conversely, since ObjC does not have multiple inheritance, since we know the class of the operand at compile time and since we know the messages that the recipient responds to at compile time, I think that it would make sense to look at all the messages of the form __add__XXX: to find the XXX that is furthest down the class ancestry for the right-hand operand. If there is no match then we pass it to the base __add__: message, which may or may not be implemented and will be warned about if it is not. This would have both valuable static type checking benefits (which would avoid some of the classes of errors one gets in languages like Python) and would have profound performance improvements, since the ObjC runtime can ensure that the conversion from SEL to IMP only has to happen once and get cached rather than having a full dynamic lookup every time.
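To make the proposed dispatch scheme concrete, here is a hypothetical sketch (the __add__ selector spelling is the proposal above, not real Objective-C; Polynomial and Rational are made-up classes):

```objc
// Under the proposal, the compiler would rewrite
//     sum = poly + rat;
// into the most specific matching selector for the operand's class:
//     sum = [poly __add__Rational: rat];
// A class opts in by implementing one method per supported operand type:
@interface Polynomial (OperatorOverloading)
- (Polynomial *) __add__Rational: (Rational *) r;      // polynomial + rational
- (Polynomial *) __add__Polynomial: (Polynomial *) p;  // polynomial + polynomial
- (id) __add__: (id) other;                            // fallback for anything else
@end
```

A base type on the right-hand side, such as poly + 2, would be boxed into an NSNumber and routed through the same lookup.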
Blain — Sep 23, 07 4640
void * testy = [NSString stringWithString:@"Hello, world!"];
NSLog(testy);
Not only does this compile and run, but I didn't even get a warning. Obj-C is already context-sensitive. It's a language arch deluxe, keeping the hot side hot, and the cold side cold. The smalltalk side is completely different than the C side, in class declaration, in class definition, in calling, etc. The difference is that the context (object vs void *) is delineated by the brackets, not by the type that it might or might not be.
This is the crux of the matter: the compiler does not know the class it's being passed -- it doesn't need to; the structure it cares about is constant. A lot of Obj-C's magic uses this. Obj-C doesn't have multiple inheritance, simply because it uses protocols, both formal and informal, instead.
- (void)setDelegate:(id)anObject;
You'll find this in many NSObjects, even ones that aren't directly related. It takes an id. Not even an NSObject. There is not even a shred of a clue what class it's going to be, or even if it stays the same class over time. But it doesn't need to. As long as you check with respondsToSelector:, you're fine. That's right, the compiler need not even know what functions the class has. This is the difference. We don't care about your class, as long as you do your duty.
As for SEL and IMP, do look at methodForSelector:. We already can have the caching technique and eat it, too. And we don't have to sacrifice dynamicness for it. Let us go on without premature optimization.
Nicko — Sep 23, 07 4641
This is simply untrue. Consider the following code:
- (void) foo: (NSView*) v {} - (void) barResponder: (NSResponder *) r view: (NSView *) v button: (NSButton *) b { [self foo: r]; [self foo: v]; [self foo: b]; }
The first call to foo: raises a warning (passing argument 1 of 'foo:' from distinct Objective-C type) while the other two pass silently. The compiler is well aware of the class of objects, and of the class hierarchy. Now it may well be the case that the recipient does not care, and can indicate so by typing the parameter as 'id' rather than 'Foo*', but that's a very different thing from the compiler not knowing the class.
As for SEL and IMP, do look at methodForSelector:. We already can have the caching technique and eat it, too. And we don't have to sacrifice dynamicness for it. Let us go on without premature optimization.
I think you are missing the point I was making. If we are going to have some sort of operator overloading then, to satisfy exactly the sort of polymorphic behaviour that you were describing, you want to be able to attempt to pass any class as the operand. That said, the behaviour of an operator is likely to differ depending on the type of the operand (e.g. multiply my polynomial by (a) a rational number or (b) another polynomial, but never allow (c) an NSWindow, because that would be meaningless).
The first option is dreadfully inefficient (since the switch will run every time). The second option could have some sort of built-in caching, though it would be a different type of caching to the existing methodForSelector: cache since it would have to understand the class hierarchy of the operand as well as the recipient. The third option could be done at compile time, which has the added advantage of allowing the compiler to raise a warning if there is no known matching selector.
There is actually a fourth option, which might give the best of both worlds, which would be to implement option 2 and option 3, performing the lookup (and caching it) at runtime but performing a static check at compile time to allow for warnings. This is akin to the current arrangement where sending a random message to an object can raise a compiler warning but may well work at run-time. Now that I think about it, I think that this is the plan I shall try to implement!
Blain — Sep 23, 07 4642
This is simply untrue. Consider the following code:
Perhaps I should qualify my statement. While the compiler and IDE do know of the classes, especially for things like autocomplete, it doesn't necessarily need to know where specific values and functions are. Or even if they'll be the same.
Correct me if I'm wrong, but C++ has two ways for member functions. The lesser used is a virtual function, which is akin to Obj-C's messages, in that which code is run is determined at runtime. The default, however, is statically bound at compile time based on the variable class. This allows for very fast execution, but can lead to fragile base classes, because a subclass can't redefine a non-virtual function.
The problem is that I think one major advantage of Obj-C is delegates and other informal protocols. But they require virtual functions at every step of the way. But I digress.
If we are going to have some sort of operator overloading then, to satisfy exactly the sort of polymorphic behaviour that you were describing, you want to be able to attempt to pass any class as the operand.
To a degree, yes. For what I focus on, math is actually very low, as I focus more on classes that depend on things like tableView:objectValueForTableColumn:row:, which really could be almost any class. I mention -stringValue below, which also can lead to overloading confusion. If you can add two strings (concatenate), and you can treat an NSButton as a string using -stringValue and -setStringValue, can you add two buttons?
Now that I think of it, take a look at the value methods. They're a good example, in some ways. Both NSCell and NSNumber implement -stringValue, -compare:, -intValue, -floatValue, -doubleValue, and the like. They're Obj-C's current response to operator overloading, in two ways. First, the two classes are unrelated save at NSObject, yet they receive the same messages. The second thing, however, is when you hook things up in the nib, there's no mention of it being an int, float, or string. This is a detail done at runtime, effectively overloading the operand as well.
I still bristle at the term "best match," as it always gives me an image of a compiler playing 'pin the tail on the donkey.' But I'm curious as to your solution. I still suggest that it'd be best to extend Obj-C++, which still needs better C++/Obj-C inter-operability and already has some precedent for operator overloading, than to modify Obj-C itself.
Chuck — Sep 23, 07 4643
Nicko — Sep 30, 07 4672
That's right, all ObjC messages are bound at runtime but C++ defaults to binding methods at link-time. That said, virtual methods (and ObjC messages) are still type-checked by the compiler. I'm unsure of the details in C++ but in ObjC there is no run-time type checking of parameters.
I mention -stringValue below, which also can lead to overloading confusion. If you can add two strings (concatenate), and you can treat an NSButton as a string using -stringValue and -setStringValue, can you add two buttons?
Firstly, in the model I propose the "button plus button" case would never behave as you suggest simply because the left-hand operand would always stay as the target of the message and never be coerced into any other type. Secondly, I am not proposing any sort of automated type coercion at all. I think that this feature of C++ is the source of many of the problems people have with C++ operator overloading and it makes it very confusing indeed. Operator overloading in languages like Smalltalk, Python and Ruby happily get away without that complexity.
Now that I think of it, take a look at the value methods. They're a good example, in some ways. Both NSCell and NSNumber implement -stringValue, -compare:, intValue, -floatValue, -doubleValue, and the like.
Any automated use of the "-XXXValue" methods leads to exactly the madness of C++ operator overloading that I'm trying to avoid. They are fine when you are trying to present an object in a specific context (e.g. display the value on the slider as a -stringValue but process it as a -doubleValue) but go any deeper than that and you get to just the sort of obfuscation of meaning that Scott was complaining about at the start of this thread.
I still bristle at the term "best match," as it always gives me an image of a compiler playing 'pin the tail on the donkey.'
If you allow for multiple inheritance and automated type coercion then I agree. In fact what you get is the mess that C++ offers. In Objective C we have simple, linear class inheritance and I'm not proposing to support any coercion. In this case the "best match" is very simple. Just like any other method invocation the parameter has to be an instance of the parameter type or a subtype of that; if there is more than one match then take the match which is closest in the class inheritance chain. So, if I have different methods for a responder, a view or a button then an NSPopUpButton is going to match the button, a generic NSControl is going to match the view, an NSWindow is going to match the responder and an NSDictionary is not going to match anything and raises a warning at compile time, not at run time.
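The dispatch rule described here (walk up the operand's inheritance chain, pick the closest registered method, and error when nothing matches) already exists in other dynamic languages. As a hedged illustration only — Python's functools.singledispatch rather than anything in Objective-C, with hypothetical class names standing in for the AppKit ones:

```python
from functools import singledispatch

# Hypothetical stand-ins for the AppKit classes discussed in the thread.
class Responder: pass
class View(Responder): pass
class Button(View): pass
class PopUpButton(Button): pass

@singledispatch
def do_thing(operand):
    # No overload anywhere in the chain (the NSDictionary case): an error.
    raise TypeError("no overload for %s" % type(operand).__name__)

@do_thing.register
def _(operand: Responder):
    return "responder version"

@do_thing.register
def _(operand: View):
    return "view version"

@do_thing.register
def _(operand: Button):
    return "button version"

print(do_thing(PopUpButton()))  # the closest registered ancestor (Button) wins
print(do_thing(View()))
print(do_thing(Responder()))
```

singledispatch resolves along the class's method resolution order, so PopUpButton finds the Button overload: the same "closest match in the class inheritance chain" behavior, except checked at call time instead of compile time.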
Pose as class will not be a problem, though if you add overloaded operators at run time then the compiler has every right to warn about their use at compile time since they didn't exist then. Categories not only work, but are the "right" way to do this. The classes from the current libraries can have operator overloading support added through categories. Furthermore, if I come up with a new class and I want to add specific support for it as the right-hand operand on some existing class I can retrofit that through a category too. Method swizzling is an ugly hack but it should still work with my envisioned implementation.
Blain — Oct 01, 07 4686
I'd support something that could be done in a way that degrades gracefully with backwards compatibility. Perhaps if there are NSObject subclasses Foo and Bar, and we want to overload -(void)doThing:, I can see the following.
-(void) doThing: (NSObject *) ourValue;
-(void) doThingWithFoo: (Foo *) ourValue;
-(void) doThingWithBar: (Bar *) ourValue;

#ifdef OBJ_C_2_1
@overload -doThing:
#endif
Without the extensions, we'd have three messages, obviously. And you can explicitly ask for them by name. Heck, you can even have doThing: do class-checking, and reroute appropriately at runtime as a fallback.
With the extensions, when the compiler comes across doThing: with a known foo as the argument, it'd substitute in the doThingWithFoo:, for instance. This way, there's no mangling needed, binary backwards compatibility is preserved, and debug tools all will be honest about which doThing is being called. The overloading naming can be akin to accessors, where it searches for (message name)As(class) first, then tries (message name)With(class), (message name)Using(class), (message name)(class), (message name)_(class) etc.
Question: suppose we have foo, and bar is a subclass of foo. If foo has a doThingWithFoo:, but bar only has doThing, which would be called when [barObject doThing: fooObject] is used? Would it be the generic but most recently implemented bar's doThing:, or the specialized but not as directly associated foo's doThingWithFoo? I'm leaning towards the former, but I can see arguments for either.
Of course, if you want to explicitly override the type, it'd be a case of bar.fooValue, which moots things in that it's no longer bar being used, but an accessor-generated foo. That works very well.
---
Finally, the issue of operator overloading remains with the non-class core types. I'm not talking about int and float as much as NSPoint, NSSize, NSRect, etc. I could see use in NSSize + NSSize, or NSPoint + NSPoint.
I'm tempted to try this last bit in Obj-C++. Namely, C++ allows for structs to be treated as classes, so I'll try making NSPoint::operator+(NSPoint val) without it declared in the struct definition.
Timothy Mowlem — Oct 01, 07 4689
The Java package system is very useful for separating classes into namespaces, thereby avoiding name clashes and improving the high-level view of the code.
Is there any likelihood of such a feature being added to Objective-C going forward?
Blain — Oct 01, 07 4690
The other issue is that I don't know how namespaces could avoid conflicts between messages. That is, if someone category-extended NSObject when they made it for 10.4, and in 10.5 a message with that name comes out, bad things happen.
In the meantime, frameworks (with recursive frameworks, at that) and the use of a 2 or 3 letter prefix has been holding up rather well.
Hamish — Nov 04, 07 4973
endTime = [[[NSDate alloc] initWithTimeInterval: delay sinceDate: startTime] autorelease]
Luckily, in the presence of categories you simply have to write:
endTime = [startTime plus:delay]
Of course, someone (you or Apple) still has to write the method for this, but operator overloading doesn't change that fact. | http://theocacao.com/document.page/495 | crawl-001 | refinedweb | 5,761 | 69.62 |
How can I sort an ArrayList in Java?
You can sort the ArrayList in 2 steps:
import java.util.Collections;
Collections.sort(alist);
Neural networks and cloud IDE: setup and run Tensorflow on Codenvy
This post describes how to set up the Tensorflow library on the cloud integrated development environment Codenvy and run a simple example based on a neural network. All software and cloud systems described in this neural networks training are available for free. Also, Codenvy allows subscribing to their paid services to extend the available system resources.
Why Cloud IDE
Cloud IDE (Integrated Development Environment) is a good solution for homebrew projects or even for commercial software development. It unifies all workflows with the project's source code and provides good performance for building, deploying, and testing procedures. Most cloud IDEs are accessible from a browser, which means anyone can open a workplace and continue working on a project in a few clicks.
Why Codenvy
Codenvy is a cloud IDE and developer workspace server that allows anyone to contribute to a project without having to install software. The advantages of Codenvy:
- It allows starting your project free.
- A user interface for an integrated development environment is very well organized and suits for any browser.
- It offers a variety of environments and frameworks, including Python, PHP, C++, Java and many others.
- In Codenvy, you have access to a terminal command interface and connections to external internet resources. Even the sudo command is available in your command line, which allows installing many custom libraries or tools if necessary.
Please take into consideration that the Codenvy IDE can be slow on a free account for large computation tasks, but it is quite enough for education and research purposes.
Why Tensorflow for neural networks
There are numerous open-source frameworks for machine learning and artificial neural networks.
One of them is Tensorflow, a free library from Google for computing operations on tensors, which is very popular for building neural networks and machine learning. This framework works with Python and with GPU boards to handle large computational tasks.
Tensorflow allows working with numerical computations based on graphs. Every graph flow represents a tensor (a multidimensional array), and this helps represent a complex sequence of mathematical operations. It works well with neural network code, because tensors represent layers of artificial neurons.
For deeper learning, read this article.
Working with git and continuous integration
Codenvy IDE offers great built-in integration with git. If you prefer using command-line git, it also works in the Codenvy terminal out of the box.
You can configure a continuous integration tool and deploy all sources to your production servers, such as Amazon, Azure or Google. If you need a powerful computation environment, you can deploy it on a cloud instance with GPUs (for example, Amazon EC2 P2 instances offer up to 16 NVIDIA Tesla GPU boards).
How to setup Tensorflow in Codenvy
1. Go to codenvy.io and register for free or login with your existing Google account.
2. Create a Python workplace with python3 and pip.
3. Create a Python project in codenvy.io. Click on ‘Workplace’ -> ‘Create Project’ -> select ‘Python’ and enter a name.
4. Go to a Terminal window in your workplace by clicking menu ‘Run’ -> ‘Terminal.’
5. To install Tensorflow by pip3 type the following command in terminal:
sudo pip3 install tensorflow
6. Put the following code in the file main.py:
# Python
import tensorflow as tf
hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print(sess.run(hello))
7. To start your example in Tensorflow from terminal:
python3 Hello/main.py
This command will generate console output like b'Hello, TensorFlow!'
8. After your work is done, select in menu ‘Workplace’ -> ‘Stop.’
You can set 3Gb of memory for this workplace and connect your github to it.
Your first Tensorflow neural network in cloud IDE
Now you have Tensorflow installed on your cloud IDE.
Let’s run a classic neural network example: the XOR logical function. You can easily find Python code for it written in Tensorflow. A good example is available here.
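For a feel of what such an XOR network actually computes, here is a rough, dependency-free sketch: plain Python with hand-written backpropagation on a 2-4-1 sigmoid network. The layer sizes, learning rate and epoch count are my own assumptions, not the linked Tensorflow code.

```python
import math
import random

random.seed(0)

X = [(0, 0), (0, 1), (1, 0), (1, 1)]   # XOR inputs
Y = [0, 1, 1, 0]                        # XOR targets

def sig(z):
    z = max(-60.0, min(60.0, z))        # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-z))

H = 4                                   # hidden units
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [sig(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(H)]
    o = sig(sum(w2[j] * h[j] for j in range(H)) + b2)
    return h, o

def total_loss():
    return sum((forward(x)[1] - y) ** 2 for x, y in zip(X, Y))

initial_loss = total_loss()
lr = 1.0
for _ in range(5000):                   # plain per-sample gradient descent
    for x, y in zip(X, Y):
        h, o = forward(x)
        d_o = (o - y) * o * (1 - o)     # gradient at the output unit
        for j in range(H):
            d_h = d_o * w2[j] * h[j] * (1 - h[j])
            w2[j] -= lr * d_o * h[j]
            w1[j][0] -= lr * d_h * x[0]
            w1[j][1] -= lr * d_h * x[1]
            b1[j] -= lr * d_h
        b2 -= lr * d_o

print("predictions:", [round(forward(x)[1]) for x in X])
print("loss fell:", total_loss() < initial_loss)
```

The Tensorflow version builds the same forward pass as a graph and derives these gradients automatically, which is the whole point of using the framework.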
In your Codenvy environment, click ‘Workplace’ -> ‘Create Project’ on the top menu. Then select Python in the left part of the dialog box, enter XOR in the ‘Name:’ field, and click the ‘Create’ button. Select the ‘Project’ tab on the left of the main window and open the project ‘XOR.’
Click on main.py to open it in the editor. Then copy the Python code from the mentioned GitHub repository and paste it into this file. Finally, open a terminal window from the Codenvy top menu ‘Run’ -> ‘Terminal’ and enter the following command:
python3 XOR/main.py
You will see some text output from a neural network training process and file xor_logs created in your directory ./logs/
Congratulations! Your first neural network example works in Codenvy cloud IDE.
How to run Tensorboard on codenvy.io
Tensorboard is a great visualization tool for Tensorflow's graph of computations. It displays the data flow step by step, which helps in understanding the sequence of operations in Tensorflow code. Every time you run a Tensorflow computation, you can create a special log file. This log is loaded into Tensorboard, which shows the graph of computation in a web browser. This line of Python code will write the log file xor_logs into the folder logs:
tf.summary.FileWriter("./logs/xor_logs", sess.graph)
To run a tensorboard tool in Codenvy and display an image of the computation graph in your browser, do the following.
1. Go to your dashboard, find your workplace, and click on a ‘Configure workplace’ icon (gear).
2. Go to the Server section and click the ‘Add Server’ button.
3. Enter Reference: tensorflow, Port: 6006, protocol: http and click "Add". Be aware that this http connection will be insecure, so do not enter any sensitive data, or use an https connection instead.
4. Run your workplace on Codenvy.
5. Copy an address from the Server section.
6. Run the command in the terminal window on your workplace. Let’s assume you ran Tensorflow with some neural networks and have a log file in ./logs/xor_logs:
tensorboard --logdir=./logs/xor_logs
7. Open a new tab in your browser and go to the address (note: the address will be different in your specific environment).
Endnotes on neural networks modeling
Such an interesting combination as Tensorflow and Codenvy is efficient for training purposes, or for supporting different environments simultaneously when you cannot use your own computer directly for these purposes. Of course, Tensorflow in such a setup cannot be as fast as on a GPU video board, but it is still very good for a start. For the production version, you can use another cloud system where you deploy your project while continuing to develop in the Codenvy IDE.
As the neural networks and machine learning community expands and the industry evolves, we look forward to seeing even more mature frameworks and tools. Follow our blog not to miss anything interesting on the topic.
Hello. I need to unfold some feature maps of my network during training, which consumes a lot of CUDA memory. I found that the program crashes with “out of cuda memory” after a few training loops. However, the variables I allocate should be local to the ‘for’ statement, so I don’t know why it runs out of memory after a few successful loops. I think the memory consumption should be fixed in every loop. Can anyone help me out? Thanks!
Two methods which I frequently use for debugging:
By @smth
import gc
import torch

def memReport():
    for obj in gc.get_objects():
        if torch.is_tensor(obj) or (hasattr(obj, 'data') and torch.is_tensor(obj.data)):
            print(type(obj), obj.size())
import os
import sys
import psutil

def cpuStats():
    print(sys.version)
    print(psutil.cpu_percent())
    print(psutil.virtual_memory())  # physical memory usage
    pid = os.getpid()
    py = psutil.Process(pid)
    memoryUse = py.memory_info()[0] / 2. ** 30  # memory use in GB...I think
    print('memory GB:', memoryUse)
cpuStats()
memReport()
Thanks! Does Python's gc collect garbage as soon as a variable has no references, or with a delay?
@chenchr it does immediately, unless you have reference cycles.
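The "immediately, unless you have reference cycles" behavior is easy to observe in plain CPython. A sketch with a hypothetical Holder class standing in for a tensor-owning object; weakref lets us see exactly when an object dies:

```python
import gc
import weakref

class Holder:
    """Hypothetical stand-in for an object owning a big tensor."""

gc.disable()                          # keep the cycle collector quiet for the demo

a = Holder()
probe = weakref.ref(a)
a = Holder()                          # rebinding drops the last strong reference
freed_immediately = probe() is None   # True: the old object died at once
print(freed_immediately)

b = Holder()
b.me = b                              # a reference cycle
probe2 = weakref.ref(b)
del b
alive_in_cycle = probe2() is not None # True: refcounting alone can't free it
print(alive_in_cycle)

gc.enable()
gc.collect()                          # the cycle collector breaks the cycle
print(probe2() is None)               # True again: now it is gone
```

This is CPython-specific behavior (reference counting plus a cycle collector); the practical takeaway for the thread is that tensors held alive through cycles, or through lingering references, keep their CUDA memory until collection.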
@smth
Thanks! Do you mean that:
def func():
    a = Variable(torch.randn(2,2))
    a = Variable(torch.randn(100,100))
    return
the memory allocated in a = Variable(torch.randn(2,2)) will be freed as soon as the code a = Variable(torch.randn(100,100)) is executed?
yes. correct…
But, don’t forget that once you call a = Variable(torch.rand(2, 2)), a holds the data.
When you call a = Variable(torch.rand(100, 100)) afterwards, first Variable(torch.rand(100, 100)) is allocated (so the first tensor is still in memory), then it is assigned to a, and then Variable(torch.rand(2, 2)) is freed.
@fmassa
that means there has to be enough memory for two variables during the creation of the second variable?
That means that if you have something like
a = torch.rand(1024, 1024, 1024) # 4GB
# the following line allocates 4GB extra before the assignment,
# so you need to have 8GB in order for it to work
a = torch.rand(1024, 1024, 1024)
# now you only use 4GB | https://discuss.pytorch.org/t/how-pytorch-releases-variable-garbage/7277 | CC-MAIN-2017-47 | refinedweb | 390 | 62.44 |
Details
Description
Got this playing w/ hbck going against the 0.94RC:
12/04/16 17:03:14 INFO util.HBaseFsck: getHTableDescriptors == tableNames => []
Exception in thread "main" java.lang.NullPointerException
        at org.apache.hadoop.hbase.util.HBaseFsck.reportTablesInFlux(HBaseFsck.java:553)
        at org.apache.hadoop.hbase.util.HBaseFsck.onlineConsistencyRepair(HBaseFsck.java:344)
        at org.apache.hadoop.hbase.util.HBaseFsck.onlineHbck(HBaseFsck.java:380)
        at org.apache.hadoop.hbase.util.HBaseFsck.main(HBaseFsck.java:3033)
Issue Links
- relates to
HBASE-6015 HBCK rerun should check all the regions which it checked in the first run
- Resolved
Activity
@Stack
We too got one NPE in hbck. Still not found the reason. Not sure if it is same as this one.
@Ram, yes this is the same issue. I got the reason.
The scenario in our test is like this. There is one table, and one region of that table was not assigned to any of the RSs. The HBCK tool fixes this issue, and after that HBCK runs again.
At this time getHTableDescriptors() does not find any table in the cluster and returns null, so reportTablesInFlux() -> errors.print("Number of Tables: " + allTables.length); throws an NPE.
Why does getHTableDescriptors() return no tables at this point, even though one table is there in the cluster? Because this table was modified recently: HBCK just changed the HRegionInfo of the region of the table by assigning it to one of the RSs.
For the fix:
1. I think we need a null check in reportTablesInFlux().
2. When HBCK reruns after the fix, can we set timelag = 0?
I started a run of the unit test suite testing this fix – for a method like this, I prefer returning empty arrays instead of null arrays.
diff --git src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java
index ee16e72..44b7c11 100644
--- src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java
+++ src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java
@@ -1691,7 +1691,7 @@ public class HBaseAdmin implements Abortable, Closeable {
   /**
    * Get tableDescriptors
    * @param tableNames List of table names
-   * @return HTD[] the tableDescriptor
+   * @return HTD[] the tableDescriptor (never null)
    * @throws IOException if a remote or network exception occurs
    */
   public HTableDescriptor[] getTableDescriptors(List<String> tableNames)
diff --git src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java
index 820e2a9..f183b15 100644
--- src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java
+++ src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java
@@ -2195,7 +2195,7 @@ public class HConnectionManager {
   @Override
   public HTableDescriptor[] getHTableDescriptors(List<String> tableNames)
       throws IOException {
-    if (tableNames == null || tableNames.isEmpty()) return null;
+    if (tableNames == null || tableNames.isEmpty()) return new HTableDescriptor[0];
     MasterKeepAliveConnection master = getKeepAliveMaster();
     try {
       return master.getHTableDescriptors(tableNames);
@Jon
Yes, I also don't like putting in a null check...
Also what about 2. When HBCK rerun after the fix we can set timelag =0?
I think #2 makes sense, but would need to be tested to verify (it is a legacy of the original hbck – I didn't change this).
Anoop – do you guys want to take this on or should I?
Jon, I can provide a patch tomorrow addressing both the points I have mentioned, if it is ok with you.
Anoop – go for it.
Returning empty array is valid. I dug a little into the master side as well – it returns an empty array in the case where an invalid set of table names is passed.
Patch for trunk and 0.94
Got a doubt on my patch now
Should we track the skipped regions, or the regions included in the first run?
The NPE problem as such is getting fixed by the patch in HBASE-5928.
We can close this issue.
Also there is another point (issue) with the HBCK rerun, which is mentioned as the 2nd point in my above comment. Better to raise a new issue to handle that.
@Jon Ok with you?
Raised HBASE-6015 to track the second point.
The issue with NPE is fixed as part of HBASE-5928.
Error is transient. Subsequent runs worked. | https://issues.apache.org/jira/browse/HBASE-5798 | CC-MAIN-2016-07 | refinedweb | 687 | 52.46 |
Docs Project lead: Susan Lauber. Team members are needed.
Goal: to assist with page naming, page categories, style guide, and general proofreading. There are three major groups of pages.
Packaging Guide
The Packaging Guide with controlled access in the Packaging namespace.
Packaging Guide Drafts
The Packaging Guide Drafts where anyone can contribute. Most pages are currently named PackagingDrafts/*
Package Maintainer pages
Related documentation and tips for Package Maintainers. These files are mostly named PackageMaintainers/*
Notes
- PSV files are hosted in the wikirename git repository
- Pages can be renamed with the move button at the top of each page. The psv files are used by wikibot for a mass renaming. Either method will take care of all redirects.
- Category:Fonts is considered by the wiki team to be a good example of how Categories can be used to organize pages.
- abadger1999 is doing similar work with infrastructure and release engineering SOPs
- quaid and the docs project pages are also waiting on wikibot
- Going forward:
- new pages should follow guidelines
- categories should be used
- pages will continue to redirect but new references should point to the new page
- Be aware that the redirect does NOT currently change the URL in the browser. When linking to pages in the future, get the correct page name from the page itself. (The URL will continue to redirect but it is cleaner to go direct) There is an open ticket related to this issue. | https://www.fedoraproject.org/wiki/Docs_tasks_for_Packaging_Guide_and_related_materials | CC-MAIN-2021-10 | refinedweb | 238 | 61.46 |
Spring Cloud: Routing with Zuul and Gateway

Clients (third-party apps making a web service call, etc.) should be able to access the end microservices without knowing their hosts and ports. For example, browsers restrict calls to different domains (also known as CORS). What we need is a common entry point to our microservices. Using this, we not only free the clients from knowing deployment details about all of the backend services, but also reduce development effort on the server side. At the same time, if an end microservice has multiple instances running, we can do load balancing at this entry point. Furthermore, we can also write all of the authentication and authorization mechanisms at this level. This removes significant development from the end microservices side.
To solve this problem, Netflix created the Zuul server and later open-sourced it. Spring provided a nice wrapper around it for easily incorporating it into the Spring stack. Note: Netflix recently released Zuul 2, but Spring hasn't added it to its ecosystem yet. Because of this, we'll be using Zuul 1 in this article. Spring also released its own router called Spring Cloud Gateway. It has non-blocking APIs and supports long-lived connections like WebSockets. We will look into both of these solutions in this article. The architecture diagram looks like this:
This article assumes that you already have knowledge of Netflix's Eureka project, which is used as a service registry and for load balancing. We have the following setup up for the backend service:
8761.
/getPublicAddressand running on port
8100.
/categoriesand running on port
8200.
The best way to start with a skeleton project is to use Spring Initializr. Select your preferred version of Spring Boot and add the "Zuul" and "Eureka Discovery" dependencies, and generate as a Maven project:
To make it a Zuul proxy server, all we need to do is add the
@EnableZuulProxy annotation to our main class:
@SpringBootApplication @EnableZuulProxy public class ZuulApplication { public static void main(String[] args) { SpringApplication.run(ZuulApplication.class, args); } }
We will be running the Zuul server on port
8050 and it also needs to register itself to the Eureka server. So in
application.properties we'll add the following:
server.port=8050 spring.application.name=zuul-edge-server eureka.client.serviceUrl.defaultZone=
Let's start this server and navigate your browser to the Eureka server at http://localhost:8761.
Now that Zuul has been registered with Eureka, let's test routing to our user-service through it by navigating the browser to http://localhost:8050/user-service/getPublicAddress.
Similarly, for the product-service navigate your browser to http://localhost:8050/product-service/categories.
As you can see, we are calling the backend services through Zuul. By default, Eureka client IDs become part of the URIs. For example, here we made a call to Zuul using
/product-service/categories. Zuul will check if there is any service registered as
product-service in Eureka. If it's there, it will get the URL for the service and append the remaining original URL part,
/categories to it and make the call.
Also, Zuul is Ribbon aware, so it will automatically load balance the call if there are multiple instances of the backend service running.
The defaults can, of course, be changed by tweaking the properties file, which can be found here. It's also not necessary for all backend services to be registered on Eureka. We can also route to other domains too.
Let's see another popular edge server called Spring Cloud Gateway, which is built on Spring Framework 5, Project Reactor and Spring Boot 2.0. Once again let's create a new project with Spring Initializr. Select your preferred version of Spring Boot and add the "Gateway" and "Eureka Discovery" dependencies, and generate as a Maven project:
We will be running Zuul server on port
8060 and it also needs to register itself to the Eureka server. So in
application.properties we'll add:
server.port=8060 spring.application.name=gateway-edge-server eureka.client.serviceUrl.defaultZone= spring.cloud.gateway.discovery.locator.enabled=true spring.cloud.gateway.discovery.locator.lowerCaseServiceId=true
Unlike Zuul, Spring Cloud Gateway doesn't automatically look in Eureka for routing calls, so we enabled it by adding a couple of additional properties. Let's start this server and navigate your browser to the Eureka server at http://localhost:8761.
Similar to the previous example, we can test our routing to user-service and product-service by navigating our browser to http://localhost:8060/user-service/getPublicAddress and http://localhost:8060/product-service/categories, respectively.
In this article, we've covered how to use Spring Cloud Zuul and Gateway for routing traffic to backend microservices. We created two simple REST services that registered with Eureka server. We then created Zuul server that also registered with Eureka and then routes traffic based on it. We then saw an alternate approach with Spring Cloud Gateway. As always, the code for the examples used in this article can be found on Github.Reference: stackabuse.com | https://www.codevelop.art/spring-cloud-routing-with-zuul-and-gateway.html | CC-MAIN-2022-40 | refinedweb | 842 | 55.03 |
- The .NET Framework
- Visual Studio 2005
- Conclusion.
The .NET Framework
The heart of .NET is the .NET Framework. First released in 2002, it brought enormous change to the lives of those who write Windows software and the people who manage them. Figure 1-1 shows the Framework’s two main parts: the CLR and the .NET Framework class library. A .NET application always uses the CLR, and it can also use whatever parts of the class library it requires.
Every application written using the Framework depends on the CLR. Among other things, the CLR provides a common set of data types, acting as a foundation for C#, VB, and all other languages that target the .NET Framework. Because this foundation is the same no matter which language they choose, developers see a more consistent environment.
Figure 1-1 The .NET Framework consists of the Common Language Runtime (CLR) and the .NET Framework class library.
Surveying the Library
The contents of the .NET Framework class library are organized into a tree of namespaces. Each namespace can contain types, such as classes and interfaces, and other namespaces. Figure 1-4 shows a very small part of the .NET Framework class library’s namespace tree. The namespaces shown include the following:
Figure 1-4 The .NET Framework class library is structured as a hierarchy of namespaces, with the System namespace at the root.
- System: The root of the tree, this namespace contains all of the other namespaces in the .NET Framework class library. System also contains the core data types used by the CLR (and thus by languages built on the CLR). These types include several varieties of integers, a string type, and many more.
- System.Web: This namespace contains types useful for creating Web applications, and like many namespaces, it has subordinate namespaces. Developers can use the types in System.Web.UI to build ASP.NET browser applications, for example, while those in System. Web.Services are used to build ASP.NET Web Services applications.
- System.Data: The types in this namespace comprise ADO.NET. For example, the Connection class is used to establish connections to a database management system (DBMS), while an instance of the DataSet class can be used to cache and examine the results of a query issued against that DBMS.
- System.Windows.Forms: The types in this namespace make up Windows Forms, and they’re used to build Windows GUIs. Rather than relying on language-specific mechanisms, such as the older Microsoft Foundation Classes (MFC) in C++, .NET Framework applications written in any programming language use this common set of types to build graphical interfaces for Windows.
- System.EnterpriseServices: The types in this namespace provide services required for some kinds of enterprise applications. Implemented by COM+ in the pre-.NET world, these services include distributed transactions, object instance lifetime management, and more. The most important type in this namespace, one from which classes must inherit to use Enterprise Services, is the ServicedComponent class.
- System.Xml: Types in this namespace provide support for creating and working with XML-defined data. The XmlDocument class, for instance, allows accessing an XML document using the Document Object Model (DOM). This namespace also includes support for technologies such as the XML Schema definition language (XSD) and XPath.
Many more namespaces are defined, providing support for file access, serializing an object’s state, remote access to objects, and much more. In fact, the biggest task facing developers who wish to build on the .NET Framework is learning to use the many services that the library provides. There’s no requirement to learn everything, however, so a developer is free to focus on only those things relevant to his or her world. Still, some parts will be relevant to almost everybody, and so the next sections provide a quick overview of some of this large library’s most important aspects.
Building Web Applications: ASP.NET
Implemented in the System.Web namespace, ASP.NET is an important piece of the .NET Framework. The successor to the very popular Active Server Pages (ASP) technology, ASP.NET applications are built from one or more pages. Each page contains HTML and/or executable code, and typically has the extension .aspx. As Figure 1-5 shows, a request from a browser made via HTTP causes a page to be loaded and executed. Any output the page creates is then returned to the browser that made the request.
Building effective Web applications requires more than just the ability to combine code with HTML. Accordingly, ASP.NET provides a range of support, including the following:
- Web controls, allowing a developer to create a browser GUI in a familiar way. By dragging and dropping standard ASP.NET controls for buttons and other interface elements onto a form, it’s possible to build GUIs for Web applications in much the same way as for local Windows applications.
Figure 1-5 ASP.NET allows developers to create browser-accessible applications.
- Mechanisms for managing an application’s state information.
- Built-in support for maintaining information about an application’s users, sometimes called membership information.
- Support for data binding, which allows easier access to information stored in a DBMS or some other data source.
Given the popularity of Web applications, ASP.NET probably impacts more developers than any other part of the .NET Framework class library. Chapter 5 provides more detail on this key component of the .NET Framework.
Accessing Data: ADO.NET
ADO.NET lets applications work with stored data. As Figure 1-6 shows, access to a DBMS relies on a .NET Framework data provider, written as managed code. Providers that allow access to SQL Server, Oracle, and other DBMS are included with the .NET Framework. They allow a client application to issue commands against the DBMS and examine any results those commands return. The result of a Structured Query Language (SQL) query, for example, can be examined in two ways. Applications that need only read the result a row at a time can do this by using a DataReader object to march through the result one record at a time. Applications that need to do more complex things with a query result, such as send it to a browser, update information, or store that information on disk, can instead have the query’s result packaged inside a DataSet object.
Figure 1-6 ADO.NET allows .NET Framework applications to access data stored in DBMS and XML documents.
As Figure 1-6 illustrates, a DataSet can contain one or more tables. Each table can hold the result of a different query, so a single DataSet might potentially contain the results of two or more queries, perhaps from different DBMS. In effect, a DataSet acts as an in-memory cache for data. As the figure shows, however, DataSets can hold more than just the result of a SQL query. It’s also possible to read an XML document directly into a table in a DataSet without relying on a .NET Framework data provider. Data defined using XML has also become much more important in the last few years, so ADO.NET allows accessing it directly. While not all .NET Framework applications will rely on ADO.NET for data access, a large percentage surely will. ADO.NET is described in more detail in Chapter 6.
Building Distributed Applications
Creating software that communicates with other software is a standard part of modern application development. Yet different applications have different communication requirements. To meet these diverse needs, the .NET Framework class library includes three distinct technologies for creating distributed applications. Figure 1-7 illustrates these choices.
ASP.NET Web Services, mostly defined in System.Web.Services, allows applications to communicate using Web services. Since it's part of ASP.NET, this technology lets developers use a similar model for creating distributed software. As Figure 1-7 shows, applications that expose methods as Web services can be built from files with the extension .asmx, each of which contains only code. Clients make requests using the standard Web services protocol SOAP, and the correct page is loaded and executed. Because this technology is part of ASP.NET, requests and replies also go through Internet Information Services (IIS), the standard Web server for Windows.
Figure 1-7 Distributed applications can use ASP.NET Web Services, .NET Remoting, or Enterprise Services.
Communication via Web services is especially useful for interoperating with software built on platforms other than the .NET Framework, such as the Java environment. But it’s not always the right solution. In some situations, the technology known as .NET Remoting, defined in the System.Runtime.Remoting namespace, is a better choice. Unlike ASP.NET Web Services, .NET Remoting focuses on direct communication between applications built on the .NET Framework. While it does support a version of SOAP, this technology also provides a binary protocol along with the ability to add extensions using any other protocol a developer needs. .NET Remoting isn’t the most common choice for communication, but it can be important for some kinds of applications.
The third option for creating distributed applications using the .NET Framework is Enterprise Services. Defined in the System.EnterpriseServices namespace, it provides applications with services such as distributed transactions and more. Figure 1-7 illustrates this, showing a server application accessing two databases. If a single transaction included updates to both of these databases, Enterprise Services might well be the right choice for the application that used this transaction. Remote clients can communicate directly with an Enterprise Services application using DCOM, and it’s also possible for an ASP.NET application to use Enterprise Services when necessary.
All three options make sense in different situations, and having a basic understanding of all three is useful. For a more detailed look at each of them, see Chapter 7. | http://www.informit.com/articles/article.aspx?p=473451&seqNum=2 | CC-MAIN-2018-43 | refinedweb | 1,641 | 58.99 |
When developing locally, HTTP requests return almost instantly, in about 15ms. To get around this, I used an HTTP interceptor to simulate network latency in a controlled fashion.
Run this demo in my JavaScript Demos project on GitHub.
In the past, we've seen that AngularJS implements the $http request pipeline using a promise chain with hooks into both the pre-http and post-http portions of the request. This provides a perfect opportunity to inject a "delay promise" into the overall promise chain:
For this demo, I'm hooking into the "response" portion of the workflow, as opposed to the "request" portion. For some reason, that just feels a bit more natural. This works for both successful responses as well as failed responses, assuming we use both the "response" and "responseError" hooks:
<!doctype html>
<html ng-app="Demo">
<head>
	<meta charset="utf-8" />
	<title>
		Simulating Network Latency In AngularJS With HTTP Interceptors
	</title>
	<link rel="stylesheet" type="text/css" href="./demo.css"></link>
</head>
<body ng-controller="AppController">

	<h1>
		Simulating Network Latency In AngularJS With HTTP Interceptors
	</h1>

	<p>
		<strong>State</strong>:
		<span ng-if="isLoading">Loading http data.</span>
		<span ng-if="! isLoading">Done after {{ delta }} milliseconds</span>
	</p>

	<p>
		<a ng-click="makeRequest()">Make an HTTP request</a>
	</p>

	<!-- Load scripts. -->
	<script type="text/javascript" src="../../vendor/angularjs/angular-1.3.13.min.js"></script>
	<script type="text/javascript">

		// Create an application module for our demo.
		var app = angular.module( "Demo", [] );

		// -------------------------------------------------- //
		// -------------------------------------------------- //

		// I control the root of the application.
		app.controller(
			"AppController",
			function( $scope, $http ) {

				// I determine if there is pending HTTP activity.
				$scope.isLoading = false;

				// I keep track of how long it takes for the outgoing HTTP request to
				// return (either in success or in error).
				$scope.delta = 0;

				// I am used to move between "200 OK" and "404 Not Found" requests.
				var requestCount = 0;

				// ---
				// PUBLIC METHODS.
				// ---

				// I alternate between making successful and non-successful requests.
				$scope.makeRequest = function() {

					$scope.isLoading = true;

					var startedAt = new Date();

					// NOTE: We are using the requestCount to alternate between requests
					// that return successfully and requests that return in error.
					$http({
						method: "get",
						url: ( ( ++requestCount % 2 ) ? "./data.json" : "./404.json" )
					})
					// NOTE: We are foregoing "resolve" and "reject" because we only care
					// about when the HTTP response comes back - we don't care if it
					// came back in error or in success.
					.finally(
						function handleDone( response ) {

							$scope.isLoading = false;
							$scope.delta = ( ( new Date() ).getTime() - startedAt.getTime() );

						}
					);

				};

			}
		);

		// -------------------------------------------------- //
		// -------------------------------------------------- //

		// Since we cannot change the actual speed of the HTTP request over the wire,
		// we'll alter the perceived response time by hooking into the HTTP interceptor
		// promise-chain.
		app.config(
			function simulateNetworkLatency( $httpProvider ) {

				$httpProvider.interceptors.push( httpDelay );

				// I add a delay to both successful and failed responses.
				function httpDelay( $timeout, $q ) {

					var delayInMilliseconds = 850;

					// Return our interceptor configuration.
					return({
						response: response,
						responseError: responseError
					});

					// ---
					// PUBLIC METHODS.
					// ---

					// I intercept successful responses.
					function response( response ) {

						var deferred = $q.defer();

						$timeout(
							function() {
								deferred.resolve( response );
							},
							delayInMilliseconds,
							// There's no need to trigger a $digest - the view-model has
							// not been changed.
							false
						);

						return( deferred.promise );

					}

					// I intercept error responses.
					function responseError( response ) {

						var deferred = $q.defer();

						$timeout(
							function() {
								deferred.reject( response );
							},
							delayInMilliseconds,
							// There's no need to trigger a $digest - the view-model has
							// not been changed.
							false
						);

						return( deferred.promise );

					}

				}

			}
		);

	</script>
</body>
</html>
I don't have too much more to say about this. I just thought it was a fun use of the underlying $http implementation. And, it's yet another example of how promises make our lives better.
Reader Comments
If you want, you can simulate delays or server responses using a ServiceWorker without requiring application code modifications
Pretty advanced but might come in handy
@Gleb,
I'm having a little trouble following the code. I **think** you are redefining the "load()" method with a Function() constructor in order to change its lexical binding to the $http object you are defining inside the wedge? Is that accurate? If so, it's a very clever idea! I don't think I've seen the Function constructor used to redefine context like that.
Hi, very good article, i have a question: what if i need to delay the request, for example: show a toast before send the request itself. Thanks!
@Luis,
If you need the delay as part of your application logic, I would probably do that outside of an $http interceptor. Since it is a specific context that triggers the toast item, you don't want to try to incorporate that into a generic $http interceptor.
I don't know the rules of your application; but, I assume there's some sort of Controller or Directive that manages the toast item. Perhaps you could have the toast emit an event, $rootScope.$emit( "toastClosed" ), when it is closed. Then, you could have some other controller listen for that event and initiate the HTTP request at that time:
$rootScope.$on( "toastClosed", function() {
. . . . make the HTTP request
} );
You could also do this with promises and a host of other ways. It also depends on whether or not you want to pass data around, etc. But, definitely, I wouldn't try to incorporate that into the $http logic itself. | https://www.bennadel.com/blog/2802-simulating-network-latency-in-angularjs-with-http-interceptors-and-timeout.htm | CC-MAIN-2020-40 | refinedweb | 839 | 58.48 |
We're starting an open source project on behalf of this website! In this series of video's well set up the project locally, turn our python code into a python project and also make the first version of it pip-installable.
Notes
We'll make a small change in our
__init__.py. It will contain this line:
from clumper.clump import Clumper
Make note of this. It will turn out to be an important detail later.
Feedback? See an issue? Something unclear? Feel free to mention it here.
If you want to be kept up to date, consider getting the newsletter. | https://calmcode.io/setup/dunder-init.html | CC-MAIN-2020-34 | refinedweb | 111 | 85.08 |
The System.Collections namespace in the .NET Framework provides a number of collection types that are extremely useful for manipulating data in memory. However, there is one type of collection that is conspicuously missing from System.Collections: the Set.
A Set is a collection that contains no duplicate elements. It is loosely modelled after the mathematical concept of a "set." This implementation is based on the Java Set interface definition, so if you are also a Java programmer, this may seem familiar. The major differences are that this library does not use interfaces, and it provides a number of "standard" Set operators that the Java library neglected to include.
Sets come in handy when an Array or a List won't quite fit the bill. Arrays in .NET have a fixed length, making it tedious to add and remove elements. Lists allow you add new objects easily, but you can have numerous duplicate elements, which is undesirable for some types of problems. Searching Arrays or Lists for elements is just plain slow for large data sets, requiring a linear search. You could keep the array sorted and use a binary search, but that is often more trouble than it is worth (especially since this library, and the .NET Framework, provide better ways that are already written for you).
With sets, adding elements, removing elements, and checking for the existence of an element are fast and simple. You can mix and match the elements in different sets using the supported mathematical set operators: union, intersection, exclusive-or, and minus. See the example below for more information.
You will see some interesting side effects with different Set implementations in this library, depending on the underlying search algorithm. For example, if you choose a sort-based Set, the elements will come out in sort order when you iterate using foreach. If you use a hash-based Set, the elements will come out in no particular order, but checking for inclusion will be fastest when dealing with large data sets. If you use a list-based Set, elements will come out in the order you put them in when you iterate. Additionally, list-based sets are fastest for very small data sets (up to about 10 elements), but get slower very quickly as the number of contained elements increases. To get the best of both worlds, the library provides a Set type that uses lists for small data sets and switches to a hash-based algorithm when the data set gets large enough to warrant it.
The Iesi.Collections library has the following object hierarchy:
- Iesi.Collections
  - Set (the abstract base class of all the implementations)
    - DictionarySet (an abstract Set backed by an IDictionary)
      - HashedSet (backed by a HashTable)
      - SortedSet (backed by a SortedList)
      - ListSet (backed by a ListDictionary)
      - HybridSet (backed by a HybridDictionary)
    - ImmutableSet (a read-only wrapper around another Set)
    - SynchronizedSet (a thread-safe wrapper around another Set)
You will probably find the HashedSet and HybridSet to be the most useful implementations. They can contain any object that is immutable, can be compared using Equals(), and has a valid implementation of GetHashCode(). All of the normal value types and many objects already meet these requirements, so for most data types, HashedSet and HybridSet just work. The only downside to using them is that you can't predict the order of iteration.
SortedSet is useful if you are interested in the iteration order of your Set, but it imposes some different requirements, and they are usually more difficult to meet. In addition to being immutable, elements in a SortedSet must also implement IComparable. Further, they must actually be comparable without throwing an exception. So you would not be able to put string values and int values into the same Set instance.
ListSet is useful for very small data sets. When a ListSet contains less than 10 elements, it is actually going to be faster than any of the other implementations. However, once you get above 10 elements, the run time for many of the set operations increases as the square of the data size. So an operation on a ListSet containing 1,000 elements would be roughly 10,000 times slower than an operation on a ListSet containing 10 items.
ImmutableSet and SynchronizedSet are specialized wrappers. They contain other sets, which then do all the real work. ImmutableSet wraps an internal Set to make it read-only. SynchronizedSet wraps all the functions of an internal Set to synchronize them. This allows the Set to be used by more than one thread. See the documentation for more information on this, since there are special considerations for enumerating collections that are in use by multiple threads.
If you are interested in creating your own Set types, this library supports that. If you want to do it from scratch, you can extend Set and implement all the abstract functions. If you want to create a new Set type based on an existing IDictionary implementation, extend DictionarySet. If you just want to add some new functionality to an existing, working Set implementation, choose one of HashedSet, HybridSet, ListSet, or SortedSet to extend.
The example below demonstrates creating new sets and manipulating them using set operators. The currently supported set operators are described briefly in the following table:
- Union(), operator form A | B: the set of elements found in A, in B, or in both.
- Intersect(), operator form A & B: the set of elements found in both A and B.
- ExclusiveOr(), operator form A ^ B: the set of elements found in A or in B, but not in both.
- Minus(), operator form A - B: the set of elements in A that are not also in B.
- A.Equals(B): true if A and B contain exactly the same elements.
The example uses Set instances to represent states in the southwestern United States. Each Set holds the names of the major rivers in the state. It then uses the basic state-river information to derive all sorts of fun facts about the rivers.
using System;
using Iesi.Collections;
namespace RiverDemo
{
class Rivers
{
[STAThread]
static void Main(string[] args)
{
//Use Arrays (which are ICollection objects) to quickly initialize.
Set arizona
= new SortedSet(new string[] {"Colorado River"});
Set california
= new SortedSet(new string[] {"Colorado River", "Sacramento River"});
Set colorado
= new SortedSet(new string[] {"Arkansas River",
"Colorado River", "Green River", "Rio Grande"});
Set kansas
= new SortedSet(new string[] {"Arkansas River", "Missouri River"});
Set nevada
= new SortedSet(new string[] {"Colorado River"});
Set newMexico
= new SortedSet(new string[] {"Rio Grande"});
Set utah
= new SortedSet(new string[] {"Colorado River", "Green River",
"San Juan River"});
//Rivers by region.
Set southWest = colorado | newMexico | arizona | utah;
Set midWest = kansas;
Set west = california | nevada;
//All rivers (at least for the demo).
Set all = southWest | midWest | west;
Print("All rivers:", all);
Print("Rivers in the southwest:", southWest);
Print("Rivers in the west:", west);
Print("Rivers in the midwest:", midWest);
Console.WriteLine();
//Use the '-' operator to subtract the rivers in Colorado from
//the set of all rivers.
Print("Of all rivers, these don't pass through Colorado:",all - colorado);
//Use the '&' operator to find rivers that are in Colorado AND in Utah.
//A river must be present in both states, not just one.
Print("Rivers common to both Colorado and Utah:", colorado & utah);
//use the '^' operator to find rivers that are in Colorado OR Utah,
//but not in both.
Print("Rivers in Colorado and Utah that are not shared by both states:",
colorado ^ utah);
//Use the '&' operator to discover which rivers are present in Arizona,
// California,Colorado, Nevada, and Utah. The river must be present in
// all states to be counted.
Print("Rivers common to Arizona, California, Colorado, Nevada, and Utah:",
arizona & california & colorado & nevada & utah);
//Just to prove to you that operators always return like types, let's do a
//complex Set operation and look at the type of the result:
Console.WriteLine("The type of this complex operation is: " +
((southWest ^ colorado & california) | kansas).GetType().FullName);
}
private static void Print(string title, Set elements)
{
Console.WriteLine(title);
foreach(object o in elements)
Console.WriteLine("\t" + o);
Console.WriteLine();
}
}
}
Although there are other kinds of sets available in the library, the example uses SortedSet throughout. This is nice for the example, since everything will print neatly in alphabetical order. But you may be wondering what kind of Set is returned when you "union," "intersect," "exclusive-or," or "minus" two Set instances. The library always returns a Set that is the same type as the Set on the left, unless the left operand is null, in which case it returns the type of the Set on the right.
What this means is that since we are using SortedSet instances, we will always get SortedSet instances when we combine sets using the binary operators. So the output in our example will always be in alphabetical order, just as you would expect.
Here is the output from running the example:
All rivers:
Arkansas River
Colorado River
Green River
Missouri River
Rio Grande
Sacramento River
San Juan River
Rivers in the southwest:
Arkansas River
Colorado River
Green River
Rio Grande
San Juan River
Rivers in the west:
Colorado River
Sacramento River
Rivers in the midwest:
Arkansas River
Missouri River
Of all rivers, these don't pass through Colorado:
Missouri River
Sacramento River
San Juan River
Rivers common to both Colorado and Utah:
Colorado River
Green River
Rivers in Colorado and Utah that are not shared by both states:
Arkansas River
Rio Grande
San Juan River
Rivers common to Arizona, California, Colorado, Nevada, and Utah:
Colorado River
The type of this complex operation is:
Iesi.Collections.SortedSet
Press any key to continue
There is an additional example and a lot more technical information included in the documentation. The source code is pretty easy to follow as well. All the hard work of searching and sorting is performed by classes that are already present in the .NET Framework, so none of the code is particularly difficult or tricky. If you have a question that is not covered by the documentation, it should not take more than 5 or 10 minutes to discover the answer by reading the source code.
Encoding word n-grams to one-hot encoding is simple with numpy; you can read the tutorial below to learn how to implement it.
Encode Word N-grams to One-Hot Encoding with Numpy – Deep Learning Tutorial
However, this approach needs a large amount of memory. For example, if the vocabulary size is 500,000, the one-hot encoding matrix is 500,000 * 500,000, which may be impossible to allocate if your memory is limited. We call this the static method.
In this tutorial, we will introduce a new way to encode n-grams to one-hot encoding: it creates the one-hot matrix dynamically and needs only a little memory.
Prepare n-grams
As to sentence ‘i like writing‘, we will use its 3-grams to create one-hot encoding.
grams = ['#i#', 'lik', 'ike', 'wri', 'rit', 'iti', 'tin', 'ing']
Select grams to create one-hot encoding
import numpy as np

index = np.array([0, 2, 4, 1, 3])

We select the grams at positions [0, 2, 4, 1, 3] to create the one-hot encoding.

Create a function that builds the one-hot encoding from gram positions

def make_one_hot(data, vab_size):
    # Compare each index against the range 0..vab_size-1; each row gets a
    # single 1 at the position held in `data`.
    return (np.arange(vab_size) == data[:, None]).astype(int)
Create one-hot encoding dynamically
one_hots = make_one_hot(index, vab_size=len(grams))
print(one_hots)
The one-hot encoding result is:
[[1 0 0 0 0 0 0 0]
 [0 0 1 0 0 0 0 0]
 [0 0 0 0 1 0 0 0]
 [0 1 0 0 0 0 0 0]
 [0 0 0 1 0 0 0 0]]
Compare two methods
In this example, the vocabulary size is 8.

Static method: always needs an 8 * 8 matrix.

Dynamic method: at most 8 * 8, and as little as 1 * 8 (one row per gram actually encoded).
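To make the comparison concrete at the scale mentioned in the introduction, here is a rough sketch (the 500,000 vocabulary size comes from the text above; the batch size of 5 and the 1-byte cells are assumptions for illustration):

```python
import numpy as np

VOCAB = 500_000  # vocabulary size from the introduction
BATCH = 5        # number of grams encoded at once, as in the example above

# Static method: the full vocab-by-vocab matrix is allocated up front.
static_bytes = VOCAB * VOCAB * np.dtype(np.int8).itemsize

# Dynamic method: only one batch of rows is ever held in memory.
dynamic_bytes = BATCH * VOCAB * np.dtype(np.int8).itemsize

print("static : %.1f GB" % (static_bytes / 1e9))   # 250.0 GB
print("dynamic: %.1f MB" % (dynamic_bytes / 1e6))  # 2.5 MB
```

Even with a single byte per cell, the static matrix is far beyond ordinary RAM, while the dynamic batch stays tiny.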
Hi, I'm trying to set tags via an integration with the Segment Airship Destination. I followed all of the instructions at. However I'm not seeing any of my custom tags when I go to Audience -> Segments -> Select Tag -> Segment Integration. Could you please help me figure out why my tags are missing?
I tried contacting Segment support, but they told me that the Airship destination is owned and maintained by Airship, not by Segment.
I'm not sure if this is my entire issue or just part of it, but I see in the request logs from Segment to Airship the message "Traits do not meet requirements to set tags". Full details below. Could you please explain what requirements are not met?
[ { "request": { "body": { "anonymousId": <Redacted>, "channel": "server", "context": { "app": { "build": "407", "name": "River", "namespace": "com.useriver.river", "version": "1.0.32" }, "device": { "adTrackingEnabled": true, "advertisingId": <Redacted>, "id": "<Redacted>, "manufacturer": "Apple", "model": "iPhone11,2", "type": "ios" }, "ip": "146.115.225.162", "library": { "name": "analytics-ios", "version": "3.7.1" }, "locale": "en-US", "network": { "carrier": "Verizon", "cellular": false, "wifi": true }, "os": { "name": "iOS", "version": "13.4.1" }, "screen": { "height": 812, "width": 375 }, "timezone": "America/New_York", "traits": { "AF_af_message": "organic install", "AF_af_status": "Organic", "AF_install_time": "2020-05-11 20:54:32.888", "AF_is_first_launch": true, "anonymousId": <Redacted>, "app_installer": "testFlight", "environment": "prod", "maxVideoDuration": "21.0", "onboarding_style": "new", "push_notification_authorization_status": "authorized", "userId": <Redacted> } }, "integrations": { "Amplitude": { "session_id": "1589306301" }, "AppsFlyer": false }, "messageId": <Redacted>, "originalTimestamp": "2020-05-12T17:58:21.960Z", "projectId": "2Py9Jo3Wn3", "receivedAt": "2020-05-12T17:58:22.731Z", "sentAt": "2020-05-12T17:58:22.167Z", "timestamp": "2020-05-12T17:58:22.524Z", "traits": { "anonymousId": <Redacted>, "push_notification_authorization_status": "authorized", "userId": <Redacted> }, "type": "identify", "userId":<Redacted>, "version": 2, "writeKey": <Redacted> }, "header": { "Accept": "*/*", "Authorization": "<REDACTED>", "Content-Type": "application/json", "User-Agent": "Segment.io/1.0", "X-Segment-Settings": <Redacted> }, "method": "POST", "url": "" }, "response": { "body": { "message": "Traits do not meet requirements to set tags" }, "header": { "Alt-Svc": "h3-27=\":443\"; ma=2592000,h3-25=\"\"", "Content-Length": "58", "Content-Type": "application/json", "Date": "Tue, 12 May 2020 17:58:23 GMT", "Server": "Google Frontend", 
"X-Cloud-Trace-Context": <Redacted> }, "status": 400 } } ]
Thank you!
Hello Zach,
The named user must exist for tags to be set; the integration does not create the named user. So if the named user exists but the traits are empty or don't meet the true/false requirement, there are no tags to set.
The following situations will produce a "Traits do not meet requirements to set tags" response:

- The traits object is empty, e.g. "traits" : {}
- The trait values are not boolean true/false
Can you verify if any of the traits meet the above requirements?
Thank you,
Chilun Liu
Airship Support
Hi Chilun,
Thanks for getting back to me. I've set the named user, but I had been sending strings instead of booleans for some of my traits. I tried removing the strings for the trait values, and it appears that I'm no longer getting the intermittent errors.
However, I'm still not seeing any traits in the Airship dashboard. Could you please help me figure out why the traits are still not working?
Thanks,
Zach | https://support.airship.com/hc/en-us/community/posts/360062817771-Set-tags-via-Segment-Integration | CC-MAIN-2021-49 | refinedweb | 505 | 55.74 |
I am trying to understand how change the galois LFSR code to be able to specify the output bit number as a parameter for the function mentioned below. I mean I need to return not the last bit of LFSR as output bit, but any bit of the LFSR ( for example second or third bit). I am really stuck up with this question. Can anybody give some hint how to implement that?
#include < stdint.h > uint16_t lfsr = 0xACE1u; unsigned period = 0; do { unsigned lsb = lfsr & 1; /* Get lsb (i.e., the output bit - here we take the last bit but i need to take any bit the number of which is specified as an input parameter). */ lfsr >>= 1; /* Shift register */ if (lsb == 1) /* Only apply toggle mask if output bit is 1. */ lfsr ^= 0xB400u; /* Apply toggle mask, value has 1 at bits corresponding* to taps, 0 elsewhere. */ ++period; } while (lfsr != 0xACE1u);
If you need bit
k
(k = 0 ..15), you can do the following:
return (lfsr >> k) & 1;
This shifts the register
kbit positions to the right and masks the least significant bit. | http://databasefaq.com/index.php/answer/866/c-prng-shift-register-galois-lfsr-how-to-specify-the-output-bit-number | CC-MAIN-2018-51 | refinedweb | 183 | 71.65 |
I was just wondering if there is a way in Django to detect URLs from a bunch of text and then shorten them up automatically. I know I can use urlize to detect urls, but I am not sure if I can use maybe bitly or something to shorten the links.
And also would it be better to accomplish this task with javascript instead of python? and if that is the case how do I go about it?
For bit.ly if you just want to shorten URLs, its quite simple:
First create an account, and then visit to get your API key.
Send a request to the shorten method of the API, the result is your shortened URL:
from urllib import urlencode from urllib2 import urlopen ACCESS_KEY = 'blahblah' long_url = '' endpoint = '{0}&longUrl={1}&format=txt' req = urlencode(endpoint.format(ACCESS_KEY, long_url)) short_url = urlopen(req).read()
You can wrap that up into a template tag:
@register.simple_tag def bitlyfy(the_url): endpoint = '{0}&longUrl={1}&format=txt' req = urlencode(endpoint.format(settings.ACCESS_KEY, the_url)) return urlopen(req).read()
Then in your template:
{% bitlyfy "" %}
Note: Positional arguments in tags are a feature of django 1.4
If you want the all the features of the bit.ly API, start by reading the documentation at dev.bitly.com/get_started.html, and then download the official python client. | https://codedump.io/share/aKmaFHW98Xza/1/url-detection-and-shortening-in-django | CC-MAIN-2016-50 | refinedweb | 224 | 65.73 |
Hi, all: I'm getting a disturbingly-large discrepancy in running an identical script on different machines, and hoping that someone can suggest a cause and perhaps a fix. I'm suspecting a possible bug in my perl version.
Code being executed (@ARGV = qw/data.txt 636/):
#!/usr/bin/perl -w
use strict;
# Least square fit to quadratic function; input up to ~1k data pairs a
+s space-separated columns.
#
# $Revision: 1.1 $ $Date: 2010-06-28 17:31:23-04 $
# $Source: /home/vogelke/notebook/2010/0628/polynomial-fit/RCS/pfit,v
+$
# $UUID: 631e046d-f070-38bb-b287-bdd10b1d0efa $
#
# John Lapeyre Tue Jan 28 18:45:19 MST 2003
# lapeyre@physics.arizona.edu
#
#
#
# $Revision: 1.2 $ $Date: 2010-10-04 17:08:40-05 $
# Revised by Ben Okopnik
# Added usage message, code strictures, input filtering/validation
#
# fit to A x^2 + B x + C
# Works by solving matrix equation M.c = v.
# c = (A,B,C)
# v = (v1,v2,v3), where vn = \sum_i y_i x_i^(n-1)
# m = ( (m4,m3,m2), (m3,m2,m1), (m2,m1,m0))
# where mn = \sum_i x_i^n
die "Usage: ", $0 =~ /([^\/]+)$/, " <input_file> <target_number>\n"
unless @ARGV == 2;
my $tgt = pop;
my ($m0, $v1, $v2, $v3, $m1, $m2, $m3, $m4, $d, @x, @y) = 0;
while (<>) {
next unless /^\s*(\d+)(?:$|\s+(\d+)\s*$)/;
push @y, $+;
push @x, defined $2 ? $1 : $m0;
$m0++;
}
die "Badly-formatted data (one or two numerical columns required)\n"
unless $m0;
for (0 .. $#x) {
my $x = $x[$_];
my $y = $y[$_];
$v1 += $y * $x**2;
$v2 += $y * $x;
$v3 += $y;
$m1 += $x;
$m2 += $x * $x;
$m3 += $x**3;
$m4 += $x**4;
}
# Used mathematica to invert the matrix and translated to perl
$d = $m2**3 + $m0 * $m3**2 + $m1**2 * $m4 - $m2 * (2 * $m1 * $m3 + $m0
+ * $m4);
my $A = ($m1**2 * $v1 - $m0 * $m2 * $v1 + $m0 * $m3 * $v2 + $m2**2 * $
+v3 -
$m1 * ($m2 * $v2 + $m3 * $v3)) / $d;
my $B = (-($m1 * $m2 * $v1) + $m0 * $m3 * $v1 + $m2**2 * $v2 - $m0 * $
+m4 * $v2 -
$m2 * $m3 * $v3 + $m1 * $m4 * $v3) / $d;
my $C = ($m2**2 * $v1 - $m1 * $m3 * $v1 + $m1 * $m4 * $v2 + $m3**2 * $
+v3 -
$m2 * ($m3 * $v2 + $m4 * $v3)) / $d;
# A x^2 + B x + C
printf "%f\t%f\t%f\n", $A, $B, $C;
# Correct $C for final value
$C -= $tgt;
# PE solver
my $y1 = (-$B + sqrt($B**2 - 4 * $A * $C)) / (2 * $A);
my $y2 = (-$B - sqrt($B**2 - 4 * $A * $C)) / (2 * $A);
printf "Roots: %.3f or %.3f\n\n", $y1, $y2;
[download]
The data being processed (CO2 measurements at Mauna Loa):
1962 318
1982 341
2002 373
[download]
The results:
# i686 running Linux with perl-5.10.0
0.011256 -43.244877 41833.442623
Roots: 2093.870 or 1747.905
# IBM x3400 running Linux with perl-5.10.1
0.011250 -43.220086 41809.478563
Roots: 2083.912 or 1757.866
# Sparc running Solaris-10 with perl-5.10.1
0.011250 -43.220086 41809.478563
Roots: 2083.912 or 1757.866
# After adding "use bignum;" on either of the last two
0.011250 -43.220000 41809.395000
Roots: 2083.912 or 1757.866
[download]
Any help or suggestions would be appreciated.
I suspect the differences result from differences in how floating point is handled on the different systems affecting the cumulated error from the sequence of calculations.
You could try re-implementing using Math::BigFloat and setting a sufficiently large (or is that small?) precision so that the cumulated error is sufficiently small.
If my guess is correct, you might find What Every Computer Scientist Should Know About Floating-Point Arithmetic helpful.
Following on from ig's comment a couple of alarm bell stimulating items in relation to your code, data and results should be mentioned:
This sort of code can be very tricky to get right with careful attention needing to be paid to expected ranges of values at each step and consideration given to changing calculation order to minimise errors propagating through the calculation chain. Using higher precision numerical representation is a quick way around what can otherwise often be a difficult problem.
FWIW: I get the same results on I686 Windows, as you get on I686 linux, which suggests to me that the difference if down to differences in the floating point hardware rather either the way Perl is built, or the underlying CRT math functions.
I thought for a while that it might be down to the IEEE rounding mode is use, but I tried it with all 4 modes and whilst the results do vary, the differences are far less than you are seeing:
C:\test>junk54 MaunaLoa 636 1
0.011256 -43.244877 41833.442623
Roots: 2093.870 or 1747.905
C:\test>junk54 MaunaLoa 636 2
0.011256 -43.245005 41833.967213
Roots: 2093.804 or 1747.982
C:\test>junk54 MaunaLoa 636 3
0.011256 -43.245005 41833.442623
Roots: 2093.939 or 1747.847
C:\test>junk54 MaunaLoa 636 4
0.011256 -43.245005 41833.967213
Roots: 2093.804 or 1747.982
[download]
If I add use bignum; I get the same result regardless of rounding mode (as you'd expect with infinite precision):
C:\test>junk54 MaunaLoa 636 0
0.011250 -43.220000 41809.395000
Roots: 2093.969 or 1747.809
[download]
But the result is still not far from the standard double precision results, and far away from your results on those other platforms.
So then I decided to "check the math".
I fed your sample data into wolfram/alpha's quadratic fit and got a = 0.01125 b=-43.33 c=41809.39500009608 (your results are close enough!)
I then walked manually through the calculation:
The small results are done using calc.exe, the extended numbers in brackets are produced using wolfram alpha.
All of which suggests that your I686 results are correct, and the "bug" is on the other platforms.
Perhaps they're using single precision? Or at least, perhaps lookup tables for sqrt() &| pow() that are only single precision?
Thanks, all, for the very useful responses (I'm responding to BrowserUk's reply specifically because it's so detailed and helpful in so many aspects) - this was very useful for both confirmation and more direction in deciding where to look for the error. I'm not all that familiar with the guts of Perl, but it's looking likely that there's a major compilation difference responsible here: either single-precision lookup tables, or perhaps a radically different math lib. Annoying that something like that could affect a Perl program... but on the other hand, good to learn that it can.
Again, thank you all very much.
either single-precision lookup tables, or perhaps a radically different math lib. Annoying that something like that could affect a Perl program...
I'm very unsure of my ground here as I'm not familiar with the other platforms, but it might not be software--Perl or the libs--but the floating point hardware. If their FPU's are only single precision, that might account for the results.
That said, my best efforts to perform the calculation in single precision doesn't get close to the inaccuracies you;re seeing:
#include <stdio.h>
float sqrtf( float n ) {
float g = n / (float)2.0;
while( ( ( g*g ) - n ) > 0.000001 ) {
g = ( g + n/g ) / (float)2.0;
}
return g;
}
void main( void ) {
float a = (float)0.01125, b = (float)-43.22, c = (float)41173.3950
+0009608;
float two = 2.0, four = 4.0;
float b2 = b * b;
float ac4 = four * a * c;
float sqrtbit = sqrtf( b2 - ac4 );
float a2 = two * a;
float r1 = ( -b + sqrtbit ) / a2;
float r2 = ( -b - sqrtbit ) / a2;
printf( "b2:%f ac4:%f sqrt:%f a2:%f\n", b2, ac4, sqrtbit, a2 );
printf( "roots: %f , %f\n", r1, r2 );
}
[download]
Produces:
C:\test>quads
b2:1867.968506 ac4:1852.802856 sqrt:3.894310 a2:0.022500
roots: 2093.969238 , 1747.808472
[download]
This reduces the amount of floating point calculations, and hence the rounding errors. Of course, there still may be floating point calculations if any of intermediate integers becomes "too large". (Print out $A, $B, $C and $d to make sure).
The adage that I heard was: “Floating point numbers like piles of dirt on a beach. Every time you pick one up and move it around, you lose a little dirt and you pick up a little sand.”
Every implementation ought to produce the same answer, within the useful number of significant-digits, for most calculations. But, the more calculations you do (and depending on exactly how you do it), the more the results will “drift” toward utter nonsense.
And I truly think that you should expect this from any binary floating-point implementation. There are two classic ways that applications (such as, accounting applications in particular) counter this:
Even so, errors can accumulate. This can be further addressed by algorithms such as “banker’s rounding.” There is, of course, the (probably apocryphal) tale of an intrepid computer-programmer who found a way to scoop all of those minuscule rounding-errors into his own bank account...
Float-binary can never be a “pure” data representation. It is well-understood that the fraction 1/3 cannot be precisely expressed as a decimal number. Similar artifacts occur for other fractions in other bases, and, so they tell me, for base-2 floats, one of those unfortunate numbers is 1/10. (“So I have been told.” I don’t have enough geek-knowledge to actually know for sure...)
Yes
No
Results (282 votes). Check out past polls. | http://www.perlmonks.org/?node_id=863499 | CC-MAIN-2017-13 | refinedweb | 1,588 | 73.58 |
Introduction
Thread Sanitizer is a data race detector developed and maintained by Google. This document is in development but shall already very briefly explain the usage of the Thread Sanitizer with Mozilla Firefox while new content will be added in the near future.
Thread Sanitizer supports two major modes of operation, hybrid and pure happens-before (default).
hybrid - reports more false positives but is faster, more predictable and finds more true races.
pure happens-before - will not report false positives unless the code uses lock-less synchronisation methods but may miss races.
Setup
TSan can be used with PIN or Valgrind, we describe here the usage of Valgrind+TSan.
As of today Valgrind+TSan is supported on the following platforms:
Ubuntu 12.04 LTS 32-bit (Valgrind)
Ubuntu 12.04 LTS 64-bit (Valgrind)
MacOS 10.6 32-bit (Valgrind)
MacOS 10.7 32-bit (Valigrind)
However as of writing this I got some aborts with the 64-bit versions on Linux during the launch of Firefox, therefore this document describes the 32-bit version on Ubuntu 12.04.
Download
You can either build TSan from the source or run the self-contained shell script. To make things easy we choose the latter.
git clone tsan
Add TSan to your PATH environment variable in your .bashrc to run it in a convenient way.
export PATH=$PATH:$HOME/tsan/third_party/valgrind/linux_x86/bin
Run
valgrind-tsan.sh --hybrid=no --announce-threads --ignore=tsan.ignore --gen-suppressions $OBJ_DIR/dist/bin/firefox -P valgrind -no-remote 2>&1 | tee tsan.log
Examining Reports with VIM
TSan reports are formatted with VIM folds thus provides us with the oppurtunity to examine them in VIM in an easy way. In order to do so add and adjust your VIM configurations to the settings below.
$HOME/.vim/scripts.vim
if did_filetype() finish endif let lnum = 1 while lnum < 100 if (getline(lnum) =~ 'ThreadSanitizerValgrind') setfiletype tsan endif set foldlevel=1 let lnum = lnum + 1 endwhile
$HOME/.vim/syntax/tsan.vim
sy match TS_Head /Possible data race during.*:/ sy match TS_Concurrent /Concurrent .* happened at or after these points:/ sy match TS_MemoryDescr /Address .* is .* bytes inside data symbol.*/ sy match TS_MemoryDescr /Location .* bytes inside a block starting at .* of size .* allocated.*/ sy match TS_Locks /Locks involved in this report.*/ sy match TS_Fold /\{\{\{/ sy match TS_Fold /\}\}\}/ sy match TS_FirstFunc /#0 0x[0-9A-F]\+: .*/ hi TS_Head ctermfg=Red hi TS_Concurrent ctermfg=Magenta hi TS_MemoryDescr ctermfg=Cyan hi TS_Locks ctermfg=Green hi TS_Fold cterm=bold hi TS_FirstFunc cterm=bold " vim: fdl=1
$HOME/.vimrc
set path=,,.,../include function! Gfw() let b = bufnr('') normal mz let b = bufnr('') wincmd w exe "b " . b normal `zgF endfun nnoremap ;f :call Gfw()<cr>
Once you have completed this step you can open a TSan log file in VIM, split the screen in half with the report on the left and the source on the right.
:vsplit ;f Ctrl+ww
Inside VIM, point your cursor to the beginning of the path of the source file and hit ;f doing so will open the source on the left side of the split VIM screen. Press Ctrl+ww to switch back to the left side.
Ignores
TSan offers the possibility of using 'ignores' to specify locations which shall not be instrumented. This can save time in the excution process. Wildcards are supported as well. The structure of such a ignore file with its supported keywords is shown below.
# shared libraries obj: # source files src: # functions fun: # functions will be ignored together with other functions they call fun_r: fun_hist:
A sample ignore file can be found here:
Suppressions
TSan also supports the known 'suppressions' functionality of Valgrind. Multiple stack traces for one suppression rule are supported.
{ suppression_name tool_name:warning_name { fun:mangled_function_name_wildcard obj:object_name_wildcard ... fun:demangled_function_name_wildcard } { ... fun:another_mangled_function_name_wildcard } }
A sample suppression file can be found here:
Verifying Races
TSan has a built-in race verifier, once the race verifier confirmed a race it is a 100% proof of a race. If the race verifier did not detect a race it proves nothing.
valgrind-tsan.sh --race-verifier=tsan.log $OBJ_DIR/dist/bin/firefox -P valgrind -no-remote 2>&1 | tee tsan.verified | https://developer.mozilla.org/en-US/docs/Thread_Sanitizer$revision/325339 | CC-MAIN-2015-18 | refinedweb | 693 | 57.57 |
Hello, I just installed Gradle and I'm trying to understand it but I encounter a problem when trying to build a project with the plugin 'java'.
Just to rule out somes things, I was able to :
1- Install Gradle and run the following script
task hello {
doLast
}
2- I could compile my Java class using javac and run it using java in the command prompt. Here is the file
public class Hello {
public String execute()
public static void main(String [] args)
3- The file is located in the source folder : src/main/java
4- My build.gradle file look like that :
apply plugin: 'java'
5- I use windows 7 (if it's important)
6- If I run the command, gradle build on a folder that doesn't exist (for example src/main/java doesn't exist), everything work. Here is the result :
C:\Users\Denis\Documents\Temp>gradle build
:compileJava UP-TO-DATE
:processResources UP-TO-DATE
:classes UP-TO-DATE
:jar
:assemble
:compileTestJava UP-TO-DATE
:processTestResources UP-TO-DATE
:testClasses UP-TO-DATE
:test
:check
:build
BUILD SUCCESSFUL
7- I have no idea what to do now. Help appreciated
So now here is what I get when I run the command gradle build --stacktrace on an existing folder. In this case the folder where my Hello.java file is located. (src/main/java)
C:\Users\Denis\Documents\Temp>gradle build --stacktrace
:compileJava
FAILURE: Build failed with an exception.
BUILD FAILED
Total time: 2.948 secs
Thx for help!
Not sure if I understand you correctly. You have one source file in src/main/java and as soon as you run "gradle build" from the root project you get the Nullpointer exception? Unfortunately I cannot yet reproduce your bug.
regards,
René
Can you post the output of `gradle -v`, please? Also, what is your JAVA_HOME environment variable set to, if anything?
Thx for reply.
Here is the output of gradle -v
C:\Users\Denis\Documents\Temp>gradle -v
------------------------------------------------------------
Gradle 1.0-milestone-9
------------------------------------------------------------
Gradle build time: mardi 13 mars 2012 16 h 10 UTC
Groovy: 1.8.6
Ant: Apache Ant(TM) version 1.8.2 compiled on December 20 2010
Ivy: 2.2.0
JVM: 1.6.0_23 (Sun Microsystems Inc. 19.0-b09)
OS: Windows 7 6.1 amd64
C:\Users\Denis\Documents\Temp>
I didn't had a variable JAVA_HOME but the path for the JDK was in my PATH variable.
I tried creating a variable JAVA_HOME with the path to the JDK but it didn't seems to change anything, still get the same error.
@Rene Yes, I'm in a folder containaing the folder src/main/java/Hello.java. When I run the command "gradle build" it give me the NullPointerException.
I tried changing the sourceSet like it was explain in the documentation and it crash too. In fact as soon as I referenced an existing folder it crash but if I referenced a folder that doesn't exist it print that the build is a success with the usual message :
I was able to reproduce this error by:
1. point the JAVA_HOME to a JRE installation instead of a JDK
2. remove the JAVA_HOME variable at all.
Can you please double check that your JAVA_HOME variable points to a valid JDK installation (by calling "set JAVA_HOME" directly before running the gradle command)? Nevertheless, Gradle should be more gracefully here and not fail with a NullpointerException.
Thx for the help!!
It work. This morning when I restart my comp it gave me an error about an invalid JAVA_HOME variable instead of the NullPointerException.
Then instead of putting the bin folder of the JDK as my path I just put the path to the JDK and now it works
Any way to mark this issue as fixed? | https://issues.gradle.org/browse/GRADLE-2187.html | CC-MAIN-2021-31 | refinedweb | 633 | 64.51 |
This isn't necessarily a plugin question, but this still seems like the most appropriate forum. I've noticed some languages, like Python, include rules for automatically indenting to the proper level when adding a newline. For example:
- Code: Select all
def foo():<press enter>
| # <-- cursor automatically indented to |
I've skimmed the Python.tmLanguage file, assuming that's where rules like this are defined, but haven't had any luck figuring out exactly what's doing this. I'd like to implement similar smart indentation to HTML, so that when someone types <div> followed by a newline, the cursor is positioned one level beyond the "<".
Any help would be appreciated. Thanks!
-andy | http://www.sublimetext.com/forum/viewtopic.php?f=6&t=1490 | CC-MAIN-2014-41 | refinedweb | 113 | 53.92 |
From: David Abrahams (dave_at_[hidden])
Date: 2005-09-14 21:10:22
"Robert Ramey" <ramey_at_[hidden]> writes:
> David Abrahams wrote:
>
>> The best thing you could do, IMO, is write a recommendation that
>> works on conforming compilers, e.g.
>>
>> Overload serialize in the namespace of your class.
>>
>
> Oh, that's news to me. I thought that would work only in compilers that
> implemented ADL.
I have to say, Robert, I am completely baffled at your last two
replies. If I didn't know better, I'd say you were just playing
head games with me.
I said "conforming compilers." ADL is a part of the standard. Any
compiler not implementing it is nonconforming. But surely you already
knew that?
> I understood from previous postings that this would entail building
> a set a macros which addressed the varying aspect of conformity.
The macro and usage pattern I suggested was there to address the
nonconforming compilers: the ones that don't implement ADL, and don't
implement 2-phase name lookup. I don't believe there's a compiler
that implements 2-phase name lookup but doesn't implement ADL; that
would be the only case my second suggestion wouldn't cover.
> In 1.32 I had #if ... in order to place stl serializations in either
> the stl namespace or the boost::serialization namespace depending on
> whether the compiler supported ADL and two-phase lookup or neither
> of these things.
I can't see how putting overloads in std:: could be of any use to your
library on compilers that don't support ADL, so I assume you put them
in std:: on conforming compilers and put them in boost::serialization
for the ones that don't conform. Putting new definitions in namespace
std invokes undefined behavior. So it sounds like, depending on the
compiler, you either have a broken compiler (no ADL) or a broken
library (undefined behavior). That is a no-win approach.
How to solve this problem is not a mystery. We just had a long thread
about it entitled "[range] How to extend Boost.Range?" Have you
completely missed it?
I am getting frustrated here because it seems to me that you haven't
done your homework, yet, after asking me to propose solutions, you
resist and call them ugly.
> That was the only way I could get everything to compile on all
> platforms.
It seems to me from what you've been saying that your approach to
solving the serialize( ... ) dispatching/customization problem here
has been to try different configurations of the code until you could
get it to pass select tests on certain platforms, without regard for
standard conformance and theoretical portability. That's a well-known
way to end up with code that doesn't work portably or in
configurations you haven't tried (for example, hmm, let me
think... when the order of headers changes).
> Then generated the requirement to address the issue in the
> documentation with a table on what the user should use. That's what
> I call ugly.
Huh? I didn't propose a table. And, IMO there's nothing particularly
inelegant about using a table in documentation; there's
well-established precedent for it in the standard and Boost.
Certainly documenting what the user should do is a must. So I don't
see what you're unhappy about.
If you don't like having to tell users to do something special on
nonconforming compilers, well, the answer is to either find a clean
idiom that works everywhere (for this particular problem, many have
tried -- I don't believe such an idiom exists) or stop supporting
those broken compilers.
> FWIW - I'm not sure if the example you site would ever come up in
> practice - it certainly hasn't yet. If I understand this correctly,
> the problem would occur if someone writes somethng like:
>
> my_class.hpp
>
> class my_class ...
>
> template<class Archive, my_class>
> void serialize(Archive &ar, my_class & t, cont unsigned int version); //
> declaration only
>
> my_app.cpp
>
> main(...
> {
> my_class mc;
> ...
> ar & mc;
> }
>
> template<class Archive, my_class>
> void serialize(Archive &ar, my_class & t, cont unsigned int version); //
> definition only
No. The problem has nothing to do with the location of definitions.
You can keep all definitions and declarations together, and the
problem still occurs.
// <boost/serialization/some_file.hpp>
namespace boost { namespace serialization
{
// Something in the serialization library
template <class T>
int call_serialize(T const& x)
{
serialize(x);
return 0;
}
}}
// "user_header.hpp"
namespace me
{
class X {};
}
// User overloads in boost::serialization, per your recommendation
namespace boost { namespace serialization
{
void serialize(me::X)
{
// definition
}
}}
// "user.cpp"
int y = boost::serialization::call_serialize(me::X());
Read the passage of the standard I quoted to you, and look carefully
at the example above. Do your homework.
> I'm not absolutly sure that this is the only scenario which would
> create problem and that's why I wanted more time to consider it.
>
> If this is the only way the two-phase problem could manifest itself,
> I would guess that in practice it would never be an issue. I don't
> see users doing this. This was what I was getting at when I asked
> why the problem hasn't appeared upto now. Of course I don't really
> know whether this is because so many compilers fail to implement
> two-phase lookup.
Jeez Louise, Robert! This is a well-known problem. It occurs in
practice. There's nothing unique about the serialization library in
this respect. It has a customization point (called "serialize") that
is used from the library without qualification. You expect to find it
via Argument Dependent Lookup, **whose job is to look for overloads in
the associated namespace of the arguments**. If the overload is in
boost::serialization and no arguments are associated with that
namespace, it won't be found via ADL. That leaves ordinary lookup,
which only looks backwards from the templates point-of-definition.
Could that _possibly_ be any clearer?
Lots of generic libraries have a similar dispatching issue to address,
e.g. Boost.Range. There's a nice, free online compiler againist which
you can test your 2-phase problems. I've quoted the standard chapter
and verse. I've given you simple example programs that demonstrate
the problem. I don't know what more I can do... do I need to build a
program that actually uses the serialization library before you'll
believe me?!
>>> b) I'm going to remove the class declaration and function
>>> implementation from the Archive Concept part of the document.
>
>> Are you planning to replace it with anything? It would be pretty
>> silly to have a section called Archive Concept with no concept
>> description.
>
> I meant to leave in the text below the "class" schema. Before I started I
> reviewed again the SGI documents and concluded that that text could be fit
> into the SGI form for concepts. I'm using
> as a "typical example". This
> isn't hard - its basically a question of reformatting. I hope that will be
> satisfactory.
Me too. That will depend on your execution :)
-- Dave Abrahams Boost Consulting
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2005/09/93534.php | CC-MAIN-2019-39 | refinedweb | 1,200 | 57.06 |
Using Preact with Storybook 3
Preact works wonders for projects where we want the component architecture from React, but need a very small bundle footprint. One good example of this is our identity-widget. Since it’s an embeddable widget, the full bundle weight of a typical single page application would be way too much and would harm performance on sites using it.
Another library we use heavily at Netlify is Storybook. Being able to work on the UI of components in isolation from application state is liberating, and makes it much easier to take things like error paths, validations, and edge cases into account when building and testing UI components. We use it religiously for our main web app.
Storybook is built for React, however. So what if we wanted to use it for our Preact-based widgets?
Preact / React compatibility
The key to making this work is a library called `preact-compat` that offers a wrapper around Preact that mimics the API of the `react` and `react-dom` libraries.
The way to use it is to tell webpack to cheat, and import `preact-compat` whenever a file is trying to import either `react` or `react-dom`.
You can read more about `preact-compat` and how to generally use it with webpack in their documentation.
Getting Storybook to use preact-compat
How do you get webpack set up with this? I’m going to assume you have an existing project based on Preact and webpack, and then show the steps to set up Storybook from scratch.
Start by installing `storybook` and `preact-compat`:

```shell
yarn add -D @storybook/react preact-compat
```
Once the dependencies are in place, create a `.storybook/config.js` file with the standard storybook config:

```js
import { configure } from '@storybook/react';

function loadStories() {
  require('../stories/index.js');
  // You can require as many stories as you need.
}

configure(loadStories, module);
```
Now you need to extend Storybook's config to use the Preact-based loader. Storybook lets you do this by defining a `.storybook/webpack.config.js` file and exporting an object with overwrites of their base config.
Our goal in customizing the webpack config is to inject the resolver so `react` and `react-dom` get replaced with `preact-compat` when building our stories.
To do that, create this file and save it as `.storybook/webpack.config.js`:

```js
module.exports = {
  resolve: {
    extensions: [".js", ".jsx"],
    alias: {
      react: "preact-compat",
      "react-dom": "preact-compat"
    }
  }
};
```
Writing Stories
All that’s left to do now is write stories and run your storybook.
To get started with this, create a file `stories/index.js`:

```js
import { h } from "preact";
import { storiesOf } from "@storybook/react";

storiesOf("Storybook With Preact", module)
  .add("render some text", () => <h1>Hello, Preact World!</h1>);
```
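If you want to try this against a real component rather than bare markup, the same pattern works. Here is a sketch with a hypothetical `Button` component (the component and its props are invented for illustration):

```js
import { h } from "preact";
import { storiesOf } from "@storybook/react";

// Hypothetical component, defined inline for the example.
const Button = ({ label, disabled }) => (
  <button disabled={disabled}>{label}</button>
);

storiesOf("Button", module)
  .add("default", () => <Button label="Click me" />)
  .add("disabled", () => <Button label="Disabled" disabled />);
```

Each `.add` call becomes a selectable story in the sidebar, so variations like disabled or error states can be inspected without wiring up any application state.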
And add a storybook script in the `"scripts"` section of your `package.json`:

```json
"scripts": {
  "storybook": "start-storybook -p 9001 -c .storybook"
}
```
Now you can run `yarn storybook` and visit `localhost:9001` to view your Preact-based stories.
Impulse Framework
A bootstrap framework designed to expedite the creation of Unity projects. The purpose of the framework is to empower developers to focus on developing game features and worry less about common game systems such as scene management, camera systems, etc., by providing customizable implementations out of the box.
Note about licensing: Almost everything in this framework is licensed under the Unlicense and can be used for any purpose (including commercial). The exceptions are:
- The Impulse logo and Impulse Framework splash, located in "Assets/Sprites/Not_Available_For_Commercial_Use". These are proprietary and may not be used for any purposes, commercial or non-commercial.
- Zenject Framework, located in "Assets/Plugins/Zenject". Zenject is licensed under MIT.
- Fonts, located in the "Assets/Fonts" folder. These free fonts are included only so the demo scenes render properly; however, they are not available under the Unlicense. Please acquire the appropriate license to use them from their respective websites.
Development Philosophy
We, the creators, believe clean code is based around SOLID principles. More specifically, this means:
- Dependency injection and events (we integrate Zenject for this).
- Scene and game management through states.
- Separation of concerns into Data, Model, Presenter, Service. For those coming from a MVC background, this architecture may seem strange. We have found over years of implementing various Unity project architectures that this design puts us in a middle ground between taking advantage of Unity's features while also enabling testability, mocking, and not overengineering systems that are simple enough to remain MonoBehaviours.
- Data. This is usually a ScriptableObject containing gameplay data. It contains no logic outside of editor scripts used to generate / randomize the data if needed. Note the emphasis on the word gameplay – rendering data such as movement speed, animation parameters, and other data related to presentation are not specified in this file.
- Model, a plain C# class. Has fields matching Data and expects to be fed Data within an initialization method. Contains logic for operating and working on gameplay data. Is testable, and it's highly recommended to create Models with a TDD approach.
- Presenter, a MonoBehaviour-derived class. Contains rendering data (movement, animation, etc.) that takes many iterations to tune. Designed to hold all data that does not need to be tested or is difficult to test. If there are other components on the gameobject, the Presenter is responsible for hooking them together.
- Service, which can be a plain C# class or a MonoBehaviour. Service is a general term used for systems that can be used where needed, when needed. For example, SceneService is responsible for switching scenes in the game (optionally with transitions like a loading screen) and can be injected and called by any script that needs to trigger a scene change.
The framework, however, does not enforce any rigid programming structure. It provides several tools that just work out of the box but leaves the implementation of your game up to you.
Project Setup
In the Build Settings, set the Splash scene to 0 and Menu to 1. Unity preloads everything in each scene, with the exception of the first scene (scene 0). For optimal performance, you should keep your splash scene as lightweight as possible and try not to add too many objects.
Project-Scoped Services (Singleton Managers)
The framework comes with commonly used services such as SceneService. If you want to add your own project-scoped managers, you can do so via the following:
- Create a prefab for your singleton service and add your desired component scripts necessary for functionality.
- Add a GameObjectContext and MonoInstaller component. Ensure the MonoInstaller binds your instance. For example:
using UnityEngine; using Zenject; public class SaveControllerInstaller : MonoInstaller<SaveControllerInstaller> { public override void InstallBindings() { transform.GetComponent<GameObjectContext>().Container.BindInstance(GetComponent<ILocalDataManager>()); } }
- Drag the prefab to the _MainSystemStartup (Zenject) prefab.
Your singleton service prefab will now be spawned in the project scope no matter what scene you start Play mode from.
SceneService
The SceneService is used for loading scenes (with or without transitions). A scene can be loaded in the following ways:
- Show a custom splash image when the game is first started, then load the main menu. This is the default behavior of the SceneService.
- Load a scene with a fade to black transition.
- Load a scene with a fade to black, then loading screen, then fade out transition once the scene is ready.
- Load a scene with a fade to black, then loading screen, prompt user for input once scene is ready, then fade out transition once input is received. This is the default behavior when loading a new scene from the main menu.
These loading methods are called programmatically - look at SceneService.cs to see the methods.
Set a Custom Splash Image
Many games have a splash image or studio logo shown before the game begins. The framework can be set up to display a custom splash image before loading the main menu.
- Locate the Resources/Prefabs/Scene/SplashFadeIn object in the project files. Select the ImageToFade child object.
- Set the Source Image of the Image component to whatever splash image you want to display.
Scene Loading Methods (fade in/out, interpolation, duration, loading screen, wait for keypress)
Fade In / Out:
- Locate the Resources/Prefabs/Scene/SceneService object.
- In the SceneService component, you can specify the Duration of fade in/out as well as the Interpolation of the fade. If you do not want to fade in/out scenes, set the duration to 0.
By default, SceneService scene changes have a fade in / out time. You can change this by editing the SceneService prefab or programmatically. For the latter, the SceneService is assigned to the main system startup prefab and can be injected as a dependency into any script.
The loading screen service offers both interactive (click to continue) and automatic (starts scene once loaded) transitions. Loading screens are selected at random from a LoadingScreenConfig scriptable object. This design follows the trend in many games where a random loading screen is selected that shows gameplay tips for the user to read while they wait.
To add a loading screen:
Create the UI for the new loading screen. The root parent object must be regular rectTransform (not a canvas!)
Add the LoadingScreenPresenter component to the parent object and fill out its fields as follows:
Non-interactive loading screen:
2.1a: Assign the “Progress Fill” Image and Text fields as desired. If you don’t want to use one of these fields, you can leave it empty.
2.2a: Make sure the “Requires User Input” field is empty.
2.3a: Set the default delay (after loading) in the Time After Completion field. This is the amount of time that must pass before the next scene is automatically loaded.
Interactive loading screen:
2.1b: Follow the same steps as above, but ignore 2.2a. Make sure the “Requires User Input” field is marked as Active.
2.2b: If desired, assign the gameobject you want to display when the loading process is completed in the Press Any Key Obj field. This can be a UI object (rectTransform only, no canvas) or any gameobject you want to display once the loading process is complete.
Finally, locate the LoadingScreenConfig scriptable object (normally in Assets/Configurations) and add the newly created loading screen to the Possible Loading Screens collection.
Customize the Main Menu
The framework provides a customizable main menu that is contained within a single scene in order to remain mobile-optimized.
Refer to the video in the section above for a video walkthrough of the main menu.
- Open _Scenes/Menu.unity
- Open the MenuSystem object. You'll notice a main menu and options menu are already set up for you, but are inactive.
- Create a new child object under MenuSystem and attach the MenuScreen script to it.
- Add your new menu elements to this new child object.
- Set your new child object as inactive once you are finished with it.
To switch menus using UGUI OnClick(), call the MenuManager.ChangeMenuAndFade() or MenuManager.ChangeMenu() function.
To run one of the examples in the GameExamples folder, replace Menu and the sample Level01 in the build settings with the specific menu and game scene from the example game's _Scenes folder.
StateMachine (Finite State Machine)
A deterministic finite state machine that works with C# objects as states. It derives from MonoBehaviour to be compatible as a component on game objects that need their own state machine but can also be used for controlling overall project state. While the setup can seem cumbersome, it ensures states are properly identified by their class implementations while also allowing for mocking of states through Zenject binding a different array of test states.
Usage
The following steps must be repeated for each state machine in your game.
Create an abstract state extending the State class. You must create an enum for each state that will derive from this base state. In addition, you must create an enum to identify transitions, as this is a deterministic finite state machine and each state must have a transition (and no transition can be used by multiple states).
In the below example, we define enums for Game states and transitions (the 'Game' prefix separates these states from states for other state machines).
public enum GameStateId { Menu, Play, } public enum GameStateTransition { BeginPlay, GameOver, } public abstract class GameState : State<FSM, GameStateId, GameStateTransition>; { public override void BuildTransitions() {} public override void Enter() {} public override void Exit() {} public override void FixedUpdate() {} public override void Update() {} }
Create subclasses of your abstract class above for each state you want in the game. For example:
using System.Collections; using Zenject; public class PlayState : GameStateBase { private PlayerDeathSignal _playerDeathSignal; public PlayState() { stateId = GameStateId.Play; } [Inject] public void Construct(PlayerDeathSignal playerDeathSignal) { _playerDeathSignal = playerDeathSignal; } public override void BuildTransitions () { AddTransition(GameStateTransition.NullTransition, GameStateId.GameOver); } public override void Enter () { _playerDeathSignal += GameOverHelper; } public override void Exit() { _playerDeathSignal -= GameOverHelper; } private void GameOverHelper() { StartCoroutine(GameOver()); } private IEnumerator GameOver() { yield return null; MakeTransition(GameStateTransition.NullTransition); } }
Note that your subclass must assign the _stateId for itself. This is the GameStateId we defined in the abstract class deriving from State (stateId is a protected property of the base State class). In the example above, we make this assignment in the constructor and then create a separate injection function for dependencies.
Create a new C# class deriving from MonoBehaviour that will be your finite state machine. In the Awake() method, you must pass a few parameters:
- This class (reference to self as a State Machine).
- An array of states. Be sure to specify your derived base state and not the State.cs class.
- Enum ID of the initial state, which will be transitioned to during Awake()
- (Optional) Enum ID of a debug state.
- (Optional) Enum ID of a tracking state.
In addition, your state machine should call the base class's associated MonoBehaviour methods such as Update(), FixedUpdate(), OnTriggerEnter, etc.
We recommend copying and pasting the following, then adapting it to your needs:
using System.Collections.Generic; using Impulse.FiniteStateMachine; using UnityEngine; using Zenject; public class GameStateMachine : MonoBehaviour { // Configurable [SerializeField] private GameStateId _initialGameState; public GameStateId InitialGameState => _initialGameState; [SerializeField] private bool _debug; // Internal private StateMachine<GameStateMachine, GameStateId, GameStateTransition> GameFsm { get; set; } private List<State<GameStateMachine, GameStateId, GameStateTransition>> _states; [Inject] private void Construct(List<State<GameStateMachine, GameStateId, GameStateTransition>> states) { _states = states; } private void Awake() { GameFsm = new StateMachine<GameStateMachine, GameStateId, GameStateTransition>( this, _states, _initialGameState, _debug); } private void Update() { GameFsm.Update(); } private void FixedUpdate() { GameFsm.FixedUpdate(); } private void OnDestroy() { GameFsm.Destroy(); } #if UNITY_EDITOR private void OnGUI() { if (_debug) { GUI.color = Color.white; GUI.Label(new Rect(0.0f, 0.0f, 500.0f, 500.0f), string.Format("Current State: {0}", GameFsm.CurrentStateName)); } } #endif }
Attach your derived state machine to a gameobject in the scene. Next, add a GameObjectContext and create an installer where you will assign the states list to be injected into the state machine. For example:
using Impulse.FiniteStateMachine; using System.Collections.Generic; using Zenject; public class GameStateMachineInstaller : MonoInstaller { private List<State<GameStateMachine, GameStateId, GameStateTransitio>> _states; public override void InstallBindings() { var menuState = new MenuState(); var playState = new PlayState(); var gameOverState = new GameOverState(); Container.BindInstance(menuState); Container.BindInstance(playState); Container.BindInstance(gameOverState); Container.QueueForInject(menuState); Container.QueueForInject(playState); Container.QueueForInject(gameOverState); _states = new List<State<GameStateMachine, GameStateId, GameStateTransition>> { menuState, playState, gameOverState }; Container.BindInstance(_states); } }
This is how our state machine looks in the hierarchy:
Transitions
Before transitioning to a new state, you must first add the transitions by calling AddTransitions() inside the BuildTransitions() method of a State, which gets called by the state machine after a transition. For example:
public override void BuildTransitions () { AddTransition(StateTransition.BEGIN_PLAY, StateID.PLAY); }
The first argument is the enum of the transition state, while the second argument is the enum of the state to transition to.
You can transition to another state by calling MakeTransition([enum ID of transition]);
It is important to note you cannot change state within the Enter() or Exit() methods of an existing state since you cannot change state during the middle of a state transition. In some cases it is necessary to use a coroutine to change state to allow an Enter() or Exit() method to finish. This is especially true in states where gameplay setup is done:
public override void Enter() { base.Enter(); StartCoroutine(Init()); } private IEnumerator Init() { yield return null; MakeTransition(StateTransition.BEGIN_PLAY); }
We put the transition after the yield statement to ensure setup completes before transitioning to the next state.
Note: Because we override StartCoroutine(), you cannot use nameof() to generate the argument, you must reference the coroutine function with curly brackets on the end, like above.
Audio
Playing Music and Managing Playlists
The music manager and music playlist system allow for easy playback and organization of background music within scenes.
For a video demonstration of the music manager and music playlists:
- Drag the MusicManager prefab from Assets/Prefabs/Music/MusicManager into your splash scene, or whichever scene is the first one in your build settings. The MusicManager is persistent from scene to scene, so you do not need to instantiate it in each scene.
- In each scene where you want music to be played, create a new empty game object and attach the MusicPlaylist.cs script. This script can be found in Assets/Scripts/Music/MusicPlaylist.cs. I recommend naming the game object 'MusicPlaylist'. Then, just populate the Music List array in the game object with song files. Leave 'Activate On Awake' to true if you want the playlist to begin playing as soon as the scene is loaded.
Cameras
Top-Down Camera
This camera is best suited for 2D games.
For a video demonstration of the top-down camera:
- Locate the script in Assets/Scripts/Camera/TopDownFollow_Camera.cs.
- Attach this script to a camera object in your scene.
- Drag a Transform into the Follow Target parameter. This is the object the camera will try to follow.
- Set the Target Offset and Move Speed parameters to your liking. Target Offset is x,y,z distance from the follow target (the camera position offset relative to the follow target object). Move speed is how fast the camera moves when the object moves.
Third Person Camera
This camera is based on the camera used in many popular MMORPG games and automatically zooms in when the follow target is obstructed by an object.
For a video demonstration of the third person camera:
- Locate the script in Assets/Scripts/Camera/Third_Person_Camera.cs
- Attach this script to a camera in your scene.
- Create an empty game object and rename it to 'LookAt'. This is the object the camera will focus on and follow.
- Make the LookAt object a child object of the gameobject you want to follow.
- Assign the LookAt object in the Target Look Transform parameter of the Third Person Camera component on the camera.
- To add mouse controls such as zoom-in with the mouse scrollwheel, attach the Third_Person_Mouse_Input.cs script to the camera. This script is located in the Assets/Scripts/Camera folder.
User Interface
The framework includes an InterfaceManager that allows easy switching between interface screen (canvas) objects by setting them active / inactive. The only requirement is that each canvas object has an Interface Screen component (InterfaceScreen.cs). You can find the Interface Manager prefab in the "Assets/02_Prefabs/UI" folder.
The Interface Manager also has methods for calling the SceneService to change scenes. This is useful for games that have a main menu.
See InterfaceManager.cs for different interface screen (canvas object) switching methods, as well as scene change methods.
AI
Most AI scripts in the framework are based around a Faction component that specifies what faction a gameobject belongs to. For a gameobject to be used with the AI scripts, it must have the Faction.cs script attached along with a faction specified (factions can be neutral in addition to friendly or hostile).
Faction.cs is located in the Assets/Scripts/AI folder.
Waypoints
The waypoint system provides an easy way of generating a connected path of points, with the option to ensure it is a closed loop. The waypoints system does not use the Faction system, it simply creates waypoints that any object can follow.
- Locate the WaypointPathManager.cs script in Assets/Scripts/Utility/Waypoints
- Create an empty game object in your scene and attach WaypointPathManager.cs
- Create any number of empty game objects and place them throughout your scene. Make them a child object of the transform with the WaypointPathManager component. These empty game objects are the 'waypoints' in the path.
The WaypointPathManager loops through each child object in its transform and generates a path through them.
Public methods for using waypoints (located in WaypointPathManager.cs):
- public int FindNearestWaypoint (Vector3 fromPos, float maxRange) – Returns the integer index of the nearest waypoint from the supplied position and within the supplied maximum range.
- public int FindNearestWaypoint (Vector3 fromPos, Transform exceptThis, float maxRange) – The same as the above, except a waypoint transform can be passed in to ensure the nearest waypoint is not the waypoint an object is currently at.
- public int GetNextWaypoint (int index, bool reverse) – Gets the next waypoint in the path based on the supplied index. If reverse is true, then it assumes the path is going backward (e.g. point 0 is next after point 1).
- public Transform GetWaypoint (int index) – Returns the transform of the waypoint at the given index.
- public int GetTotal () – Returns the total number of waypoints in the path (number of child objects under the WaypointPathManager object).
- public bool ReachedEndOfPath (int index) – Returns true if the waypoint at the given index is the last waypoint in the path or the first waypoint in the path. This is useful for switching the waypoint traversal of an object if you want it to go back and forth from one end of the path to the other.
Viewcones
These are procedurally generated cones that can be used to give a gameobject the ability to 'see' other gameobjects.
Sphere Detector
The sphere detector projects an invisible sphere around an object. The idea is other objects within this sphere are 'detected' by the object, similar to radar. This system does not actually involve AI behavior, but can be useful in setting one up.
For a video demonstration of the sphere detector:
- Locate the script in Assets/Scripts/AI/Detector.cs
- Attach the Detector script to a gameobject.
- In the Detector component, assign allied and enemy factions. Objects belonging to a faction that is not assigned will show up under the 'Detected Neutral' array during runtime.
The Detector component includes useful methods for fetching data during runtime:
Data Loading
The framework comes with a basic data loading system that reads JSON files and turns them into .asset files with an associated prefab.
To see an example of how it works:
- Inspect the ItemsJson.json file located at Assets/Resources/InventoryDemo/Text/ These JSON objects will have their data converted into .asset files, which will be used to generate prefabs.
- Inspect the JsonReader.cs script located at Assets/Scripts/Inventory/ This script is invoked by the JsonItemExtractor.cs script to read the JSON file and converts the JSON object data into a dictionary.
- Inspect the JsonItemExtractor.cs script located at Assets/Scripts/Inventory/ This script uses the JsonReader.cs script to create .asset files and generate prefabs for each object.
In actual production, you probably don't want to generate new prefabs each time the JSON files change but instead have the .asset files read at runtime when necessary. The prefab generation is included in the JsonItemExtractor functionality for demonstration purposes. | https://unitylist.com/p/1e9/Impulse | CC-MAIN-2018-43 | refinedweb | 3,403 | 56.05 |
Python 3.1 also includes several changes to the standard library, described below.
The major new addition is an ordered dictionary class, which got its own PEP. When you iterate over an ordered dict, you get a list of keys and values in the same order in which they were inserted, which is often desirable. As an illustration, here's some code that shows the difference between an ordered dict and a regular dict:
>>> items = [('a', 1), ('b', 2), ('c', 3)]
>>> d = dict(items)
>>> d
{'a': 1, 'c': 3, 'b': 2}
>>> from collections import OrderedDict
>>> od = OrderedDict(items)
>>> od
OrderedDict([('a', 1), ('b', 2), ('c', 3)])
>>> list(d.keys())
['a', 'c', 'b']
>>> list(od.keys())
['a', 'b', 'c']
As, you can see the ordered dict preserves the initial item order, while the standard dict doesn't. However, I was a little surprised to find out that if you populate the dictionary with named arguments rather than key/value pairs, it does not maintain the order. I would even consider that behavior a bug, because using named arguments is a perfectly valid way to initialize a dictionary, and the items have a clear order (left to right) just like the first example with the items list:
>>> d = dict(a=1, b=2, c=3)
>>> d
{'a': 1, 'c': 3, 'b': 2}
>>> od = OrderedDict(a=1, b=2, c=3)
>>> od
OrderedDict([('a', 1), ('c', 3), ('b', 2)])
The new Counter class in the collections module is a dictionary that keeps track of how many times an object occurs in a collection.
>>> import collections
>>> x = [1, 1, 2, 3, 4, 5, 4, 4, 6, 4]
>>> c = collections.Counter(x)
>>> c = collections.Counter(x)
>>> c
Counter({4: 4, 1: 2, 2: 1, 3: 1, 5: 1, 6: 1})
The class supports the typical set of dict methods: keys(), values() and items() for accessing its contents; however, the update() method differs from a regular dict update(). It accepts either a sequence or a mapping whose values are integers. If you use a sequence, it counts the elements and adds their count to the existing counted items. For a mapping it adds the count of each object in the mapping to the existing count. The following code updates the Counter class initialized in the preceding example:
>>> c.update([3, 3, 4])
>>> c
Counter({4: 5, 3: 3, 1: 2, 2: 1, 5: 1, 6: 1})
>>> c.update({2:5})
>>> c
Counter({2: 6, 4: 5, 3: 3, 1: 2, 5: 1, 6: 1})
>>> c.update({2:5})
>>> c
Counter({2: 11, 4: 5, 3: 3, 1: 2, 5: 1, 6: 1})
The Counter class also has a couple of special methods. The elements() method returns all the elements in the original collection grouped together and sorted by value (not frequency):
>>> list(c.elements())
[1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
3, 3, 3, 4, 4, 4, 4, 4, 5, 6]
The most_common() method returns object:frequency pairs, sorted by the most common object.
>>> c.most_common()
[(2, 11), (4, 5), (3, 3), (1, 2), (5, 1), (6, 1)]
If you pass an integer N to most_common, it returns only the N most common elements. For example, given the Counter object from the preceding examples, the number 2 appears most often:
>>> c.most_common(1)
[(2, 11)]
The itertools module lets you work with infinite sequences and draws inspiration from Haskell, SML and APL. But it's also useful for working with finite sequences. In Python 3.1 it received two new functions: combinations_with_replacement() and compress().
The combinations() function returns sub-sequences of the input sequence in lexicographic order without repetitions (based on position in the input sequence, not value). The new combinations_with_replacement() function allows repetition of the same element, as the following code sample demonstrates:
from itertools import *
print(list(combinations([1, 2, 3, 4], 3)))
print('-' * 10)
print(list(combinations_with_replacement(['H', 'T'], 5)))
Output:
[(1, 2, 3), (1, 2, 4), (1, 3, 4), (2, 3, 4)]
----------
[('H', 'H', 'H', 'H', 'H'), ('H', 'H', 'H', 'H', 'T'),
('H', 'H', 'H', 'T', 'T'), ('H', 'H', 'T', 'T', 'T'),
('H', 'T', 'T', 'T', 'T'), ('T', 'T', 'T', 'T', 'T')]
Note that in both functions each sub-sequence is always ordered.
The compress() function allows you to apply a mask to a sequence to select specific elements from the sequence. The function returns when either the sequence or the selectors mask is exhausted. Here's an interesting example that uses both compress() and count() to generate an infinite stream of integers, map() to apply a lambda function (+1) to the elements of count(), and chain(), which chains two iterables together. The result stream is very similar to the non-negative integers, except that 1 appears twice. I'll let you guess what the compress() function selects out of this input stream:
from itertools import *
selectors = [1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0,
0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1]
sequence = chain(iter([0, 1]), map(lambda x: x+1, count()))
print(list(compress(sequence, selectors)))
Output:
[0, 1, 1, 2, 3, 5, 8, 13, 21]
Python has a powerful and flexible industrial-strength logging module that supports logging messages at different levels to arbitrary target locations that include memory, files, network, and console. Using it requires a certain amount of configuration. Libraries that want to provide logging can either configure themselves by default or require users to configure them. If, as library developer, you require users to configure logging, you're likely to annoy users who don't care about logging. However, if your library configures itself what should the configuration settings be?
There are two common options: log to a file or log to the console. Both options cause clutter. Until Python 3.1 best practice required the library developer to include a small do-nothing handler and configure its logger to use this handler. Python 3.1 provides such a NullHandler as part of the logging module itself.
Here's a logging scenario: Suppose you have the following library code in a module called lib.py. It has an init() function that accepts a logging handler, but defaults to the new NullHandler. It then sets the logger object to use the provided logger (or the default one). A logging handler is an object that determines where the output of the logger should go. The example function a_function_that_uses_logging() calls the global logger object and logs some funny messages:
import logging
logger = None
def init(handler=logging.NullHandler()):
global logger
logger = logging.getLogger('Super-logger')
logger.setLevel(logging.INFO)
logger.addHandler(handler)
def a_function_that_uses_logging():
logger.info('The capital of France is Paris')
logger.debug('Don\'t forget to fix a few bugs before the release')
logger.error('Oh, oh! something unpalatable occurred')
logger.warning('Mind the gap')
logger.critical('Your code is a mess. You really need to step up.')
The next bit of application code configures a rotating file handler. This is a sophisticated handler for long-running systems that generate large numbers of logged messages. The handler limits the amount of logging info in each file, and also saves a pre-set number of backup files. These restrictions ensure that the log files never exceed a given size, and that the latest logging info (up to the limit) is always preserved.
For example purposes, the code configures the handler to store only 250 bytes in each log file and maintain up to 5 backup files. It then invoke the venerable a_function_that_uses_logging().
import logging
import logging.handlers
from lib import a_function_that_uses_logging
log_file = 'log.txt'
handler = logging.handlers.RotatingFileHandler(
log_file, maxBytes=250, backupCount=4)
init(handler)
for i in range(4):
a_function_that_uses_logging()
Here's what I found in my current directory after running this code. The handler created a rotating log file (log.txt), with four backups because the example allowed only 250 bytes in each file.
~/Documents/Articles/Python 3.1/ > ls
article.py log.txt log.txt.1 log.txt.2 log.txt.3 log.txt.4
To view the contents of those files I simply concatenated them:
~/Documents/docs/Publications/DevX/Python 3.0/Article_6 > cat log.*
Mind the gap
Your code is a mess. You really need to step up.
Your code is a mess. You really need to step up.
The capital of France is Paris
Oh, oh! something unpalatable occurred
Mind the gap
Your code is a mess. You really need to step up.
The capital of France is Paris
Oh, oh! something unpalatable occurred
The capital of France is Paris
Oh, oh! something unpalatable occurred
Mind the gap
Your code is a mess. You really need to step up.
The capital of France is Paris
Oh, oh! something unpalatable occurred
Mind the gap
This works well, but sometimes users don't care about the logged messages—they just want to invoke the function without having to configure the logger, and they need it to work in a way that will not cause the disk to run out of space or the screen to be filled with messages. That's where the NullHandler class comes in. The next bit of code does the same thing as the preceding example, but doesn't configure a logging handler and gets no logging artifacts. Note how much ceremony went away; there are no imports for logging and logging.handlers, and no hard decisions about which handler to use or how to configure it.
init()
for i in range(3):
a_function_that_uses_logging()
Advertiser Disclosure: | https://www.devx.com/opensource/Article/42659/0/page/3 | CC-MAIN-2021-43 | refinedweb | 1,590 | 63.09 |
Looking to auto-magically make the button images look disabled on Android with a custom renderer. On iOS, this was simple enough:
[assembly: ExportRenderer(typeof(Button), typeof(CustomButtonRenderer))]
namespace AppName.iOS.Renderers
{
    public class CustomButtonRenderer : ButtonRenderer
    {
        protected override void OnElementChanged(ElementChangedEventArgs<Button> e)
        {
            base.OnElementChanged(e);
            if (Control == null) return;
            Control.AdjustsImageWhenDisabled = true;
        }
    }
}
But I failed to replicate this effect for Android. Is there a way?
Answers
What's wrong with actually disabling them at the shared layer? Why would you want enabled buttons to look disabled?
I'll clarify: look disabled when disabled.
With the custom renderer for iOS above, the images in the buttons go gray when disabled.
Ok... Let me fill in the blank that I think you're implying but not saying..
Problem: Disabled buttons don't look disabled.
Is that what you're getting at? That your disabled buttons don't look disabled? I'm just guessing here. Screen shots would help a lot.
Fair enough. In the first shot, the six buttons should look disabled; in the second shot, the buttons are enabled (matching the lock button). When there's just text, it's fine, but with images, they don't look disabled. It was the same thing on iOS until I made that simple renderer (don't have a device handy for a screenshot).
Understandably, I could set up a trigger event to change the image when the button is disabled, but if there's a simple solution for Android as there was for iOS, I would prefer that.
Ah... Ok. This is specific to ButtonImage - I missed that even though you said that. Sorry. I never use the thing because of all the problems they have.
This one is already documented as an issue.
So you may or may not even be able to band-aid it through a renderer if the problem runs deep.
Which means you're just looking to hack around it at this point.
A fast agnostic fix would be to put a panel over it of a mostly transparent white that is only visible when the button is disabled. (Bind IsVisible to the button's IsEnabled through a BoolInvertConverter.) Then the blocking semi-transparent panel will give the entire control that frosted/greyed-out look.
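A minimal sketch of that converter for Xamarin.Forms (the class name and omitted namespaces are illustrative):

```csharp
public class BoolInvertConverter : IValueConverter
{
    // Returns the inverse of the bound boolean, so a panel whose
    // IsVisible is bound to the button's IsEnabled becomes visible
    // only when the button is disabled.
    public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
        => !(bool)value;

    public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
        => !(bool)value;
}
```

The overlay panel then binds `IsVisible="{Binding Source={x:Reference myButton}, Path=IsEnabled, Converter={StaticResource BoolInvertConverter}}"`.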
I guess I'm looking for a hack at this point, but a native-level hack if anyone knows of one.
Hi all,
I’m new to the Arduino (and reasonably new to “proper” programming as well!) and I’ve run into an issue with attachInterrupt which I’m sure is purely down to my lack of knowledge!
I have a Dallas DS18S20 and I am trying to put together a device which performs an action based on when the temperature changes.
The issue I have run into is that I need to have constant polling of other devices occurring at the same time, so I thought that attachInterrupt would work
The DS18S20 is wired in the usual way (4.7K pull-up resistor, pin 2 on sensor → pin2 on arduino (or seeeduino in my case)) and my current code is as follows:
#include <OneWire.h>

OneWire ds(2);

void setup(void){
  Serial.begin(9600);
  attachInterrupt(0, readTemp, CHANGE);
}

void readTemp(void){
  byte i;
  byte type_s = 0;
  byte data[9];
  byte addr[8];
  float celsius;

  if ( !ds.search(addr)) {
    ds.reset_search();
    return;
  }

  ds.reset();
  ds.select(addr);
  ds.write(0x44);            // start temperature conversion

  delay(750);                // wait for the conversion to finish

  ds.reset();
  ds.select(addr);
  ds.write(0xBE);            // read the scratchpad

  for ( i = 0; i < 9; i++) { // we need 9 bytes
    data[i] = ds.read();
  }

  // convert the data to actual temperature
  unsigned int raw = (data[1] << 8) | data[0];
  if (type_s) {
    raw = raw << 3; // 9 bit resolution default
    if (data[7] == 0x10) {
      // count remain gives full 12 bit resolution
      raw = (raw & 0xFFF0) + 12 - data[6];
    }
  } else {
    byte cfg = (data[4] & 0x60);
    if (cfg == 0x00) raw = raw & ~7;      // 9 bit resolution
    else if (cfg == 0x20) raw = raw & ~3; // 10 bit
    else if (cfg == 0x40) raw = raw & ~1; // 11 bit
  }
  celsius = (float)raw / 16.0;
  Serial.println(celsius);
}

void loop(void){
}
As you can see, this is basically the “dallas temperature” example that comes with the OneWire library, however I think I’m “doing it wrong” when it comes to configuring the interrupts.
Any help that can be provided (especially links to further reading, etc.) is more than welcome.
Kind regards,
Prof
Ticket #12 (closed defect: fixed)
Static content doesn't work for applications that are included in a larger site
Description
Since static files are configured in the config file, if they are included in an application that is joined up to form a larger site, the static files will not appear in the proper place beneath the application root.
Change History
comment:3 Changed 11 years ago by fumanchu@…
This is probably fixed in CherryPy changeset 760, which allows arbitrary mount points using a VirtualPathFilter.
comment:5 Changed 11 years ago by godoy
comment:6 Changed 10 years ago by jorge.vargas
- Component changed from TurboGears to CherryPy
with the move for CP3 this should become obsolete.
comment:7 Changed 10 years ago by alberto
- Milestone changed from 1.1 to __unclassified__
Batch moved into unclassified from 1.1 to properly track progress on the later
comment:8 Changed 9 years ago by Chris Arndt
- Keywords needs confirmation added
- Component changed from CherryPy to TurboGears
comment:9 Changed 8 years ago by lszyba1
How are you including the static content? For example, if you are mounting your application via mod_wsgi, then all URL and static requests would have SCRIPT_NAME appended to them.
Ticket #2033 makes sure that all URLs are encoded with tg.url('someurl') by default in a TG2 quickstart, and therefore they should be properly mounted via mod_wsgi.
So the question for this bug would be: how do you include your application in a larger site? It should work with mod_wsgi.
comment:10 Changed 8 years ago by chrisz
This problem appears with TG 1.x when you put an app inside a namespace package, because the default app.cfg sets
[/static] static_filter.on = True static_filter.dir = "%(top_level_dir)s/static"
So the static directory (the same applies to [/favicon.ico]) is set to the top level, i.e. the level of the namespace package instead of the application which is one level deeper. We should add a comment to the app.cfg file that in this case, you should use package_dir instead of top_level_dir, or maybe even generally set
[/static] static_filter.on = True static_filter.dir = "%(package_dir)s/static"
See also here in the mailing list.
comment:11 Changed 8 years ago by chrisz
- Status changed from new to closed
- Severity changed from normal to minor
- Priority changed from highest to low
- Version changed from 0.9a6 to 1.0.8
- Milestone changed from __unclassified__ to 1.0.x bugfix
- Keywords needs confirmation removed
- Resolution set to fixed
To finally close this matter, changed the app.cfg as suggested in r6168. Also adapted the online docs accordingly and explained the difference between top_level_dir and package_dir. Since this can be considered a bug, and since it cannot break existing TG 1.0 applications, committed this to the 1.0.x bugfix branch as well.
John Speno mentioned that the VirtualPathFilter may help here.
Displaying Data on Web Pages Using DHTML
Microsoft Corporation
October 1998
Summary: Discusses how the Microsoft® Visual J++™ version 6.0 Dynamic HTML class library can be used to design and deploy Web-based applications. (16 printed pages) Covers:
Wiring a Dynamic HTML Table to a Data Source
Displaying a Partial Recordset
Formatting the Live Table
Using Header and Footer Rows
Introduction
Microsoft Visual J++ 6.0 supports the creation of the next generation of Web-based applications using Dynamic HTML (DHTML) classes. Dynamic HTML is the most reliable way to create cross-platform applications. Soon to be a W3C standard, DHTML is a uniform language for creating applications across browsers, operating systems, and hardware configurations. It provides for user interaction and data presentation in an easy-to-understand combination of HTML, script code, and a robust document object model (DOM).
Visual J++ 6.0 contains a rich set of classes that enable developers to generate DHTML code without having to learn a scripting language or the nuances of the DOM. The Dynamic HTML class library gives developers control over the Web page hosting the application, allowing for a richer Web client application. The Windows Foundation Classes (WFC) Dynamic HTML class library can be used to build enterprise client/server applications that adhere to Internet standards of HTTP and HTML. The same class library can be used to build richer client applications when used in conjunction with Microsoft Internet Explorer 4.0, Win32®, and ActiveX® controls.
Using the Dynamic HTML class library, developers can author DHTML pages using only the Java language. The resulting Java application can directly render DHTML on the fly. With the Visual J++ 6.0 Dynamic HTML library, developers have the ability to design and deploy truly integrated Web- and Windows-based applications that can be executed on multiple platforms.
Wiring a Dynamic HTML Table to a Data Source
The DhTable class can be used to display tabular data on a Web page. As with any DHTML application, there are two fundamental modes of operation: client-side and server-side. If your client machines run the Win32 platform, have Internet Explorer 4.01 SP1, have a version of the Microsoft Virtual Machine (VM) for Java that has the WFC run time, and have access to your database, you can run your DHTML data applications as client-side apps. Otherwise, if you can't make assumptions about your client machines or they don't have access to the database, you will need to run your application on the server and have the HTML that is generated sent to client machines. In the first part of this document, we will develop a client-side application that displays data from the "Pubs" database, an example database that ships with Microsoft SQL Server™ version 6.5. The section later in this article titled "Server-Side Data Tables" describes how to use the data table in a server-side Web application.
To create a client-side data table Web application using DHTML, you first need to create a Code Behind HTML project. This project sets up a class, Class1 by default, which derives from DhDocument and an HTML page, Page1.htm. The HTML document represents a single Web page, and the Class1 object represents the Java object that "lives" behind the Web page and allows the Web page to be manipulated programmatically from Java code. The data table, as a DhTable object, will be added to the Class1 object, and this will cause it to be displayed on the Web page. To begin, we first need to set up a DataSource object that will represent the connection to a database. For example, you can create a DataSource object, set the query using the setCommandText method, and set the connection string using the setConnectionString method. The client machine will need to have access to this data source, perhaps as an Open Database Connectivity (ODBC) connection. The following code added to the initForm() method of your document class will set up a DataSource connected to a database:
    dataSource1 = new DataSource();
    dataSource1.setConnectionString("DSN=ISQL_w;DATABASE=pubs;UID=myUID;PWD=myPWD");
    dataSource1.setCommandText("Select * from authors");
The table is created and told what DataSource to use:
    theTable = new DhTable();
    theTable.setDataSource(dataSource1);
When the table is added to the document, a call to DataSource.begin() is made to populate the table with records returned by the query:
    this.add(theTable);
    dataSource1.begin();
You will also need to add these imports:
import com.ms.wfc.html.*;
import com.ms.wfc.core.*;
import com.ms.wfc.ui.*;
import com.ms.wfc.data.*;
import com.ms.wfc.data.ui.*;
Congratulations! You have created a DHTML data table. The table that is specified by the previous code is quite "bare bones"—it has no formatting and no header row indicating what is in each column, and all the data is displayed, even if the recordset is very large. The following class, simpleAddress.java, illustrates these concepts. Insert your own Data Source Name (DSN) information in the connection string.
import com.ms.wfc.html.*;
import com.ms.wfc.core.*;
import com.ms.wfc.ui.*;
import com.ms.wfc.data.*;
import com.ms.wfc.data.ui.*;

public class simpleAddress extends DhDocument {

    DhTable theTable;
    DataSource dataSource1;

    public simpleAddress() {
        theTable = new DhTable();

        // set up the data source with connection string and query
        String connection = "DSN=ISQL_w;DATABASE=pubs;UID=myUID;PWD=myPWD";
        dataSource1 = new DataSource();
        dataSource1.setConnectionString(connection);
        dataSource1.setCommandText("Select * from authors");

        // attach the datasource to the table
        theTable.setDataSource(dataSource1);

        // add the table to the document
        this.add(theTable);

        // activate the Data Source
        try {
            dataSource1.begin();
        } catch(Exception e) {
            add(new DhText("Exception in dataSource1.begin(): " + e.toString()));
        }
    }
}
Displaying a Partial Recordset
The table can be broken up into pages. To specify a page size, use the setPageSize function of the DhTable class. When using a table with pages, the functions showNextPage and showPreviousPage are used to move from one page to another. If you would rather specify a record range, use the setRecordRange function. The records are considered to be numbered from zero; setRecordRange(0, 10) will display ten records numbered 0 through 9. If you want to find out what records the table is displaying, use the functions getRangeStart() and getRangeEnd().
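Put together, paging a table looks something like this (a sketch reusing theTable from the example above; the page size of ten is arbitrary):

```java
// Split the recordset into pages of ten records each.
theTable.setPageSize(10);

// Move between pages...
theTable.showNextPage();
theTable.showPreviousPage();

// ...or display an explicit range: records 0 through 9.
theTable.setRecordRange(0, 10);

// Find out which records are currently shown.
int first = theTable.getRangeStart();
int last = theTable.getRangeEnd();
```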
Using a Repeater Row to Customize the Table
In the previous example, the table simply inherited the style of the document that contained it. Also, the table showed all the fields of the records in the order they were returned from the database. In most situations, you will want more control over what data is displayed and how it is displayed. If you want to specify the formatting for the cells of the table or specify what fields you want, you will need to create a repeater row. The repeater row is a DhRow object that is used as a template for all the rows of the table. Thus, one can set properties and styles on the repeater row and specify bindings of each cell of the repeater row with the desired field in the recordset. The following example illustrates this:
myTable = new DhTable();
myTable.setDataSource(ds);

DhRow repRow = new DhRow();
// set Properties for the entire body of the table
repRow.setBackColor(Color.GRAY);
for (int i = 0; i < numColumns; i++) {
    repRow.add( new DhCell() );
    // set Properties for a specific column.
    if (i == 0) {
        repRow.getCell(i).setBackColor(Color.PURPLE);
    }
}
// bind each cell to the recordset field it should display
// ("Field1", "Field2", and so on), then install the row:
myTable.setRepeaterRow(repRow);
In the previous example, "Field1," "Field2," and so on should be replaced by the names of your fields. If the field names are not found in the database, your table will be empty.
Because the repeater row is a template for all rows in the table, setting the properties of the row affects the entire body of the table, and setting the properties of a cell in the repeater row affects that entire column.
If a repeater row is not specified, a default repeater row is created with data bindings set to all the fields, in order of appearance in the recordset. Thus, if you use your own repeater row, you don't get the default data bindings, and you have to set them in your code, as just shown. If you don't do this, the data table will be empty. Specifying your own databindings allows you to pick and choose the columns you want in the order you want them shown.
You can also specify a style to be associated with the repeater row or any of its columns. Because the DhCell elements that determine the style of the columns are children of the DhRow elements, setting a style on a cell will override the setting for the row. The following example illustrates this:
myTable = new DhTable();
myTable.setDataSource(ds);

DhRow repRow = new DhRow();
// set style for the entire body of the table
DhStyle rowStyle = new DhStyle();
rowStyle.setBackColor(Color.GRAY);
repRow.setStyle(rowStyle);
for (int i = 0; i < numColumns; i++) {
    repRow.add( new DhCell() );
    // set style for a specific column.
    if (i == 0) {
        DhStyle column0_Style = new DhStyle();
        column0_Style.setBackColor(Color.PURPLE);
        repRow.getCell(i).setStyle(column0_Style);
    }
}
// bind each cell to its field, then install the row:
myTable.setRepeaterRow(repRow);
The cell and the table have borders, but the rows do not. To set the style of the border, use the setBorderStyle method on either the cell or the entire table. There are several border styles encoded by the enum class DhBorderStyle spanning the range of border types supported by HTML. The setBorderColor method is used to give the colors of the various edges of the border. Use the DhBorders enum to specify the top, bottom, left, or right sides.
Formatting the Live Table
If you want a particular style to apply to some rows but not others, you will need to do this after the table is populated with data and added to the document. In this case, it will not do to set the style of the repeater row, because that applies to all rows. You will need to get the DhRow objects for the rows that were actually created in the live table and change the properties of that object. The following example shows the use of the getBodyRows() and getBodyRowCount() functions to modify the table after it has been added to the document. Unlike our other examples, this code will not function correctly when called from the constructor or from initForm(). It will only work after the HTML elements for the table have been generated; for example, in the onDocumentLoad event handler. In this example, every other row is highlighted in yellow:
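A sketch of that technique (assuming getBodyRows() returns an array of DhRow objects; the exact return type is not shown in this article):

```java
// Call this after the table's HTML elements exist, e.g. from onDocumentLoad.
private void highlightAlternateRows() {
    DhRow[] rows = theTable.getBodyRows();
    for (int i = 0; i < theTable.getBodyRowCount(); i += 2) {
        rows[i].setBackColor(Color.YELLOW);  // highlight every other row
    }
}
```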
If you need to access the data in the tables, you can either look at the recordset directly or access the text in the table using the getText function. Due to a known problem in the Visual J++ 6.0 release, it may be necessary to use the following functions to get and set the text of a cell:
public String getCellText(int row, int col) {
    String text = theTable.getBodyRows()[row].getCell(col).getText();
    return text;
}

public void setCellText(int row, int col, String s) {
    theTable.getBodyRows()[row].getCell(col).setText( s );
}
The previous code can also be modified to add elements to cells.
Note If your data contains HTML tags, they will be interpreted, so you can use this fact to display active links, embedded images, and other HTML elements in the cells of your table.
Using Header and Footer Rows
If you want to have a default header row created for you, you can set the AutoHeader property on the DhTable object, which will create a header row with the field names for each column:
    theTable.setAutoHeader(true);
If you want to specify your own unique header or footer row, you can add a header or footer row to the table by creating a DhRow object, adding cells, setting the properties and text of the cells, and calling setHeaderRow or setFooterRow:
    DhRow headerRow = new DhRow();
    headerRow.add(new DhCell("Author"));
    headerRow.add(new DhCell("City"));
    theTable.setHeaderRow(headerRow);
Using Events
You can make your data table more interactive by adding events to it. The following code (added before the call to setRepeaterRow) adds event handlers to the repeater row:
    repRow.addOnMouseEnter(new MouseEventHandler(this.Table_MouseEnter));
    repRow.addOnMouseLeave(new MouseEventHandler(this.Table_MouseLeave));
Event handlers added to the repeater row are automatically copied to all body rows of the table. The previous code, when the following functions are added to the class, supports hot tracking, so when the mouse hovers over a row, it is highlighted:
DhElement currElement = null;

public void Table_MouseEnter( Object sender, MouseEvent event) {
    currElement = (DhElement) sender;
    currElement.setBackColor(Color.RED);
}

public void Table_MouseLeave( Object sender, MouseEvent event) {
    currElement = (DhElement) sender;
    currElement.resetBackColor();
}
You can hook up event handlers to individual columns by setting the event handler of a cell in the repeater row, and you can hook up event handlers to individual cells after the table is added to the document. You can, of course, also hook event handlers up to header and footer rows, specific cells in these rows, or to elements (such as DhButton) that are added to cells.
Server-Side Data Tables
If you cannot make assumptions about the client machines or the client machines don't have direct access to the data source, you will need to have your DHTML application run on the server. When this is done, the database is accessed only from the server, rather than from each client. On the server, the running Java code (actually, the constructor and initForm() method of your DhDocument derived class) generates an HTML page and sends this to the client, where it is displayed as a normal HTML page with the client's browser. Once the HTML code is sent to the client, the Java program has no more control over it or communication with it. You can still take advantage of most of the functionality of the DhTable class; however, you cannot use events, and you cannot look at the cells of the table because the elements do not actually exist on the server where the Java code is running. If you do require interaction with the user, you will need to set up a DhForm with a DhSubmitButton, as described later.
Instead of an HTML page, the page will be loaded as an Active Server Pages (ASP) page. The ASP page needs to set up the DhModule with the Java class and the HTML template to use, if any. The following is a sample ASP page that does this:
<HTML>
<BODY>
<%
    Set Module = Server.CreateObject( "Project1.Module1" )
    Module.setCodeClass( "Class1" )
    Module.setHTMLDocument( "Page1.htm" )
%>
</BODY>
</HTML>
On the server, one will need to set up a "System DSN" for the database using the ODBC Manager on Windows Control Panel. Also, the class file(s) need to be on the classpath—for example, in C:\WINNT\Java\classes. The output directory for class files can be set in the Project Properties dialog box under Compile.
Apart from events and user interaction, the same code for data tables can be used on the client as on the server. For server-side Web applications, communication with the user is achieved via the submission of HTML forms.
The ServerDataBinding example included in the Visual J++ samples can be used as a starting point. In this example, there is limited user interaction via a form on the HTML page in which an SQL query can be entered by the user. A table is displayed with the results of the query. To better understand the server-side programming model, it is worthwhile to examine the sequence of events as this application is executed.
There are three files in the project: the ASP file, the HTML file, and the Java class. The ASP file is the entry point—the user on the client machine navigates to this ASP file using their browser. On the server, the ASP code sets up a module (a DhModule object) and sets the HTML document that will be used as the template. The Java class is loaded (assuming it is correctly on the classpath) and begins execution at the constructor, binding Java objects to the elements in the HTML template. The first time the user accesses the link, the default query is used to generate the table. This is turned into HTML that is sent back to the client machine and displayed on the user's browser. The user may then enter another query and select Submit Query. This causes the query to be sent back to the server as part of the HTTP request in the usual way for a form with a Submit button. At this point, an entirely new execution of the Java class is begun, starting from the constructor. The only difference is that when the query is examined, it may have changed from the default query. A new table is constructed with a new query, turned into raw HTML, and sent back to the client's browser. Whenever the user changes the query and selects Submit Query, this whole process repeats with an entirely new Java object.
How does the Java code know what information was returned by the submitted form? The HTML code contains an INPUT tag with the name attribute queryString. Whatever is in the text box is therefore submitted as part of the HTTP request. The DhModule object provides a way to access this information. The queryString parameter was accessed from Java code as follows:
    String query = DhModule.getCurrentModule().getQueryParameter( "queryString" );
In a similar way, one can add buttons for moving to the next and previous pages of a multipage table. Each of these buttons must be a Submit button, because whenever they are pressed on the client side, a form will need to be submitted containing the information about what button was pressed, along with any other information required from the client, such as what page of a multipage table they were looking at. None of this information can be saved in the Java code because a newly constructed class is executed each time a button is pressed. The solution to this is to have a hidden INPUT tag in the HTML that contains the data you want to preserve. This can be added to the HTML form from your Java code by adding a DhRawHTML object. For example, the following code could be used to preserve and retrieve the pageSize of a table:
int pageSize = 0;

// Get the OLD pageSize as a String from the http request.
String pageSizeString = DhModule.getCurrentModule().getQueryParameter( "pageSize" );

// attempt to convert to an integer
try {
    pageSize = Integer.parseInt( pageSizeString );
} catch(NumberFormatException except) {
    messageBox.show("The pageSize parameter could not be read.");
    pageSize = 10;
}

// Code here may change the pageSize; for example, if a submit
// button for changing the pageSize was pressed.

// add the NEW pageSize to the HTML (to be sent to the client) so
// that it is submitted as a hidden parameter in the next http
// request.
DhForm form1 = new DhForm();
DhRawHTML pageSizeRawHTML = new DhRawHTML( "<input type=hidden name=pageSize value=" + pageSize + ">" );
form1.add( pageSizeRawHTML );
A complete example is included here of a server-side DhTable with "Next Page" and "Previous Page" functionality implemented using Submit buttons. The current record is saved using the method just described. Notice that because the table is created anew each time a button is pressed, showNextPage() and showPreviousPage() functions on DhTable will not be useful. The current record is determined from the HTTP request, and setRecordRange() is used to set the current page.
The file SampleServerSide.java:
import com.ms.wfc.html.*;
import com.ms.wfc.core.*;
import com.ms.wfc.ui.*;
import com.ms.wfc.data.*;
import com.ms.wfc.data.ui.*;

/**
 * This class demonstrates simple server-side data binding
 * using a DhTable class and the sample Northwind database
 */
public class SampleServerSide extends DhDocument {

    DhForm tableArea;
    DhEdit queryEdit;
    boolean prevPageHit = false;
    boolean nextPageHit = false;
    int pageSize = 10;
    int recordNum = 0;

    /**
     * The constructor, which just calls initForm
     */
    public SampleServerSide(){
        initForm();
    }

    /**
     * initForm is where you should do all your setup
     * of the elements in the HTML template that you wish to bind to.
     */
    protected void initForm() {
        tableArea = new DhForm();

        // retrieve the queryString parameter
        String query = DhModule.getCurrentModule().getQueryParameter( "queryString" );

        // default to "SELECT * FROM Products"
        if ( query == null || query.equals( "" ) ){
            query = "SELECT * FROM Products";
        }

        // build the form holding the query text box and the Submit button
        DhForm form1 = new DhForm();
        queryEdit = new DhEdit( query );
        queryEdit.setName( "queryString" );
        form1.add( queryEdit );
        form1.add( new DhSubmitButton( "Submit Query" ) );

        // fill the table area with the results of the query
        tableArea.add( createDataTable( query ) );

        this.add(form1);
        this.add(new DhHorizontalRule());
        this.add(tableArea);
    }

    /**
     * This function creates a table and initializes it to be populated
     * with data from the specified SQL query
     * @param query The SQL statement to retrieve data to populate table
     * @return A DhTable initialized with the data from the <i>query</i>
     */
    private DhTable createDataTable(String query){
        // create and format the table
        DhTable table = new DhTable();
        table.setBorder( 1 );
        table.setAutoHeader( true );
        table.setBackColor( Color.CONTROL );
        table.setForeColor( Color.BLACK );
        table.setPageSize(pageSize);

        Recordset rs = null;
        String error = null;
        try{
            // create a DataSource object, and set it up
            DataSource ds = new DataSource();
            ds.setConnectionString("DSN=Northwind;");
            ds.setCommandText( query );

            // cause the DataSource to generate a recordset
            rs = ds.getRecordset();
        }catch( Exception ex ){
            error = ex.getMessage();
        }

        // if there is an error, or we have an empty recordset, display the
        // appropriate message
        if ( rs == null || rs.getEOF() ){
            if ( error != null ){
                error = "The query produced the following error message:<BR><b>" + error + "</b>";
            }else{
                error = "The query produced no records. Please try another.";
            }
            DhRow r = new DhRow();
            DhCell c = new DhCell( error );
            c.setForeColor( Color.RED );
            r.add( c );
            table.setAutoHeader( false );
            table.setBorder( 0 );
            table.resetBackColor();
            table.setFont( Font.ANSI_FIXED );
            table.add( r );
        }else{
            // set the Recordset as the data source for the table
            table.setDataSource( rs );

            // make sure we don't go beyond the end of the Recordset
            int recordCount = rs.getRecordCount();
            if (recordNum >= recordCount) {
                recordNum = recordCount - pageSize;
            }
            table.setRecordRange(recordNum, recordNum + pageSize );
        }
        return table;
    }
}
The following ASP file is used to load the previous class:
<HTML>
<HEAD>
<META NAME="GENERATOR" Content="Microsoft Visual Studio 98">
<META HTTP-EQUIV="Content-Type" CONTENT="text/html">
<TITLE>Document Title</TITLE>
</HEAD>
<BODY bgcolor=tan>
<%
    ClassName = "SampleServerSide"
    Set Module = Server.CreateObject( "SampleServerSide.Module1" )
    Module.setCodeClass( ClassName )
    Module.setHTMLDocument( "" )
%>
</BODY>
</HTML>
This example also uses a Module1.java:
Create HTML and PDF receipts and invoices
Examples
Overview
At its core, Revoice simply combines data into a Handlebars template to generate an HTML file. After that, it uses PhantomJS to render the page and writes it into a PDF file.
Apart from that, it also performs a few more functions:
- Provides Handlebars helpers that enable you to create your own templates
- Uses JSON Schema-compliant schema
- Uses `he` to decode HTML
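Conceptually, the template-merging step looks something like the following (a hand-rolled sketch of the idea, not Revoice's actual implementation; the real thing compiles templates with Handlebars):

```javascript
// Sketch: merge data into a template to produce the HTML that would
// later be rendered to PDF. A naive {{key}} substitution stands in
// for Handlebars here.
function renderTemplate(template, data) {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    key in data ? String(data[key]) : match
  );
}

const template = '<h1>Invoice {{id}}</h1><p>Date: {{date}}</p>';
const html = renderTemplate(template, { id: 'yvjhn76b87808', date: '2017-02-02' });
// html === '<h1>Invoice yvjhn76b87808</h1><p>Date: 2017-02-02</p>'
```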
Getting Started
Installation
You can install revoice through npmjs.com
$ yarn add revoice
Usage
import Revoice from 'revoice';

const data = {
  "id": "yvjhn76b87808",
  "date": "2017-02-02",
  "issuer": {
    "name": "Brew Creative Limited",
    "address": [
      "1905, Nan Fung Centre",
      "264-298 Castle Peak Road",
      "Tsuen Wan",
      "New Territories",
      "Hong Kong"
    ],
    "contact": {
      "name": "Daniel Li",
      "tel": "+852 1234 5678",
      "email": "dan@brew.com.hk"
    }
  },
  "invoicee": {
    "name": "Cerc Lannister"
  },
  "items": [{
    "id": "7A73YHAS",
    "title": "Amazon Echo Dot (2nd Generation)",
    "date": "2017-02-02",
    "amount": 39.99,
    "tax": 10.00,
    "quantity": 12
  }]
};

const options = {
  template: 'default',
  destination: path,
  name: 'index'
};

Revoice.generateHTMLInvoice(data, options); // Returns a promise
Data Schema
The schema for the invoice and each item can be found under `/src/schema`. The schemas are written in accordance with the JSON Schema specification, and are validated using Ajv.
Item
* denotes a required field
- `id`* string - Unique identifier for the item; e.g. `UG23H7F9Y`
- `title`* string - Title of the goods or service
- `description` string - Description of the goods or service
- `link` string - URL link to the web page for the product or service
- `date` string - The date the goods or service were provided, in a format satisfying RFC 3339; e.g. `2017-08-25`
- `amount`* number - Unit price per item (excluding tax)
- `tax`* number - Amount of tax charged per unit
- `quantity`* integer - Number of units
Invoice
* denotes a required field
- `id`* string - Unique identifier for the invoice; e.g. `AUS-0001-A`
- `date`* string - Date the invoice is issued, in a format satisfying RFC 3339; e.g. `2017-08-25`
- `due` string - Date the invoice is due, in a format satisfying RFC 3339; e.g. `2017-08-25`
- `issuer`* object - Details about the party issuing the invoice
  - `name`* string - Name of the issuer; e.g. `Acme Corp`
  - `logo` string - URL of the issuer's logo
  - `address`* [string] - An array of strings that composes the issuer's address; e.g. `["123 Example Street", "London", "United Kingdom", "W1 2BC"]`
  - `contact`* object - Details of the issuer's contact details
    - `name`* string - Name of the contact person representing the issuer
    - `position`* string - Position / role of the contact person representing the issuer
    - `tel`* string - Telephone number of the contact person representing the issuer
    - `fax` string - Fax number of the contact person representing the issuer
    - `address`* [string] - An array of strings that composes the contact person's address (this may be different to the issuer's registered address); e.g. `["123 Example Street", "London", "United Kingdom", "W1 2BC"]`
    - `website` string - Website of the contact person representing the issuer
- `invoicee`* object - Details of the entity the invoice is addressed to
  - `name`* string - Name of the invoicee; e.g. `Jon Snow`
  - `address` [string] - An array of strings that composes the invoicee's address; e.g. `["123 Example Street", "London", "United Kingdom", "W1 2BC"]`
  - `contact` object - Details of the invoicee's contact details
    - `name` string - Name of the invoicee
    - `position`* string - Position / role of the contact person representing the invoicee
    - `tel` string - Telephone number of the invoicee
- `items`* [Item] - A list of items included in the invoice (see the `Item` schema above)
- `comments` string - Any messages/comments the issuer wishes to convey to the invoicee
OptionsOptions
Default options can be accessed at
Revoice.DEFAULT_OPTIONS, which evaluates to:
{ template: 'default', destination: './tmp', format: 'A3', orientation: 'portrait', margin: '1cm', }
A detailed explanation of available options is as follows:
namestring - the name to give to the invoice file(s). For example, you can use your invoice number as the name of the file E.g.
YG87ASDG. Overrules the
nomenclatureoption.
nomenclaturestring - the rules for which files are named. Valid values are
'hash', which will generate a SHA512 hash of the HTML invoice and use that as the name of the file. Is overruled by the
nameoption.
templatestring - the template to use for the invoice. You can specify a pre-defined template by using its name (e.g.
'default'), or specify your own template by entering the path to the file (e.g.
'./test/sample/templates/test.html') .
destinationstring - the destination directory where you want to the invoice to be outputted at (e.g.
'./tmp')
formatstring - format of the page, valid values are
'A3',
'A4',
'A5',
'Legal',
'Letter',
'Tabloid'
orientationstring - orientation of the page, valid values are
'portrait'and
'landscape'
marginstring - margin on the page. Should be a number followed by a unit (e.g.
'1cm'). Valid units are
'mm',
'cm',
'in',
'px'
format,
orientation and
margin options are derived from PhantomJS's
paperSize object options
TestingTesting
Make sure you're running the latest version of Node (currently 7.8.0) and yarn (currently 0.19.1).
Clone the repository, install the dependencies and run the coverage script.
$ git clone $ cd revoice/ $ yarn $ yarn run coverage
TODOsTODOs
- Set up code styling with ESLint
- Provide better documentation on how to create your own template | https://www.npmjs.com/package/revoice | CC-MAIN-2021-49 | refinedweb | 850 | 50.36 |
Quadratic regression is a type of regression we can use to quantify the relationship between a predictor variable and a response variable when the true relationship is quadratic, which may look like a “U” or an upside-down “U” on a graph.
That is, when the predictor variable increases the response variable tends to increase as well, but after a certain point the response variable begins to decrease as the predictor variable keeps increasing.
This tutorial explains how to perform quadratic regression in Python.
Example: Quadratic Regression in Python
Suppose we have data on the number of hours worked per week and the reported happiness level (on a scale of 0-100) for 16 different people:
import numpy as np
import scipy.stats as stats

#define data
hours = [6, 9, 12, 12, 15, 21, 24, 24, 27, 30, 36, 39, 45, 48, 57, 60]
happ = [12, 18, 30, 42, 48, 78, 90, 96, 96, 90, 84, 78, 66, 54, 36, 24]
If we make a simple scatterplot of this data we can see that the relationship between the two variables is “U” shaped:
import matplotlib.pyplot as plt

#create scatterplot
plt.scatter(hours, happ)
plt.show()
As hours worked increases, happiness also increases but once hours worked passes around 35 hours per week happiness starts to decline.
Because of this “U” shape, this means quadratic regression is likely a good candidate to quantify the relationship between the two variables.
To actually perform quadratic regression, we can fit a polynomial regression model with a degree of 2 using the numpy.polyfit() function:
import numpy as np

#polynomial fit with degree = 2
model = np.poly1d(np.polyfit(hours, happ, 2))

#add fitted polynomial line to scatterplot
polyline = np.linspace(1, 60, 50)
plt.scatter(hours, happ)
plt.plot(polyline, model(polyline))
plt.show()
We can obtain the fitted polynomial regression equation by printing the model coefficients:
print(model)

        2
-0.107 x + 7.173 x - 30.25
The fitted quadratic regression equation is:
Happiness = -0.107(hours)² + 7.173(hours) – 30.25
We can use this equation to calculate the expected happiness level of an individual based on their hours worked. For example, the expected happiness level of someone who works 30 hours per week is:
Happiness = -0.107(30)² + 7.173(30) – 30.25 = 88.64

We can also write a short function to calculate the R-squared of the model:

#define function to calculate r-squared
def polyfit(x, y, degree):
    results = {}
    coeffs = np.polyfit(x, y, degree)
    p = np.poly1d(coeffs)
    #calculate r-squared
    yhat = p(x)
    ybar = np.sum(y)/len(y)
    ssreg = np.sum((yhat-ybar)**2)
    sstot = np.sum((y - ybar)**2)
    results['r_squared'] = ssreg / sstot
    return results

#find r-squared of polynomial model with degree = 2
polyfit(hours, happ, 2)

{'r_squared': 0.9092114182131691}
In this example, the R-squared of the model is 0.9092. This means that 90.92% of the variation in the reported happiness levels can be explained by the predictor variables.
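As a quick sanity check on the worked prediction above, the rounded coefficients printed by the model can be evaluated directly (plain Python, no fitting needed):

```python
# Coefficients as printed by the fitted model (rounded to 3 decimals).
a, b, c = -0.107, 7.173, -30.25

hours = 30
happiness = a * hours**2 + b * hours + c
print(round(happiness, 2))  # 88.64
```

Because the coefficients are rounded, this reproduces the worked example rather than the full-precision fit.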
Additional Resources
How to Perform Polynomial Regression in Python
How to Perform Quadratic Regression in R
How to Perform Quadratic Regression in Excel | https://www.statology.org/quadratic-regression-python/ | CC-MAIN-2021-21 | refinedweb | 492 | 55.24 |
Server-Side Asynchronous Web Methods
Matt Powell
Microsoft Corporation
October 2, 2002
Summary: Matt Powell shows how to make use of asynchronous Web methods on the server side to create high performance Microsoft ASP.NET Web services. (8 printed pages)
Introduction
In my September 3rd column, I wrote about calling Web services asynchronously over HTTP using the client-side capabilities of the Microsoft® .NET Framework. This approach is an extremely useful way to make calls to a Web service without locking up your application or spawning a bunch of background threads. Now we are going to look at asynchronous Web methods that provide similar capabilities on the server side. Asynchronous Web methods are similar to the high performance provided by the HSE_STATUS_PENDING approach to writing ISAPI extensions, but without the coding overhead of having to manage your own thread pool, and with all the benefits of running in managed code.
First let's consider normal, synchronous Microsoft® ASP.NET Web methods. The response for a synchronous Web method is sent when you return from the method. If it takes a relatively long period of time for a request to complete, then the thread that is processing the request will be in use until the method call is done. Unfortunately, most lengthy calls are due to something like a long database query, or perhaps a call to another Web service. For instance, if you make a database call, the current thread waits for the database call to complete. The thread has to simply wait around doing nothing until it hears back from its query. Similar issues arise when a thread waits for a call to a TCP socket or a backend Web service to complete.
Waiting threads are bad—particularly in stressed server scenarios. Waiting threads don't do anything productive, like servicing other requests. What we need is a way to start a lengthy background process on a server, but return the current thread to the ASP.NET process pool. Then, when the lengthy background process completes, we would like to have a callback function invoked so that we can finish processing the request and somehow signal the completion of the request to ASP.NET. As it turns out, this capability is provided by ASP.NET with asynchronous Web methods.
How Asynchronous Web Methods Work
When you write a typical ASP.NET Web service using Web methods, Microsoft® Visual Studio® .Net simply compiles your code to create the assembly that will be called when requests for its Web methods are received. The assembly itself doesn't know anything about SOAP. Therefore when your application is first launched, the ASMX handler must reflect over your assembly to determine which Web methods are exposed. For normal, synchronous requests, it is simply a matter of finding which methods have a WebMethod attribute associated with them, and setting up the logic to call the right method based off of the SOAPAction HTTP header.
For asynchronous requests, during reflection the ASMX handler looks for Web methods with a certain kind of signature that it recognizes as being asynchronous. In particular, it looks for a pair of methods that have the following rules:
- There is a BeginXXX and EndXXX Web method where XXX is any string that represents the name of the method you want to expose.
- The BeginXXX function returns an IAsyncResult interface and takes as its last two input parameters an AsyncCallback, and an object respectively.
- The EndXXX function takes as its only parameter an IAsyncResult interface.
- Both must be flagged with the WebMethod attribute.
If the ASMX handler finds two methods that meet all these requirements, then it will expose the method XXX in its WSDL as if it were a normal Web method. The method will accept the parameters defined before the AsyncCallback parameter in the signature for BeginXXX as input, and it will return what is returned by the EndXXX function. So if we had a Web method whose synchronous declaration looked like this:

[WebMethod]
public string LengthyProcedure(int milliseconds)

Then an asynchronous declaration would look like this:

[WebMethod]
public IAsyncResult BeginLengthyProcedure(int milliseconds, AsyncCallback cb, object state)

[WebMethod]
public string EndLengthyProcedure(IAsyncResult result)

The WSDL for each would be the same.
After the ASMX handler reflects on an assembly and detects an asynchronous Web method, it must handle requests for that method differently than it handles synchronous requests. Instead of calling a simple method, it calls the BeginXXX method. It deserializes the incoming request into the parameters to be passed to the function—as it does for synchronous requests—but it also passes the pointer to an internal callback function as the extra AsyncCallback parameter to the BeginXXX method.
This approach is purposefully similar to the asynchronous programming paradigm in the .NET Framework for Web service client applications. In the case of the client-side support for asynchronous Web service calls, you free up blocked threads for the client machine, while on the server side we free up blocked threads on the server machine. There are two key differences, however. First of all, instead of your server code calling the BeginXXX and EndXXX functions, the ASMX handler will call them instead. Secondly, you will write the code for the BeginXXX and EndXXX functions instead of using code generated by WSDL.EXE or the "Add Web Reference" Wizard in Visual Studio .NET. However, the result—freeing up threads so that they can perform some other processing—is the same.
After the ASMX handler calls your server's BeginXXX function, it will return the thread to the process thread pool so it can handle any other requests that are received. The HttpContext for the request will not be released yet. The ASMX handler will wait until the callback function that it passed to the BeginXXX function is called for it to finish processing the request.
Once the callback function is called, the ASMX handler will call the EndXXX function so that your Web method can complete any processing it needs to perform, and the return data can be supplied that will be serialized into the SOAP response. Only when the response is sent after the EndXXX function returns will the HttpContext for the request be released.
A Simple Asynchronous Web Method
To illustrate asynchronous Web methods, I start with a simple synchronous Web method called LengthyProcedure, whose code is shown below. We will then look at how to do the same thing asynchronously. LengthyProcedure simply blocks for the given number of milliseconds.

[WebService]
public class SyncWebService : System.Web.Services.WebService
{
    [System.Web.Services.WebMethod]
    public string LengthyProcedure(int milliseconds)
    {
        System.Threading.Thread.Sleep(milliseconds);
        return "Success";
    }
}
Now we will convert LengthyProcedure to an asynchronous Web method. We must create a BeginLengthyProcedure function and an EndLengthyProcedure function as we described earlier. Remember that our BeginLengthyProcedure call will need to return an IAsyncResult interface. In this case I am going to have our BeginLengthyProcedure call make an asynchronous method invocation using a delegate and the BeginInvoke method on that delegate. The callback function passed to BeginLengthyProcedure will be handed over to the BeginInvoke method on our delegate, and the IAsyncResult returned from BeginInvoke will be returned by the BeginLengthyProcedure method.
The EndLengthyProcedure method will be called when our delegate is completed. We will call the EndInvoke method on the delegate passing in the IAsyncResult that we received as input to the EndLengthyProcedure call. The returned string will be the string returned from our Web method. Here is the code:
[WebService]
public class AsyncWebService : System.Web.Services.WebService
{
    public delegate string LengthyProcedureAsyncStub(int milliseconds);

    public string LengthyProcedure(int milliseconds)
    {
        System.Threading.Thread.Sleep(milliseconds);
        return "Success";
    }

    public class MyState
    {
        public object previousState;
        public LengthyProcedureAsyncStub asyncStub;
    }

    [System.Web.Services.WebMethod]
    public IAsyncResult BeginLengthyProcedure(int milliseconds, AsyncCallback cb, object s)
    {
        LengthyProcedureAsyncStub stub = new LengthyProcedureAsyncStub(LengthyProcedure);
        MyState ms = new MyState();
        ms.previousState = s;
        ms.asyncStub = stub;
        return stub.BeginInvoke(milliseconds, cb, ms);
    }

    [System.Web.Services.WebMethod]
    public string EndLengthyProcedure(IAsyncResult call)
    {
        MyState ms = (MyState)call.AsyncState;
        return ms.asyncStub.EndInvoke(call);
    }
}
When Do Asynchronous Web Methods Make Sense
There are several issues to consider when determining whether asynchronous Web methods make sense for your application. First of all, the BeginXXX function for your call must return an IAsyncResult interface. IAsyncResults are returned from a number of asynchronous I/O operations for accessing streams, making Microsoft® Windows® Sockets calls, performing file I/O, interacting with other hardware devices, calling asynchronous methods, and of course calling other Web services. You will most likely want to get the IAsyncResult from one of these types of asynchronous operations, so that you can return it from your BeginXXX function. The other option is to create your own class that implements the IAsyncResult interface, but then you would more than likely be wrapping one of the previously mentioned I/O implementations anyway.
For almost all of the asynchronous operations we mentioned, using asynchronous Web methods to wrap the backend asynchronous call makes a lot of sense and will result in more efficient Web service code. The exception is when you make asynchronous method calls using delegates. Delegates will cause the asynchronous method calls to execute on a thread in the process thread pool. Unfortunately, these are the same threads used by the ASMX handler to service incoming requests. So unlike calls that are performing real I/O operations against hardware or networking resources, an asynchronous method call using delegates will still block one of the process threads during execution. You might as well block the original thread and have your Web method run synchronously.
The following example shows an asynchronous Web method that calls a backend Web service. It has flagged the BeginGetAge and EndGetAge methods with the WebMethod attribute so that it will run asynchronously. The code for this asynchronous Web method calls a backend Web method called UserInfoQuery to get the information it needs to return. The call to UserInfoQuery is performed asynchronously and is passed the AsyncCallback function that was passed to the BeginGetAge method. This will cause the internal callback function to be called when the backend request completes. The callback function will then call our EndGetAge method to complete the request. In this case the code is much simpler than our previous example, and has the added benefit that it is not launching the backend processing in the same thread pool that is servicing our middle tier Web method requests.
[WebService]
public class GetMyInfo : System.Web.Services.WebService
{
    [WebMethod]
    public IAsyncResult BeginGetAge(AsyncCallback cb, Object state)
    {
        // Invoke an asynchronous Web service call.
        localhost.UserInfoQuery proxy = new localhost.UserInfoQuery();
        return proxy.BeginGetUserInfo("User's Name", cb, proxy);
    }

    [WebMethod]
    public int EndGetAge(IAsyncResult res)
    {
        localhost.UserInfoQuery proxy = (localhost.UserInfoQuery)res.AsyncState;
        int age = proxy.EndGetUserInfo(res).age;
        // Do any additional processing on the results
        // from the Web service call here.
        return age;
    }
}
One of the most common types of I/O operations that occur within a Web method is a call to a SQL database. Unfortunately, Microsoft® ADO.NET does not have a good asynchronous calling mechanism defined at this time, and simply wrapping a SQL call in an asynchronous delegate call does not help in the efficiency department. Caching results is sometimes an option, but you should also consider using the Microsoft SQL Server 2000 Web Services Toolkit to expose your databases as a Web service. You will then be able to use the support in the .NET Framework for calling Web services asynchronously to query or update your database.
The lesson with accessing SQL through a Web service call is one that should be taken to heart for many of your backend resources. If you have been using TCP sockets to communicate with a Unix machine, or are accessing some of the other SQL platforms available through proprietary database drivers—or even if you have a resource you have been accessing using DCOM—you might consider using the numerous Web service toolkits that are available today to expose the resources as Web services.
One of the benefits of taking this approach is that you can take advantage of the advances in the client-side Web service infrastructure, such as asynchronous Web service calls with the .NET Framework. Thus you will get asynchronous calling capabilities for free, and your client-access mechanism will just happen to work efficiently with asynchronous Web methods.
Aggregating Data with an Asynchronous Web Method
Many Web services today access multiple resources on the backend and aggregate the information for the front-end Web service. Even though calling multiple backend resources adds complexity to the asynchronous Web method model, there are plenty of efficiencies gained.
Say your Web method is calling two backend Web services, Service A and Service B. From your BeginXXX function, you can call Service A asynchronously and Service B asynchronously. You should pass to each of these asynchronous calls your own callback function. In order to trigger completion of the Web method after you receive the results from both Service A and Service B, the callback function you supplied will verify both requests are complete, do any processing on the returned data, and then will call the callback function passed to your BeginXXX function. This will trigger the call to your EndXXX function, which upon its return will causes the asynchronous Web method to complete.
Conclusion
Asynchronous Web methods provide an efficient mechanism within ASP.NET Web services for invoking calls to backend services without causing precious threads in the process thread pool to block while doing nothing. In combination with asynchronous requests to backend resources, a server can maximize the number of simultaneous requests they can handle with their Web methods. You should consider this approach for developing high-performance Web service applications.
At Your Service | https://msdn.microsoft.com/en-us/library/aa480516.aspx | CC-MAIN-2015-11 | refinedweb | 2,239 | 53 |
Code-Behind and XAML
Code-behind is a term used to describe the code that is joined with the code that is created by a XAML processor when a XAML page is compiled into an application. This topic describes requirements for code-behind as well as an alternative inline code mechanism for code in XAML.
This topic assumes that you have read the XAML Overview and have some basic knowledge of the CLR and object-oriented programming. The code-behind partial class must be within the CLR namespace identified by x:Class, and you cannot qualify the name of an event handler used to instruct a XAML processor to wire up an event for a XAML-based application. Inline code is more limited than code-behind: it cannot easily access the page's APIs contained within the other CLR namespaces. You also cannot define multiple classes in the inline code, and all code entities must exist as a member or variable within the generated partial class. Other language-specific programming features, such as macros or #ifdef against global variables or build variables, are also not available. For more information, see x:Code XAML Directive Element.
What is the best way to store date/time in MongoDB? I've seen people use strings, integer timestamps, and Mongo datetime objects.
Solution 1
The best way is to store native JavaScript Date objects, which map onto BSON native Date objects.
> db.test.insert({date: ISODate()})
> db.test.insert({date: new Date()})
> db.test.find()
{ "_id" : ObjectId("..."), "date" : ISODate("2014-02-10T10:50:42.389Z") }
{ "_id" : ObjectId("..."), "date" : ISODate("2014-02-10T10:50:57.240Z") }
The native type supports a whole range of useful methods out of the box, which you can use in your map-reduce jobs, for example.
If you need to, you can easily convert Date objects to and from Unix timestamps 1), using the getTime() method and the Date(milliseconds) constructor, respectively.
1) Strictly speaking, the Unix timestamp is measured in seconds. The JavaScript Date object measures in milliseconds since the Unix epoch.
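The same round trip is just as easy from Python; a small standard-library sketch (pymongo stores these timezone-aware datetime objects as BSON Dates, which are also millisecond-based):

```python
from datetime import datetime, timezone

# A timezone-aware datetime, matching the first ISODate shown above.
dt = datetime(2014, 2, 10, 10, 50, 42, tzinfo=timezone.utc)

# datetime -> Unix timestamp in milliseconds.
millis = int(dt.timestamp() * 1000)

# ...and back again.
back = datetime.fromtimestamp(millis / 1000, tz=timezone.utc)
print(millis, back == dt)  # 1392029442000 True
```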
Solution 2
One datestamp is already in the _id object, representing insert time
So if the insert time is what you need, it's already there:
Login to mongodb shell
$ mongo 10.0.1.223
MongoDB shell version: 2.4.9
connecting to: 10.0.1.223/test
Create your database by inserting items
> db.penguins.insert({"penguin": "skipper"}) > db.penguins.insert({"penguin": "kowalski"}) >
Lets make that database the one we are on now
> use penguins switched to db penguins
Get the rows back:
> db.penguins.find() { "_id" : ObjectId("5498da1bf83a61f58ef6c6d5"), "penguin" : "skipper" } { "_id" : ObjectId("5498da28f83a61f58ef6c6d6"), "penguin" : "kowalski" }
Get each row in yyyy-MM-dd HH:mm:ss format:
> db.penguins.find().forEach(function (doc) {
      d = doc._id.getTimestamp();
      print(d.getFullYear() + "-" + (d.getMonth()+1) + "-" + d.getDate()
            + " " + d.getHours() + ":" + d.getMinutes() + ":" + d.getSeconds());
  })
2014-12-23 3:4:41
2014-12-23 3:4:53
If that last one-liner confuses you I have a walkthrough on how that works here:
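The same trick works outside the shell, since the first 4 bytes of an ObjectId are its creation time as a big-endian Unix timestamp in seconds. A driver-free Python sketch:

```python
from datetime import datetime, timezone

def objectid_timestamp(oid_hex):
    # First 8 hex chars = 4 bytes = creation time in Unix seconds.
    seconds = int(oid_hex[:8], 16)
    return datetime.fromtimestamp(seconds, tz=timezone.utc)

# One of the ObjectIds from the shell session above:
print(objectid_timestamp("5498da1bf83a61f58ef6c6d5"))  # a 2014-12-23 UTC datetime
```

With pymongo installed, `bson.ObjectId.generation_time` gives the same value without the manual decoding.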
Solution 3
I figured when you use pymongo, MongoDB will store the native Python datetime object as a Date field. This Date field in MongoDB could facilitate date-related queries later (e.g. querying intervals). Therefore, a code like this would work in Python:
from datetime import datetime

datetime_now = datetime.utcnow()
new_doc = db.content.insert_one({"updated": datetime_now})
After this, I can see in my database a field like the following (I am using Mongo Compass to view my db). Note how it is not stored as a string (no quotation) and it shows Date as the field type.

Regarding JavaScript usage, this should also work there. As long as you have the +00:00 (UTC in my case) or Z at the end of your date, JavaScript should be able to read the date properly with timezone information.
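Because the stored value is a real Date rather than a string, range queries compare chronologically. A sketch of the kind of interval filter this enables — the "updated" field name follows the insert above, and with a live collection you would pass the dict to db.content.find():

```python
from datetime import datetime, timedelta, timezone

now = datetime.now(timezone.utc)
week_ago = now - timedelta(days=7)

# Match documents whose "updated" Date falls within the last 7 days.
query = {"updated": {"$gte": week_ago, "$lt": now}}

print(sorted(query["updated"]))  # ['$gte', '$lt']
```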
Solution 4
Use the code below to create a datetime variable that can be assigned in a document (Note that I'm creating a datetime object, not a date object):
from datetime import datetime
import random

def random_date():
    my_year = random.randint(2020, 2022)
    my_month = random.randint(1, 12)
    my_day = random.randint(1, 28)
    return datetime(year=my_year, month=my_month, day=my_day,
                    hour=0, minute=0, second=0)

def insert_objects(collection):
    collection.insert_one({"mydate": random_date()})
Ever since Konrad Zuse invented Plankalkül back in 1948, we’ve just kept coming up with new ways of telling computers what we want out of them. With each new offering, programmers ask whether they should learn the language. How about C#? Is it smart to master this language and learn those C# interview questions to apply for a job?
Image CC, by Daniel Iversen, via Flickr
C# is one of the more versatile languages out there; in some respects truly multipurpose. It’s easy to work with and will serve you well. It’s great for game building, strikes a balance between simplicity and complexity, and is very scalable. Need some more convincing? We’ve got the lowdown for you, plus tips for how to get that perfect job.
What Is C#?
The first step on your journey to successfully answering C# interview questions (and getting a job, which, let’s face it, is the goal for most of us) is knowing exactly what it is.
What It Does
As with any language, its primary purpose is to define operations in a series so a computer knows how to accomplish a task (and what task you want it to do!) Most of the time, you’ll find people using C# tasks to work with text or with numbers. But in reality, absolutely anything a computer is physically capable of doing you can define and instruct using this language.
Developer: Microsoft
Microsoft developed and released this language back in 2002. They built it to be similar to Java in order to shorten the learning curve for programmers. It’s a general purpose language that is object-oriented.
Microsoft also went to great lengths to develop a language that would be easy to learn and also…(hold your breath)…fun to use! Naturally, it’s particularly suited for building apps for anything running a Microsoft platform, but it’s also useful if you work with the Unity Game engine and for mobile it can cross platforms.
Learning Curve: Shallow
This is a high-level language, meaning that it reads a bit like the language you actually speak rather than something an alien invented. That makes it simpler to master than some other, more complex languages.
C# is also pretty good at abstracting tasks for you. This fancy term just means it knows a lot of the tasks you want to accomplish and will automate them so you can focus on programming rather than on tedious details.
Scalability: Excellent
What does scalability mean? Scalability in a programming language is a lot different that scalability in a program itself. With a program, scalability means opening it up to massive numbers of users at one time. In a programming language, scalability refers to how you work with it.
Programming is a complex task, and the more threads you can work on simultaneously in your programming, the better. The problems come when you try to bring all those threads together. That’s when the rubber hits the road and you find out whether they different threads are going to interfere with each other or not.
What you want is a programming language that allows clean message passing and multiple threads in real-time that don’t interfere with each other. You get that with C#.
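The idea of threads coordinating through message passing rather than shared state can be sketched in a few lines (Python is used here for brevity; C# offers analogous primitives such as tasks and concurrent queues):

```python
import queue
import threading

q = queue.Queue()
results = []

def worker():
    # Work arrives as messages on a thread-safe queue, so the two
    # threads never step on each other's state.
    while True:
        item = q.get()
        if item is None:          # sentinel: no more work
            break
        results.append(item * item)
        q.task_done()

t = threading.Thread(target=worker)
t.start()
for n in range(5):
    q.put(n)
q.join()                          # wait until every message is processed
q.put(None)
t.join()
print(sorted(results))            # [0, 1, 4, 9, 16]
```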
Community Support: Great
With any programming language, the more people that are using it and talking about it, the better it is for everyone. That way we get to share our triumphs, tricks, failures, and even the odd quirks we found by accident. C# has a good user base, and you’ll be able to get plenty of help in the StackOverflow online community and Meetup.com groups. Currently, there are well over 500 Meetup groups talking about C#.
And since the Unity Game engine uses C#, and many people use this for cross-platform development, you’ll find lots of help on Unity forums from people who nailed their C# interview questions and can help you do the same.
How Does C# Compare?
Comparing programming languages is always tricky business. They are, after all, designed to do different things. Some languages are general purpose while others are made for highly specific tasks. There are five types of programming languages, broadly speaking.
Procedural
You’re interested in languages (programming languages, anyway) so maybe you’ve already guessed that procedural means a language tells the computer how to use a specific procedure to solve a problem.
Example: Your mother tells you to get her keys out of her purse, go upstairs, use the big key to unlock the secret room, find the third floorboard from the window, pry it up, use the small key to unlock the safe, remove the green file, lock everything back up, and bring it to her. (Surprise, honey! You’re adopted! Here are the papers). You have to do every step of the procedure the way she tells you or the process won’t work.
Functional
This style of programming language is usually contrasted with procedural languages. Functional languages seek to take the minimum number of mathematical expressions and use them to control an infinite number of variations.
Example: Your mother is sad that she had to tell you you’re adopted, so she asks you to use tea leaves, water, and milk to make her some tea. She doesn’t tell you every step along the way, like “get out a pan” and “turn on the faucet,” because you already know the basic “expression” for how to make a liquid boiled mixture.
Want to know more? Check out this explanation from Coding Tech.
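The contrast shows up even in tiny programs; a toy sketch (Python used for brevity), where one reusable expression replaces spelled-out steps:

```python
from functools import reduce

# Functional style: build the result by composing small expressions,
# rather than mutating state step by step.
sum_of_squares = lambda xs: reduce(lambda acc, x: acc + x * x, xs, 0)

print(sum_of_squares([1, 2, 3]))  # 14
```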
Object-Oriented
Now we’re getting somewhere in your quest to nail those C# interview questions. C# is one of the object-oriented languages, though it is also functional. These languages look at the world as objects which have specific characteristics and try to find ways to build new things using these known specifics.
Example: Your mother forgot to mention she wants sweet tea, so she sends you to the kitchen for sweetener. She can’t remember the name, but she does remember it comes in small, colored, rectangular packets of paper with printing on the outside and something sweet on the inside.
You suggest Splenda, Nutrasweet, Sweet ‘N Low, Truvia…none of those are what she wants. So you design a new type of sweetener, CariesSweet, and put it in a small, colored, rectangular packet of paper with printing on the outside. It’s slightly different, but she knows just what to do with it because it was built on the object model of the others.
Here’s a bit more on the difference between procedural and object-oriented programming:
Scripting
These languages are very specific directions, and while they’re easier to learn than others, they are also very easy to make a mess with. They have elements of both object-oriented and procedural languages. Scripting languages are not often used to build big, complex programs but rather to bear down and give specific instructions to make the most enhanced features run properly.
Example: Your mother has been poisoned by your CariesSweet and needs to go to the hospital. You’ve never driven the car before, so she writes down for you: “Push gas pedal (on right) to go. Push brake pedal (on left) to stop. Turn wheel to steer right or left.”
The rest of the process of getting her to the hospital is already written in your code. You know how to get her up and into the car, how to find your way there, and how to get her out. You interpret her line-by-line instructions in real time as you sit in the car and use it.
Want to understand scripting better? Check out this helpful video.
Logic
These languages let programmers tell a computer to do “if-then” tasks. They don’t really tell computers how to do something. Instead, they tell them the way in which to do something the computer already knows how to do.
Example: The doctor says your mother will be fine and you should take her home. She tells you to take care of her; but if your mother’s elbow breaks out in bright red spots, call 911 immediately. You already know how to take care of your mother, but the doctor has just told you specifically how to do so “if” something is true.
Languages And How They Compare
Jobs Where C# Is Used
Is it worth it learning C# interview questions and how to answer them? Is working with C# a good career choice? Let’s see:
What Can You Do With It?
C# is a great all-around tool for a software developer to have under their belt. With it, you can design web applications, work on mobile applications across platforms, and also make desktop apps for Windows. You can also design games with C#. You can work with Android or iOS, apps or games, websites or desktop clients. That’s a lot of variety, and if you can master those C# interview questions, all kinds of doors will open for you.
Image CC0, by Bhautik Joshi, via Flickr
As a C# developer, you can work in development, tech support, or green field development. You can design, troubleshoot, or test. You can work in big finance companies, major corporations, or directly in software development.
Which Companies Use It?
Here’s the great part: all kinds. This doesn’t mean every company uses C#; it means every TYPE of company uses it. If you want to work for a startup, you can find one easily.
Image via StackShare
If you prefer working for a giant corporate conglomerate where no one knows your name (but you get in-house yoga), you can easily find a demand for C# from these guys. If you want to set out on your own and work freelance, you can step into all kinds of roles with a background in C#.
What About Job Security?
Because such a wide variety of companies are using C#, your prospects in learning this language are very solid. One of the most encouraging signs is that a lot of the bigger financing corporations are choosing to use it; that means there are high-salary jobs in play here.
C# is also used worldwide, so if you don’t love where you live you can move along. Don’t forget that you can always specialize in a niche market as it gets hot: but you want to have a solid foundation to fall back on when today’s fad is tomorrow’s trash. C# can be that solid foundation, and that’s reason enough to learn how to nail those C# interview questions.
OK, But What About Other Programs?
You’ve only got so much time, and you aren’t sure whether to work on C# interview questions, learn Java, concentrate on Ruby, or go with Javascript. You’re thinking: we like that. Let’s hash it all out, shall we?
Java
So this language is quite similar to C#, and if you’re just breaking into the programming world you might be tempted to spring in this direction just because the name is so famous.
Learning Java is NOT a bad career decision. Let’s just get that out to the way. Also, we have to admit that Java might give you better support if you’re working with big data frameworks. However, it falls down when you’re talking about building desktop clients.
Not only that, but Java features are starting to fall behind C#. It’s been a big player for a long time, but that’s slowly changing. If you’re just starting out, C# might be your VHS and Java your Betamax. That’s a bold claim, but consider this: Nothing works like the C languages when it comes to IoT devices and wearables. The future is smart tech, and that means C is likely to expand.
Javascript
Image CC0, by Christiaan Colen, via Flickr
This one is in very high demand right now, and it’s easy to learn. React Native lets you use it for mobile apps, and you can do desktop with Electron in Javascript.
However, it’s clunky. It wasn’t meant to work with enormous codebases, and after a while you’ll get tired of doing the front-end framework over and over again, reinventing the wheel each time.
It doesn’t hurt to have some exposure to Javascript, but our advice is to put this in the secondary priority category in favor of C#. Javascript is not being used to build the most popular modern software and eventually there won’t be much legacy software that needs it.
Ruby
Ruby has some good things going for it, and if you want to work remotely there are typically more jobs available in Ruby than in C#. But even the most die-hard Ruby devotee will usually admit that you get better overall performance out of C#.
C# allows you to do static typing, which saves you time and allows you to refactor a lot more quickly. Again, there’s nothing wrong with moving to Ruby, but you’re likely to get more out of your C# training.
Programming Language Popularity Trends
Top Languages in 2018
If the numbers in those don’t seem too impressive, consider this: Back in 2002, C# was in the #11 spot of most popular languages. Java, meanwhile, has been #1 for a long time, but has recently slipped to third place.
Getting Your C# Job
One of the draws of programming as a career is that you don’t necessarily need a degree in order to succeed. You do, however, need training. Here’s what to do:
Learn Your Language
Start with the basics, right? Get some books, take some classes, and get familiar with C# so you have some hope of answering C# interview questions.
Get Experience
Image CC0, by Roland Tanglao, via Flickr
In any job, this is the Catch 22. You have to have
experience to get a job, and you have to have a job to get experience. So what do you do? Here’s some advice:
- Develop a great, complex project. Put this up on Github so you can send a link with your resume.
- Develop a few smaller projects to go with it. Just remember that it’s better to have fewer perfect projects than a whole lot of mediocre ones.
- Contribute to some open source projects. This will beef up your resume and present you as a helpful and established member of the coding community.
How To Get Ready For An Interview
When you get a job, no matter what area you want to work in, you have to present yourself the right way. No matter how amazing your coding skills are or how well you answer C# interview questions, if you don’t come across as someone a company actually wants to hire you’re not going to get the job.
Decide What You Want
Image CC0, via 477th Fighter Group
Are you interested in working with software? Web development? In-house or freelance? Big company or small startup? Decide what you’d like to do, as well as what you’re willing to do (because we don’t always get what we want), and then move on to step two.
Do Some Research
You know what companies like? Job applicants that know something about the company and want to work there. If you do some research on the company and the position, you show that you’re organized and motivated. You show that you are capable of learning. You show that you’ll fit into the corporate culture of the place you’re looking to land.
Build Your Resume and Portfolio
We tech people tend to minimize the value of a good resume. We figure our work will speak for itself. We figure that if we can answer the C# interview questions well, we’ll be all set.
That’s partially true, but it’s not the whole story. You have to get your foot in the door so someone will ask you those C# questions, and nothing will get you in there faster than a good resume. This is especially true if you’re looking to work for something like a finance company or a university rather than directly for a software development company.
How should you do your resume? Here are some tips:
KEEP IT SIMPLE
Fancy colors and complicated layouts aren’t going to get you noticed. They’re going to get you overlooked. Keep your fonts big enough to read but not so big it looks like a preschool project. 10-12pt is the right size.
BE EASY TO FIND
Your contact info needs to be correct and clear. If you’re still using your high school email, [email protected], or you have a weird nerd handle like hackslash1995, get a professional email address right now.
START WITH A CLEAR OBJECTIVE
If someone picks up your resume, this is one part they definitely will read. This isn’t the place to be funny or sarcastic. Be clear, direct, and simple. Feel free to adjust it to show you can meet your employer’s needs.
SHOW EDUCATION AND EXPERIENCE CLEARLY
Use active verbs to describe what you’ve done. Avoid phrases like “I was in charge of…” and stick with things like “Developed a flight simulator…”
DON’T INCLUDE IRRELEVANCIES
Your mom is mighty proud of your spelling bee tournament win, but this doesn’t matter to your prospective employer. Keep things relevant unless they show mastery of a soft skill that is useful on the job. If you successfully led a team, for example, flag that up.
USE NUMBERS WHERE YOU CAN
Numbers are specific. Did you increase profits? By how much? Did you make something work more efficiently? How much more efficiently?
DON’T ADD YOUR PICTURE
Unless you’re applying for a job as a model, no one needs to see your face before an interview. If they do, you probably don’t want to work for them anyway.
Practice for the Real Thing
Once you line up an interview, you need to buckle down and prepare. It’s not enough that you can nail those C# interview questions: you also have to be able to present well with everything else.
Keep Your Brain Coding
Practice what you’ll be asked and practice your coding skills. Brush up on chapters of your training books that you haven’t dealt with in a while. Make sure your brain is knee-deep in the specifics of coding before you set out.
Do a Mock Interview
You don’t have to do a bunch of these: just one or two is fine. Make sure you get help from a friend who actually does interviews rather than from your mom or Don your drinking buddy.
Practice Non-Tech Skills
So you can code beautifully: that’s great! But if the interviewer asks you to talk about yourself, your previous experience on a team, what motivates you, or how you have learned from previous experience, you need to be able to give competent answers to these questions.
What Not to Do
Yup. There are some things you should definitely avoid when preparing for the interview.
TRY TO LEARN A NEW LANGUAGE
No, you can’t master Python by next week. It’s much better to be a virtual C# ninja and impress them with your answers to C# interview questions than to be able to “sort of code” in three languages.
SKIP SLEEP
Yes, you want to bone up on skills. But remember how not sleeping didn’t really help you in your high school exams? You’re even older now, and skipping sleep will not help even more than it didn’t help before. (Don’t show your high school English teacher that last sentence).
FORGET TO FOCUS ON THE COMPANY
Tailor everything about your interview prep to the specific job and the specific company you’re going to be talking to.
BURN OUT
After an hour or so, give yourself a break. Target your strengths rather than try to learn new stuff. And don’t jump around like a frantic frightened rabbit from book to book and website to website. There’s too much out there. Instead, concentrate on a few good resources of C# interview questions.
INVEST YOUR SELF-WORTH IN ONE INTERVIEW
There will be more. You will get a job. Sometimes people don’t get jobs, not because they’re stupid or unworthy, but just because they didn’t. If you don’t get this job, don’t let it throw you into a funk.
Show Up Looking Professional
By professional, we do not necessarily mean you have to wear a suit. Lots of tech companies are super casual. The way to do an interview right is to find out what the company wears to work and wear something one step smarter. If everyone wears suits, though, stick with a suit: suits are the most formal of business wear.
Just make sure it fits. A suit that doesn’t fit tells your employer you’re not very self-aware. If everyone is casual, you go casual, but make it smart casual, not “ripped-jeans-with-wife-beater” casual.
Here are a few more tips to bear in mind:
Men, here are your tips for getting the dress right.
Women, here’s what you need to know to get the job.
Practice Your C# Interview Questions
There are a lot of possible questions that could come up, but here are 10 you should definitely master:
1. What Will the Following Code Snippet Output?
using System;
public class Program
{
public static void Main(string[] args)
{
Console.WriteLine(Math.Round(6.5));
Console.WriteLine(Math.Round(11.5));
}
}
6 12
2. What Will the Following Code Snippet Output?
using System;
public class Program
{
public static void Main(string[] args)
{
byte
num = 100;
dynamic val = num;
Console.WriteLine(val.GetType());
val += 100;
Console.WriteLine(val.GetType());
}
}
System.Byte
System.Byte
3. What Will the Following Code Snippet Output?
using System;
using System.Collections.Generic;
namespace
TechBeamers
{
delegate string del(string str);
class sample
{
public static string DelegateSample(string a)
{
return a.Replace(‘,’, ‘*’);
}
}
public class InterviewProgram
{
public static void Main(string[] args)
{
del str1 = new del(sample.DelegateSample);
string str = str1(“Try,,CariesSweet,,not,,Poison”);
Console.WriteLine(str);
}
}
}
Try**CariesSweet**not**Poison
4. Describe Dependency Injection
Dependency injection unlinks classes so they are no longer directly dependent on one another. There are three ways to do this in C#: method, property, or constructor dependency.
5. You Have a Word String with $ Symbols, Like: “My Mother Drinks $ Tea $ with Sweetener $” How Do You Remove the Second and Third $ from This String?
Use an expression like:
string s = “like for example $ you don’t have $ network $ access”;
Regex rgx = new Regex(“\$\s+”);
s = Regex.Replace(s, @”($s+.*?)$s+”, “$1$$”);
Console.WriteLine(“string is: {0}”,s);
6. Can You Store Mixed Datatypes in One Array? How?
Yes, because an array can be an object storing any datatype and class object. Here’s an example:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
namespace ConsoleApplication8
{
class Program
{
class Customer
{
public int ID { get; set; }
public string Name { get; set; }
public override string ToString()
{
return this.Name;
}
}
static void Main(string[] args)
{
object[] array = new object[3];
array[0] = 101;
array[1] = “C#”;
Customer c = new Customer();
c.ID = 55;
c.Name = “Manish”;
array[2] = c;
foreach (object obj in array)
{
Console.WriteLine(obj);
}
Console.ReadLine();
}
}
}
7. What Will the Following Code Snippet Output?
class Program {
static String location;
static DateTime time;
static void Main() {
Console.WriteLine(location == null ? “location is null” : location);
Console.WriteLine(time == null ? “time is null” : time.ToString());
}
}
location is null
1/1/0001 12:00:00 AM
8. Is This a Valid Comparison? Explain
static DateTime time;
/* … */
if (time == null)
{
/* do something */
}
is is allowed by the compiler, but it could lead to bugs. The == will try to get a common type on both sides so it can compare, and you will get the expected result. However, you might also get some unexpected behavior, and though valid, the result is false.
9. What Is the Algorithm to Check If a Number Is Prime?
/*
* C# Program to Check Whether the Given Number is a Prime number if so then
* Display its Largest Factor
*/
using System;
namespace example
{
class prime
{
public static void Main()
{
Console.Write(“Enter a Number :
“);
int num;
num = Convert.ToInt32(Console.ReadLine());
int k;
k = 0;
for (int i = 1; i <= num/2; i++)
{
if (num % i == 0)
{
k++;
}
}
if (k == 2)
{
Console.WriteLine(“Entered Number is a Prime Number and the Largest Factor is {0}”,num);
}
else
{
Console.WriteLine(“Not a Prime Number”);
}
Console.ReadLine();
}
}
}
10. How Do You Check Whether a Number Is an Armstrong Number?
/*
* C# Program to Check Whether the Entered Number is an Armstrong Number or Not
*/
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
namespace ConsoleApplication6
{
class Program
{
static void Main(string[] args)
{
int number, remainder, sum = 0;
Console.Write(“enter the Number”);
number = int.Parse(Console.ReadLine());
for (int i = number; i > 0; i = i / 10)
{
remainder = i % 10;
sum = sum + remainder*remainder*remainder;
}
if (sum == number)
{
Console.Write(“Entered Number is an Armstrong Number”);
}
else
Console.Write(“Entered Number is not an Armstrong Number”);
Console.ReadLine();
}
}
}
You can find plenty more example questions that appear frequently in C# interviews by checking out this video.
After The Interview
Here’s what to do once you’re done:
Write Down the Experience
Record what you were asked, how you answered, and questions you wished you’d asked and didn’t. This could be very helpful in follow-up interviews or interviews with another company.
Know What to Expect
If they said they would be in touch within 10 days, then wait 10 days and then reach out with a quick email to the hiring manager. Make it polite and short.
Talk to Your References
If a company might be contacting someone on your reference list, give that person a heads up just in case.
Prepare for Salary Negotiations
Make sure you know the average salary for the position generally, but specifically for your region. The more experience you have, the more you can ask for: you’ll need around five years experience for it to make a difference.
Ask for a bit more than you want so you can fall back to your desired salary; but don’t oversell or undersell yourself.
Getting Your Dream Job
Whether you want to work for a big tech company or do things entirely on your own terms. C# is a way to get there. Prepare yourself for the interview, nail those C# interview questions, and follow up sensibly. You’ll be surprised what a difference some preparation can make.
| https://csharp-station.com/csharp-interview-questions/ | CC-MAIN-2021-31 | refinedweb | 4,519 | 72.87 |
But there’s a trick to keep them around → show them an image first!
Check this out:
See how you barely notice the page refresh? That’s on purpose.
The main
App.render() method is wrapped in a conditional statement that checks if the data is available. If it is, then we render the interactive visualization; if it isn’t, then we render a screenshot and default descriptions.
// src/App.js render() { if (this.state.techSalaries.length < 1) { return ( <Preloader /> ); } // render the main dataviz }
The
Preloader component can be a functional stateless component, like this:
// src/App.js import StaticViz from './preloading.png'; const Preloader = () => ( <div className="App container"> <h1>The average H1B in tech pays $86,164/year</h1> <p className="lead">Since 2012 the US tech industry has sponsored 176,075 H1B work visas. Most of them paid <b>$60,660 to $111,668</b> per year (1 standard deviation). <span>The best city for an H1B is <b>Kirkland, WA</b> with an average individual salary <b>$39,465 above local household median</b>. Median household salary is a good proxy for cost of living in an area.</span></p> <img src={StaticViz} style={{width: '100%'}} /> <h2 className="text-center">Loading data ...</h2> </div> );
The
Preloader component mimics the structure of your normal dataviz, but it’s hardcoded. The information is real, and it’s what people are looking for, but it doesn’t need the dataset to render.
The easiest way to get this is to first build your real dataviz, then screenshot the picture, and then copy-paste the descriptions if they’re dynamic. Without dynamic descriptions, half your job is done already.
That’s about it, really:
- render an image
- wait for data to load
- replace image with dynamic dataviz
It sounds dumb, but increases user satisfaction 341324%.
If it works …
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/a-trick-to-make-your-big-dataviz-load-super-fast-a | CC-MAIN-2018-30 | refinedweb | 319 | 57.77 |
pygdal 1.9.2.0
Virtualenv and setuptools friendly version of standard GDAL python bindings
Virtualenv and setuptools friendly version of standard GDAL python bindings.
This package is for you if you had problems installing GDAL in your virtualenv. You can install GDAL into your virtualenv using this package but you still need to install GDAL library and its header files on your system. On Ubuntu it can be done this way:
$ sudo apt-get install libgdal1-dev
Version of the same package, and GDAL, so that if you have installed GDAL 1.8.1 you need to install the version 1.8.1 of this package:
$ gdal-config --version 1.8.1 $ git clone git@github.com:dezhin/pygdal.git $ cd pygdal $ virtualenv --no-site-packages env $ env/bin/pip install 1.8.1/
Or you can install package directly from PyPi:
$ virtualenv --no-site-packages env $ env/bin/pip install pygdal==1.8.1
Only a small set of GDAL versions is currently supported. At this point they are: 1.8.1, 1.9.2 and 1.10.1. Package numpy is also listed as a dependency (using setup_requires and install_requires directives), so you do not need to install it before installing GDAL.
After package is installed you can use is same way as standard GDAL bindings:
from osgeo import gdal
- Downloads (All Versions):
- 58 downloads in the last day
- 447 downloads in the last week
- 1396 downloads in the last month
- Author: Aleksandr Dezhin
- License: MIT
- Categories
- Development Status :: 5 - Production/Stable
- Intended Audience :: Developers
- Intended Audience :: Science/Research
- License :: OSI Approved :: MIT License
- Operating System :: OS Independent
- Programming Language :: C
- Programming Language :: C++
- Programming Language :: Python :: 2
- Topic :: Scientific/Engineering :: GIS
- Topic :: Scientific/Engineering :: Information Analysis
- Package Index Owner: dezhin
- DOAP record: pygdal-1.9.2.0.xml | https://pypi.python.org/pypi/pygdal/1.9.2.0 | CC-MAIN-2015-32 | refinedweb | 302 | 52.8 |
28 July 2011 11:54 [Source: ICIS news]
SINGAPORE (ICIS)--Asian and Middle Eastern producers have raised their offers for coating-grade low density polyethylene (LDPE) by at least $70/tonne (€49m) in ?xml:namespace>
Asian and Middle Eastern producers offered cargoes this week at $1,750-1,850/tonne (cost and freight)
Asia’s supply of coating-grade LDPE is expected to be tighter next month because a major producer in
Chinese distributors have also raised their offers for locally produced material in north
Expectations that regional supply would be tight in August also supported LDPE coating grade prices in Thailand, where locally produced material was offered at a Thai baht (Bt) 10/kg ($0.34/kg, $340/tonne) premium over LDPE film grade, local distributors said.
In the Thai market, locally produced LDPE coating grade was offered at Bt61/kg this week, the distributors said.
($1 = CNY6.44 / $1 = Bt29.73 / $1 = €0.70)
Additional reporting by Angie Li | http://www.icis.com/Articles/2011/07/28/9480556/asia-mideast-coating-grade-ldpe-makers-hike-prices-supply-tight.html | CC-MAIN-2015-11 | refinedweb | 161 | 50.87 |
On Thu, 2009-09-24 at 23:13 +0100, Don Stewart wrote: > >). > > As a side issue, the get/put primitives on Data.Binary should be > efficient (though they're about twice as fast when specialized to a > strict bytestring... stay tuned for a package in this area). They are efficient within the constraint of doing byte reads and reconstructing a multi-byte word using bit twiddling. eg: getWord16be :: Get Word16 getWord16be = do s <- readN 2 id return $! (fromIntegral (s `B.index` 0) `shiftl_w16` 8) .|. (fromIntegral (s `B.index` 1)) Where as reading an aligned word directly is rather faster. The problem is that the binary API cannot guarantee alignment so we have to be pessimistic. We could do better on machines that are tolerant of misaligned memory accesses such as x86. We'd need to use cpp to switch between two implementations depending on if the arch supports misaligned memory access and if it's big or little endian. #ifdef ARCH_ALLOWS_MISALIGNED_MEMORY_ACCESS #ifdef ARCH_LITTLE_ENDIAN getWord32le = getWord32host #else getWord32le = ... #endif etc Note also that currently the host order binary ops are not documented as requiring alignment, but they do. They will fail eg on sparc or ppc for misaligned access. Duncan | http://www.haskell.org/pipermail/haskell-prime/2009-September/003024.html | CC-MAIN-2014-23 | refinedweb | 199 | 67.96 |
Typed and edited by Juan Schoch. It was Vitvan’s wish to reprint the complete
works of Gerald Massey (i.e. see The Problem of Good and Evil). Alvin Boyd Kuhn
in The Lost Meaning of Death says of Massey that he was “the sole Egyptologist in
the ranks of scholars who measurably understood what the sages of Egypt were
talking about,” saying in passing, “that the renowned Egyptologists have missed the
import of that body of sublime material utterly. Massey came nearer the inner
sanctuary of understanding than any other.” This disclaimer is not to be removed.
Any donations, support, comments are not only wanted but welcome. I can be
contacted at pc93@phlo.net. I include this message in the case that it be your will to
contribute something, i.e. for continuance of the work, i.e., for easier access to more
information, seeking out and purchasing of books, donating of textual materials, etc.
Thank you and much exuberance. Ref: Juan Schoch
members.tripod.com/~pc93
Join gnosis284! - Send e-mail to:
gnosis284-subscribe@yahoogroups.com
ANCIENT EGYPT
THE LIGHT OF THE WORLD
A Work of Reclamation and
Restitution in Twelve Books
BY
GERALD MASSEY.
AUTHOR OF
“A BOOK OF THE BEGINNINGS” AND “THE NATURAL GENESIS”
VOLUME I.
Leeds
CELEPHAÏS PRESS.
2008
It may have been a Million years ago.
BOOK                                                                PAGE
I. SIGN-LANGUAGE AND MYTHOLOGY AS PRIMITIVE
     MODES OF REPRESENTATION . . . . . . 1

                                                                    PAGE
I. APT, THE FIRST GREAT MOTHER . . . . . . . 124
[The Errata page is omitted; these corrections have been entered into the text]
ANCIENT EGYPT
THE LIGHT OF THE WORLD
BOOK I

[…] likeness. But, then, ignorant as he might be, he was more or less
the heir to human faculty as it is manifested in all its triumphs over
external nature at the present time. Now, it has been and still is a
prevalent and practically universal assumption that the same mental
standpoint […] conceive, ch. XXIV, 184.)
“In early philosophy throughout the world,” says Mr. Tylor, “the sun
and moon are alive and as it were human in their nature.” Professor
Max Müller […], p. […] subjective, or of ourselves.” (Ib., p. 495.)
Illustration[…]ors, amo[…]graphic, […] then the primary representation
of the Nature-Powers (which became the later divinities) ought to have
been anthropomorphic, and the likeness reflected in the mirror of the
most ancient mythologies should have been human. Whereas the Powers
and Divinities were first represented […] I,” represented by the
animals […] that the appearance could be mistaken for a primitive
belief that the animals were his ancestors. But the powers
first perceived in external nature were not only unlike the human;
they were very emphatically and distinctly more than human, and
therefore could not be adequately expressed by features recognisable
proportion[…]rehend […] representation […] Conceptual and early man
had possessed the power to impose the likeness of human personality
upon external phenomena it would have been in the image of the Male,
as a type or in the types of power; whereas the primal human
personification is in the likeness of the female. The
great Mother as the primal Parent is a Universal type. There could
be no divine Father in Heaven until the fatherhood was individualised
on earth. Again, if primitive men had been able to impose the human
likeness on the Mother-Nature the typical Wet-nurse would have been
a woman. But it is not so; the Woman comes last. She was preceded
[…] external […]ordial[…]antly worshipped; and thus they became the
coins as it were in the current medium of exchange for the expression
of primitive thought or feeling.
Sign-language includes the gesture-signs by which the mysteries were
danced or otherwise dramatized in Africa by the Pygmies and Bushmen;
in Totemism, in Fetishism, and in hieroglyphic symbols; very little of
which language has been read by those who are continually […]
connection betwixt words and things, also betwixt sounds and words, in
a very primitive range of human thought. There is no other such a
record known in all the world. They consist largely of human […]
prim[…] origin and world-wide authorised […], p. […] zoötypes
[…] phenomena was based upon this primitive system of thought and
expression, and how the things that were thought and expressed of old
in this language continue the primary stratum of what is called
“Mythology” to-day.
In the most primitive phase Mythology is a mode of representing […]
suggestive […] desperate leap and landed safely in the Moon, where she
has remained to this day. (Wilson, Trans. of Ethnol. Society, 1866,
New Series, v. 4, p. […]) […] transformation. […] grasshopper, then,
which uttered a voice that did not come from its
mouth, was a living type of superhuman power. And being an image of
mystery and superhuman power, it was also considered a fitting symbol
of Kagn, the Bushman Creator, or Great Spirit of creative mystery.
Moreover, the grasshopper made his music and revealed his mystery in
dancing; and the religious mysteries of Kagn were performed with
dancing or in the grasshopper’s dance. Thus the Initiates in the
mysteries of the Mantis are identical with the Egyptian Mystæ who
transformed in trance, as well as leaped and danced in the mysteries.
The Frog and […]ised them in their dances. The Leapers were the
Dancers, and the leaping Mantis, the Grasshopper, the Frog, the Hare,
were amongst the pre-human prototypes.
The frog is still known in popular weather-wisdom as the prophet
[…], p. 82, Eng. tr.). The Frog was a prophet of Rain in some
countries, and of spring-time in others. In Egypt it was the prophet
of the Inundation, hence Hekat was a Consort of Khnum, the Lord of the
Inundation, and King[…]pole ΕΓΩ ΕΙΜΙ ΑΝΑCΤΑCΙC, “I am the
Resurrection.” (Lanzone, Dizionario, p. 853; Budge, The Mummy,
p. 266.) In this figure the lamp is an equivalent for the rising Sun,
and the frog upon it is the type of Ptah, who in his solar character
was the Resurrection and the life in the Mythology before the image
passed into the Eschatology,
[…] Lamas had an idea that the earth rested on a Golden Frog, and that
when the Frog stretched out its foot there was an Earthquake. (“A
Journey from St. Petersburgh to Pekin in the year 1719.” Pinkerton’s
Voyages, v. 7, p. […]) […] (Dennys, Folk-Lore of China, p. 117.) As
Egyptian, the Mother
of the West was the Goddess who received the setting Sun and
reproduced its light. The immortal liquor is the Solar Light. This
was stolen from the Moon. Chang-ngo is equivalent to the frog-headed
Hekat who represented the resurrection. The frog, in Egypt, was a
sign of “myriads” as well as of transformation. In the Moon it would
denote myriads of renewals when periodic repetition was a mode of
immortality. Hekat the frog-headed is the original Cinderella. She
makes her transformation into Sati, the Lady of Light, whose name is
written with an Arrow. Thus, to mention only a few of the lunar
types, the Goddess Hekat represented the moon and its transformation
as the Frog. Taht and his Cynocephalus represented the Man and his
dog in the Moon. Osiris represented the […] (doubtless because of the
bird’s keen scent for blood). The sheathed claw is a determinative of
peaceful actions. The hinder part of the Lioness denotes the great
magical power. The Tail of […] by means of these living
object-pictures cannot now be measured, but the moralising
S IGN -L ANGUAGE AND M YTHOLOGY 13
power or soul in Nature before there was any representation of the
human Soul or Ancestral Spirit in the human form. Hence we are
told that when twins are born the Batavians believe that one of the
pair is a crocodile. Mr. Spencer accepts the “belief” and asks, “May
we not conclude that twins, of whom one gained the name of crocodile,
gave rise to the recognised by the Congo
natives as a type of Soul. Miss Kingsley tells of a Witch-Doctor
who administered emetics to certain of his patients and brought away
young crocodiles. She relates that a Witch-Doctor had been opened
after death, when a winged Lizard-like thing was found in his inside
which Batanga said was his power. The power being another name
for his Soul.
Mr. Spencer the New. The trans-
formation was visible and invariable, and the product of transformation
was always the same in kind. There was no sign or suggestion of an
unlimited possibility in metamorphosis. Neither was there ever a
race of savages who did think or believe (in the words of Mr. Spencer)
“that any kind of creature may be transformed into any other,” no more
than there ever were boys who believed that any kind of bird could
lay any other kind of bird’s egg. They are too good observers for any
such self-delusion as that.
Mythical representation did not begin with “stories of human
adventure,” as Mr. Spencer puts it, nor with human figures at all, but
with, and to put
the hives into mourning. The present writer has known the house-
wife to sally forth into the garden with warming-pan and key and
strips of crape to “tell the Bees,” lest they should take flight, when one
of the inmates of the house hadalm-
ing the dead. Thus the Bee, as a zoötype of the Soul, became a
messenger of the dead and a mode of communication with the ances-
tral for ever over the living”:—
“ Bienchen, unser Herr ist todt,
Verlass mich nicht in meiner Noth.”
(“Little bee, our master is dead; forsake me not in my need.”)
(Gubernatis, Zoological Mythology, v. 2, p.
of con-
tinued-
w
humour. When the wedded pair were going to
bed she would not undress unless he let her cut off his tail. For this
remained unmet con-
demned as unclean, to be cast out with curses; and so the real
animals became the outcasts of the mental world, according to the
later religion, in the language of letters which followed and super-
seded human
-doors.
One of the most universal of the Folk-Tales which are the débris of
Mythology is that of the Giant who had no heart (or spark
lack of Intelligence that made the Giant of the Märchen such a big
blundering booby, readily out-witted by clever little Jack, Horus or
Petit Yorge, the youthful Solar God; and so easily cajoled by the
fair princess forb
repre-
sentation went on developing in Egypt, keeping touch with the advanc-
ing snake.
mon-
ster by thrusting a huge to “be off.” “The Mon-
ster”
Giant is the Sun, and that made use of cus-
tom at Burford of making a dragon annually and “carrying it up and
down the town in great jollity, on Midsummer Eve,” to which he says,
not knowing for what reason, “they added a Giant.” (Brand, “Mid-
summer Eve.”) Both the Dragon, and was
said, that Nine Maidens were devoured by the Dragon of darkness.
The Myth originated when Darkness was the devouring Giant and the
weapon of the warrior was a stone that imaged the Solar orb. In the
contest of the young and ruddy hero David with the Giant Goliath the
Hebrew Version of the Folk-tale still retains the primitive feature
of the stone.
We know the universal Mother suffer-
ing.
ising the
animal types. At first the Apap-reptile rose up vast, gigantic, as the
swallowing darkness or devouring dragon. This, when humanised, degra-
dationised or divinised in human form
the re-cast may be fatal to the mythical meaning; primitive sim-
plicity implica-
tions. Especially as Osiris, according to Spencer, was once a man!
habit, vol. iv, first series; vol. iv, p. 21, second
series.) In this phase it is the female who cohabits with the Corpse
of the dead Male. But in neither were the actors of the drama
human, although they are humanised representations of
phalia “made the Ass a symbol of the dull St. Thomas, and were
accustomed to call it by the name of ‘the Ass Thomas,’ the laggard
boy who came the last to school upon St. Thomas’s Day.”
(Zoological Mythology, vol. i, p. 362.) But we find the resurrection which at first was Soli-Lunar in the
Mythos; afterwards a symbolic representation of the Soul that was
awakened from the Sleep of death by Horus in his rôle of Saviour or Folk-Tale, repeated by Pliny (Hist. Nat., 7, 3),
which tells of a time when a Mother in Egypt bore seven children at
Irenæus (B. I, ch. v. 2, 3), “they call Ogdoas,
Sophia, Earth, Jerusalem.” Jerusalem is identified by Jeremiah with
the ancient Mother who was the bringer-forth of seven sons as the
“Mother of the young men,” “she that hath borne seven.” There were “the Seven Children of the Thigh”
in the Astronomical Mythology. Thus the Ancient Genetrix was the
Mother who brought forth Seven Children at a birth, or as a com-
panionship, according to the category of phenomena. Her seven
children were the Nature-Powers of all mythology. They are
variously represented under divers types because the powers were re-
born thesefold our inquiry
which country the Aryan Märchen came from last. The Seven
Hathors or Cows in the Mythos are also the Seven Fates in attend-
ance at the birth of a Child; and in the Babar Archipelago Seven
ised. summarised, but not begotten. Now, a child whose
ex-
planation in natural fact, in the ancient Luni-Solar-Mythos. Horus
the Bastard was the child of light that was born of Isis in the Moon,
when the Moon was the Mother of the child and the Father-source
of light was unidentified. But sooner or later there was a secret
knowledge recognised as the Father of Horus who was previously the Mother’s
child that knew not his Father. Moreover, in the Märchen it is
sometimes the Father who is killed in the combat, at other times it is
the Son. And, in the Mythos, Osiris the Father rises again upon the
third day in the Moon, but at other times he rises as Horus the
triumphant Son. A legend like this of the combat betweenour every day.” Her heart knew him. She seized upon him and
said to him, “Come, let us lie down for a while. Better for thee. . .
beautiful clothes.” Like Joseph in the Hebrew version, the youth
Sun-god Horus of the East.
The name of Bata signifies the Soul (ba) of life in the earth (ta) as a
title of the Sun that rises again. On this account it is said that Bata
goes to “the Mountain of-
ever they are found. The Solar Power on the two horizons or the
Sun with a dual face was represented impubescent child; and she unites with Hu the
Virile Solar God and glories in his fertilisingise, as this was Egyptian mythology, not
Semitic history.
When the Aryan philologists have done their worst with the sub-
ject Tafneo-
morphic representation, and in a region where the plummet of the
Aryanists has never sounded. As the Egyptians apprehended, the
foremost characteristic of the Dawn was its dewy moisture and
refreshing coolness, not its consuming fire. The tree of dewy cool-
ness, Tefnut gives the moisture from
the Tree of Dawn in heavenly dew, but in another character she is
fierce as fire, and is portrayed in the figure of a lioness. The truth is,
there was Egyptian science enough extant to know that the dew of
Dawn was turned into the vapourisments or distortions. But we may
depend upon it that any attempt to explain or discuss the Asiatic,
American, Australian, and European mythologies with that of Egypt
omitted is the merest writing on the sand which the next wave will
obliterate.
Max Müller asked how it was that our to-day is fatally in error. Neither will it
avail to begin with idiots who called each other nick-names in San-
skrit. Let us make another test-case of Bekhi the Frog. The San-
skritist does not start fair. He has not learned the language of
animals. The mythical representation had travelled no repre-
sentative trans-
forms Cinder-
ella (so to say) of the three sisters, who are Ank, Sati, and Hekat, the
three goddesses of the myth who survive as the well-known three
Sisters of the Märchen. The “Sun-frog” then was Khnum, “the
King of Frogs,” as the Sun in the night of the underworld, who was
wedded to Hekat, the lunar frog in the mythos which supplied the
matter for the Märchen.
the Beast, as in “Beauty and the Beast,” which was pro-
hibited; and if the lover looked upon the Maiden under certain
conditions she would transfigure into a Frog or other amphibious
creature, and permanently retain that shape, as the story was told
when the myth was moralised in the Märchen; the exact antithesis of
the Frog that transformed into a beautiful Princess, the transformation
of Bekhi, and possibly (or certainly) of Phryne, the Frog, whose
sumptuous beauty was victoriously unveiled when she was derobed and lonely
state goes underground or enters the waters to make her transformation
and is invisible during three nights (and days), which correspond to
the three days’ new moons to the year; and it is these
mystical reasons repre-
sentation was first made Amenta was not known as the monthly
ising the law intended to be taught and fulfilled.
The mystical Bride who was not to be seen naked was personated
by the Wife who wore the bridal veil, or the Wife whose face was
never to be seen by her husband until she had borne him a child:
or who is only to be visited under cover of the night. For, like the
Sun and the Moon, they dwell in separate huts and only meet occa-
sionally so degraded or
undeveloped as the Bushmen have their hidden wisdom, their Magic,
with an Esoteric interpretation of their dramatic dances and panto-
mime, by which they more or less preserve and perpetuate the mystic
meaning of their religious mysteries. What we do really find is that
the Inner African and other aborigines still continue to talk and think
their thought in the same figures of speech that are made visible by
art, such as is yet extant among the Bushmen; that the Egyptians
also preserved the primitive consciousness together with the clue to
this most ancient knowledge, with its symbolic methods of com-
munication, and that they converted the living types into the later
lithographs and hieroglyphics. Animals that talk in the folk-tales of and aboriginal
men to the test. Try them with the miracles of the Old or New
Testament, presented to them for matters of fact, as a gaugeænas.” a plain untruth, and that
it was a shame to tell such lies with a serious countenance.” They at
once proceeded to test the statement by reckoning the ‘it must be a
lie, and to talk to them of burning in fire after this life was an abomin-
able mis-
taken the primitive method of representing them. It is we, not they,
who are the most deluded victims of false belief. Christian capacity
for believing the impossible in nature is unparalleled in any time past
amongst any race of men. Christian readers denounce the primitive
realities of the mythical representation as puerile indeed, and yet their
own realities alleged to be eternal, from the fall of Adam to the re-
demption by means of a crucified Jew, are little or nothing more than
the shadows of these primitive simplicities of an earlier time. It will
yet be seen that the culmination of credulity, the meanest emascula-
tion Chris-
t. Primi-
tive repre-
sentation has led to the ascribing of innumerable false beliefs not only
to primitive men and present-day savages, but also to was as a mode
of praying that the branches of the bedwen or birch were strewn in
the ancient British graves. It is the same language and the same
sign when the Australian aborigines approach the camp of strangers
with a green bough in their hands as the sign of amity equivalent
to a prayer for peace and good-will. Acted Sign-language is a
practical mode of praying and asking for what is wanted by portraying
instead of saying.
Tran-
sylv-
cessor 1st Amazulu, p.
κτείς
pri-
marily.) They knew the natural magic of
the emblem if the European did not. Also, they were identifying the
woman with the abode. In Bent’s book he gives an illustration of an
iron-smelting furnace, conventionally showing the female figure and
the maternal mould. “All the furnaces found in Rhodesia are of that
form, but those which I have seen (and I have come upon five of
them in a row) are far more realistic, most minutely and statuesquely
so, all in a cross-legged sitting position, and clearly showing that the
production or birth of the metal is considered worthy of a special
religious expression.”
of Fructification associated with plants and fruits, flowers and foliage,
which are seen issuing from his body. He is the “Lord of Aliment,”
in whom the reproductive powers of earth are ithyphallically por-
trayed.
forever. Thy phallus is eternal.” (Rit., XXXIX, 8.). (Rit., xciii, 1.) mys-
tery to-day,ra-
lia, primi-
tive,
It is in such ways as this that the Wisdom of Old Egypt will enable
us to read the most primitive Sign-language and to explicate
the most ancient typical customs, because it contains the gnosis
or science of the earliest wisdom in the world. The “Lan-
guage-
ph. (Nat.ic shows the
process of accretion or agglutination which led to the word Aiu, Iao,
Ioa, Iahu
o-
graphs of ideas; as likenesses of nature-powers; as words, syllables,
and letters; and what they said is to be read in Totemism, Astro-
nomy,i-
sectionama-
tions, image of things, nor can they be the equivalent of
the mental representation which we call thinking. It is the meta-
physician
”—
“B afold lan-
guage.ah,”
o-
graphic con-
tinued until the written superseded the painted alphabet. These
pictorial signs, as Egyptian, include an
BOOK II
-language of Totemism. Ceremonial
rites were established as the means of memorizing facts in Sign-
language when there were no written records of the human past. In
these the knowledge was acted, the Ritual was exhibited, and kept in
ever-living, p.
TOTEMISM, TATTOO AND FETISHISM
pre-Totemic people who were as yet undivided by the Totemic Rites
of Puberty which are now illustrated in the mystery of the dance.
In the Initiation ceremonies of the males described by Messrs.
Spencer and Gillen (p. cere-
monies
p.-language,
p. 116),
p. Resurrec-
tion,-language. travelled lan-
guage of the clickers, it is yet extant with the Aborigines, amongst
whom the language-makers may yet be heard and seen to work in
the pre-human way. The earliest human language, we repeat, consisted
of gesture-signs which were accompanied with a few appropriate
sounds, some of which were traceably continued from the prede-
cessors of Man. A sketch from life in the camp of the Mashona
chief Lo Benguela, made by Bertram Mitford, may be quoted, much
to the present purpose:—
“ ‘He comes—the Lion!’ and they roared.
“ ‘Behold him—the Bull, the black calf of Matyobane!’—and at
this they bellowed.
“ ‘, p. 28.) In this Sign-language,.,” p.-language. They cheep and chirrup or whistle in their
speech with a great variety of notes.
The Supreme Spirit, Tharamulun, who taught the Murrung tribes
-language that was both visual and vocal at the
same time when the brothers and sisters were identifying them-
selves, distin-
guished from the gregarious horde with its general promiscuity of
intercourse between the sexes is now beginning to be known by the
name of Totemism, a word only heard the other day. Yet nothing later
(p. 29), though “Worship,” we protest again and again, is
not the word to employ; in this connection it is but a modern counter-
feit.
Twm (Tom) in Coptic signified joining together as in the Tem.
The word “Tem” Tem is one with the Total,
and the Total comprised two halves at the very point of bifurcation
and dividing of the whole into two; also of totalling a number into a
whole which commences with Tem., Jour-
nal of Royal Geographical Society, vol. I, 1832.) The Egyptian Tem
is also a place-name as well as a personal name for the social unit, or
division of persons. The Temai was a District, a Village, a Fortress,
“Tem”-
hood-
hood-
hood proceeded in the course of her long
development from the state of primitive Totemism in Africa: the
state which more or less survives amongst the least cultured or most
Tem-language-language. continual., pp. 8
Eng. tr., p. inhabi-
tants of the Oxyrhynchus Nome did not eat a kind of Sturgeon known
as the Oxyrhynchus. (Of Isis and Osiris, p.
(Pp. 204-7.)” (p.
-language. them-
selves represen-
tation
is repeated in Egyptian mythology. In Totemism the dual Mother-
hoodchichte,
vol. III, p. 293; Hearne, S., Journey to the Northern Ocean, p.
ing inter-
course of Son and Mother, whether of the uterine Son or only of
the same Totem, which in this case was the Crocodile. (Magic
Papyrus, p. cele-
brated-language. The fact was tattooed on the person. A cicatrice
was raised in the flesh. Down was exhibited as a sign of the pubes.
The Zulu women published their news with the Um-lomo or mystical
mouth-piece. The act may be read on behalf of the women by assum-
ing.,
p. 407.) some-
thing
by two different signs or zoötypes. Sign-languagera-
lian customs, no girl was marriageable until the rite of intro-
c recog-
nition of the Mother-blood, even in the undivided horde, would
naturally lead to the Blood-motherhood which we postulate as
fundamental in Totemism. At first no barrier of blood was recog-
nized. recog-
nized
she is visited by the great serpent, or, in other legends, she is said to
change into a serpent. In the Arunta tradition the two females who
are the founders of Totemism and finishers of the human race made
their transformation into the lizard. (N.T., p. 389.) The native
women of Mashonaland also tattoo themselves with the lizard-pattern
that is found on their divining tablets when they come of age. (Bent.,
p. trans-
formation-
hood,
culti-
vate a long and manifold development in the application
of the Sign to the Motherhoods and Brotherhoods, and to the inter-
marriage of the groups now called Totemic.
There are two classes of tradition derived from Totemism concern-
ing the descent of the human race. According to one, human beings
were derived from the Totemic animals, or Birds, as the Haidahs in
Queen Charlotte Sound claim descent from the Crow. According to
the other, the Totemic zoötypes are said to have been brought forth
by human mothers. The Bakalai tribes of Equatorial Africa told Du
Chaillu that their women gave birth to the Totemic animals, we have
seen how, and that one woman brought forth a Calf, others a Crocodile, a
Hippopotamus, a Monkey, a Boa, or a Boar. (Du Chaillu, Explorations
and Adventures in Equatorial Africa, p. 308.) The same statement
as this of the Bakalai is made by the Moqui Indians, who affirm that
the people of their Snake-Clan are descended from a woman who
gave birth to Snakes. (Bourke, Snake-dance of the Moquis, p. representatives,
p.-
qufold recognised in Totemism was
form, p.
naturally,
rudi-
mentary., p. 388), by means
of the Totemic rites. They are said to have changed the Inapertwa
into human beings belonging to six different Totems—(1), p. Founda-
tion.
repre-
sents demon-
strated that blood was the basis of womanhood, of motherhood, of
childhood, and in short, of human existence. Hence the preciousness
of the Mother-blood. Hence the customs instituted for its preserva-
tion and the purity of racial descent. Only the mother could originate
and preserve the nobility of lineage or royalty of race. And the old
dark race in general has not yet outlived the sanctity of the Mother-
blood which was primordial, or the tabu-laws which were first made
statutable by means of the Mother’s Totem.
In the Egyptian system of representation there are Seven Souls
-ology.æus Neph sacri-
fice of a slave, and her body is painted with his blood. This was the
Blood-Mother as a Virgin, in the first of the two characters assigned
con-
sidered,
p. 28.) It was a custom long continued by the Egyptians to preserve
the Mother-blood by the marriage of the brother and sister, a custom
that was sacred to the Royal family, thus showing that the Mother-
zoötype
drink-
ing (p. 211, abbreviated). She was walking about as gay
and lively as anyone, when one of her boys invited Mr. Hunt to the
funeral. Her two sons considered she had lived long enough. They
Eng. tr., p. every one parts law of Tabu, it was the custom for everyone
to share and share alike all round in killing and eating the sacrifice.
II, 29. Cited in Encyclopædia Brit., v. XXI, p. 137,
Ninth ed.), p. 21), the heroine of the drama “is scarcely
dead before she is invoked by the chorus as a superhuman Power able
to give and to withhold favours, now that she has been transub-
stant, pp.
eaten at the family meal, and that the human sacrifice was com-
muted representa-
tive of the Great Mother. The Emu was the bird of Earth in
Australia, like the Goose in Egypt. As layer of the egg it repre-
sented-
hood.
a-
tory., p. (p. (p.-
cised
(p. 403).
Two Women of the Magpie Totem (p. 404). Two Women of the
Hakea Totem (p. 436). Two Women of the Kangaroo Totem
(p. 464). Two Women who accompanied the Men of the Plum-tree
in the Alcheringa, as Two Sisters, Elder and Younger (pp. 149, 315).
The starting point of the Hakea-flower Totem is from Two Female
Ancestors (p.
‘duality’ of Ngalalbal, the wives of
Daramulun.” These are seen to glide from the forest past the fire
and to disappear in the gloom beyond to a slow and rather melan-
choly Neb.t-Paru or Mistress of the House. This
was also an Inner-African marriage institution. The first corre-
spond
i-
fied, p. marriageable groups. As testified to by
the latest witnesses, the “fundamental feature” in the organism of the
Australian tribes is “the division of the tribe into two exogamous
intermarrying groups” (p. 55).” (p. recog-
nized as a matter of course; and that he could always ascertain
whether they belonged to the division into which he could legally
marry, though the places were 1,000 miles apart and the languages
were quite different.” (Fison and Howitt, p. subdivisional arrange-
ment to that of the first Two Classes; as when a man will lend his
wife to a stranger, always provided that he belongs to the same class
as himself (N.T., p.lo-ur-
inga of the Arunta with other emblems of the Tree and Rock
of earth.
The Australian Totemic system begins with being Dichoto-
m.
ur-
inga (wood and stone), the two Poles (North and South), the two
women, represent the Motherhood that was duplicated in the two
female ancestors; and that the Totems of the sub-divisions repre-
sent the blood-brotherhoods, thus affiliated to the Mother-blood,
which were followed finally by the blood-fatherhood. The Arunta
beginning is immeasurably later than the Egyptian tradition pre-
served. (P. 627.);
of-language, con-
nubium betwixt the son and the Mother, whereas the marriage of
a brother and sister, blood or tribal, was allowed as the only proper
connection now for preserving the Mother-blood without committing
incest.
-inyeri-language, join-
ing com-
munal, as she was then open and accessible to all the males, at least
on this occasion when she entered the ranks of womanhood as
common property, which was afterwards made several by develop-
ment of the marriage-law. mar-
riage originated as a mode of rescuing or ransoming the woman from
the clutch of the general community in which the female was common
to all the males of the group. In the special marriage of individual
pairs the woman had to be captured and carried off from the group—
only instead of being captured we might say “rescued” by the in-
dividual (and his friends) from being the promiscuous property of the
community. Hence the custom of compensation to the group (or,
later, parents) for permitting the female to become private property in
personal marriage. The primitive rite of connubium was first con-
summ. 7, 21, 1882.) This was communal con-
n, p. 2.)
Thus there was a rite of promiscuity observed as a propitiatory pre-
paration for individual marriage. This was to be seen at the temple
of Belit in Babylon, where the women offered themselves to all men pro-
misc ἀρκτεία
Eng. tr., p., p.æ.., pp. 96-101.) This does not
stand alone. According to the report of Mr. Kühn in Kamilaroi and
Kurnai (by L. Fison and A. W. Howitt, pp. 285-7),
primi-
tive+1.-land,”, p. 106, note, Eng. tr.) It has been previously
shown that the custom of couvade was a dramatic mode of
affiliating the offspring to the father which had previously derived
its descent from the Mother. (Nat. Admu
fold more natural than the pretended explanations of their
modern misinterpre., p. 77.)., p.
p. 263.) But as the
practice proves, it is performed as an assertion of manhood, and is a
mode of making the boy into a man, or creating man. Now, at this
time it was customary to cast the Motherhood aside by some signifi-
cant. (p. begin-
ning.
genitalia of a divine being who is both male
and female blended in the formation of the Father-Mother, from
whom the soul of blood was now derivable. The drops of blood
are described as issuing from the person of Atum when he per-
formed ar-
chate to the Patriarchate. The primitive essence of human life was
blood derived from the female source, with Nature herself for the
witness. In the later biology it was derived from the “double primi-
tive super-
session of the Motherhood, and that in the Arunta double-cutting
the figure of the female was added to the member of the male. Nor
is this suggestion without corroboration. In his ethnological studies
(p.,
p. circum-
cision by the Arunta and in sub-incision, which is re-circumcising in
condi-
tionusion which led to a confusion of identity and per-
sonality. represen-
tatives of superhuman powers, though not as the direct object of
human worship. The life-tie assumed between Totemic man and the
Totemic animal or zoötype was consciously assumed, and we can per-
ceive ances-
tor, excep-
tions being made solely on the artificial ground of the Totemic
motherhood or brotherhood. The beast only became of the “same
flesh” with the particular family because it had been adopted as their
Totem, ancestral animal, or foster-brother of the blood-covenant, and
of ignorant belief that the medicine-men had everywhere the power
of transforming into wolves, hyænas, con-
dition
trans-
formation
Inoit mysteries, when the
controlling spirit of a Shaman was consulted, it was customary for
the mask which represented the particular power invoked to be laid
upon the Shaman’s face, and this mask was the skin of a victim that
moment killed. (Réclus, Prim. Folk, Eng. tr., p.
cere-
mony of free-man-making. (E. H. Man, Aboriginal Inhabitants of
the Andaman Islands, p. 62.) The boy was anointed when he made
his change into the adult. Horus was anointed when he transformed
from the mortal Horus to the Horus in spirit who rose again from
the dead. And this anointing is still practised mum-
mies exhumed at Deir el-Bahari show that the faces had been painted
and anointed for burial. “The thick coats of colour which they still
bear are composed of ochre, carmine (or pounded brick) and animal
fat.” (Maspero, Dawn of Civilisation, Eng. tr., p.) (Kamilaroi and Kurnai,
by Fison and Howitt, p.
417-18), “ferroque notatas perlegit exanimes
Picto moriente figuras.” This is shown by an initial letter in the
Book of Kells—a facsimile of which has been published by the
Palææ trans-
formed con-
stellated in the fields of Heaven as seven Hathors or seven Cows.
These were the Mothers of food, who were givers of life in the form
Never-
theless repre-
sentation of superhuman phenomena. The human Mother had
brought forth her children in the forest and from the cave in the
rock; in consequence of which, as natural fact, the tree and the hole
in the stone, or the ground, have each continued ever since to repre-
sent sus-
tenance.., p. multimammalian
pp. 154, 155.) Previously the same writer had said “the school of
Nkissi is mainly concerned with the worship of the mystery of the
Power of Earth; Nkissi-nsi.” (Kingsley, West African Studies,
p.iza-
tion, pro-
tections” A,
137 B., wor-
shipped
1), 186). The mount with the cave in it
was a natural figure of the Mother-earth to the Troglodytes who
were born and there came to consciousness. When the Navajos
a likeness., p.ah. whence
11). The wood and stone of
the Australian Churinga, which are Totemic types, are excom-
municated in Israel as idols when they were no longer understood as
symbols. They came to be looked upon as deities in themselves,
set up for worship. Both Cæsar
(p. focussed, p. 263.)
Now, the Great Goddess who was “worshipped” with the gory
pp. 175-6). fore-
most as the superhuman suckler, the Sow, the Water-Cow, or Milch-
Cow.i- Inoæ repro-
duce itself, so that its race may not die out or food become scarce.
This festival was universal once. It was celebrated all over the
world as a drama of reproduction—first and foremost for the repro-
duction of food. The resurrection of food by reproduction in animal
life is thus enacted at the Inoit festival, as it has been acted in a
hundred other mysteries, Intichiuma, Eucharists, Corroborees, and
religious revels. By the dim glimmer of this distant light we see
the fol-
lowed. From very early times the sacrifice of a victim was solem-
nized, and followed by the phallic feast, whether in the Corroboree of
the Arunta or the Christian Agapæ. First the sacrificial victim is
slain and eaten, ante lucem, at the evening meal or Last Supper, and
next the festival of reproduction was celebrated in the Agapæ. sacra-
ment repre-
sentative of Mother-earth. She was propitiated as the Mother of
Plenty, like the Inoit repre-
sented wor-
shipped, Civilization, p. Hetæræ and Courtesans received her devotees in
grottoes and caves that were hollowed out for the purpose in the
Syrian hillsides. The temple of Hathor at Serabit-el-Khadem, dis- 700.)
of all things being on a level.
This fact is expressed in the names of our Fairs and Evens. Promis-
cuity was a mode of making things fair and even in the sexual
saturnalia. High and low, rich and poor, young and old, “comminguity at the phallic festival was designed to represent the desire for an illimitable supply of food, the boundlessness of the Alcheringa (N.T., pp. 96, sexual reproduction, All-One. Nothing was
recognized but the female, the typical organ of motherhood, which
imaged the earth as mother of sustenance; the mother, who was
propitiated and solicited in various ways, by oblations of blood and
other offerings, was also invoked in the likeness of the human female
to be fertilized in human fashion. She was the Great Mother, the
All-One, and nothing less than the contributions of all could duly,
hugely, adequately represent the oblation. In Drummond’s Œdipus
Judaicus, pl. representedkind, p. 134.) In these celebrations the woman took
the place of the goddess. At the time when the begetters were not
yet individualized a single pair of actors would have conveyed but
little meaning. The soul of procreation was tribal, general, pro-
miscuous, and the mode of reproduction in the most primitive
mysteries was in keeping therewith. Reproduction by the soul of
the tribe was rendered by all the members contributing to fecundate
the Great Mother. Hence the phallic saturnalia, in which the reproduction Danæas
wooing the fertilizing sun. In this saturnalia there was a general
reversion to the practice of an earlier time somewhat analogous to
the throw back of atavism in race, with this difference: the intentionalize shelters by the road-side and stock them
with provisions for their wives, and call upon the passers-by to
“procure the public good and ensure an abundance of bread” (Réclus,
P. F. P., p. 283). A propos of this same festival, Israel is charged by
Hosea with having become a prostitute by letting herself out for hire
upon the corn-floor! “Thou hast gone a-whoring from thy God; thou
hast loved hire upon every corn-floor” (ch. ix, 1). humans resurrectionæ, lamb, performersification the animals which are eaten for food are represented by the
Totemic actors in the skins as reproducing themselves for food here-
after. of
FETISHISM
æ-language. By Fetishism the present writer means
the reverent regard for amulets, talismans, mascots, charms, and luck-
tokens that were worn or otherwise employed as magical signs of
protecting power. Fetishism has been classified as the primal,
universal religion of mankind. It has also been called “the very
last corruption of religion.” (Max Müller, Nat. Rel., p.-language which supplied a tangible means of laying æ in the Egyptian Ritual were repeated as
words of command. In saluting the two lions, the double-uræi and
the two divine sisters, the deceased claims to command and compel
them by his magical art (xxxvii, 1). transformation, Continent. Transformation, stopping protection development, and says: 1. formulæ” (31, 2-3). That is the pre-
cise explanation of the primitive modes of invocation and evocation,
“I pray in magical formulæ.” And these magical formulæ, 1,
formulæ. Urt-Hekau, great in magical words of power, is a title of
Isis, who was considered the very great mistress of spells and
magical incantations. It is said of her: “The beneficent sister re-
peateth the formulæ and provideth thy soul with her conjurations.
Thy person is strengthened by all her formulæ
. p., p. Γ. flourishing, abæus of transformation, propagation and connubium. Necklaces were worn by the Egyptian
women to which the tie-amulet of Isis formed a pendant, and indicated; “Mistress intended to be worn as amulets.
Thus the fetish was at first a figure of the entire animal that repre-
sented the protecting power as the superhuman Mother Apt (Proc.
S. of B. A., xxii, parts 4 and 5, p. qualities found in the physics. Hence the fetishes of the black or red
aborigine are his medicine by name as well as by nature. These
things served, like vaccination, traction-buckles, or “tar-water and
the Trinity,” as fetishes of belief so long as that belief might last.
They constituted a mental medicine, and an access of strength or
spiritual succour might be derived from the thought. Belief works
wonders. Hence the image of power becomes protective and assisting; representative skin, 5.
ELEMENTAL AND ANCESTRAL SPIRITS, OR THE
GODS AND THE GLORIFIED.
BOOK III
protecting
idæ superhuman, darkness to later folklore and fairyology; Darkness:—Darkness = development. emmufold
variations, astronomy. æus-serpent, was furnished by the vivifying solar
heat, the elemental power of which was divinized in Ra. Apt, the First Great Mother. Evidence for a soul of life in blood was conclusive. -language was changed for the human figure or any one likeness
126 ANCIENT EGYPT
of good works, and primarily of charity. The gods and the
glorified to whom worship was paid are: (1) The Great One God
(Osiris); (2) the Nature-Powers, or Gods; and (3) the Spirits of
the Departed. But the order in development was: (1)-living-language altogether. “Conception” -hood M
clay, same. Accordingclayclayclay, beginning, travelling in a whirlwind, and on seeing one of
these approaching a native woman who does not wish to have a child
, 11). incorporation less.. 113), interfered. relationship, primæ
represented the same animistic nature power from which the soul that is
imaged by the totem was derived. The soul in common led to the
common interest, the mysterious relationship and bond of unity
betwixt -lings consequently represented folk-lore. abæ
folk-lore represented ancestral human spirits. The Egyptian Hamemmat represented derivationnu, or Num, or Hapi, the
descent being traceable at first by the totem, and afterwards by
the name.
Primitive man has been portrayed in modern times as if he were a
mystical interpre mystical) interpre Ino transformations zoötypes. daughters relationship and were the representatives of powers beyond the human.
Thus, in one case the spirits prayed to are identified by their colours,
and in the other by their totemic zoötypes. If we interpret this
according to Egyptian symbolism, when the sick person was
confusion remained zoötypes. They did not mistake the “souls” of one category for “spirits” in the other, because they knew the difference. The same distinction that was made by the Egyptians betwixt
the superhuman powers and the Manes, or the gods and the
glorified, is more or less identifiable all the world over.
Thus, the origin of spirits and of religion is twofold., ‘Please,. The
‘idol’ distinction., pp. 146-7.) Amongst incorporated or made flesh on earth in both men and animals. In the
Egyptian eschatology these primordial powers finally became the
Lords of Eternity. But from the first they were the ever-living.”
BOOK IV
unraveled understand.
THE BOOK OF THE DEAD
torn-mem to-day
that rose up from the nether world as conqueror of darkness to join
the west and east together on the Mount of Glory, as the connecting
link of continuity in time betwixt yesterday and to-morrow. -ology he became the god in spirit who is called the holy spirit and
first person in the trinity which consisted of Atum the father god,
Horus the son, and Ra the holy spirit; the three that were also one
sociological. re-applied to the human soul in the eschatology. Egyptian myths,
then, are not inventions made to explain the Ritual. Totemic representation prim
tangible threshold to the other world, the secret but solid earth of
eternity which was opened up by Ptah when he and his seven
Kn- figured, transformer … … =
Continuing with the series of pysanky eggs I've been working on with Python and POV-ray - another week, another egg. This one is a Lemko design. The dots, I believe, represent stars.
Getting the crown and base was relatively easy, as it just required the correct positioning of a single stroke, then rotating it:
def makecrown(baseobj):
    crownstrokes = [baseobj]
    theta = STEP
    while theta < 360.0:
        crownstrokes.append(pov.Object(baseobj,
                                       rotate = (0.0, theta, 0.0)))
        theta += STEP
    return pov.Union(*crownstrokes)
In this case, step = 360.0/24.0.
Likewise, I have a function for the dots that made them easier to place:
def movedot(basedot, xtheta, ytheta):
    dot = pov.Object(basedot, rotate = (xtheta, ytheta, 0.0))
    return pov.Object(dot, translate = (0.0, 1.0, 0.0))
Ultimately, I'd like to get enough egg designs coded to put together some sort of scene in POV-ray dedicated to the eggs. The following two scenes show what I have so far:
If I can identify a few more attractive, computationally friendly designs, I should be able to come up with enough eggs for a decent POV-ray egg scene. In the meantime, thanks for having a look.
Hey,
I was googling "povray python" and ended up here....so my comment or question is...is there a one to one correspondence? can I just say I am going to do my scenes just using python to generate the povray files no matter how involved the scene? ....and if so..is there a manual or something...
Cheers
@The Great - you know, I'm really a dilettante. That disclaimer made, AFAIK the Active State recipe and associated project are the readily available Python libraries for POV-ray. I've just been patching that as needed to produce the Pov-ray Scene Description Language (SDL). It's brief and the code itself is the manual.
I have found that there isn't a one to one correspondence. At least not in the ad hoc manner that I've been patching the recipe. Sometimes it's easier just to translate some list into a string of SDL code rather than approach the problem more elegantly. That's what I've been doing a lot of.
The Active State recipe is a starting point. I've never done a highly involved scene, but I think you would have to add a lot to that recipe to cover all your bases.
Hope this helps a little.
Carl T.
Thanks Carl,
Let me showcase my ignorance, I spent a good 15 minutes googling AFAIK and python together till I realized that AFAIK is not some kind of psychedelic python-povray project but "as far as I know" (and they say the Internet is not educational :) ). That resolved this brings me to the next items " The active State recipes and associated projects": where are they? are they these postings? (I know I know, I am slow).
I use povray sporadically to create semi technical illustrations (loops, symmetries etc) and I am always struggling because I would love to use python style loops and conditionals.
In any case thanks for your response and I will try to study the code in your postings
Cheers
Gino "The Great & Confused"
Gino, sorry, I was in a hurry and wanted to get back to you. The POV-Ray recipe link is here:
The project is here (I believe this is related to the recipe):
I added an Object method and a Prism method in a couple of my posts:
I use AFAIK and IIRC a lot. IIRC means "if I recall correctly". Apologies for the shorthand and the confusion.
Carl T. | http://pyright.blogspot.com/2011/02/more-pov-ray-and-pysanky.html | CC-MAIN-2018-22 | refinedweb | 611 | 74.29 |
In this project you’ll create a standalone web server with a Raspberry Pi that displays temperature and humidity readings with a DHT22 sensor that are stored in an SQLite database.
In order to create the web server you will be using a Python microframework called Flask. Here’s the high level overview of the system:
Recommended resources:
- You need a Raspberry Pi board – read Best Raspberry Pi Starter Kits
- ESP8266 Publishing DHT22 Readings with MQTT to Raspberry Pi
- Raspberry Pi Publishing MQTT Messages to ESP8266
- Testing Mosquitto Broker and Client on Raspbbery Pi
- How to Install Mosquitto Broker on Raspberry Pi
- What is MQTT and How It Works:
Creating the Python Script
This is the core script of our application. It sets up the web server, receives the temperature/humidity readings over MQTT, and saves those sensor readings in an SQLite database. (The import lines and the on_connect callback below are the standard Flask/paho-mqtt boilerplate implied by the rest of the script.)

from flask import Flask, render_template
import json
import sqlite3
import paho.mqtt.client as mqtt

app = Flask(__name__)

def dict_factory(cursor, row):
    d = {}
    for idx, col in enumerate(cursor.description):
        d[col[0]] = row[idx]
    return d

# The callback for when the client connects to the broker:
# subscribe to the topic the ESP8266 publishes to.
def on_connect(client, userdata, flags, rc):
    print("Connected with result code " + str(rc))
    client.subscribe("/esp8266/dhtreadings")

# The callback for when a PUBLISH message is received from the ESP8266.
def on_message(client, userdata, message):
    if message.topic == "/esp8266/dhtreadings":
        print("DHT readings update")
        dhtreadings_json = json.loads(message.payload)
        # connects to SQLite database. File is named "sensordata.db" without the quotes
        # WARNING: your database file should be in the same directory of the app.py file
        # or have the correct path
        conn = sqlite3.connect('sensordata.db')
        c = conn.cursor()
        c.execute("""INSERT INTO dhtreadings (temperature, humidity, currentdate, currentime, device)
                     VALUES((?), (?), date('now'), time('now'), (?))""",
                  (dhtreadings_json['temperature'], dhtreadings_json['humidity'], 'esp8266'))
        conn.commit()
        conn.close()

mqttc = mqtt.Client()
mqttc.on_connect = on_connect
mqttc.on_message = on_message
mqttc.connect("localhost", 1883, 60)
mqttc.loop_start()

@app.route("/")
def main():
    # connects to SQLite database. File is named "sensordata.db" without the quotes
    # WARNING: your database file should be in the same directory of the app.py file
    # or have the correct path
    conn = sqlite3.connect('sensordata.db')
    conn.row_factory = dict_factory
    c = conn.cursor()
    c.execute("SELECT * FROM dhtreadings ORDER BY id DESC LIMIT 20")
    readings = c.fetchall()
    #print(readings)
    return render_template('main.html', readings=readings)

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=8181, debug=True)
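The database side of on_message() can be tried in isolation, with no broker or sensor attached. This sketch runs the same INSERT against an in-memory database using the tutorial's schema (the sample payload values are made up):

```python
import json
import sqlite3

# Same schema the tutorial creates in sensordata.db
conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("""CREATE TABLE dhtreadings(id INTEGER PRIMARY KEY AUTOINCREMENT,
             temperature NUMERIC, humidity NUMERIC,
             currentdate DATE, currentime TIME, device TEXT)""")

# A payload in the same JSON shape the ESP8266 publishes
dht = json.loads('{"temperature": "23.5", "humidity": "48.0"}')

# The same INSERT the on_message() callback performs
c.execute("""INSERT INTO dhtreadings (temperature, humidity, currentdate, currentime, device)
             VALUES((?), (?), date('now'), time('now'), (?))""",
          (dht["temperature"], dht["humidity"], "esp8266"))
conn.commit()

row = c.execute("SELECT temperature, humidity, device FROM dhtreadings").fetchone()
print(row)
```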
Preparing Your SQLite File
Follow this next tutorial to learn how to install an SQLite database on a Raspberry Pi and prepare the database. You should end up with an SQLite database file that has the following schema:
sqlite> .fullschema
CREATE TABLE dhtreadings(id INTEGER PRIMARY KEY AUTOINCREMENT, temperature NUMERIC, humidity NUMERIC, currentdate DATE, currentime TIME, device TEXT);
Your SQLite database file should be named “sensordata.db” without the quotes.
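If you would rather create that schema from Python than from the sqlite3 shell, a one-off sketch like this produces the same table (run it once, from the same directory as app.py):

```python
import sqlite3

# Creates sensordata.db (if missing) with the schema the Flask app expects
conn = sqlite3.connect("sensordata.db")
conn.execute("""CREATE TABLE IF NOT EXISTS dhtreadings(
                    id INTEGER PRIMARY KEY AUTOINCREMENT,
                    temperature NUMERIC,
                    humidity NUMERIC,
                    currentdate DATE,
                    currentime TIME,
                    device TEXT)""")
conn.commit()
conn.close()
print("sensordata.db ready")
```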
WARNING: your database file should be in the same directory of the app.py file or have the correct path in your Python app.py file created in a preceding section (conn=sqlite3.connect('sensordata.db')).

"> </head>
<body>
  <h1>RPi Web Server - ESP8266 SQLite Data</h1>
  <table class="table table-hover">
    <tr><th>ID</th> <th>Temperature</th> <th>Humidity</th> <th>Date</th> <th>Time</th> <th>Device</th></tr>
    {% for entry in readings %}
    <tr><td>{{ entry.id }}</td> <td>{{ entry.temperature }}</td> <td>{{ entry.humidity }}</td> <td>{{ entry.currentdate }}</td> <td>{{ entry.currentime }}</td> <td>{{ entry.device }}</td></tr>
    {% endfor %}
  </table>

<);
// DHT Sensor
const int DHTPin = 14;

// Initialize DHT sensor.
DHT dht(DHTPin, DHTTYPE);

// Timers auxiliar variables
long now = millis();
long lastMeasure = 0;
char data[80];

//)
  } else {
    Serial.print("failed, rc=");
    Serial.print(client.state());
    Serial.println(" try again in 5 seconds");
    // Wait 5 seconds before retrying
    delay(5000);
  }
}
}

// The setup function sets your DHT sensor, starts the serial communication at a baud rate of 115200
// Sets your mqtt broker and sets the callback function
// The callback function is what receives messages and actually controls the LEDs
void setup() {
  dht.begin();

30);
    String dhtReadings = "{ \"temperature\": \"" + String(temperatureTemp) + "\", \"humidity\" : \"" + String(humidityTemp) + "\"}";
    dhtReadings.toCharArray(data, (dhtReadings.length() + 1));
    // Publishes Temperature and Humidity values
    client.publish("/esp8266/dhtreadings", data);
    Serial.println(data);
- Jumper wires
- Breadboard
Note: other DHT sensor types will also work with a small change in the code.
You can use the preceding links or go directly to MakerAdvisor.com/tools to find all the parts for your projects at the best price!
Wrapping up
That’s it for now. I hope you can take these examples and build more features that fit your needs.
28 thoughts on “ESP8266 Publishing DHT22 Readings to SQLite Database”
Hi Rui,
Thank you so so much. I need that and you published. You are the best. Thank you again.
You’re welcome,
Thanks for reading!
Just what I needed! I was trying to work out if I could do this, and you give the details. Thank you, you must be a mind reader 🙂
happy to hear that I could help!
Regards,
Rui
Any chance you could do a similar tutorial using ESP8266 but using Node Red with MQTT and SQLite, to add to the ‘$100 Home Automation’ package?
There is a node abailable for Node Red which should make things relatively simple ( flows.nodered.org/node/node-red-node-sqlite ) but, being very new to Node Red, MQTT and Raspberry Pi, I have no real Idea how to actually do anything useful with it.
Thanks
Yes, I should be posting a tutorial on that exact subject in just a couple of days!
Thanks for the suggestion!
Hi ,Rui,
great tutorial,
works perfect. attention the app.py and the sensordata.db must be in the same directory!
I had to remove the sensordata.db file from /home/pi to the web-server directory.
rgds,
frederik
Hi, Rui,
i had a problem with the time of the server, was not exact.
Therefore i put in the app.py for the time ‘now’ also ‘localtime’ and everthing works fine again.
time(‘now’ , ‘localtime’).
regards, frederik
getting the right time now, thanks for the comment.
Hi Rui,
I want to update the HTML web-page without refreshing it,but I have no idea how to do that.
any ideas to suggest,please?
thanks for your help!!!
Yes, you need to use AJAX to receive the values without refreshing the web page. I don’t have any tutorials with that exact example though…
Regards,
Rui
Hy
Thank you very much for the Tutorials on this website.
It is very easy to realize projects with your instructions.
Is there an easy way to continue using the data from the sqlite database?
I can read the data in my python program, but have a problem with the time values. Can I simply convert these string tuples into a float list?
kind regards
Patrik
Hi Patrik,
Thank you for your kind words and I’m glad you found the instructions useful and easy to follow.
You can change any data stored in your database with UPDATE SQL queries.
Python supports all those features that you’ve mentioned, you can use any data type to manipulate the data, but I only have this example with SQLite at the moment.
Thanks for reading,
Rui
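Picking up Patrik's question about the stored time values: the 'HH:MM:SS' strings written by time('now') can be turned into plain floats with the standard library. A sketch (the sample tuples are made up, in the shape fetchall() returns):

```python
from datetime import datetime

rows = [("12:30:05",), ("12:30:15",)]  # hypothetical tuples fetched from SQLite

def to_seconds(t):
    # Convert an 'HH:MM:SS' string into seconds past midnight, as a float
    dt = datetime.strptime(t, "%H:%M:%S")
    return float(dt.hour * 3600 + dt.minute * 60 + dt.second)

seconds = [to_seconds(r[0]) for r in rows]
print(seconds)  # [45005.0, 45015.0]
```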
thank you for the tutorial. could you point me where the sketch and program template are?
What do you mean by template? The code used in this project is posted in the blog post.
Thanks for asking!
Rui
Hey Great tutorial!!!!
It is all working, but I got two identical entries for each 10-second reading in the database table and therefore in the HTML web server. The same line of data appears twice. I spent hours looking at the code, but could not find what is wrong… If you could give me any hint why that would be, that would be great! I have followed your lines strictly.
Kind Regards,
Nas
I’m having the same problem, I got two identical entries for each reading in the database. One thing I may try is to have the readings posted to different MQTT topics and then into two distinct tables.
Hi,
Basically you have to change the end of your python script where “debug=True” to False.
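For context: with debug=True, Flask's auto-reloader starts app.py a second time, so the module-level MQTT client is created twice and each published reading is handled (and inserted) twice. If you want to keep debug output, you can instead disable just the reloader, which is a standard Flask option; shown here as a fragment of the last line of app.py:

```python
# Keep debug mode but start the script only once,
# so only one MQTT client subscribes to the topic:
app.run(host='0.0.0.0', port=8181, debug=True, use_reloader=False)
```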
Connected with result code 0
Traceback (most recent call last):
File “app.py”, line 68, in
app.run(host=’0.0.0.0′, port=8181, debug=True)
File “/usr/lib/python2.7/dist-packages/flask/app.py”, line 841, in run
run_simple(host, port, self, **options)
File “/usr/lib/python2.7/dist-packages/werkzeug/serving.py”, line 691, in run_simple
s.bind((hostname, port))
File “/usr/lib/python2.7/socket.py”, line 228, in meth
return getattr(self._sock,name)(*args)
socket.error: [Errno 98] Address already in use
This is the error I am facing, what to do??
Once again, restart your Raspberry Pi (turn it off and on), because that channel is already running in the broker. So reboot your Raspberry Pi, run the program once again, and it will work.
Hello,
Thank you for the tutorial. I was wondering if its possible to connect more than one esp8266 modules to only one raspberry pi. Fetching data from all of them at once and displaying on a slightly modified HTML webpage. I haven’t finished implementing all of this yet but I will try to tweak around and find a solution by myself but I would really appreciate if any of ya’ll can share an example or two as it would make things easier for me.
Thanks
Hi Demir.
It is possible, but we don’t have any resources about what you’re looking for.
Lots of success for your project.
Regards,
Sara 🙂
I am trying to follow the tutorial and i am getting the following error:
/home/john/.arduino15/packages/esp8266/hardware/esp8266/2.5.2/cores/esp8266 -I/home/john/.arduino15/packages/esp8266/hardware/esp8266/2.5.2/variants/nodemcu -I/home/john/.arduino15/packages/esp8266/hardware/esp8266/2.5.2/libraries/ESP8266WiFi/src -I/home/john/Arduino/libraries/PubSubClient/src -I/home/john/Arduino/libraries/DHT /home/john/Arduino/libraries/DHT/DHT_U.cpp -o /dev/null
In file included from /home/john/Arduino/libraries/DHT/DHT_U.cpp:15:0:
/home/john/Arduino/libraries/DHT/DHT_U.h:36:29: fatal error: Adafruit_Sensor.h: No such file or directory
#include <Adafruit_Sensor.h>
compilation terminated.
Using library ESP8266WiFi at version 1.0 in folder: /home/john/.arduino15/packages/esp8266/hardware/esp8266/2.5.2/libraries/ESP8266WiFi
Using library PubSubClient at version 2.7 in folder: /home/john/Arduino/libraries/PubSubClient
Using library DHT at version 1.3.5 in folder: /home/john/Arduino/libraries/DHT
exit status 1
Error compiling for board NodeMCU 1.0 (ESP-12E Module).
Hi John.
I think your DHT library is not properly installed or it is not installed.
Try to install the latest version of the DHT library.
Also, you also need the Adafruit_sensor driver library. Make sure you install it too:
I hope this helps.
Regards,
Sara
Hi, Rui
Hi, Sara
I like this sketch, but I imagin one “failure”:
Connected with result code 0
* Running on (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
Connected with result code 0
* Debugger pin code: 191-356-481
DHT readings update
DHT readings update
Last sentence is coming up always twice.
When running app.py, I get the message:
Traceback (most recent call last):
File “app.py”, line 7, in
import paho.mqtt.client as mqtt
ImportError: No module named paho.mqtt.client
Yet running ‘pip install paho-mqtt’ shows paho is already installed.
Any ideas?
Jim
Great work, thank you so much.
Just one question, what if the DHT sensor is connected directly to the raspberry pi?
or another example is if you have to connect a GPS module directly to the Raspberry Pi?
What will it look like? Does one have to go through all these processes or you will skip since you are not using the ESP8266?
Thanks once again for your great efforts
Hi guys,
I have two problems with arduino and mqtt, that is “Attempting MQTT connection… Failed. rc=-4…” and “code 400, message Bad HTTP/0.9 request type…”
How can I fix it? | https://randomnerdtutorials.com/esp8266-publishing-dht22-readings-to-sqlite-database/ | CC-MAIN-2022-21 | refinedweb | 2,002 | 59.3 |
FXG
MXML graphics
FXG is a declarative syntax for defining static graphics. You
typically use a graphics tool such as Adobe Illustrator to export
an FXG document, and then use the FXG document as an optimized component
in your application.
MXML graphics, on the other hand, are a collection of classes
that you use to define interactive graphics. You typically write
MXML graphics code in Flash Builder. You can add interactivity to
MXML graphics code because they use classes in the Flex SDK that
are subclasses of GraphicElement. The result is not as optimized
as FXG.
- Graphics and text primitives
- Fills, strokes, gradients, and bitmaps
- Support for effects such as filters, masks, alphas, transforms, and blend modes
FXG and MXML graphics share very similar syntax. For example,
in FXG, you can define a rectangle with the <Rect> tag.
In MXML graphics, you use the <s:Rect> tag.
Most FXG elements have MXML graphics equivalents, although the attributes
supported on FXG elements are only a subset of those supported on
MXML graphics tags.
The amount of interactivity when using FXG and MXML graphics
is different. If you use MXML graphics, the tags are mapped to their
backing ActionScript implementations during compilation. You can
reference the MXML graphic elements and have greater control over
them in your code. If you use FXG, then you cannot reference instances
of objects within the FXG document from the application or other
components. In addition, you cannot add MXML code to it, nor can
you add ActionScript.
FXG and MXML graphics do not share the same namespace. MXML graphics
use the MXML namespace of the containing document. In most cases,
this is the Spark namespace. An FXG document uses its own namespace.
You cannot use fragments of FXG syntax in an MXML file. You must
either use the FXG document as a separate component or convert the
code to MXML graphics syntax.
You can also draw with the ActionScript drawing APIs, described
in Using
the drawing API. | https://help.adobe.com/en_US/Flex/4.0/UsingSDK/WS145DAB0B-A958-423f-8A01-12B679BA0CC7.html | CC-MAIN-2018-13 | refinedweb | 331 | 64.71 |
Flex 4: What is an alternative for mx:html?Paul The Lad Mar 31, 2010 10:03 AM
There used to be a <mx:HTML/> tag in Flex 3 which allowed html to be rendered, what has become of it in Flex 4? What are the alternatives if any?
1. Re: Flex 4: What is an alternative for mx:html?tw12lveam Mar 31, 2010 11:44 AM (in response to Paul The Lad)
I'm pretty sure that mx:HTML is the current way to make a browser.
Adobe encourages you to use MX components and containers along with Spark components. Because Adobe continues to build components atop the same base class (UIComponent), there should be full interoperability between Spark and MX.
The weird thing is that mx.components.html isn't listed with or without a spark equivilent. It's definitely still there though, I built a little "mini browser" while looking into this myself and it seems to work fine. This is what the apache HTTP_USER_AGENT variable reports (I'm on osx with Flex 4 and air 2)
Mozilla/5.0 (Macintosh; U; Intel Mac OS X; en) AppleWebKit/531.9 (KHTML, like Gecko) AdobeAIR/2.0
2. Re: Flex 4: What is an alternative for mx:html?Paul The Lad Mar 31, 2010 2:58 PM (in response to tw12lveam)
But it just won't work (compile or code complete) in Flash Builder so I am forced to think that there isn't any support directly for the HTML tag anymore.
With the following namespaces:
xmlns:fx=""
xmlns:s="library://ns.adobe.com/flex/spark"
xmlns:ns="library://ns.adobe.com/flex/mx"
xmlns:mx="library://ns.adobe.com/flex/halo"
Neither <mx:HTML/> nor <ns:HTML/> worked. So it must be called something else now right? I wonder what is it called now or rather what's the alternative now?
3. Re: Flex 4: What is an alternative for mx:html?Jason San Jose
Mar 31, 2010 2:59 PM (in response to Paul The Lad)
mx:HTML is only available in AIR projects.
Jason San Jose
Software Engineer, Flash Builder
4. Re: Flex 4: What is an alternative for mx:html?tw12lveam Mar 31, 2010 3:20 PM (in response to Paul The Lad)
the namespace for the component is library://ns.adobe.com/flex/mx which is different from the normal mx ns url.
Keep in mind what Jason said though, the mx.html component is only for AIR.
5. Re: Flex 4: What is an alternative for mx:html?Flex harUI
Mar 31, 2010 4:23 PM (in response to tw12lveam)
It is mx.controls.html
6. Re: Flex 4: What is an alternative for mx:html?Paul The Lad Mar 31, 2010 5:00 PM (in response to Jason San Jose)
Hello Jose,
Thanks for the clarification.
So let me rephrase my question I suppose: What should Flex 4 users (not AIR users) do to render html? And please (I'm praying) don't say TextFlow because it can hardly support more than 7 or 8 html tags when converting them over to TLF.
7. Re: Flex 4: What is an alternative for mx:html?GordonSmith
Mar 31, 2010 5:08 PM (in response to Paul The Lad)
The htmlText property of <mx:TextArea>, which simply relies on the htmlText property of the Player's TextField, supports a limited subset of HTML.
If you are using <s:TextArea> (which is based on the new Text Layout Framework rather than on TextField), you can use TLF's TextConverter class to convert a similarly small subset of HTML to a TextFlow to be rendered by the Spark TextArea.
If you need full HTML support outside of AIR, you have to create a browser iframe and float it on top of your Flex app so that it looks like it is part of the app.
The web players do not include an HTML renderering engine like AIR does because it would make the size of these players too large. For example, I believe WebKit in AIR is more than 10MB of code.
Gordon Smith
Adobe Flex SDK Team
8. Re: Flex 4: What is an alternative for mx:html?JeffryHouser Mar 31, 2010 5:11 PM (in response to Paul The Lad)1 person found this helpful
You could google around for the iFrame trick, which places an iFrame above a Flash movie and makes them look integrated. There is even a commercial component from Drumbeat Insight that helps make this stuff work.
My own personal opinion is that Flex 4 Users who want to render HTML should reconsider their technology choice.
9. Re: Flex 4: What is an alternative for mx:html?Brij Kishor rajput Dec 13, 2011 9:24 PM (in response to Paul The Lad)
How I can user <mx.HTML> into Flash mobile project?. I am using Flashbuilder 4.6 for Mobile App development.
My HTML source code having HTML as well as Javascript code. and I want to show it to the view.
Thanks
Brij Kishor | https://forums.adobe.com/message/2704909 | CC-MAIN-2018-05 | refinedweb | 847 | 73.58 |