text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91
values | source stringclasses 1
value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
My original question stated:
1. Write a function named "digits" that takes an integer argument in the range from 1 to 9 , inclusive, and prints the English name for that integer on the computer screen. No newline character should be sent to the screen following the digit name. The function should not return a value. The cursor should remain on the same line as the name that has been printed. If the argument is not in the required range, then the function should print "digit error" without the quotation marks but followed by the newline character. Thus, for example, the statement digit_name(7); should print seven on the screen; the statement digit_name(0); should print digit error on the screen and place the cursor at the beginning of the next line.
I seen this post:
but not sure why i was asked to do so if its not possible?
#include <cstdio> #include <cstdlib> #include <iostream> using namespace std; void digits(void) { int i; int nvalue; cout << "Enter a Number 1 thru 9: "; for(i=0; i<10; i++) { cin >> nvalue; switch(nvalue) { case 1: cout << "one"; break; case 2: cout << "two"; break; case 3: cout << "three"; break; case 4: cout << "four"; break; case 5: cout << "five"; break; case 6: cout << "six"; break; case 7: cout << "seven"; break; case 8: cout << "eight"; break; case 9: cout << "nine"; break; default: cout << "digit error\n"; } } return; } int main(int nNumberofArgs, char* pszArgs[]) { digits(); system("pause"); return 0; } | https://www.daniweb.com/programming/software-development/threads/336978/c-keep-cursor-on-same-line | CC-MAIN-2018-05 | refinedweb | 244 | 62.01 |
The equation is Beals Conjecture
A^X + B^Y = C^Z
testing the equation over a large range values like 1,000 to 10,000 for A,B,C and 3-10 for X,Y,Z.
Launching kernel with 3D stream and using the index of the stream as values for A,B,C
The Code is well commented .
I dont have the actual hardware so cant test it on a GPU if some one would test the code on their setup that would be great.
Or just go through the code and if there are any suggestion for optimizing it.
I also have a serial C code for reference if needed
Attaching the Brook+ code (kernel + host code)
//////KERNEL CODE/////////////////// #include<stdio.h> /* Equation to solve is A^X + B^Y = C^Z */ kernel int findGcd(int u,int v) { int gcd = 1; int r ; int num1=u; int num2 =v; while (1) { if (num2 == 0) { gcd = num1; break; } else { r = num1 % num2; num1 = num2; num2 = r; } } return gcd; } //Function to keep the values in the range of float kernel float modulusPower(float number,int exponent) { //biggest prime number less than 2^12 //N is taken as less than 2^12 even though float can store 2^24 as max integer //this is because if N is greater it gives wrong result for the mod. float N = 4093.0f; float base = number; int counter = exponent; float result =1.0f; while (counter > 0) { if (counter & 1) { result = fmod((result * base),N); } counter = counter >>1; base = fmod((base * base) ,N); } return result; } kernel void findCounterExample(int startRangeA,int startRangeB,int startRangeC,int X ,int Y,int Z,out int a<>) { int A,B,C; int gcdAB,gcdAC,gcdBC; float N = 4093.0f; //using the index of the output stream as the values for A,B,C A = instance().x+startRangeA; B = instance().y+startRangeB; C = instance().z+startRangeC; // intialising a to 0 so that when we filter the reuslts we can know that 0 means that //location does not have a reuslt. a=0; gcdAB = findGcd(A,B); gcdAC = findGcd(A,C); gcdBC = findGcd(B,C); //if A,B,C are co primes test for the equation if(gcdAB==1 && gcdAC==1 && gcdBC==1) { float sum = modulusPower((float)A,X)+modulusPower((float)B, Y); float cpowerZ = modulusPower((float)C,Z); sum = fmod(sum,N); if(cpowerZ == sum) { // here the possible solution should be stored and returned to host code //have to figure out the way to return the values of A,B,C,X,Y,Z to host // to tackle this problem made the stream equal to 1 so now // on the host code would filter the stream to check if it has value one // the index would be the vavlues of A,B,C a= 1; } } } /// host code ////////////////////////////////////// #include "brookgenfiles\beals.h" #include "conio.h" #include "brook\stream.h" #include <time.h> using namespace brook; //function that filters the results from the stream and write it to the file. void writeResultsToFile(int *counterExample,unsigned int dimension[],int startRange[],int X,int Y,int Z) { int i,j,k; for(i=0;i<dimension[0];i++) for(j=0;j<dimension[1];j++) for(k=0;k<dimension[2];k++) { if(counterExample[i*dimension[1]*dimension[2]+j*dimension[2]+k]!=0) { //write to a file // A^X B^Y C^Z //"(i+startRange[0]^X)+(j+startRange[1]^Y)= (k+startRange[2]^Z)" //this string to be wrten to a file } } } int main(int argc, char ** argv) { int iA,jB,kC,X,Y,Z,range[3]; int startRange =1000; int endRange = 1020; int *counterExample; int exponentRange =10; time_t start, end; unsigned int dimension[3] ={0}; unsigned int dim[] = {10,10,10}; start = time(NULL); //Since the maximum size of stream can be 8192 x 8192. //I am launching kernel with stream size 8192 x 90 x 90 so that max number of threads can be launched. for(iA=0;iA<(endRange - startRange);) { if((endRange - startRange-iA)<8192) dim[0] = endRange - startRange-iA; else dim[0] = 8192; for(jB=0;jB<(endRange - startRange);) { if((endRange - startRange-jB)<90) dim[1] = endRange - startRange-jB; else dim[1] = 90; for(kC=0;kC<(endRange - startRange);) { if((endRange - startRange-kC)<90) dim[2] = endRange - startRange-kC; else dim[2] = 90; // the next loops are for X,Y,Z //the reason for not having the loop in the kernel is then // each thread could possible generate more than one result. // There is no other way to return the result then through streams for( X = 3; X < exponentRange; X++) { for( Y = 3; Y < exponentRange; Y++) { for( Z = 3; Z < exponentRange; Z++) { Stream<int> aStream(3,dim); findCounterExample(startRange+iA,startRange+jB,startRange+kC,X,Y,Z,aStream); //Every pass writes the result of the previous //this check is to see if its the first pass //Not checking for aStream.isSync() since in either case this step needs to be done //By default it will be done in parallel as the control return to host code after //kernel call. if(iA!=0||kC!=0){ writeResultsToFile(counterExample,dimension,range,X,Y,Z); free(counterExample); } counterExample = (int *)malloc(dim[0]*dim[1]*dim[2]*sizeof(int)); streamWrite(aStream,counterExample); // since the result of previous kernel call are filtered need to store the //dimension of the stream with which the previous kernel was called. dimension[0] = dim[0]; dimension[1] = dim[1]; dimension[2] = dim[2]; //Stores the start range for A,B,C for each kernel call. range[0] = startRange+iA; range[1] = startRange+jB; range[2] = startRange+kC; } } } kC+= 90; } jB+=90; } iA+=8192; } // Filtering the results from the stream for the last kernel call . writeResultsToFile(counterExample,dimension,range,X,Y,Z); end = time(NULL); printf("according to difftime()%.2f sec's\n", difftime(end, start)); getch(); return 0; }
Running the code on CPU backhand (thats the only option i have)
for values of A,B,C - from 1000 to 1010
X,Y,Z from 3- 10
it takes 22 second whereas the serial code executes in less than a second | https://community.amd.com/thread/135207 | CC-MAIN-2020-45 | refinedweb | 1,013 | 53.14 |
Ever feel too lazy to get up to turn off THAT one lamp? That lamp which is essential but also irritates you the most. That lamp which after you turn off you race to bed like hell. Well, fear not people I got a perfect solution for you. Clap-O-Switch, a perfect switch which you can control by clapping twice. So no sprinting to bed with the ghost just clap and sleep.Components Required
- ATtiny85 microcontroller / Arduino can also be used
- Sound sensor
- 5 volts relay
- Three pin AC socket
- Three pin C14 input socket with cable
- Pcb board.
- Any colour Leds x2
- A project box
For the project I have used a basic ATtiny85 microcontroller as the brain of the project. A sound sensor is used to sense the clap intensity. An algorithm runs on the microcontroller to sense the particular type of clap. It is then used to actuate a relay, which in turn activates the load (bulb).
Before soldering test the schematics and the code on the breadboard to avoid unnecessary stress.Code
I have attached the code below with the project, but in this section I will describe about the code in detail.
#include <ResponsiveAnalogRead.h>
For this project have included "ResponsiveAnalogRead" library.
ResponsiveAnalogRead is an Arduino library for eliminating noise in analogRead inputs without decreasing responsiveness. You can read more about this library here:
#define sound_pin 5 //use any ADC pin for sound sensor #define relay_pin 3 //Can use any pin for digital logic
Here you can define the pin number where the sensor is attached. Use only ADC pin for sound sensor as we need analog value. Relay can be connected to any pin which will give use digital output.
ResponsiveAnalogRead sound(sound_pin, true);
An object 'sound' of class 'ResponsiveAnalogRead' is created here. This has been done so we could access various functions in the 'ResponsiveAnalogRead' library.
void loop() { sound.update(); //current sound value is updated soundValue = sound.getValue(); currentNoiseTime = millis(); Serial.println(soundValue); if (soundValue > threshold) { // if there is currently a noise if ( (currentNoiseTime > lastNoiseTime + 250) && // to debounce a sound occurring in more than a loop cycle as a single noise (lastSoundValue < 400) && // if it was silent before (currentNoiseTime < lastNoiseTime + 750) && // if current clap is less than 0.75 seconds after the first clap (currentNoiseTime > lastLightChange + 1000) // to avoid taking a third clap as part of a pattern ) { relayStatus = !relayStatus; digitalWrite(relay_pin, relayStatus); delay(300); lastLightChange = currentNoiseTime; } lastNoiseTime = currentNoiseTime; } lastSoundValue = sound.getValue(); }
This is the algorithm for sensing the only two claps and changing the logic level accordingly. The sound value is stored in the variable 'soundValue'. This is updated every loop.
If the 'soundValue' is more than the threshold then it will advance in the if loop. Then it will check four conditions for the clap. These conditions are explained in code itself. When all the given condition are satisfied the relay status will toggle. A delay of 300 milli second is given so that the relay won't make clicking noise.
The algorithm is taken from one of the Instructatbles project. I'll post the the link once I get it.Assemble
I have used a normal project ABS box for my project. I have used 3 pin AC connector C14 as the input socket.
For the output where the load/lamp is connected I have used a normal 3 pin AC female socket where we can easily connect anything we want.
I have also mounted two led in the box. The red led represents if the box(Clap-O-Switch) is ON or OFF. The green led represents the condition of the output.
I have assemble the circuit in ABS project box. Both the AC ground are connected for protection. The ATtiny is powered by a 5v supply which we get by using an adapter circuit to convert 230V AC to 5V DC.Working | https://www.hackster.io/Rushabh/clap-o-switch-4fb036 | CC-MAIN-2022-33 | refinedweb | 645 | 65.93 |
DuPont (Symbol: DD). So this week we highlight one interesting put contract, and one interesting call contract, from the January 2019 expiration for DD. The put contract our YieldBoost algorithm identified as particularly interesting, is at the $50 strike, which has a bid at the time of this writing of $1.27. Collecting that bid as the premium represents a 2.5% return against the $50 commitment, or a 1.4% annualized rate of return (at Stock Options Channel we call this the YieldBoost ).
Turning to the other side of the option chain, we highlight one call contract of particular interest for the January 2019 expiration, for shareholders of DuPont (Symbol: DD) looking to boost their income beyond the stock's 1.9% annualized dividend yield. Selling the covered call at the $95 strike and collecting the premium based on the $2.89 bid, annualizes to an additional 1.9% rate of return against the current stock price (this is what we at Stock Options Channel refer to as the YieldBoost ), for a total of 3.8% annualized rate in the scenario where the stock is not called away. Any upside above $95 would be lost if the stock rises there and is called away, but DD shares would have to advance 16.5% from current levels for that to happen, meaning that in the scenario where the stock is called, the shareholder has earned a 20.1% return from this trading level, in addition to any dividends collected before the stock was called.
Top YieldBoost DD. | http://www.nasdaq.com/article/interesting-january-2019-stock-options-for-dd-cm763047 | CC-MAIN-2017-13 | refinedweb | 257 | 64.3 |
On 2020/10/15 上午7:10, Alex Williamson wrote:
On Wed, 14 Oct 2020 03:08:31 +0000 "Tian, Kevin" <kevin.t...@intel.com> wrote:From: Jason Wang <jasow...@redhat.com> Sent: Tuesday, October 13, 2020 2:22 PM On 2020/10/12 下午4:38, Tian, Kevin wrote:From: Jason Wang <jasow...@redhat.com> Sent: Monday, September 14, 2020 12:20 PM[...] > If it's possible, I would suggest a generic uAPI instead of a VFIOspecific.A question here, is IOASID expected to be the single management interface for PASID?yes(I'm asking since there're already vendor specific IDA based PASID allocator e.g amdgpu_pasid_alloc())That comes before IOASID core was introduced. I think it should be changed to use the new generic interface. Jacob/Jean can better comment if other reason exists for this exception),I think we need a definition of "global" here. It looks to me for vt-d the PASID table is per device.PASID table is per device, thus VT-d could support per-device PASIDs in concept. However on Intel platform we require PASIDs to be managed in system-wide (cross host and guest) when combining vSVA, SIOV, SR-IOV and ENQCMD together. Thus the host creates only one 'global' PASID namespace but do use per-device PASID table to assure isolation between devices on Intel platforms. But ARM does it differently as Jean explained. They have a global namespace for host processes on all host-owned devices (same as Intel), but then per-device namespace when a device (and its PASID table) is assigned to userspace.Another question, is this possible to have two DMAR hardware unit(at least I can see two even in my laptop). In this case, is PASID still a global resource?yeswhile.Yes.One unclear part with this generalization is about the permission. Do we open this interface to any process or only to those which have assigned devices? If the latter, what would be the mechanism to coordinate between this new interface and specific passthrough frameworks?I'm not sure, but if you just want a permission, you probably can introduce new capability (CAP_XXX) for this.I see, so I think the answer is to prepare for the namespace support from the start. (btw, I don't see how namespace is handled in current IOASID module?)The PASID table is based on GPA when nested translation is enabled on ARM SMMU. This design implies that the guest manages PASID table thus PASIDs instead of going through host-side API on assigned device. From this angle we don't need explicit namespace in the host API. Just need a way to control how many PASIDs a process is allowed to allocate in the global namespace. btw IOASID module already has 'set' concept per-process and PASIDs are managed per-set. Then the quota control can be easily introduced in the 'set' level.I'm not sure how such requirement can be unified w/o involving passthrough frameworks, or whether ARM could also switch to global PASID style....So my understanding is that VFIO already: 1) use multiple fds 2) separate IOMMU ops to a dedicated container fd (type1 iommu) 3) provides API to associated devices/group with a containerThis is not really correct, or at least doesn't match my mental model. A vfio container represents a set of groups (one or more devices per group), which share an IOMMU model and context. The user separately opens a vfio container and group device files. A group is associated to the container via ioctl on the group, providing the container fd. The user then sets the IOMMU model on the container, which selects the vfio IOMMU uAPI they'll use. We support multiple IOMMU models where each vfio IOMMU backend registers a set of callbacks with vfio-core.
Yes.
And all the proposal in this series is to reuse the container fd. It should be possible to replace e.g type1 IOMMU with a unified module.yes, this is the alternative option that I raised in the last paragraph."[R]euse the container fd" is where I get lost here. The container is a fundamental part of vfio. Does this instead mean to introduce a new vfio IOMMU backend model?
Yes, a new backend model or allow using external module as its IOMMU backend.Yes, a new backend model or allow using external module as its IOMMU backend.
The module would need to interact with vfio via vfio_iommu_driver_ops callbacks, so this "unified module" requires a vfio interface. I don't understand how this contributes to something that vdpa would also make use of.
If an external module is allowed, then it could be reused by vDPA and any other subsystems that want to do vSVA.If an external module is allowed, then it could be reused by vDPA and any other subsystems that want to do vSVA..Is there any reason that the #PF can not be handled via SVA fd?using per-device FDs or multiplexing all fault info through one sva_FD is just an implementation choice. The key is to mark faults per device/ subdevice thus anyway requires a userspace-visible handle/tag to represent device/subdevice and the domain/device association must be constructed in this new path.The SVA fd is not necessarily opened by userspace. It could be get through subsystem specific uAPIs. E.g for vDPA if a vDPA device contains several vSVA-capable domains, we can: 1) introduce uAPI for userspace to know the number of vSVA-capable domain 2) introduce e.g VDPA_GET_SVA_FD to get the fd for each vSVA-capable domainand also new interface to notify userspace when a domain disappears or a device is detached? Finally looks we are creating a completely set of new subsystem specific uAPIs just for generalizing another set of subsystem specific uAPIs. Remember after separating PASID mgmt. out then most of remaining vSVA uAPIs are simpler wrapper of IOMMU API. Replicating them is much easier logic than developing a new glue mechanism in each subsystem.Right, I don't see the advantage here, subsystem specific uAPIs using common internal interfaces is what was being proposed..). Thoughts?I'm ok with starting with a unified PASID management and consider the unified vSVA/vIOMMU uAPI later.Glad to see that we have consensus here. :)I see the benefit in a common PASID quota mechanism rather than the ad-hoc limits introduced for vfio, but vfio integration does have the benefit of being tied to device access, whereas it seems it seems a user will need to be granted some CAP_SVA capability separate from the device to make use of this interface. It's possible for vfio to honor shared limits, just as we make use of locked memory limits shared by the task, so I'm not sure yet the benefit provided by a separate userspace interface outside of vfio. A separate interface also throws a kink is userspace use of vfio, where we expect the interface is largely self contained, ie. if a user has access to the vfio group and container device files, they can fully make use of their device, up to limits imposed by things like locked memory. I'm concerned that management tools will actually need to understand the intended usage of a device in order to grant new capabilities, file access, and limits to a process making use of these features. Hopefully your prototype will clarify some of those aspects. Thanks, Alex
_______________________________________________ iommu mailing list iommu@lists.linux-foundation.org | https://www.mail-archive.com/iommu@lists.linux-foundation.org/msg45756.html | CC-MAIN-2022-05 | refinedweb | 1,263 | 63.19 |
Hi everyone. I am almost finished with this assignment but I am getting some extra numbers at the very end of my output file after the integers are sorted. the output file looks like:
My main looks like this:My main looks like this:Code:0 1 2 3 . . . 98 99 1297040174
I have to sort 3 different sized arrays:I have to sort 3 different sized arrays:Code:#include "header.h" int main(int argc, const char *argv[]) { int i = 0, N = 0, *array, k, dummy; //hrtime_t start, end; FILE *input; FILE *output; input = fopen("100_random.txt", "r"); if (input == NULL) { fprintf(stderr, "Cannot open file for reading\n"); exit(EXIT_FAILURE); } while (!feof(input)) { fscanf(input, "%d", &dummy); N++; } if (fclose(input)) { fprintf(stderr, "Error closing the file.\n"); exit(EXIT_FAILURE); } array = (int*) malloc(N* sizeof(int)); if(array == NULL) printf("Error - could not allocate an array.\n"); else { input = fopen("100_random.txt", "r"); for(k = 0; k < N; k++) fscanf(input, "%d", &(array[k])); fclose(input); } //start = gethrtime(); //insertion_sort(array,i); //end = gethrtime(); //printf("The average time for the insertion_sort function was: %11d nsec.\n", (end - start)/n); //selection_sort(array, i); bubble_sort(array, k); //mod_bubble_sort(array, i); //quick_sort(array, 0, i - 1); //merge_sort(array, 0, i -1); output = fopen("out.txt", "w+"); for (i = 0; i < N; i++){ fprintf(output, "%d\n", array[i]); } free (array); array = NULL; return (EXIT_SUCCESS); }
1) 100 ints
2) 1000 ints
3) 10000 ints
I noticed I do NOT get the extra numbers at the end of the 10000 int sort. Just the 100 and 1000.
I have to say, that the output is not do to the sorting functions. It happened when I just wrote to the file NOT using the sort funtions.
Anyone have any idea how to solve this and what might be wrong?
The sort functions should be correct as I have tested them. But if you need to see them, let me know.
UPDATE: I ran this on our unix system and I do NOT get the extra line with numbers on it. I'm thinking it might be CodeBlocks. If I am incorrect, I am still willing to listen to reasons and solutions. | http://cboard.cprogramming.com/c-programming/151567-funny-output-end-textfile-after-sort.html | CC-MAIN-2014-52 | refinedweb | 366 | 63.7 |
Earlier this year I talked about my “Bit-Box” – a custom keyboard, program launcher, Stream Deck clone, device.
The box was handmade, but I had purchased a 3d printed plate to hold the switches. A little later I had the idea of making my own plate with wood. Initial tests, chiseling out a square hole for a single switch worked pretty well, but as soon as I tried to cut out several adjacent holes, the wood between the holes kept chipping out.
I started thinking about using a CNC to do this, and eventually picked up a Sainsmart 3018 Prover.
It took a couple of hours to assemble. Pretty easy actually. And it only came with some relatively useless v-shaped engraving bits, so I ordered a set of flat endmills in different sizes. Since then I’ve picked up a bunch of different bits.
In terms of software, I’ve tried a few different options.
One is Inventables Easel. This is a web app made for the Inventables X-Carve cnc machine. But it can export gcode that can be used with the 3018. Easel has some decent features for free, but you have to pay for full functionality.
The other one I’ve used is Carbide Create. This is made for Carbide3D’s Shapeoko machines. It’s desktop software and is totally free. It also exports gcode. I like Carbide Create a lot better.
The basic flow is to create a set of simple 2d vector shapes – rectangles, circles, paths – then apply tool paths to each shape. For example, you’d specify that you want to use this rectangle as an outline shape that is cut 1/4″ deep. Or you want to use this circle as a pocket, 1/8″ deep. A pocket cut cuts the entire inner area of a shape to a certain depth. You can also do boolean operations to combine or subtract different shapes. It’s super basic, but really does most of what you’d need.
If you want to really go crazy, you can get into 3d modeling with something like FreeCAD or Fusion360, and then create tool paths from those models. A much bigger learning curve and probably overkill until you get into some really complex stuff.
I use Candle to send the gcode to the machine itself.
MediaBox
My goal was to create a “MediaBox”. This is just what I call a custom mini keyboard with media keys – play/pause, next, previous tracks, volume up/down/mute. Six keys in all. Here’s an overview of all my attempts from one of the original hand-cut versions, some test cuts, a couple of failed attempts, through the final working build:
The initial holes for the keys worked perfectly. A 0.555 inch square hole is all you need. Spacing is something like 0.205 inches between keys.
The main design issue beyond that was where to fit the Arduino board and how to route the usb cable. I was initially using 1/2″ black walnut. On the top were the holes for the keys. I then flipped it over and created a recess on the bottom. But the half inch depth was really too shallow. And my original design was just too small once I attached the cable.
So I switched over to 3/4″ walnut and made the whole thing just a bit larger.
Wired it up much the same as I did for the BitBox. Did some finish sanding and applied some tung oil, glued on a leather bottom.
The software presented a bit of a problem. The Arduino keyboard library does not provide a way to send media key codes. Luckily there is another 3rd party library, HID-Project.
You can add this library to your project by going to sketch / manage libaries and searching for “hid project”.
Here’s the code I came up with:
#include <hid-settings.h> #include <hid-project.h> // Define Arduino pin numbers for buttons and LEDs #define VOL_DOWN 2 #define VOL_MUTE 4 #define VOL_UP 3 #define PLAY_PREV 5 #define PLAY_PAUSE 6 #define PLAY_NEXT 7 const long debounceTime = 30; unsigned long lastPressed = 0; boolean A, a, B, b, C, c, D, d, E, e, F, f; void setup() { pinMode(VOL_DOWN, INPUT_PULLUP); pinMode(VOL_MUTE, INPUT_PULLUP); pinMode(VOL_UP, INPUT_PULLUP); pinMode(PLAY_PREV, INPUT_PULLUP); pinMode(PLAY_PAUSE, INPUT_PULLUP); pinMode(PLAY_NEXT, INPUT_PULLUP); a = b = c = d = e = f = false; Consumer.begin(); BootKeyboard.begin(); } void loop() { if (millis() - lastPressed <= debounceTime) { return; } lastPressed = millis(); A = digitalRead(VOL_DOWN) == LOW; B = digitalRead(VOL_MUTE) == LOW; C = digitalRead(VOL_UP) == LOW; D = digitalRead(PLAY_PREV) == LOW; E = digitalRead(PLAY_PAUSE) == LOW; F = digitalRead(PLAY_NEXT) == LOW; if (A && !a) { Consumer.write(MEDIA_VOL_DOWN); } if (B && !b) { Consumer.write(MEDIA_VOL_UP); } if (C && !c) { Consumer.write(MEDIA_VOL_MUTE); } if (D && !d) { Consumer.write(MEDIA_PREV); // alternately MEDIA_REWIND } if (E && !e) { Consumer.write(MEDIA_PLAY_PAUSE); } if (F && !f) { Consumer.write(MEDIA_NEXT); // alternately MEDIA_FAST_FORWARD } a = A; b = B; c = C; d = D; e = E; f = F; }
This was adapted from a few other sample projects I found, as well as the code I had for the BitBox. It works great.
Want one?
I made this for myself, but I’d love to make some more. The materials aren’t cheap though. Well over $30 for the wood, leather, Arduino, keys and key caps. Then the time for cutting, finishing, soldering. I’ve got to work out pricing and different options, and the best way to sell them, but contact me if you’re interested.
I’d also be open to selling just the wooden box, either finished or straight off the mill and you can buy the other parts and put it together yourself. It’s a fun project.
Or… if you have a cnc already, I’m going to post the Carbide Create file I used, with instructions, for free. Check back soon for that. | https://www.bit-101.com/blog/2020/12/learning-cnc-and-making-a-mediabox/ | CC-MAIN-2021-25 | refinedweb | 971 | 74.79 |
never tried the HTTPDAdapter previously, but am seeing some problems :
C:\Webware-0.8.1\WebKit\Adapters>HTTPAdapter.py
Traceback (most recent call last):
File "C:\Webware-0.8.1\WebKit\Adapters\HTTPAdapter.py", line 28,
in ?
(host, port) = string.split(open(os.path.join(webKitDir, 'address.text')).read(), ':')
IOError: [Errno 2] No such file or directory: 'C:\\Webware-0.8.1\\WebKit\\address.text'
So I created address.text and added 'localhost:4040'... now I get :
C:\Webware-0.8.1\WebKit\Adapters>HTTPAdapter.py
Traceback (most recent call last):
File "C:\Webware-0.8.1\WebKit\Adapters\HTTPAdapter.py", line 45,
in ?
from WebKit.HTTPServer import HTTPHandler, run
ImportError: cannot import name run
The system path is being setup properly at the beginning of HTTPDAdapter.py,
I set WebwareDire and added a debugging statement to make sure. So I open
HTTPServer.py and find no occurances of 'run'. That's a problem right?
So I wonder if it the 'run' import is a leftover and comment it out.
#from WebKit.HTTPServer import HTTPHandler, run
from WebKit.HTTPServer import HTTPHandler
I try to run and now <NameError: name 'BaseHTTPServer' is not defined>
So I guess I need to add that import, and do... at which point :
D:\_webdev3\Webware-0.8.1\Webware-0.8.1\WebKit\Adapters>HTTPAdapter.py
inserting WebwareDir to path : D:\_webdev3\Webware-0.8.1\Webware-0.8.1\
Traceback (most recent call last):
File "D:\_webdev3\Webware-0.8.1\Webware-0.8.1\WebKit\Adapters\HTTPAdapter.py", line 148,
in ?
main()
File "D:\_webdev3\Webware-0.8.1\Webware-0.8.1\WebKit\Adapters\HTTPAdapter.py", line 141,
in main
if daemon:
UnboundLocalError: local variable 'daemon' referenced before assignment
Looks like daemon is being used with fork which doesn't work in Windows.
Anyway, I properly initialize daemon to 0 before the try there. I run again...
Oops, the last statement in main() is run(), but the run() defined in this
file takes at least one argument, the address, presumably server address.
After a couple tries, I learn what kind of address it wants and use :
run(('127.0.0.1',4040))
I try again, it runs and I am delivered what I guess is a punchline :
"PS: This adapter is experimental and should not be used in
a production environment"
I visit the site, and see a stack trace in the console showing me I need
to 'import threading', so I do and now seem to be getting somewhere.
Exception happened during processing of request from ('127.0.0.1', 2320)
...stacktrace...
AttributeError: HTTPHandler instance has no attribute 'doTransaction'
I give up and go to install mod_webkit as I did originally months ago.
-Tomi
<flamebait>
P.S. This is quite honestly the same sort of thing that happens with
every other open source project I attempt to use, especially ones
implemented in or using scripting language. Yes I understand why they
happen, but I would rather they not, as an end-user. Coincidentally,
I often prefer software which is commercially funded. Any excuse can
be made for this, but they do not matter to me, only one thing does.
As a programmer, I think I leave projects in a well defined state
when they are being developed and a break or interruption occurs.
I suppose occurrances such as these make business-by-service using open
source successful. Personally, I prefer to sell products, not services.
Service of products, to me, is a degradation of selling products you
support basically for free.
I'd like to understand what the hype has always been about, but have
never read anything that does this other than those which say service
based business is the one true way. It seems almost a conspiracy. You
horde of 2nd party developers must provide service to customers and must
give all the other developers your products for free. It feels fuedal
and restrictive. I suppose I could go on forever...
I just don't understand; and feel ESR is a psychological misfit.
</flamebait>
I agree to receive quotes, newsletters and other information from sourceforge.net and its partners regarding IT services and products. I understand that I can withdraw my consent at any time. Please refer to our Privacy Policy or Contact Us for more details | https://sourceforge.net/p/webware/mailman/webware-discuss/thread/732418059.20030820130843@comcast.net/ | CC-MAIN-2017-22 | refinedweb | 718 | 59.3 |
katip
A structured logging framework.
See all snapshots
katip appears in
Module documentation for 0.8.2.0
- Katip
- Katip.Core
- Katip.Format
- Katip.Monadic
- Katip.Scribes
Katip
Katip is a structured logging framework for Haskell.
Kâtip (pronounced kah-tip) is the Turkish word for scribe.
Features
Structured: Logs are structured, meaning they can be individually tagged with key value data (JSON Objects). This helps you add critical details to log messages before you need them so that when you do, they are available. Katip exposes a typeclass for log payloads so that you can use rich, domain-specific Haskell types to add context that will be automatically merged in with existing log context.
Easy to Integration: Katip was designed to be easily integrated into existing monads. By using typeclasses for logging facilities, individual subsystems and even libraries can easily add their own namespacing and context without having any knowledge of their logging environment.
Practical Use: Katip comes with a set of convenience facilities built-in, so it can be used without much headache even in small projects.
A
Handlebackend for logging to files in simple settings.
A
AnyLogPayloadkey-value type that makes it easy to log structured columns on the fly without having to define new data types.
A
Monadicinterface where logging namespace can be obtained from the monad context.
Multiple variants of the fundamental logging functions for optionally including fields and line-number information.
Extensible: Can be easily extended (even at runtime) to output to multiple backends at once (known as scribes). See
katip-elasticsearchas an example. Backends for other forms of storage are trivial to write, including both hosted database systems and SaaS logging providers.
Debug-Friendly: Critical details for monitoring production systems such as host, PID, thread id, module and line location are automatically captured. User-specified attributes such as environment (e.g. Production, Test, Dev) and system name are also captured.
Configurable: Can be adjusted on a per-scribe basis both with verbosity and severity.
Verbosity dictates how much of the log structure should actually get logged. In code you can capture highly detailed metadata and decide how much of that gets emitted to each backend.
Severity AKA “log level” is specified with each message and individual scribes can decide whether or not to record that severity. It is even possible to at runtime swap out and replace loggers, allowing for swapping in verbose debug logging at runtime if you want.
Battle-Tested: Katip has been integrated into several production systems since 2015 and has logged hundreds of millions of messages to files and ElasticSearch.
Examples
Be sure to look in the examples directory for some examples of how to integrate Katip into your own stack.
Contributors
- Ozgun Ataman
- Michael Xavier
- Doug Beardsley
- Leonid Onokhov
- Alexander Vershilov
- Chris Martin
- Domen Kožar
- Tristan Bull
- Aleksey Khudyakov
Changes
0.8.2.0
0.8.1.0
- Export
logLoc. Credit to Brian McKennaStr.
0.3.1.4
- Loosen deps on aeson to allow 1.1.0.0
0.3.1.3
- Fix build on windows
0.3.1.2
- Add some missing test files
0.3.1.1
- Fix some example code that wasn’t building
- Make FromJSON instance for Severity case insensitive.
0.3.1.0
- Add support for aeson 1.0.x
- Add Katip.Format.Time module and use much more efficient time formatting code in the Handle scribe.
0.3.0.0
- Switch from
regex-tdfa-rcto
regex-tdfa.
- Add
katipNoLoggingcombinator.
- Add
Semigroupinstances.
- Drop
ToJSONsuperclass requirement fro
ToObject. Instead,
ToObjectwill provide a default instance for types with an instance for
ToJSON. This gets us to the same place as before without having to add a broader instance for something that’s only going to show up in logs as an Object.
- Add a simple MVar lock for file handle scribes to avoid interleaved log lines from concurrent inputs.
0.2.0.0
- Add GHC implicit callstack support, add logLoc.
- Drop lens in favor of type-compatible, lighter microlens.
- Renamed
logEnvNsto clearer
logEnvApp
- Added
katipAddNamespaceand
katipAddContext
- Fixed nested objects not rendering in Handle scribe.
- LogContexts Monoid instance is now right-biased rather than left biased. This better fits the use case. For instance
ctx1 <> ctx2will prefer keys in
ctx2if there are conflicts. This makes the most sense because functions like
katipAddContextwill
mappendon the right side.
- LogContext internally uses a
Seqinstead of a list for better complexity on context add.
- Improved documentation.
0.1.1.0
- Set upper bounds for a few dependencies.
- Add ExceptT instance for Katip typeclass
0.1.0.0
- Initial release | https://www.stackage.org/nightly-2019-06-08/package/katip-0.8.2.0 | CC-MAIN-2019-26 | refinedweb | 758 | 58.48 |
23 May 2012 21:08 [Source: ICIS news]
HOUSTON (ICIS)--?xml:namespace>
The current IPA contract range is 88-90 cents/lb, as assessed by ICIS, but the range likely will be assessed down by at least 5 cents/lb before the end of May.
Buyers said IPA contract prices recently had drifted downwards by 3 cents/lb, following indications that values had weakened by an average of 5 cents/lb earlier in the month.
Price softening continued to stem primarily from sharply weaker May feedstock propylene values but also from lacklustre demand and the threat of cheaper imports.
Monthly contract values typically move in the same direction as the previous month’s chemical-grade propylene (CGP) contract, which settled flat in April but dropped by 10 cents/lb for May.
Although market conditions are putting downward pressure on IPA, most sources said the market was essentially balanced.
US IPA suppliers include Shell Chemicals, Dow Chemical, Sasol, LyondellBasell and ExxonMobil.
($1 = €0.79)
For more on IP | http://www.icis.com/Articles/2012/05/23/9563101/us-ipa-values-continue-to-slip-on-softer-propylene-sources.html | CC-MAIN-2015-18 | refinedweb | 167 | 60.24 |
Live Plot data by reading COM port data
Hi all,
I'm using LoPy with Pysense connected to the Pc via USB.
Is there a way to read the accelerometer data sent by the Pysense and plot it with Spyder for example ?
I would like to use matplotlib library and use the data from the COM port to make a live plot.
This is the code I use :
import serial import time ser=serial.Serial(port='COM4',baudrate=9600) while True: print("try") time.sleep(10) s=ser.read(100) #reading up to 100 bytes print(s) ser.close()
And I get "access denied". When the Pysense is sending data, I can't read it because the port is already in use.
SerialException: could not open port 'COM4': PermissionError(13, 'Access denied.', None, 5)
So I guess while Atom is using the port Spyder can't access to it ? Am I right ?
How should I do that ?
Is there an easier way to print data ? I don't have SD card and I don't want to use one.
Cheers,
ThomasP
That was quick.
I just wanted to make sure I was doing it properly.
Works great. Thank you.
@thomasp said in Live Plot data by reading COM port data:
So I guess while Atom is using the port Spyder can't access to it ? Am I right ?
Close Atom, when you want to read form the serial port on the PC. You may also use a WiFi connection and sockets. | https://forum.pycom.io/topic/2810/live-plot-data-by-reading-com-port-data | CC-MAIN-2020-40 | refinedweb | 251 | 85.18 |
Okay, so, working through this tutorial, hoping someone here can give me a better understanding of what code I'm ripping from the tutorial and mashing into my own setup..
First thing I added was...
Okay, so, working through this tutorial, hoping someone here can give me a better understanding of what code I'm ripping from the tutorial and mashing into my own setup..
First thing I added was...
So, my simple (not actually simple) goal is to create a junky pong game. I'm not really interested in a third dimension (3d) yet, but I was questioning whether or not to bother with ortho view or...
For my first trick, try to reproduce whatever the heck I was doing right 13 years ago:
Collision testing and response, for all active objects, in one function
...
What is it like 12 years later now and I was able to log in here like it was yesterday?
I'm starting to get old...
Time to start programming again!
This sounds way more fun to do than all of the other suggestions. I can just figure a way to send a stringstream to the ofstream, and create the stringstream from the data members in the employee...
Well, I do not want to use a text file because ultimately I want my code to be a bit more re-usable.
And honestly boost seems to be the best thing for this, as it is minimally intrusive, as I've...
Do I really need to install boost to do serialization, isn't there a standard?
Like this?
class employee
{
public:
employee(std::string, float);
const std::string& get_name();
I've stumbled on this little example: vector::reserve - C++ Reference
What I'm trying to do is stick a bunch of employee objects in a vector, save it to a file, and load the file and use the...
I think I'm getting ahead of myself, I need to simply streamline my functions for managing a vector of employees. I can't worry about making a generic object database that is super ooped up and all...
Ohhh, I see.
Would it be a bad idea to overload an edit function that is in a class called interface?
I would have to relearn how to overload functions D:
could you give me an easy example?
Well here's the thing, I think.
If I take edit out of employee, then some other system needs to have edit.
And if another system needs to have edit, it needs to know how to edit whatever we...
Okay, so right now, menu handles the object itself, but interface needs to remember where it is in the program.
For instance, begin the program with a context menu which allows us to get...
#include <vector>
#include <string>
#include <sstream>
#include <iostream>
using namespace std;
class Menu
{
public:
void display_options()
I want something like this:
interface.generate_menu()
Menu.add_context(object.get_context())
Menu.display_content();
Menu.get_input();
Menu.execute_selection();
Ultimately:...
Maybe I could use a class, which has generic responses, like cancel, exit, or what have you.
And use overloaded operators to add choices, Hmmmm.... The wheels are turning.
Perhaps I would need...
I'm looking for an alternative solution to this mess:
int main()
{
int selection;
string input;
bool goodinput = false;
bool programexit = false;
I've made some adjustments according to the responses to the thread, thanks for your input everyone!
Employee.h:
#ifndef EMPLOYEE_H
#define EMPLOYEE_H
#include <string>
using namespace...
Lol Elysia, I don't know the difference between employee* and *employee, I may have at one point.
I'm not sure if polymorphism would really be applicable in this project, simply because of it's...
Employee is a struct, that I think I will make a class. Elysia, in my vector of employees should i store employee or *employee. Should I let the vector construct manage my memory in the heap?
If...
That's just my way to differentiate between local variables and passed variables, any other suggestions for that?
Also, I'm wondering... I want to put my functions that edit and create employees in a single header. Should I wrap them in a class, what would I call it? What would I call the header?
void edit(employee * _employee)
{
assert (_employee !=NULL);
printf ("%d\n",*_employee);
string input;
int selection;
bool goodinput = false; //loop conditional, input must match...
employee* get_employee(string _name)
{
for (unsigned int i = 0; i < employees.size();)
{
if (_name.compare(employees[i]->name) == 1)
{
return employees[i];
}
else
{ | https://cboard.cprogramming.com/search.php?s=9cf5b7e9eb3012ec98b04820bfcc9d7f&searchid=1782358 | CC-MAIN-2019-26 | refinedweb | 752 | 64.91 |
)
Christiaan Baes (chrissie1)
Summary
- Active from December 2007 to September 2017
- 568 posts published
Table of Contents
2017 (2 posts)
- Testing your resx files to see if all languages have the same items. on September 7, 2017
- Elasticsearch and my setup. Part 1: the why and the how. on June 20, 2017
2016 (4 posts)
- Running Nunit tests from your code on November 9, 2016
- Jetbrains rider the VB.Net IDE we have all been waiting for on February 11, 2016
- Using Heka to forward logs to the elasticsearch on February 10, 2016
- Putting files into year folders using powershell on February 10, 2016
2015 (7 posts)
- Nancy, Get, Post, Bind and javascript on August 18, 2015
- Nancy and the mystery of the 405: method not allowed on March 18, 2015
- Nancy and localization: Testing on March 6, 2015
- qunit and my nancy project. on February 27, 2015
- Nancy and localization: The better cookie approach on February 26, 2015
- Nancy and localization: The cookie approach on February 26, 2015
- Nancy and localization: how I think it works on February 26, 2015
2014 (7 posts)
- Monodevelop and Nancy and VB.Net on June 24, 2014
- Monodevelop and VB.Net and Ubuntu: how to install on June 24, 2014
- Nancy razorview to PDF on June 23, 2014
- Nancy and the case of the camel on March 20, 2014
- Dapper, Gallus and SQl server compact edition on January 26, 2014
- Gallus, a slightly different dapper on January 25, 2014
- SynchronizationContext the difference between Post and Send on January 18, 2014
2013 (41 posts)
- Elasticsearch and .Net on December 17, 2013
- Elastic HQ on December 16, 2013
- Elasticsearch on December 14, 2013
- NDC London 2013 on December 8, 2013
- Ncrunch V2 beta is here on November 26, 2013
- Adding PEVerify as a unittest on October 28, 2013
- VB.Net is moving up again on October 16, 2013
- Wasabi a sinatra/nancyfx inspired webframework for kotlin on October 6, 2013
- Belgian Visug event about VB11 on October 4, 2013
- NUnit Assert.AreEqual and decimals and doubles on September 18, 2013
- WIX toolset – Putting a file in a folder in your installfolder on September 17, 2013
- Balance on September 12, 2013
- Jetbrainsday September 7th 2013 in Malmö Sweden. on September 9, 2013
- Microsoft IS evil on August 20, 2013
- Teamcity and the Nunit build step on July 12, 2013
- One task running and one only. on July 3, 2013
- Encrypting identity section of the web.config for a Nancy website on July 3, 2013
- Using RenderView to render my custom errorpages in Nancy on June 26, 2013
- TinyIoC/Nancy and registering multiple implementations of the same interface the easy way. on June 24, 2013
- Publishing with teamcity on June 24, 2013
- Lessthandot turned 5 today on June 1, 2013
- There are now Visual studio project templates for Nancy on May 1, 2013
- Fody Anotar weaving and logging combined into one package. on April 25, 2013
- Trying out Gibraltar’s Loupe with Nancy on March 23, 2013
- Lessthandot has a new server on March 21, 2013
- Crazy stuff I do with Nancy on March 15, 2013
- Nancy, IIS 7 and the PUT command on March 13, 2013
- Another solution for my caching problem with servicestack.text, dapper and sql server. on March 9, 2013
- Redis and VB.Net on March 3, 2013
- All new ChocolateyGUI on February 9, 2013
- Making your own Local root CA and using it as an SSL certificate for intranet websites. on February 6, 2013
- Nancy and custom error pages. on February 5, 2013
- Nancy and adding the windows authenticated user to your page. on February 4, 2013
- Angularjs and ng-grid on February 3, 2013
- Trying out Angularjs on February 3, 2013
- Nancy and jquery datatables; formatting data on February 3, 2013
- Nancy and jquery datatables on February 2, 2013
- Nancy and jtable: formatting your columns on January 28, 2013
- Nancy and jtable: paging and sorting on January 27, 2013
- Nancy and jtable on January 26, 2013
- Making an interface for plugwise with nancy on January 6, 2013
2012 (102 posts)
- ServiceStack.Text has a nice extension method called Dump and it has a few friends on December 31, 2012
- Nancy and returning a pdf on December 30, 2012
- Nancy and C# uploading and showing an image on December 28, 2012
- Batch renaming files with powershell on December 27, 2012
- Support for segments in easyhttp on December 27, 2012
- Nancy and VB.Net: the login/logout thing on December 22, 2012
- Nancy and VB.Net: Using easyhttp as our client on December 22, 2012
- 500 on December 21, 2012
- Nancy and VB.Net: Forms authentication on December 20, 2012
- Nancy and VB.Net: The bootstrapper on December 15, 2012
- Nancy and VB.Net: testing your modules. on December 13, 2012
- Nancy and VB.Net: the razor view engine on December 12, 2012
- Nancy and VB.Net: getting data in your page on December 11, 2012
- Nancy and VB.Net on December 11, 2012
- Smalltalk on December 8, 2012
- Kotlin and Maven on December 3, 2012
- Kotlin and testing on December 2, 2012
- Your queue is not threadsafe and sometimes that can give unwanted sideeffects on November 28, 2012
- Kotlin: val and var on November 25, 2012
- Kotlin the data class and the class on November 24, 2012
- SignalR, Quartz.Net and ASP.Net: part 2 the webclient on November 19, 2012
- SignalR, Quartz.Net and ASP.Net on November 19, 2012
- The backup thing on November 17, 2012
- Json.Net ‘s JObject to dynamic in VB.Net on November 11, 2012
- SignalR and VB.Net: Hubs and objects on November 11, 2012
- SignalR and VB.Net on November 10, 2012
- Another thing you should never do in VB.Net on November 7, 2012
- The conditional attribute on November 5, 2012
- Does Async-Await make the simple things easier? on November 4, 2012
- Threads are always tricky, even for simple things, or you might think they are simple anyway. on November 3, 2012
- Learning how to program also means you have to try what you learned. on November 1, 2012
- VB.Net default properties or indexers in C# on October 31, 2012
- An update on the Piglet perfomance tests. on October 24, 2012
- Parsing text with Piglet: making all the tests pass on October 24, 2012
- Parsing text with Piglet making the first tests pass on October 23, 2012
- Why is it important to know that using has no empty catch and check for null in the finally? on October 21, 2012
- What does the Using statement do? on October 21, 2012
- VB.Net make one thread wait for the next now with Async Await on October 17, 2012
- VB.Net make one thread wait for the next on October 17, 2012
- Ncrunch goes commercial on October 15, 2012
- Another obscure difference between C# and VB.Net you probably don’t know about on October 12, 2012
- Full outer join requires some thinking before use on October 10, 2012
- Easyhttp and servicestack, making the mspec tests better on October 7, 2012
- Using servicestack for the easyhttp integration tests on October 7, 2012
- Servicestack CredentialsAuthentication and easyhtpp of course: Part 3 on October 5, 2012
- Servicestack CredentialsAuthentication and easyhtpp of course: Part 2 on October 4, 2012
- Servicestack CredentialsAuthentication and easyhtpp of course: Part 1 on October 4, 2012
- Making automated tests for my resharper plugin on September 29, 2012
- My wish for the next update of VS2012: more options for the preview tab on September 23, 2012
- I made a big mess of my yield blogpost and I apologize to my readers for that. on September 21, 2012
- VB.Net is rapidly becoming the most popular language in the world. on September 19, 2012
- I blog to learn on September 19, 2012
- New in VB11, the yield keyword on September 18, 2012
- New In VS2012: New Solution Explorer View on September 18, 2012
- Resharper plugin: rename class to filename on September 16, 2012
- Topshelf and VB.Net on September 16, 2012
- Good service is nice, not needing service is better. on September 13, 2012
- Visual studio 2012 has no more Setup project and the alternative they offer is crap. on September 4, 2012
- Microsoft Report Viewer 2012 Runtime redistributable package for Visual studio 2012 RTM on September 4, 2012
- Using Start8 in windows 8 on September 2, 2012
- VB10 dynamic bug is gone in VB11 on August 16, 2012
- VB.Net: Guess the result – The explanation on July 30, 2012
- VB.Net: Guess the result on July 29, 2012
- My very first pull request on July 22, 2012
- servicestack, restservice and easyhttp on July 21, 2012
- Servicestack, VB.Net and some easyhttp on July 12, 2012
- An extension method for getting rid of those cross thread UI calls in winforms on July 10, 2012
- Patterns are a disease on July 6, 2012
- Agent mulder for resharper or how to see if a type is registered by your DI container on July 2, 2012
- Trying out RavenHQ with VB.Net on May 24, 2012
- Androidgame using andengine version 1 on May 16, 2012
- Simple.Data with VB.Net sample solution up on Github on May 9, 2012
- Simple.Data and VB.Net The final queries on May 8, 2012
- Simple.Data and complex types: one to many on May 6, 2012
- Simple.Data and complex types: many to many on May 4, 2012
- Simple.Data and complex types: many to one on May 4, 2012
- More Simple.Data with VB.Net: adding fields and tables on May 1, 2012
- Seeing the sql Simple.Data generates on April 29, 2012
- Simple.Data and VB.Net the beginning on April 29, 2012
- 200 VB.Net video tutorials and many other free tutorials on April 15, 2012
- Printing to a zebra printer that uses ZPL on windows using VB.Net on April 11, 2012
- Refactor code by OdeToCode on April 9, 2012
- Another decompiler: ILSpy. on March 27, 2012
- Justdecompile or dotpeek on March 27, 2012
- Initialize arrays in VB.Net – follow up on March 27, 2012
- Initialize arrays in VB.Net on March 27, 2012
- Why one DefaultPageSettings is not the same as the next. on March 12, 2012
- Sorry, I am not a programmer. Sorry, then you should not be writing code for me. on March 6, 2012
- Writing helpfiles on March 6, 2012
- DNA, security and privacy on February 29, 2012
- Using the posh-git code in my own scripts to make my git experience with Visual studio even better. on February 16, 2012
- What does the script: do in powershell when used with a function. on February 16, 2012
- Powershell and tabexpansion on February 15, 2012
- Studioshell more powershell for Visual studio on February 10, 2012
- posh-git in the nuget Package manager console on February 2, 2012
- To interface or not to interface that is the question on January 27, 2012
- Multithreading pings and pausing and resuming and grouping them. on January 23, 2012
- Basic4Android first try on January 14, 2012
- Saturday SQL nugget #1 on January 13, 2012
- I only learn from my own mistakes and I have fun doing it. on January 12, 2012
- The this developer’s life player for android 2.2 and above is now on the market and open source. on January 8, 2012
- So many saxparser examples are just wrong and here is why. on January 8, 2012
2011 (150 posts)
- Hanselminutes player on the market and for android 2.2 and above on December 30, 2011
- The android alertdialog is not the same as a messagebox in winforms. on December 24, 2011
- The Hanselminutes podcast player for Android 3.2 version 0.2 on December 10, 2011
- The Hanselminutes podcast player for Android 3.2 on December 5, 2011
- Android and barcode scanning on December 3, 2011
- People about VB.Net on November 29, 2011
- Teamcitysharp v0.1 released on November 24, 2011
- The EndlessLoopQueue on November 21, 2011
- Differences between Autofac and Structuremap with VB.Net the use of Lazy(Of T) on November 20, 2011
- Differences between Autofac and Structuremap with VB.Net on November 20, 2011
- Picking a new IoC container/Replacing structuremap on November 20, 2011
- Occupy VB on November 18, 2011
- Vb.Net: Addhandler and lambdas, watch out. on November 18, 2011
- SQLCop is now available via chocolatey on November 16, 2011
- No need for a "for loop" when you have linq on November 13, 2011
- use pdfviewernet in a project for .net framework 4.0 on November 11, 2011
- Codemash ticket give away on November 10, 2011
- pdfviewernet, where have you been all my life? on November 10, 2011
- How stupid can you be: me and threads on November 9, 2011
- RoboGuice 2.0 beta 2 and a notes application on November 5, 2011
- VB.Net, bitwise operations and Enums (Enumerations) on November 5, 2011
- A case for PEVerify and a bug in the VB.Net compiler on October 30, 2011
- The roslyn syntax visualizer tool window on October 23, 2011
- First try with Roslyn CTP: Refactoring – Move class to its own file part 2 on October 23, 2011
- First try with Roslyn CTP: Refactoring – Move class to its own file on October 23, 2011
- First try with Roslyn CTP: Reading sourcefiles.part 2 on October 22, 2011
- First try with Roslyn CTP: Reading sourcefiles. on October 22, 2011
- We don’t need another hero… or do we? on October 15, 2011
- The new features in VB.Net for the .net framework 4.5 or VB11, I’m not impressed. on October 8, 2011
- Massive (the micro-ORM), VB.Net and Named argument cannot match a paramarray parameter on October 4, 2011
- Using Roboguice to inject your dependencies in an Android app on October 2, 2011
- Massive (The Micro-ORM), winforms and VB.Net on September 29, 2011
- VBCop or project Roslyn on September 28, 2011
- Conferences in Belgium: SQL Server days 2011 and Agile .Net 2011 on September 28, 2011
- In VB11 the namespace difference between VB and C# will be … smaller. on September 28, 2011
- Android: Testing if an activity is finished when pressing a button. on September 25, 2011
- How to make your VB.Net code completely unreadable but still compile. on September 23, 2011
- Another hidden gem: CopyFromScreen on September 19, 2011
- Join and SkipWhile on September 19, 2011
- I use MSTest instead of Nunit when doing winforms testing. on September 18, 2011
- Moving a git repository hosted on a share on September 15, 2011
- Making a chocolatey package for chocolateyGUI. on September 10, 2011
- Chocolatey GUI on September 7, 2011
- Plaintextoffenders.com on September 6, 2011
- When is a LambdaExpression Body a UnaryExpression in C#? on August 22, 2011
- MSDN subscriber downloads is getting a new look on August 16, 2011
- Multithreading pings now with Visual studio Async CTP SP1 refresh on August 14, 2011
- Visual Studio Async CTP videos on August 14, 2011
- How a VB.Net programmer can annoy his C# colleague: The curious case of case insensitivity on August 14, 2011
- The Like keyword in VB.Net and the equivalent in C# on August 10, 2011
- VB.Net isn’t C# on August 7, 2011
- Book review: jQuery Mobile First Look on August 4, 2011
- Running sandcastle project from teamcity on August 4, 2011
- Some more string concatenation weirdness this time with Enums in VB.Net and a difference with C# on August 2, 2011
- Why you should never concatenate string with + in VB.Net and how resharper can help. on August 1, 2011
- Learning Ruby on windows: First steps in rails – adding an image to our plants on July 31, 2011
- Learning Ruby on windows: First steps in rails scaffolding on July 30, 2011
- Learning Ruby on windows: First steps in rails using Rubymine. on July 30, 2011
- Getting chocolatey or other powershell scripts to work when behind a proxy on July 27, 2011
- Don’t forget to clean up your teamcity mess on July 25, 2011
- My ruby posts on July 25, 2011
- Chocolatey: apt-get for windows on July 24, 2011
- Learning Ruby on windows: step 1.3 inheritance on July 24, 2011
- Learning Ruby on windows: step 1.2 interfaces on July 24, 2011
- Learning Ruby on windows: step 1.1 properties/accessors on July 24, 2011
- Learning Ruby on windows: step 1 classes on July 24, 2011
- Rubymine getting the test-unit to work better. on July 23, 2011
- Learning Ruby on windows: step 0.2 running ruby interactively on July 23, 2011
- Learning Ruby on windows: Step 0.1 testing the hello world on July 23, 2011
- Learning Ruby on windows: step 0 on July 23, 2011
- Fluent security with VB.Net on July 22, 2011
- fluent security and making it work in VB.Net on July 22, 2011
- Project Kotlin a competitor for Scala on July 20, 2011
- DevDirective a new kid on the block on July 20, 2011
- Me and Open source on July 8, 2011
- Security: Don’t blame the victim on July 8, 2011
- Resharper 6 Custom patterns for Nunit on July 1, 2011
- Resharper 6 is out on July 1, 2011
- Should you be explicit or just use the defaults (beware contains VB.Net) on June 29, 2011
- impromptu-interface instead of clay for VB.Net on June 28, 2011
- Making a jQuery Mobile style for a PHPBB forum on June 26, 2011
- Trying out jQuery Mobile on our blog on June 20, 2011
- Pure randomness or bad caching on June 18, 2011
- node.js using mustache.js for templating on June 13, 2011
- node.js making it a bit more readable on June 13, 2011
- Trying out node.js on June 13, 2011
- Stop moving my buttons. on June 8, 2011
- Do not concatenate strings String.Format instead and a bit of resharper magic to help us. on June 7, 2011
- Handling of events a small difference between C# and VB.Net on May 31, 2011
- VB.Net and the past: The return thing and C# on May 31, 2011
- Resharper 6 EAP and VB.Net: First impressions on May 31, 2011
- Upgrading from teamcity 6.0 to 6.5 on May 27, 2011
- Resharper is useful even for VB.Net on May 25, 2011
- Automatic properties and their backing fields big difference between C# and VB.Net on May 25, 2011
- Happy 20th birthday Visual Basic. on May 21, 2011
- My selection of videos to watch from teched 2011 NA on May 19, 2011
- Using WNetGetConnectionA to get the UNCPath on May 19, 2011
- Did you notice there is a new .Net framework version? on May 18, 2011
- Winforms richtextbox and a memoryleak on May 13, 2011
- Printing to a zebra printer in VB.Net even when on Windows 7 on May 10, 2011
- The mystery of Clay and VB.Net a better explanation on May 8, 2011
- The mystery of Clay and Anonymous types in VB.Net on May 8, 2011
- Using clay in VB.Net or how to be dynamic on May 7, 2011
- Nuget makes life so much easier on May 1, 2011
- Religious battles in IT-land on May 1, 2011
- 300 on April 30, 2011
- Techdays 2011 Belgium on April 30, 2011
- Window Clippings 3.0 versus Gadwin on April 25, 2011
- RavenDB and changing our model on April 25, 2011
- Trying out RavenDB with VB.Net and images on April 24, 2011
- Trying out RavenDB with VB.Net on April 23, 2011
- Unit tests are not enough for validation of your method in scientific applications on April 21, 2011
- Parallel.For and a difference between C# and VB.Net on April 15, 2011
- Multithreaded ping shown in a grid C# version on April 15, 2011
- Multithreading pings and showing them in a grid in VB.Net on April 14, 2011
- NuGet stats on April 11, 2011
- Lessthandot is moving to a new server on March 29, 2011
- Powershell in your VB.Net application the better way on March 21, 2011
- Powershell in your VB.Net application on March 20, 2011
- Git trouble because I screwed up something on March 16, 2011
- Gadwin Printscreen if you need to take lots of screenshots of your application. on March 8, 2011
- When do you have enough documentation? on March 7, 2011
- The best way to keep your settings? on March 7, 2011
- Should I abandon VB.Net? on February 28, 2011
- Git checkout and Visual studio a solution with powerconsole on February 27, 2011
- powerconsole and removing the git from my git routine on February 26, 2011
- Power console for Visual Studio 2010 on February 26, 2011
- Dynamic or static languages on February 25, 2011
- Powershell, WmiObject and a corrupt repository on February 25, 2011
- My first steps in powershell and stopping and starting a process on February 25, 2011
- MailMessage and changing the name of an attachment on February 14, 2011
- Collection and array initializers in VB.Net and the difference with C# on February 11, 2011
- Making an XMLLayout for Log4Net with an xsl file to make it human readable. on February 9, 2011
- Find NoLock in an sql statement and remove it. on February 9, 2011
- Why do you make unreadable logfiles? on February 9, 2011
- Design for Colour Blindness on February 2, 2011
- Mailmessage and the Newline that was being ignored on February 1, 2011
- Writing a simple website with webmatrix, razor and VB.Net on February 1, 2011
- Trying out Webmatrix and Razor with VB.Net on January 31, 2011
- The black wallpaper problem in windows 7 and a group policy on January 28, 2011
- Rdlc reports with images and setting the backgroundcolor of a textbox on January 27, 2011
- Belgian webcamp 2011 on January 24, 2011
- Encoding to UTF-8 in php on January 23, 2011
- Forking nuget to solve the proxy problem on January 13, 2011
- Setting up a home mediaserver on January 10, 2011
- Craftsmanship on January 10, 2011
- Improving the code for determining the average color of a fiber on January 7, 2011
- Determining the average color of a fiber on January 5, 2011
- Aforge.Net and deciding the average color of a fiber on January 4, 2011
- Cropping a zoomed image in VB.Net on January 3, 2011
2010 (69 posts)
- Fighting the automatic spam on December 31, 2010
- I know you can do better than that on December 28, 2010
- Digsby and how to do it right on December 13, 2010
- TeamCity 6 and Dotcover on December 9, 2010
- Teamcity 6 upgrade from teamcity 5 on December 8, 2010
- The last one on October 6, 2010
- Writersblock on October 2, 2010
- Sometimes I don’t want to be DRY on September 29, 2010
- A few reasons why I’m not yet moving to WPF on September 27, 2010
- nHibernate making updates and inserts much faster on September 27, 2010
- TeamCity and deployement on September 24, 2010
- Moving to TeamCity on September 23, 2010
- How I work with git on September 23, 2010
- Working with Git my first impressions on September 10, 2010
- NSubstituteAutoMocker for StructureMap on August 16, 2010
- Git, TortoiseGit, Github and the rest on August 15, 2010
- Don’t test your mocking framework. on August 11, 2010
- System.IO.Path.GetFileName does not do what it advertises on August 10, 2010
- Refactoring to a pattern. Theming in WPF. on August 4, 2010
- Thank you VB 10 for adding better anonymous delegate support on August 2, 2010
- You need to know a lot when you want to be a developer. on July 30, 2010
- T4 template to make multiple factories on July 28, 2010
- T4 template to make a factory class. on July 24, 2010
- WPF and the splashscreen on July 23, 2010
- WPF and wordwrap/wordtrimming in a listbox on July 16, 2010
- WPF and Silverlight difference on July 15, 2010
- dotCover 1.0 Beta on July 12, 2010
- Service locator pattern, why and why not. on July 6, 2010
- StructureMap and Generics on July 5, 2010
- AndAlso and OrElse on June 22, 2010
- VB.Net List.Select did not give me the result I expected on June 18, 2010
- Review: NHibernate 2 – Beginner’s Guide on June 15, 2010
- Tuples in VB10 on June 7, 2010
- Not all memoryleaks are the same and USER objects. on June 4, 2010
- .Net 4 And the reportviewer on June 2, 2010
- Happy birthday LessThandot. on June 1, 2010
- .Net 4 And Nunit on May 28, 2010
- Microsoft reporting weirdness on May 5, 2010
- Winforms ReportViewer not working in VS2010 on May 3, 2010
- Adding a menuItem to the NArrangeExtension. on April 30, 2010
- My first VS2010 Extension on April 29, 2010
- Multithreaded FizzBuzz made easy on April 27, 2010
- Extension manager in VS2010 on April 27, 2010
- Calculating Fibonaci numbers with .Net 4.0’s BigInteger on April 26, 2010
- Linq to nHibernate why you should always check the sql it produces. on April 22, 2010
- Why is Equals considered advanced? on April 20, 2010
- Things I can’t seem to do with Linq to NHibernate on April 19, 2010
- Converting a VMWare 7 VM to run on ESXi on April 16, 2010
- Look, I made VS2010 crash. on April 14, 2010
- linq to nhibernate and subqueries. on April 13, 2010
- Linq to nHibernate on April 13, 2010
- Techdays 2010 on April 2, 2010
- MongoDB persisting System.Drawing.Image on March 29, 2010
- Trying out MongoDB persisting objects on March 28, 2010
- Time to try MongoDB on March 27, 2010
- Use the right collection part 2 on March 16, 2010
- Use the right collection on March 15, 2010
- Conditional formating in a DataGridView with databinding on March 11, 2010
- Finalbuilder, Nunit and the AppDomainUnloadedException on March 8, 2010
- StructureMap, Windows Forms, User controls and BuildUp on February 24, 2010
- VMWare Workstation 6.5 and the weird network problem. on February 23, 2010
- SVN and treeconflicts on February 16, 2010
- InternalsVisibleTo, VB.Net, unittesting and windows forms components on February 15, 2010
- Raising an event on a parametered event with Rhino mocks and returning null on February 15, 2010
- Th QNap TS-419P Turbo NAS on February 3, 2010
- Some spammers deserve a bit of respect on January 17, 2010
- Changing the app.config on install on January 14, 2010
- Running IE6, IE7 and IE8 on win7 on January 10, 2010
- jTwitter, jQuery and getting the number of followers on January 2, 2010
2009 (90 posts)
- Jquery a tooltip and the need for some ajax on December 31, 2009
- jquery and a pinch of ajax on December 30, 2009
- php and making a summary and closing tags on December 12, 2009
- Some more Google wave invites on November 25, 2009
- Google Wave Invitations up for grabs on November 11, 2009
- Validating a domain model/objects on November 5, 2009
- Little exam question on November 3, 2009
- Installing SQL Server Express 2008 Management Studio on November 2, 2009
- Installing sql server express 2008 is easier than it used to be on November 2, 2009
- Opening jp2 (jpeg2000) files in .Net on October 30, 2009
- Setting trust between 2 windows server 2003 domains results in "The specified user already exist" on October 21, 2009
- Application.StartupPath and Nunit on October 21, 2009
- Reportviewer, winforms and objectdatasources on September 23, 2009
- Adding barcodes to rdlc reports on September 21, 2009
- Firefox and Lessthandot on September 16, 2009
- Microsoft reportviewer and setting displaymode to printlayout on September 15, 2009
- Microsoft reportviewer: domainobjects and ReportDataSource needs a collection. on September 14, 2009
- Firefox and having multiple tabs as your homepage on September 6, 2009
- ITextSharp and the HeaderFooter on the first page on August 19, 2009
- A Firefox extension I’ve been wanting for a long time on August 18, 2009
- How To: Savitsky-Golay smoothing on August 13, 2009
- Upgrading to win 7 can take a while on August 11, 2009
- Arranging your VB.Net and C# code with NArrange on July 31, 2009
- Expanding a VMWare disk the easy way on July 30, 2009
- TTDing an interview question on July 30, 2009
- Reading from a HD that was formatted with ext3 from Windows Vista on July 2, 2009
- The use of descriptive variable names is forbidden on July 2, 2009
- Using Byref on July 1, 2009
- A great use of Goto on July 1, 2009
- Some VMware weirdness on June 30, 2009
- Installing windows 7 on an Acer aspire One 751h on June 24, 2009
- The Fujitsu Siemens T1010 A first impression on June 14, 2009
- Hiding the mouse pointer for the tablet pen on June 13, 2009
- What so special about Optional/named parameters on June 11, 2009
- Creating a Sequential Guid in .Net on June 10, 2009
- An arrowhead anti-pattern on June 10, 2009
- So close and yet… on June 8, 2009
- Finding a bug I read about but forgot on June 5, 2009
- An nUnit testfixture file template for resharper that also conforms to stylecop laws on May 28, 2009
- Getting the UnderLyingType of a Generic Object in VB.Net on May 27, 2009
- Using VB10 to make a spectral library on May 22, 2009
- Where to write my ini files in Vista and Windows 7 on May 20, 2009
- Someone ate my gtalk iGoogle plugin on May 20, 2009
- Excel and the divide by 100 problem on May 16, 2009
- I really like StyleCOp and why it’s only for C# on May 14, 2009
- Exceptions can send you on the wrong path on May 13, 2009
- Announcing PostSharp 1.5 on May 12, 2009
- VB.Net: declaring events a different way on May 11, 2009
- How a profiler can help resolve performance problems. on May 10, 2009
- An Aha moment and NUnit categories on May 9, 2009
- NHProf, bags, the select N+1 problem and solving it on May 8, 2009
- Me and NHProf: the beginning on May 7, 2009
- nhibernate, nunit 2.4.6 and log4net on May 6, 2009
- How to make your VMware VM crash on May 6, 2009
- The language thing on May 4, 2009
- ADO.Net use the interfaces on May 3, 2009
- VB.net checking the performance of Linq and Max on May 2, 2009
- How to make a spamfree paypal donate button on April 15, 2009
- A good book and the observer pattern for VB.Net on April 13, 2009
- IE8 doesn’t like me, it wants me to guess what to click. on March 21, 2009
- IE8 thinks Google is designed for older browsers on March 21, 2009
- Asking the right question on March 21, 2009
- Me and ASP.Net MVC: even less magic strings but opinionated on March 20, 2009
- Me and ASP.Net MVC: getting to grips with the ActionLink on March 20, 2009
- Me and ASP.NET MVC: Less magic strings. on March 19, 2009
- Threading isn’t easy on March 18, 2009
- Me and ASP.Net MVC on March 17, 2009
- Better exception handling on March 6, 2009
- A new machine and installing it on March 5, 2009
- Windows forms and structuremap the single instance form on February 21, 2009
- A Recipe for Success: The Command Pattern, StructureMap and a dash of Generics on February 15, 2009
- VB.Net: Reflection and a private field marked with WithEvents on February 9, 2009
- Installing SQL Server 2008 express on Windows XP SP 2 without reading the manual on February 8, 2009
- Using the structuremap Automocker on February 4, 2009
- Making the simplest of domainmodels can be harder than you think on February 2, 2009
- Another little difference between VB.Net and C# on January 29, 2009
- Jon Skeet talks about being a micro-celebrity on January 24, 2009
- Rhino mocks and raising an event that has parameters and the AAA syntax the VB.Net version on January 20, 2009
- Rhino mocks and raising an event that has parameters and the AAA syntax on January 20, 2009
- StructureMap is way cool even in VB.Net on January 17, 2009
- StructureMap is way cool. on January 16, 2009
- The internet is scary on January 14, 2009
- automapping fluent nhibernate and VB.Net on January 12, 2009
- B2evo and the tagcloud on January 11, 2009
- SQL server Linked server between 2005 64bits and a 2000 32 bits server. on January 9, 2009
- My new developer machine on January 7, 2009
- Altering the Schema of a stored procedure in SQL-server 2005 on January 6, 2009
- DDD is not all or nothing on January 6, 2009
- My todo list on January 5, 2009
- Writing a plugin for b2evo on January 5, 2009
2008 (95 posts)
- Challenges on December 10, 2008
- VB.Net and C# – the difference in OO syntax part 4 on December 8, 2008
- VB.Net and C# – the difference in OO syntax part 3 on December 3, 2008
- VB.Net and C# – the difference in OO syntax part 2 on December 2, 2008
- VB.Net and C# – the difference in OO syntax part 1 on November 30, 2008
- VB.Net: drawing a string on a panel that is created at runtime. on November 28, 2008
- Some bugs just never go away on November 27, 2008
- Isolator For SharePoint on November 25, 2008
- How threads can mess up the order of things. on November 24, 2008
- How to export an outlook calendar to word on November 24, 2008
- The donkey and unit testing on November 21, 2008
- Sometimes I could hit myself. on November 21, 2008
- Why English is really important to us non-english speakers. on November 18, 2008
- VisualSVN on November 13, 2008
- iGoogle and a bit too much Ajax or javascript. on November 12, 2008
- Rhino mocks AAA syntax test if an event was raised on November 4, 2008
- New features in Visual Basic 10 on October 30, 2008
- .Net and a problem with unicode and ToLower and ToUpper on October 28, 2008
- How to write a simple dependency resolver webcast on October 27, 2008
- The difference between Invoke and BeginInvoke on October 23, 2008
- Silverlight 2 is cool. Or is it? on October 20, 2008
- Why isn’t DDD normal in .Net? on October 19, 2008
- Another great Firefox 3 extension on October 18, 2008
- The law of demeter on October 14, 2008
- An interview with Roy Osherove author of "The Art of Unit Testing" on October 12, 2008
- VB.Net: Rhino mocks 3.5 and Lambda expressions and AssertWasCalled not always working on October 7, 2008
- VB.Net: Visual studio intellisense doesn’t like lambda expressions on October 6, 2008
- VB.Net: Rhino Mocks and the AAA syntax on October 6, 2008
- The Microsoft Extensibility framework on September 30, 2008
- tricking the visual studio windows forms designer on September 26, 2008
- VB.Net: Configure StructureMap defaults and non-defaults on September 23, 2008
- VB.Net: Setter injection with StructureMap and the FI on September 22, 2008
- VB.Net: A better configuration for structureMap on September 21, 2008
- Why singletons are bad on September 20, 2008
- Behaviour-driven development and User stories on September 17, 2008
- How to flash your bios with an Asus P5E MB and NO floppy disk on September 12, 2008
- VB.Net: Some fun with enums on September 12, 2008
- Nunit: StatThread and WindowsForms Controls on September 11, 2008
- VB.Net: Adding ContainsAny and ContainsAll to ICollection(of T) part 2 on September 10, 2008
- VB.Net: Adding ContainsAny and ContainsAll to ICollection(of T) on September 9, 2008
- VB.Net: Linq and I keep forgetting on September 9, 2008
- Google chrome the new browser, my first impressions. on September 3, 2008
- VB.Net: Resharper New version 4.1 on September 3, 2008
- VB.Net: Resharper SurroundTemplate for Try Catch on September 3, 2008
- Google chrome in Beta on September 3, 2008
- VB.Net: Print a DataGridView with RustemSoft’s Print Class and an Extension method on September 2, 2008
- VB.Net: Printing the content of a RichtextBox on September 2, 2008
- VB.Net: Check if an Event Gets Raised By a Method on August 29, 2008
- VB.Net and Resharper and the Property generation thing on August 27, 2008
- Let Resharper do the heavy lifting for you. on August 26, 2008
- .Net has a Set implementation since the 3.5 framework. Use it. on August 20, 2008
- VB.Net: Using PostSharp and Log4PostSharp on August 19, 2008
- Mediawiki and how to get the latest articles on August 16, 2008
- VB.Net: Single Instance application the better way on August 14, 2008
- Meeting up with Oren Eini aka Ayende Rahien on August 7, 2008
- Visual Studio: Working with the designer on August 2, 2008
- StructureMap: Configure everything before using Objectfactory.GetInstance on July 28, 2008
- Some of the blogs I read on July 27, 2008
- nHibernate performance against stored procedures: Conclusion on July 25, 2008
- nHibernate performance against Stored procedures part 3 on July 25, 2008
- nHibernate performance against Stored procedures part 2 on July 24, 2008
- nHibernate performance against Stored procedures part 1 on July 23, 2008
- nHibernate performance against Stored procedures on July 22, 2008
- VB.Net: Binding a complex interface(inherited) to a datagridview on July 17, 2008
- VB.Net: Extending the image to create a thumbnail the easy way. on July 15, 2008
- VB.Net: Extending String with extension methods for VB9 on July 15, 2008
- VB.Net: creating an instance of a class using reflection on July 9, 2008
- C# Sorting winforms controls using linq on June 24, 2008
- PHP: Writing a DAL the DAO way part 2 on June 21, 2008
- Solution folders are not being sorted alphabeticaly in Visual studio 2008 on June 20, 2008
- Nice articles on The Codeproject this week on June 17, 2008
- Printing to a zebra printer from VB.Net on June 16, 2008
- Nice article on ASP.NET MVC on June 15, 2008
- Interview/Exam questions part 3 on June 14, 2008
- PHP: writing a dal the DAO way part 1 on June 14, 2008
- Interview/Exam questions part 2 on June 12, 2008
- Dear PHP-God(s)(ess)(esses) on June 11, 2008
- Interview/exam questions on June 11, 2008
- SQL Dependency tracker and pretty pictures on June 10, 2008
- Visual studio got tired on June 9, 2008
- What I test in a DomainObject in VB.NET using nunit on June 9, 2008
- Wrong mapping Component in nHibernate using VB.Net on June 8, 2008
- Domainentity and IsDirty on June 6, 2008
- Why does Adobe make it hard to distribute Adobe reader? on June 5, 2008
- Yes, I do make stupid mistakes on June 4, 2008
- Use solution folders on June 3, 2008
- Php sucks on May 21, 2008
- Unused references on May 17, 2008
- Resharper Template Nunit TestFixture for VB.Net on May 17, 2008
- Resharper File template for Nunit testing with Structuremap support for VB.Net on May 17, 2008
- Find the last Backup taken in SQL Server on May 17, 2008
- Do automated restore tests on your SQL-Backups. on May 17, 2008
- File unlocker for windows on May 17, 2008
- Listen to yourself on February 29, 2008
- VS 2008 ErrorMessage on February 28, 2008
2007 (1 posts)
- The concepts of OOP on December 21, 2007 | https://blogs.lessthandot.com/author/christiaan-baes-chrissie1 | CC-MAIN-2021-21 | refinedweb | 6,536 | 57.4 |
omfg
if you make a part 2
1.make it non-hateful
2.take SBCs .FLA from clock crews website,bc the SBC in this game sucked
3.add sound
4.animate it normally.
Rated 0 / 5 stars
wtf!!!
what the poop was up wit dat! totally gay, man your gay!!! quit online movies FOREVER
Rated 0.5 / 5 stars
Ugh.
Look, if this is your first movie, then don't embarrass yourself by contributing to the shit pile of anti-clock movies. You don't know the first thing about animation. This horrible looking.. thing (seeing as how I couldn't rightly refer to it as a movie or a game,) deserves a negative 2. I was being kind by giving it a one. Get more animation skills. Get a sence of humor. learn more about the clock crew.
Rated 3 / 5 stars
Not bad for what it is.
Things like this (Kill the Old Man From Zelda, etc) used to be loved by people - I'll never understand why, always thought they frankly sucked, but this is as good as any of those, except that the gfx could use a little work, but you siad that yourself.
Good luck for other projects. :)
Rated 2.5 / 5 stars
def a first work
needs alot of work but sence it is your first project i wont try to hurt your feelings. im not a wizz at the whole flash-movie-makin prosess. i just watch alot of it. id talk to people that are really good at it and get tips from them. i didnt blam it cuz it is ur first work. keep at it and hopefully we'll some better more improved version of it. good luck man | http://www.newgrounds.com/portal/view/56645 | CC-MAIN-2015-27 | refinedweb | 290 | 95.37 |
To indicate that a Worker has accepted or rejected a Task, you make an HTTP
POST request to a Reservation instance resource. To do that, we need the
TaskSid, which is available via the web portal, and the ReservationSid. The
ReservationSid was passed to our Assignment Callback URL when a Worker was
reserved for our Task. Using the ngrok inspector page at, we can easily find the request parameters sent from
TaskRouter and copy the ReservationSid to our clipboard. **
The Reservation API resource is ephemeral and exists only within the context of a Task. As such, it doesn't have its own primary API resource and you'll find it documented in the Tasks resource section of the reference documentation.
With our trusty TaskSid and ReservationSid in hand, let's make another REST API request to accept our Task Reservation. We'll add on to our run.py to accept a reservation with our web server. Remember to substitute your own account details in place of the curly braces.
from flask import Flask, request, Response from twilio.rest import Client empty 200 response""" resp = Response({}, status=200, mimetype='application/json') return resp @app.route("/create_task", methods=['GET', 'POST']) def create_task(): """Creating a Task""" task = client.taskrouter.taskrouter.workspaces(workspace_sid) \ .tasks(task_sid) \ .reservations(reservation_sid) \ .update(reservation_status='accepted') print(reservation.reservation_status) print(reservation.worker_name) resp = Response({}, status=200, mimetype='application/json') return resp if __name__ == "__main__": app.run(debug=True)
If you'd like to use curl instead, put the following into your terminal:
curl -X POST{WorkspaceSid}/Tasks/{TaskSid}/Reservations/{ReservationSid} -d ReservationStatus=accepted -u {AccountSid}:{AuthToken}
Examining the response from TaskRouter, we see that the Task Reservation has been accepted, and the Task has been assigned to the our Worker Alice:
{... "worker_name": "Alice", "reservation_status": "accepted", ...}
If you don't see this, it's possible that your Reservation has timed out. If this is the case, set your Worker back to an available Activity state and create another Task. To prevent this occuring, you can increase the 'Task Reservation Timeout' value in your Workflow configuration.
With your Workspace open in the TaskRouter web portal, click 'Workers' and you'll see that Alice has been transitioned to the 'Assignment Activity' of the TaskQueue that assigned the Task. In this case, "Busy":
Hurrah! We've made it to the end of the Task lifecycle:
Task Created → eligible Worker becomes available → Worker reserved → Reservation accepted → Task assigned to Worker.
In the next steps, we'll examine more ways to perform common Task acception and rejection workflows.
Next: Accept a Reservation using Assignment Instructions »
** If you're not using ngrok or a similar tool, you can modify run.py to print the value of ReservationSid to your web server log. Or, you can use the Tasks REST API instance resource to look up the ReservationSid based on the TaskSid.
We all do sometimes; code is hard. Get help now from our support team, or lean on the wisdom of the crowd browsing the Twilio tag on Stack Overflow. | https://www.twilio.com/docs/taskrouter/quickstart/python/reservations-accept-rest | CC-MAIN-2018-22 | refinedweb | 503 | 54.83 |
Chapter 2 Elementary Programming
Motivations In the preceding chapter, you learned how to create, compile, and run a Java program. Starting from this chapter, you will learn how to solve practical problems programmatically. Through these problems, you will learn Java primitive data types and related subjects, such as variables, constants, data types, operators, expressions, and input and output.
Objectives • To write Java programs to perform simple calculations (§2.2). • To obtain input from the console using the Scanner class (§2.3). • To use identifiers to name variables, constants, methods, and classes (§2.4). • To use variables to store data (§§2.5-2.6). • To program with assignment statements and assignment expressions (§2.6). • To use constants to store permanent data (§2.7). • To declare Java primitive data types: byte, short, int, long, float, double, and char (§§2.8.1). • To use Java operators to write numeric expressions (§§2.8.2–2.8.3). • To display current time (§2.9). • To use short hand operators (§2.10). • To cast value of one type to another type (§2.11). • To compute loan payment (§2.12). • To represent characters using the char type (§2.13). • To compute monetary changes (§2.14). • To represent a string using the String type (§2.15). • To become familiar with Java documentation, programming style, and naming conventions (§2.16). • To distinguish syntax errors, runtime errors, and logic errors and debug errors (§2.17). • (GUI) To obtain input using the JOptionPane input dialog boxes (§2.18).
Introducing Programming with an Example Listing 2.1 Computing the Area of a Circle • The algorithm for calculating the area of a circle can be described as follows: 1. Read in the circle’s radius. 2. Compute the area using the following formula: area = radius * radius * p 3. Display the result.
animation Trace a Program Execution declaring variables public class ComputeArea { /** Main method */ public static void main(String[] args) { double radius; double area; // Assign a radius radius = 20; // Compute area area = radius * radius * 3.14159; // Display results System.out.println("The area for the circle of radius " + radius + " is " + area); } } memory radius no value area no value allocate memory for radius and area Java provides simple (primitive) data types for representing integers, real numbers, characters, and Boolean types.
animation Trace a Program Execution assign 20 to radius public class ComputeArea { /** Main method */ public static void main(String[] args) { double radius; double area; // Assign a radius radius = 20; // Compute area area = radius * radius * 3.14159; // Display results System.out.println("The area for the circle of radius " + radius + " is " + area); } } radius 20 area no compute area and assign it to variable print a message to the console
Reading Input from the Console 1. Create a Scanner object Scanner input = new Scanner(System.in); 2. Use the methods next(), nextByte(), nextShort(), nextInt(), nextLong(), nextFloat(), nextDouble(), or nextBoolean() to obtain to a string, byte, short, int, long, float, double, or boolean value. For example, System.out.print("Enter a double value: "); Scanner input = new Scanner(System.in); double d = input.nextDouble(); ComputeAreaWithConsoleInput ComputeAverage Run Run
Prompt Line 9 displays a string "Enter a number for radius: " to the console. This is known as a prompt, because it directs the user to enter an input. Your program should always tell the user what to enter when expecting input from the keyboard. Note: print vs. println
Identifiers • An identifier is a sequence of characters that consist of letters, digits, underscores (_), and dollar signs ($). • An identifier must start with a letter, an underscore (_), or a dollar sign ($). It cannot start with a digit. • An identifier cannot be a reserved word. (See Appendix A, “Java Keywords,” for a list of reserved words). For example An identifier cannot be true, false, ornull. (they are reserved words) • An identifier can be of any length. • Use descriptive names
Example • Which of the following identifiers are valid? Which are Java keywords? miles, Test, a++, ––a, 4#R, $4, #44, apps class, public, int, x, y, radius
Variables • Variables are used to store values to be used later in a program. • They are called variables because their values can be changed. // Compute the first area radius = 1.0; area = radius * radius * 3.14159; System.out.println("The area is “ + area + " for radius "+radius); // Compute the first area radius = 1.0; area = radius * radius * 3.14159; System.out.println("The area is “ + area + " for radius "+radius);
Declaring Variables • To use a variable, you declare it by telling the compiler its name as well as what type of data it can store. • Variable declaration tells the compiler to allocate appropriate memory space for the variable based on its data type. • The syntax for declaring a variable is: • datatype variableName; • Later you will be introduced to additional data types int x; // Declare x to be an integer variable; double area; // Declare area to be a double variable; char a; // Declare a to be a character variable;
Declaring Variables • If variables are of the same type, they can be declared together, as follows: • datatype variable1, variable2, ..., variablen; • The variables are separated by commas. • You can declare a variable and initialize it in one step. inti, j, k; // Declare i, j, and k as int variables intcount; count = 1; int count = 1; inti=5 , j=3 , k=0 ; // note the comas and semicolon a variable must be declared and initialized before it can be used.
Assignment Statements • After a variable is declared, you can assign a value to it by using an assignment statement • The syntax for assignment statements is as follows: • variable = expression; • An expression represents a computation involving values, variables, and operators int x = 1, z; // Assign 1 to x; double radius = 1.0; // Assign 1.0 to radius; char a = 'A'; // Assign 'A' to a; int y = 5 * 10; x = y + 4; y = y + 1; 1 = x; // Wrong x=y=z=10;
Constants final double PI = 3.14159; • constant, represents permanent data that never changes (syntax as follows): final datatype CONSTANTNAME = VALUE; • There are three benefits of using constants: • you don’t have to repeatedly type the same value if it is used multiple times; • if you have to change the constant value you need to change it only in a single location in the source code • a descriptive name for a constant makes the program easy to read.
Reading Numbers from the Keyboard
Integer Division +, -, *, /, and % 5 / 2 yields an integer 2. 5.0 / 2 yields a double value 2.5 5 % 2 yields 1 (the remainder of the division):
Problem: Displaying Time Write a program that obtains hours and minutes from seconds.
Number Literals A literal is a constant value that appears directly in the program. For example, 34, 1,000,000, and 5.0 are literals in the following statements: inti = 34; long x = 1000000; double d = 5.0;
Arithmetic Expressions is translated to (3+4*x)/5 – 10*(y-5)*(a+b+c)/x + 9*(4/x + (9+x)/y)
How to Evaluate an Expression You can safely apply the arithmetic rule for evaluating a Java expression.
Problem: Converting Temperatures Write a program that converts a Fahrenheit degree to Celsius using the formula:
Shortcut Assignment Operators Operator Example Equivalent += i += 8i= i + 8 -= f -= 8.0 f = f - 8.0 *= i *= 8i= i * 8 /= i /= 8 i = i / 8 %= i %= 8i= i % 8
Increment and Decrement Operators Operator Name Description ++varpreincrement The expression (++var) increments var by 1 and evaluates to the new value in varafter the increment. var++postincrement The expression (var++) evaluates to the original value in var and increments var by 1. --varpredecrement The expression (--var) decrements var by 1 and evaluates to the new value in varafter the decrement. var--postdecrement The expression (var--) evaluates to the original value in var and decrements var by 1.
Increment and Decrement Operators, cont.
Increment and Decrement Operators, cont. Using increment and decrement operators makes expressions short, but it also makes them complex and difficult to read. Avoid using these operators in expressions that modify multiple variables, or the same variable for multiple times such as this: intk = ++i + i.;
Conversion Rules When performing a binary operation involving two operands of different types, Java automatically converts the operand based on the following rules: 1. If one of the operands is double, the other is converted into double. 2. Otherwise, if one of the operands is float, the other is converted into float. 3. Otherwise, if one of the operands is long, the other is converted into long. 4. Otherwise, both operands are converted into int.
Type Casting Implicit casting double d = 3; (type widening) Explicit casting int i = (int)3.0; (type narrowing) int i = (int)3.9; (Fraction part is truncated) What is wrong? int x = 5 / 2.0;
ECE 448 – FPGA and ASIC Design with VHDL | https://www.slideserve.com/naoko/types-operators-expressions | CC-MAIN-2021-49 | refinedweb | 1,468 | 56.45 |
.
@mikael , both your links are pointing to the raw contents of scripter.py
Syntax error in line 343
f' not supported in my Pythonista
@cvp, as said, this relies on new update method of ui.View, which is only available in beta, but should be part of the soon-to-be-released App Store version.
@cvp, thanks for trying and hopefully it will work for everyone soon. Or check out if you could still get the beta - I have had no hassles with it.
Added fly_out effect with a direction option, see demo.
Syntax error in line 343
f' not supported in my Pythonista
f'strings' are a Python 3.6+ feature which is why they work in the Pythonista beta but not in the App Store version.
I saw a really cool module that allows all Pythons (even 2.7!!) to do f-strings just by adding:
# -*- coding: future_fstrings -*-as the first or second line of the file. No import or anything. Not sure if it will work in Pythonista but pretty mindbending nevertheless.
With some more practical experience with this I decided to change the way effects are used, to remove the need to define any classes or functions just to use simple effects.
There’s also now some proper API docs.
From the Quick Start:
Quick start
In order to start using the animation effects, just import scripter and call the effects as functions:
from scripter import * hide(my_button)
Effects expect an active UI view as the first argument. This can well be
selfor
sender
where applicable.
If you want to create a more complex animation from the effects provided, combine them in a
script:
@script def my_script(): move(my_button, 50, 200) pulse(my_button, 'red') yield hide(my_button)
Scripts control the order of execution with
yieldstatements. Here movement and a red
pulsing highlight happen at the same time. After both actions are completed,
my_buttonfades
away.
Run scripter.py in Pythonista to see a demo of most of the available effects.. | https://forum.omz-software.com/topic/4355/scripter-pythonista-ui-animation-framework-built-on-ui-view-update/1 | CC-MAIN-2021-17 | refinedweb | 333 | 72.26 |
This article presents the CDSSD3DView8 class, a CView-derived class that provides Direct3D support for use in MFC Single Document Interface (SDI) applications. The CDSSD3DView8 class is designed to be used instead of CView as the base class for a developer's primary SDI View class.
CDSSD3DView8
CView
The goal of this class is to provide functionality similar to the familiar Direct3D SDK CD3DApplication class, but still allow the user the ease-of-use provided by a Visual C++ AppWizard-generated SDI application. The class is also designed so that users don't have to "tack on" Direct3D support in their SDI applications. This support is built right into the View class itself.
CD3DApplication
Searching the internet, I've found a few examples describing how to use OpenGL with SDI or how to mingle Direct3D with MFC, but nothing providing a generic CView-derived class that can serve as a launching point for developing Direct3D applications using Visual Studio's Single Document Interface functionality. So I thought I'd present a class I've been using for my own development.
The pros and cons are a direct result of using SDI to build a Direct3D application. 3D rendering will be slightly slower than a "normal" Direct3D SDK application. There is no full-screen support, since SDI applications don't run full-screen. There's also no enumeration of adapters and display modes. But since an SDI application built with the CDSSD3DView8 class will always use the desktop as its target, there's no need for such support.
On the positive side, by developing an SDI application, you have available all of the AppWizard and ClassWizard support for ease of development. Although MFC/SDI may not be desired for game development, it makes development of tools for those games much easier. For example, I use the CDSSD3DView8 class as the base class for my main View for a DirectX Mesh editor that I'm diddling around with. It's far easier to use Visual C++ to create the supporting dialogs, menus, toolbars, message handlers, etc. etc. than it would be if using the DirectX SDK interface. I shudder to think of coding these resources by hand...
The CDSSD3DView8 class is defined in files DSSD3DView8.h and DSSD3DView8.cpp.
Also included is a sample program that renders a simple cube (the "Hello, world" of 3D development) under a flickering light to show how the CDSS3DView8 class can be used as a base class (instead of CView) for an SDI application. All source code for this sample program has been included.
CDSS3DView8
This class and the included sample program use DirectX 8.0. The program was written using Visual C++ 6.0, and tested on Windows XP platform with an NVIDIA GeForce4 MX440-SE video card. The class isn't very complex, so it can easily be adapted for later versions of DirectX. In addition, the MX440-SE is a bit dated, so there should be no trouble running the sample on a more advanced card.
This article assumes that the reader has a basic understanding of Direct3D and how it is accessed using the MSSDK DirectX toolkit. This isn't meant as a Direct3D primer, since there are mountains of articles and books available on the subject.
As mentioned, the CDSSD3DView8 class uses DirectX 8.0. So, you'll obviously need to have the DirectX 8.0 SDK toolkit and a compatible 3D video card installed. In addition, since the application was built using Visual C++ 6.0, I'd recommend that as the development platform. I haven't tested this in any other Visual Studio environment.
To create your own SDI application using the CDSSD3DView8 class, just run through the standard AppWizard pages to generate your SDI application's source code. When selecting the base View class, you'll have to select CView. All other SDI options are entirely up to the user.
Once the application code has been created, the user will have to copy the CDSSD3DView8 header and source files to the application directory and make a few simple manual changes to the generated View Header and Source files to support CDSSD3DView8 as the base class. In addition, a few changes to the Project Settings is required.
At the start of the AppWizard-generated View header file, you'll have to add the reference to CDSSD3DView8's header file:
#include "DSSD3DView8.h"
Next, change the View class' declaration so that it derives from CDSSD3DView8 rather than CView. CDSSD3DView8 derives directly from CView, so all CView methods and data will still be available to the user. The example program's view class is shown here:
class CD3D8SDIView : public CDSSD3DView8
Finally, you'll want to delete the declaration for OnDraw that was created by AppWizard. A full implementation of OnDraw is contained in the CDSSD3DView8 class. Details of this implementation are discussed below. Suffice to say, however, that derived View classes won't need their own implementation of OnDraw.
OnDraw
A few more manual changes are required in the generated .cpp file. The bulk of these are simple search-and-replace operations.
The first two changes are to the ClassWizard macros. Note that once these two changes are made, any handlers added via ClassWizard will automatically include a call to the corresponding member function in CDSSD3DView8.
IMPLEMENT_DYNCREATE(CD3D8SDIView, CDSSD3DView8)
BEGIN_MESSAGE_MAP(CD3D8SDIView, CDSSD3DView8)
Next, you'll want to delete the implementation of OnDraw. As mentioned above, View classes derived from CDSSD3DView8 will not need to provide their own OnDraw implementation.
Finally, you'll want to search for all instances of "CView::" base class calls and replace them with "CDSSD3DView8::". This redirects the AppWizard-generated handlers to call the CDSSD3DView8 base class instead of CView.
CView::
CDSSD3DView8::
Finally, you may have to make a few changes to your Project->Settings. Under the C++ tab, check the Preprocessor category. If you haven't already specified the paths to the DirectX 8.0 SDK under Tool->Options, then you'll want to add those paths here.
Next, under Project->Settings, select the Link tab and add "d3dx8.lib d3dxof.lib d3d8.lib dxerr8.lib winmm.lib dxguid.lib" to the list of Object/Library modules.
That should take care of setting up your SDI project to include the CDSSD3DView8 class. You're ready to start coding! If this is a new application, build and run it. You'll be presented with the usual SDI frame, menus, toolbar, and status bar. The client area, however, should be pitch black. That's the underlying CDSS3DView8 class clearing the backbuffer and presenting the scene to the client area.
These next few sections go into more details on how to use the CDSSD3DView8 class' simple API for your own application development.
This section discusses various aspects of the CDSSD3DView8 interface that is exposed to the application programmer as well as some of the internal private data and methods that provide the underlying Direct3D support. First, we list the API that is exposed for use by derived classes. This API is pretty simple, and many parts will be familiar to users of the Direct3D SDK's CD3DApplication class. After that, the "guts" of the class are touched upon... those data members and methods that provide the underlying 3D support.
private
Here we discuss the simpler aspects of the CDSSD3DView8 interface. The various prototypes defined in the header file are covered here, and any additional notes of clarification are included.
The core developer API methods follow the familiar Direct3D SDK prototypes. Like their counterparts in the SDK's CD3DApplication class, these base class implementations do very little.
virtual HRESULT InitDeviceObjects() { return S_OK; }
virtual HRESULT InvalidateDeviceObjects() { return S_OK; }
virtual HRESULT RestoreDeviceObjects() { return S_OK; }
virtual HRESULT DeleteDeviceObjects() { return S_OK; }
virtual HRESULT Render() { return S_OK; }
virtual HRESULT FrameMove();
InitDeviceObjects is called once, just after the Direct3D Device is obtained. Its counterpart, DeleteDeviceObjects is called once, just before the Direct3D Device is released when the view is destroyed.
InitDeviceObjects
DeleteDeviceObjects
InvalidateDeviceObjects is called when the device is lost or when the view is resized. RestoreDeviceObject is called once the device has been restored, and after internal data has been recalculated due to the View being resized.
InvalidateDeviceObjects
RestoreDeviceObject
Render is called from the CDSSD3DView8 class' OnDraw method once OnDraw has determined that the 3D device is stable. Render differs slightly from the SDK version. OnDraw brackets the call to the derived class' Render method with Direct 3D's BeginScene/EndScene pair. This is followed immediately by the Present call to present the backbuffer to the View. This relieves the derived class' Render from worrying about these details. Rendering will be discussed in more detail later.
Render
BeginScene
EndScene
Present
Finally, there's FrameMove. This is the only method of the bunch that doesn't have a one-line implementation. FrameMove is presented as an inline at the end of the Header file. FrameMove operates a bit differently than the CD3DApplication implementation, and is also covered in more detail later.
FrameMove
inline
For the most part, the application won't have to call these base class versions. The only possible exception is FrameMove, but its implementation is so simple that it can be included in the derived class' FrameMove methods.
According to the documentation, it's unsafe to call Windows GDI functions between the BeginScene/EndScene pair. To allow for GDI support, two helper virtuals have been included, both of which receive the DeviceContext passed to OnDraw:
virtual void PreRender(CDC *) { }
virtual void PostRender(CDC *) { }
PreRender is called by the OnDraw method prior to clearing the 3D backbuffer and prior to the call to BeginScene. This allows the user to perform any necessary pre-Render tasks. An example may include establishing one or more transform matrices. Actually, although this method is listed under "GDI Support", it wouldn't be wise to perform any GDI rendering here since the client area will be occupied by the backbuffer very soon.
PreRender
PostRender is called by OnDraw after it has called the Direct3D interface's EndScene and Present functions. It is here that any GDI function can be called to add any additional sprites and goodies to the scene. When PostRender is called, the backbuffer has already been moved to the View and Direct3D no longer "owns" the client area.
PostRender
All but one data member are kept private. This ensures that a sloppy developer won't accidentally stomp on the master Direct3D Device interface or any other internal data. Access to the device and related data are through "getter" methods. The comments for these methods are included as well for basic explanation. Further details are discussed below.
public:
// 3D DEVICE ACCESS
//
// Get3DDevice is called to return the IDirect3DDevice8 interface
// pointer. This will be NULL if the device hasn't been created
// or if Open3D failed creating a 3D device.
//
// GetDeviceState returns the current 3D device state. This will
// be one of the following values (as returned by
// TestCooperativeLevel):
//
// - S_OK if everything is running smoothly
// - D3DERR_DEVICELOST if the device is in a lost state
// - D3DERR_DEVICENOTRESET if device focus has been regained but
// hasn't been reset yet.
//
// The error cases are handled internally, so callers don't have
// to take any specific action.
//
LPDIRECT3DDEVICE8 Get3DDevice() const { return m_p3DDevice; }
HRESULT GetDeviceState() const { return m_hDeviceState; }
// MEMBER DATA ACCESS
//
// GetBackBuffer returns the backbuffer width and height as a 2D
// vector.
//
// GetHalfBackBuffer returns the half-size backbuffer coordinate pair.
//
// GetViewport returns the current Viewport.
//
// GetHALCaps returns the HAL capability set for the current 3D
// adapter.
//
// GetREFCaps returns the REF capability set for the current 3D
// adapter.
//
// GetCurrentCaps returns a pointer to either the HAL or REF
// capabilities, depending on which was selected when the 3D
// Device was created. This will be NULL if the 3D device hasn't
// been obtained yet, or if Open3D method failed to create the
// device.
//
const D3DXVECTOR2 & GetBackBuffer() const { return m_ptBackBuffer; }
const D3DXVECTOR2 & GetHalfBackBuffer() const { return m_ptHalfBackBuffer; }
const D3DVIEWPORT8 & GetViewport() const { return m_Viewport; }
const D3DCAPS8 & GetHALCaps() const { return m_capsHAL; }
const D3DCAPS8 & GetREFCaps() const { return m_capsREF; }
const D3DCAPS8 * GetCurrentCaps() const { return m_pCaps; }
protected:
// m_cClearColor is the background color we'll apply when we
// clear the backbuffer. This can be set by derived classes
// to their own color. The default is black.
//
D3DCOLOR m_cClearColor;
Get3DDevice is how derived classes gain access to the Direct3D interface. Unlike the CD3DApplication class, the Direct3D interface is not exposed. As mentioned above, the device pointer is kept private for protection.
Get3DDevice
Note that GetDeviceState is included primarily for reference. The user doesn't have to take any action if the 3D Device isn't in the S_OK state. This is handled automatically by the CDSSD3DView8 class as it periodically attempts to regain a lost 3D Device.
GetDeviceState
S_OK
The GetHalfBackBuffer function may seem odd. But I use it when converting a mouse point to a normalized (e.g., -1.0f to 1.0f) screen coordinate as the first stage of a 3D hit test.
GetHalfBackBuffer
m_cClearColor is the only 3D-related data member that is accessible to derived classes. This establishes the background color when OnDraw clears the backbuffer. The sample application makes use of this at the start of the program to set a dim gray background. The default m_cClearColor is black.
m_cClearColor
Two more methods are included to assist in application debugging:
// LogDXDebug is called to log some application debugging message
// to the TRACE0 output. The function works just like the
// printf() statement, allowing variable argument lists.
//
// LogDXError is called to log an error message based on the
// specified HRESULT value. The message logged will be of
// the format: <FN>: <ACTION> returned 0x<RESULT> (<STR>).
// <FN>, <ACTION>, and <RESULT> are provided by the caller.
// <STR> will be filled in by the DX utility method.
//
protected:
void LogDXDebug( const char * szSpec, ... );
void LogDXError( const char * szFn,
const char * szAction,
HRESULT hResult );
These functions simply log strings to an internal buffer that is then dumped to TRACE0. As a result, these messages will be disabled in Release builds. LogDXDebug works just like a printf() statement, allowing variable argument lists. LogDXError logs a message with a specific format. Both of these methods are used in the base CDSSD3DView8 class as it starts up, and the output they generate are shown here:
LogDXDebug
printf()
LogDXError
For my own use, I have a separate debugging thread that logs debugging and error information to a set of files. But that debugger is too complex for this simple application, and I didn't want to clog up this article with too much fluff. So I opted for this simple TRACE0 approach.
To allow for proper operation, CDSSD3DView8 handles a small set of Windows messages. To ensure proper operation, it is vital that these base class versions be called if any of these methods are overridden in derived classes. The command handlers are defined here and discussed briefly below.
// Overrides
public:
// ClassWizard generated virtual function overrides
//{{AFX_VIRTUAL(CDSSD3DView8)
public:
virtual void OnInitialUpdate();
protected:
virtual void OnDraw(CDC* pDC);
//}}AFX_VIRTUAL
// Generated message map functions
protected:
//{{AFX_MSG(CDSSD3DView8)
afx_msg void OnDestroy();
afx_msg void OnSize(UINT nType, int cx, int cy);
afx_msg void OnTimer(UINT nIDEvent);
afx_msg BOOL OnEraseBkgnd(CDC* pDC);
//}}AFX_MSG
DECLARE_MESSAGE_MAP()
OnInitialUpdate is where the Direct3D interface is established. This implementation calls the internal method Open3D() to initialize the 3D windowed environment. Open3D will be discussed in more detail later.
OnInitialUpdate
Open3D()
Open3D
OnDraw is the primary client paint entry point. This will be discussed in more detail below.
OnDestroy is called in response to the WM_DESTROY command received as the View is being destroyed. This method calls the internal Close3D method, which cleans up the 3D environment. Close3D will make a final call to InvalidateDeviceObjects followed immediately by a call to DeleteDeviceObjects. After that, all timers are shut down and the Direct3D interfaces are released.
OnDestroy
WM_DESTROY
Close3D
OnSize is called in response to a user resize of the view. If a Direct3D device has been established (and if the client area's size is actually changing), OnSize will call the internal Reset3D method to reestablish the 3D environment using the new view size.
OnSize
Reset3D
OnTimer is called in response to WM_TIMER messages. The CDSSD3DView8 class manages two timers: The FrameMove timer, which is discussed below, and a 3D Device state timer. This second timer is started when the 3D Device has been established but has been determined to be in an invalid state. This timer will fire every half-second, and call the internal TestDeviceState method to see if the Device can be reset. This timer runs until the Device is reestablished or the View is destroyed.
OnTimer
WM_TIMER
TestDeviceState
OnEraseBkgnd is a do-nothing stub, as shown in the implementation:
OnEraseBkgnd
BOOL CDSSD3DView8::OnEraseBkgnd(CDC* pDC)
{
return TRUE;
}
Your program won't be able to control all paths that lead to an invalidation of portions of the client area. An example would be opening a context-sensitive popup menu with a right-click of the mouse. When the menu closes, Windows sometimes invalidates your client area, which can cause flicker if the default CView::OnEraseBkgnd is called. All of these uncontrolled client invalidation paths, however, eventually lead to OnDraw, which calls Direct3DDevice's Clear function to set the backbuffer to the current m_ClearColor. So flicker is eliminated by not letting CView handle background erasure.
CView::OnEraseBkgnd
Clear
m_ClearColor
Here we discuss some of the code in a bit more detail. FrameMove is described, as is OnDraw's relationship with Render. Following that, some of the internal 3D control methods are examined.
FrameMove is a bit different than the CD3DApplication version. Since this is just a View class, it doesn't control the Run loop in the application. So FrameMove is handled by using an internal Windows Timer.
FrameMove doesn't fire automatically as it does in the SDK's CD3DApplication. It has to be manually configured in derived classes via a call to:
BOOL StartFrameTimer(DWORD dwTimeoutMS);
The function accepts a millisecond count as the timeout value. It returns TRUE if successful, FALSE if the call to SetTimer fails. Once this call is made, the timer will start running. As expected, FrameMove is called by this class' OnTimer implementation. As such, derived classes that include their own timers should make sure to call CDSSD3DView8::OnTimer to ensure that the Frame timer runs correctly.
TRUE
FALSE
SetTimer
CDSSD3DView8::OnTimer
FrameMove is implemented as an inline in the header file, as follows:
inline HRESULT CDSSD3DView8::FrameMove()
{
Invalidate(FALSE);
return S_OK;
}
Note that FrameMove invalidates the View, which will ultimately lead to a call to OnDraw and, hence, Render. The Invalidate call specifies a bRepaint flag of FALSE. However, since the CDSSD3DView8 class stubs OnEraseBkgnd (c.f. above), you won't see any flicker if you accidentally pass the default TRUE as the bRepaint flag.
Invalidate
bRepaint
The CDSSD3DView8 class also doesn't maintain any internal time counters. A derived class' FrameMove should call:
FLOAT GetElapsedTime() const;
to get the elapsed time since the last frame. However, GetElapsedTime simply returns the original millisecond count specified in StartFrameTimer divided by 1000 to convert it to a floating-point value with a unit of seconds. This can lead to small timing errors if a derived class relies on precise to-the-millisecond timing.
GetElapsedTime
StartFrameTimer
When the application is done with the Frame timer, a simple call will disable it:
void StopFrameTimer();
This function is called during cleanup, so the derived class doesn't have to worry about this detail.
The example application makes use of FrameMove and its supporting functions to apply a random flickering Point light to the scene. The light is located at the viewer's Eye Point.
If you'd like a more robust implementation of FrameMove, you can try adding timing support in the main CWinApp-derived application class' OnIdle method.
CWinApp
OnIdle
As mentioned, there is no need for derived classes to provide their own OnDraw implementation. The CDSSD3DView8 class provides this, calling out virtual hooks as it progresses. For ease of explaining where the various virtual hooks are called, the code for OnDraw is presented here:
void CDSSD3DView8::OnDraw(CDC* pDC)
{
// Test the device state. If we're not good to go, then we have
// to exit.
TestDeviceState();
if( m_hDeviceState != S_OK ) return;
// Allow derived class pre-rendering.
PreRender(pDC);
// Clear the backbuffer.
m_p3DDevice->Clear( 0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER,
m_cClearColor, 1.0f, 0 );
// BeginScene...
const char * szAction = "BeginScene";
HRESULT hResult = m_p3DDevice->BeginScene();
if( hResult == S_OK )
{
szAction = "Render";
hResult = Render();
m_p3DDevice->EndScene();
m_p3DDevice->Present(NULL, NULL, NULL, NULL);
}
// Log any errors.
if( hResult != S_OK ) LogDXError("OnDraw", szAction, hResult);
// And now post-rendering.
PostRender(pDC);
}
At the outset, OnDraw calls the internal TestDeviceState method. TestDeviceState tests the current state of the 3D device and will take appropriate actions if an unexpected condition develops. The result of this test is stored in the m_hDeviceState member which can be accessed via GetDeviceState(). The implementation of TestDeviceState is pretty straightforward, and can be examined in the source file DSSD3DView8.cpp.
m_hDeviceState
GetDeviceState()
Following TestDeviceState comes the call to PreRender. This method has been mentioned above and needs no further explanation.
After that, OnDraw moves into the 3D rendering section. As mentioned above, note that OnDraw brackets the Render call with the BeginScene/EndScene pair, and follows up with the call to Present. This differs from the Direct3D SDK's CD3DApplication class implementation, where the user's Render method is expected to handle all of these details. If you're uncomfortable about the way Render is handled in CDSSD3DView8, then feel free to remove the calls to BeginScene, EndScene, and Present, and move them to your own Render method. This sequence is included in OnDraw for simplicity.
OnDraw finishes up with a call to PostRender. This has also been described earlier.
This section defines the internal methods and data used to manage the master Direct3D interface and the Direct3DDevice obtained from there. These methods and data are all private, and are inaccessible to the application programmer. The specific declarations are in CDSSD3DView8.h, and are displayed below. Code comments have been included to provide explanation for most of this information.
First, we have the data. The comments are descriptive enough so that no further explanation should be needed.
private:
/////////////////////////////////////////////////////////////////////
// 3D DEVICE DATA
//
// m_pD3D is the master 3D interface object. This object provides
// the entry point into Direct3D services.
//
// m_D3DDisplayMode is the current adapter display mode, and is
// established as soon as the master 3D interface is created.
//
// m_capsHAL is the HAL adapter capabilities, as loaded from the
// master 3D object.
//
// m_capsREF is the REF adapter capabilities, as loaded from the
// master 3D object.
//
// m_pCaps will point to the currently active capability set (either
// m_capsHAL or m_capsREF). This pointer is established depending
// on whether or not the 3D Device was created using a HAL device
// (preferable) or a REF device.
//
// m_d3dPresentParams is the presentation parameter structure used
// to initialize the 3D device. This is maintained in case the
// device needs reset (e.g. when the window is resized).
//
// m_p3DDevice is the 3D device interface created for this View.
//
// m_Viewport is the current viewport. This will be similar to the
// local Backbuffer definition, but will be loaded from the 3D
// device.
//
// m_ptBackBuffer is a 2-D point that defines the width (x) and
// height (y) of the backbuffer. This will always reflect the
// current width and height of the view's client area.
//
// m_ptHalfBackBuffer is a point that simply holds the backbuffer
// width/2 in the X field, height/2 in the Y field. This value
// is used when converting a mouse point to model space.
//
// m_hDeviceState holds the current 3D device state, as returned
// by TestCooperativeLevel. If no 3D device exists, then this
// will hold D3DERR_INVALIDDEVICE.
//
// m_bInitDevicesNeeded is a one-shot flag that indicates whether
// or not a call to InitDeviceObjects is needed.
//
LPDIRECT3D8 m_pDirect3D;
D3DDISPLAYMODE m_d3dDisplayMode;
D3DCAPS8 m_capsHAL;
D3DCAPS8 m_capsREF;
D3DCAPS8 * m_pCaps;
D3DPRESENT_PARAMETERS m_d3dPresentParams;
LPDIRECT3DDEVICE8 m_p3DDevice;
D3DVIEWPORT8 m_Viewport;
D3DXVECTOR2 m_ptBackBuffer;
D3DXVECTOR2 m_ptHalfBackBuffer;
HRESULT m_hDeviceState;
BOOL m_bInitDevicesNeeded;
The member functions that directly manage the 3D environment are also private and inaccessible to the application programmer. They perform all the "behind the scenes" operations to maintain the 3D environment including startup, operations, and shutdown. Since they aren't part of the formal CDSSD3DView8 API exposed to the application developer, it is up to the reader to investigate the implementation of these methods. The header file comments have been included to provide a brief explanation of these methods.
// Open3D is called to create the 3D device and perform any one-time
// initialization. The function returns S_OK if successful, or
// some error code if an error occurs. The result of this test
// is stored in m_hDeviceState, which can be accessed via the
// GetDeviceState() method.
//
// Create3DDevice is called to create a 3D device of the requested
// type.
//
// Close3D is called to shut down all 3D objects and the master
// 3D device.
//
// Reset3D is called whenever the size of the view changes or
// whenever it has been determined that the 3D device has been
// lost. If the reset fails and the device is lost, the function
// will start the 1 second timer to keep polling the 3D Device's
// Reset method.
//
// TestDeviceState is called internally to test the 3D device state
// and either reset the device or start the reset timer if the
// device isn't ready. When the function exits, m_hDeviceState
// will hold the current state.
//
HRESULT Open3D();
HRESULT Create3DDevice(D3DDEVTYPE d3dDevType);
void Close3D();
HRESULT Reset3D();
void TestDeviceState();
// StartResetTimer is called whenever a "device lost" state is
// detected. If the timer isn't already running, then a new
// timer is started. Timeouts are handled in OnTimer, which will
// attempt to reset the device if control hasn't been regained.
//
// StopResetTimer is called to kill any current reset timer. If it's
// not running, then the call is ignored.
//
// m_nResetTimerId is the timer Id value created when the Reset
// timer is running.
//
UINT m_nResetTimerId;
void StartResetTimer();
inline void StopResetTimer();
These functions are all driven by various MFC events via the six ClassWizard overrides described above (OnInitialUpdate through OnEraseBkgnd). Thus, it is imperative that the CDSSD3DView8 version of these event handlers be called if a user's View class provides its own override of these On... methods.
On...
I've tested this class in a sample MDI application and have had no trouble. Using the sample code, I created an MDI application and was able to open multiple simultaneously flickering cube views within the MDI frame. Although this wasn't a full test of the CDSSD3DView8 class in an MDI environment (I was just checking for leaks and whatnot), I'd suspect that the class is quite suitable for MDI applications as well.
The bulk of the classes and files include the "DSS" prefix. This code was developed by me for my one-man company, Donut Shop Software (hence, the DSS prefix). I've removed all copyright notices in the code to release it here for this article. There are no explicit or implicit license issues involved in using this software other than those defined by this website.
Thanks to all who took the time to read this article and diddle around with the code. This is my first article, so I'm thankful that you've been able to bear with me to this point. Have fun!
Additional thanks to Obliterator for pointing out a flaw when resizing the View. This has been corrected and the downloads updated.
This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.
A list of licenses authors might use can be found here
// The projection matrix needs to be rebuilt since the backbuffer
// size may have changed. Use the base class' BackBuffer accessors to
// get the aspect ratio, and create the projection matrix.
FLOAT fAspect = GetBackBuffer().x / GetBackBuffer().y;
D3DXMatrixPerspectiveFovLH( &m_matProj,
D3DX_PI/4.0f,
fAspect,
NEAR_CLIP, FAR_CLIP );
hResult = p3DDevice->SetTransform(D3DTS_PROJECTION, &m_matProj);
if( hResult != S_OK ) return hResult;
CSplitterWnd m_mainSplitter, m_viewportSplitter;
BOOL m_bInitSplitter;
CRect cr;
GetClientRect( &cr);
if ( !m_mainSplitter.CreateStatic( this, 1, 2 ) )
{
//Error
return FALSE;
}
if ( !m_viewportSplitter.CreateStatic( &m_mainSplitter, 2, 2, WS_CHILD | WS_VISIBLE, m_mainSplitter.IdFromRowCol( 0, 0 ) ) )
{
//Error
return FALSE;
}
if ( !m_mainSplitter.CreateView( 0, 1, RUNTIME_CLASS(CMyProjectView),
CSize( 150, cr.Height() ), pContext ) )
{
//Error
return FALSE;
}
if ( !m_viewportSplitter.CreateView( 0, 0, RUNTIME_CLASS(CPerspective),
CSize( ( cr.Width() - 150 ) / 2, cr.Height() / 2 ), pContext ) )
{
//Error
return FALSE;
}
if ( !m_viewportSplitter.CreateView( 0, 1, RUNTIME_CLASS(CMyProjectView),
CSize( ( cr.Width() - 150 ) / 2, cr.Height() / 2 ), pContext ) )
{
//Error
return FALSE;
}
if ( !m_viewportSplitter.CreateView( 1, 0, RUNTIME_CLASS(CMyProjectView),
CSize( ( cr.Width() - 150 ) / 2, cr.Height() / 2 ), pContext ) )
{
//Error
return FALSE;
}
if ( !m_viewportSplitter.CreateView( 1, 1, RUNTIME_CLASS(CMyProjectView),
CSize( ( cr.Width() - 150 ) / 2, cr.Height() / 2 ), pContext ) )
{
//Error
return FALSE;
}
m_bInitSplitter = TRUE;
return TRUE;
CFrameWnd::OnSize(nType, cx, cy);
CRect cr;
GetWindowRect(&cr);
if ( m_bInitSplitter && nType != SIZE_MINIMIZED )
{
m_mainSplitter.SetRowInfo( 0, cy, 0 );
m_mainSplitter.SetColumnInfo( 0, cr.Width() / 2, 50);
m_mainSplitter.SetColumnInfo( 1, cr.Width() / 2, 50);
m_mainSplitter.RecalcLayout();
}
new [AnyValue]
CDirect3DSDISampleView
ERROR: Reset3D: IDirect3DDevice8::Reset returned 0x88760827 'D3DERR_DRIVERINTERNALERROR')
RestoreDeviceObjects
Device::Reset
TestCooperativeLevel
MoveWindow()
3DDevice::Reset()
General News Suggestion Question Bug Answer Joke Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | http://www.codeproject.com/Articles/9423/A-3D-Enabled-View-Base-Class-for-SDI-Direct3D-Deve?fid=147523&df=90&mpp=10&noise=1&prof=True&sort=Position&view=Expanded&spc=None | CC-MAIN-2014-52 | refinedweb | 4,868 | 56.35 |
Hello folks, I hope you are enjoying working on AppGyver and I know how curious you would be to know how to integrate AppGyver to your CAP applications and BTP?
In this blog post, I will try to explain key integration points via a business use case of Water purifier as a service. You will learn about how to consume OData services in AppGyver and build an end-to-end SAP-specific use case.
Before we start, here is the quick recap from my last blog post and what we want to achieve-
A simple mobile and web application where users can subscribe to the water purifier on a need basis and also can monitor the health and other aspects of the filter in real-time via the application.
This has CAP module, IOT services as a backend, and OData which gets connected to AppGyver.
Refer Outstretching SAP BTP with AppGyver for the business use case explanation and motivation.
Develop CAP Application
Lets’ start by developing a CAP Project and OData services which we will consume in Appgyver (If this is your first CAP application, please refer to this for better understanding: ) –
Steps to develop a quick data model and expose the entities-
- Use BAS/VS studio for a development environment.
- Create a CAP project from a template.
- Add data model file under “db” folder and define the entities, in our case the entities are related to WPaaS.
using { managed, cuid } from '@sap/cds/common'; namespace wpaas; entity WPModel { key SubsWPModeID: String(100); ModelName : String(100); Capacity : Integer; ManufacturingDate : Date; Price : Integer; } entity PlanDetails { key PlanTypeID: String(10); PlanTypeName : String(20); } entity SubscriptionDetails { key DeviceId : String(100); UserId : String(100); UserName : String(100); SubsWPModelD : String(10); PlanTypeID : String(10); SubsStartDate :Date; SubsEndDate : Date; InstallationDate : Date; SubsStatus : String(10); EngineerID : String(10); InstallationStatus : String(10); SubsPrice : String(10); CreditCardNumber : String(20); expiryDate : Date; CVV : String(10); } entity UserDetails { Key Id : String(100); Key UserName : String(100); Name : String(20); ContactNumber : String(20); EmailId : String(20); Address : String (100); } // Data as provided by SAP IoT service entity IoTData { key tenantId : String(40); key capabilityId : String(40); key sensorId : String(40); key timestamp : Double; // Unix timestamps gatewayTimestamp : Double; measures: Composition of many { literconsumed :Integer; temperature : Integer; } }
- Create a service file under “srv” and expose the entities.
Service file –
using wpaas from '../db/data-model'; service CatalogService @(path:'/api/v1') { //@(requires:'admin') entity WPModel as projection on wpaas.WPModel; entity PlanDetails as projection on wpaas.PlanDetails; entity SubscriptionDetails as projection on wpaas.SubscriptionDetails; entity UserDetails as projection on wpaas.UserDetails; entity IoTData as projection on wpaas.IoTData; }
- Build mta.yml file and deploy.
- Once you have your code ready, deploy the application and open the service endpoint.
Consume OData service in AppGyver – POST and GET
I would focus more on data binding topics and less on the editor and basic Appgyver functionalities in the below steps. If this is your first Appgyver application development, I would suggest referring to my Introduction to AppGyver blog post.
Create a new project in Appgyver via trial account and start the project-
Screen 1: Register/Login screen
This screen will be one of our entry points to the app –
Add the buttons and put navigation logic on “Register” button to open a new page.
Screen 2: Registration Screen – this screen will POST the user details to our HANA database
Place all the input controls and a “Register” button.
Open “Data” tab to start the POST functionality and start the integration!
Please follow the below screenshots-
After adding the URL info of our OData service, time to test the connection.
Open the “test” tab and press on “test connection”. Oops!! CORS Error!
Fix for CORS issue-
Create a “server.js” file in your CAP application and add the following code –
"use strict"; const cds = require("@sap/cds"); const cors = require("cors"); //const proxy = require("@sap/cds-odata-v2-adapter-proxy"); cds.on("bootstrap", app => app.use(cors())); module.exports = cds.server;
Please refer blog post by Shibaji for more information on this issue and solution.
After adding the server.js file, deploy your CAP project again and test the connection.
Yes, it’s working!!! Now let’s set the schema.
Open the preview on your phone/web and check the registration logic.
Hurray! Record is saved in our HANA database.
This is how our service looks after creating multiple records-
Screen 3: Now let’s move to create a simple page with a list of available water purifiers by getting the data from our HANA database via GET –
- Drag and drop the list and let’s start building this screen!
- Open the “data” panel
- Select “REST API Integration”
- Enter all the details as shown in the screenshot after adding the BASE details same as we did for the POST request.
Screen 4: Subscription plan choose screen – POST data
In this screen, we will create a POST request to another service to store the plan details and also will use formulas and variables to fill subscription start and end dates based on the plan selected.
Screen 5: confirmation screen
On the success of creating the record, we will just navigate to a new page with background image and success text.
And with this screen, our consumer flow is completed!
After the user is subscribed to a water purifier, the request will go to the technician or service engineer to process the request and install the device at the User’s address.
Let’s build a quick and simple app for that as well-
Service Engineer Application
Start another project in AppGyver and build these screens-
Screen 1 – Login and navigate to the list of subscriptions.
Screen 2 – Detail page with all the info about installment, QR Code functionality, and installation.
Please refer to the screenshots to understand the flow.
The installation flow looks complete!
Now the user is all set to use the water purifier dashboard screen. Open our first page from the consumer app we developed at the start-
Screen 1: User clicks on Login – navigate to a new page for login with username
Enter username and store in a variable that we will use to pass the filter to our service to show details of the logged-in user.
Screen 7: Dashboard screen
In this screen, we will show the IoT data, create an IoT device in the SAP IoT cockpit, and post the data to our HANA database via the data simulator.
Please check the entity code I added at the start of this blog post and focus on the IoT data entity, this will be the service that will have the IoT data.
IoT Data connection and IoT Data simulator – Refer to IoT Blog Post by Jay Adnure
POST from IoT to our HANA database – Refer to Blog post by Gunter Albrecht
After the data is pushed to our OData service, Let’s build the dashboard to show the data-
I have used two container control for dashboard screen layout, one is a simple container and a row container.
Hurray, this brings us to the end of this blog post!
We covered the WPaaS(Water Purifier as a Service) use case with 2 end to end applications connected with SAP BTP, all developed without coding!
I hope you all got an overview with a working prototype and like me, you too are amazed how quickly we can develop a POC in AppGyver!
Please leave your questions/feedback in the comments section, Thanks for reading!
Reference Links –
-
-
-
-
-
- | https://blogs.sap.com/2021/05/03/outstretching-sap-btp-with-appgyver-part-ii/ | CC-MAIN-2021-21 | refinedweb | 1,250 | 58.11 |
The QTextBoundaryFinder class provides a way of finding Unicode text boundaries in a string. More...
#include <QTextBoundaryFinder>
Note: All functions in this class are reentrant.
This class was introduced in Qt 4.4..
The BoundaryReasons type is a typedef for QFlags<BoundaryReason>. It stores an OR combination of BoundaryReason values.
Constructs an invalid QTextBoundaryFinder object.
Copies the QTextBoundaryFinder object, other.
Creates a QTextBoundaryFinder object of type operating on string..
Destructs the QTextBoundaryFinder object.
Returns the reasons for the boundary finder to have chosen the current position as a boundary.
Returns true if the object's position() is currently at a valid text boundary.
Returns true if the text boundary finder is valid; otherwise returns false. A default QTextBoundaryFinder is invalid.
Returns the current position of the QTextBoundaryFinder.
The range is from 0 (the beginning of the string) to the length of the string inclusive.
See also setPosition().
Sets the current position of the QTextBoundaryFinder to position.
If position is out of bounds, it will be bound to only valid positions. In this case, valid positions are from 0 to the length of the string inclusive.
Returns the string the QTextBoundaryFinder object operates on.
Moves the finder to the end of the string. This is equivalent to setPosition(string.length()).
See also setPosition() and position().
Moves the QTextBoundaryFinder to the next boundary position and returns that position.
Returns -1 if there is no next boundary.
Moves the QTextBoundaryFinder to the previous boundary position and returns that position.
Returns -1 if there is no previous boundary.
Moves the finder to the start of the string. This is equivalent to setPosition(0).
See also setPosition() and position().
Returns the type of the QTextBoundaryFinder.
Assigns the object, other, to another QTextBoundaryFinder object. | https://doc.qt.io/archives/qt-4.7/qtextboundaryfinder.html | CC-MAIN-2021-17 | refinedweb | 288 | 62.54 |
WSUS - Doesnt appear to be working..a little background on this..
1. Created a WSUS VM using Virtual Box to download updates from Microsoft.com
2.Then followed procedure for using WSUS on a disconnected LAN per MS Article on WSUS Disconnected networks.
3.Used xcopy /J /F /E /S c:\WSUS\WSusContent\* to destination(by the way, i tried the /O switch to preserve permissions/ACL and it didnt like that).
4. Ran wsusexport on exporting server/vm (imported .cab file with content and made a log file of transaction)
5. Ran wsusimport on importing server giving it the exported(export.cab file from export server) and create an imprt.log file
6. Walked away from VM for awhile(as there is no indication other than a blinking cursor telling you that updates are being imported.
7. Came back and i noticed the VM seemed to endlessly try to start windows..killed/powered down VM.
8. Upon reboot, noticed that 1: all files in wsuscontent were there per xcopy. 2: that upon opening up the wsus console to look at all updates it reports that upon trying to approve updates, that once they are downloaded they can then be applied..
9. Assuming that my WSUS content folder is preserved and it does show the same size in GB as my exporting server as well as size and amount of files/folders, why would they still say they needed to be downloaded?
10. I tried or am in the process of re-importing the wsus export.cab file from the export server maybe hoping this is why i am seeing the updates as not downloaded?
any thoughts/ideas on this? does this method even work? and if so, are permissions on copied wsuscontent folder/files beneath that folder, in need of their original permission sets?
A couple of observations.
Using XCOPY direct from the 'connected' to 'disconnected' server is not typical, since usually there's going to be some intermediate source, typically a portable USB drive. You're right that /O won't work. You cannot copy ACLs from the connected to the disconnected,
they must be inherited on the disconnected server. (This is due to the local SID for the WSUS Administrators group.)
There's essentially three reasons a 'disconnected' server will report that files need to be downloaded:
After using the /O switch and having it fail, did you completely purge the ~\WSUSContent\* folder tree on the 'disconnected' server before the subsequent copy?
I'd focus on why the VM was repeatedly rebooting ... given that you weren't observing the machine, it's possible that a reboot corrupted the import reconciliation and what you're getting is based on bad results from the import. I agree that re-running the
import is probably the best 'next move'. ? | http://social.technet.microsoft.com/Forums/windowsserver/fr-FR/ca0acbb9-b8bd-4fa9-9320-864d0ffd641b/wsus-disconnected-network?forum=winserverwsus | CC-MAIN-2014-35 | refinedweb | 469 | 65.01 |
the code compiles but when i try running the .exe it closes. It looks liek for the 2 seconds its up it works, but then it closes itself. What am i doing wrong? heres the code:
#include <iostream> using namespace std; int main() { // sets variables to be set in the program later on double hours_worked, hourly_pay, gross_pay, ss_wh, wh_tax, net_pay, gross_pay_total, ss_wh_total, wh_tax_total, net_pay_total, checks; cout << "\nInput hours worked(0 to quit):"; cin >> hours_worked; // Loop to calculate all info while (hours_worked > 0) { // Inputs hourly rate per hour cout << "\nInput pay rate per an hour :"; cin >> hourly_pay; if (hours_worked > 40) gross_pay = (hours_worked * hourly_pay) + (hours_worked - 40) * (hourly_pay * 1.5); if (hours_worked = 0) cout << "\nTOTALS\n"; cout << "Total Payroll: " << gross_pay_total; cout << "\nNumber of Check: " << checks; cout << "\nAverage Paycheck: " << net_pay_total; cout << "\n\nTotal SS Withheld:" << ss_wh_total; cout << "\nTotal Witholding: " << wh_tax_total; // Does all calc for taxes and net worth if (hours_worked < 40, hours_worked > 0) gross_pay = hours_worked * hourly_pay; ss_wh = gross_pay * .08; wh_tax = gross_pay * .10; net_pay = gross_pay - ss_wh - wh_tax; // counters checks++; gross_pay_total = gross_pay_total + gross_pay; ss_wh_total = ss_wh_total + wh_tax; wh_tax_total = wh_tax_total + wh_tax; net_pay_total = net_pay_total + net_pay; // Prints summary info cout << "\nGross pay :" << gross_pay; cout << "\nSS Witheld :" << ss_wh; cout << "\nWitholding tax :" << wh_tax; cout << "\nNet Pay :" << net_pay; } return 0; }
I just read the info about students posting requests for answers to their HW questions. This IS a homework problem, but i have legit. tried it and just CANNOT for the life of me figure out where a mistake is. Also for some reason when it didnt end before displaying the answer, it wouldnt accept a 0 to quit the program, im not sure how to add that, any ideas? Cause the while is there, then the 2 if's for the pay differences for overtime etc, any ideas would be truely awesome. thanks! | https://www.daniweb.com/programming/software-development/threads/18012/simple-c-program-terminate-prematurely | CC-MAIN-2021-17 | refinedweb | 291 | 55.78 |
Lurking
CodeProject:
Here is part of the Settings table:
Here are three divs I connected to three events to display the results:);
Then I deleted all but one function in the implementation class…to get it really lean and mean.
Here is the final WCF Service CS code:
using System;
using System.ServiceModel;
using System.ServiceModel.Activation;
using System.ServiceModel.Web;
public class MyService
.
You should see this:
A;
My holy grail has been found..
At one of the local golf courses I frequent, there is an open grass field next to the course. It is about eight acres in size and mowed regularly. It is permissible to hit golf balls there—you bring and shag our own balls. My golf colleagues and I spend hours there practicing, chatting and in general just wasting time.
One of the guys brings Ginger, the amazing, incredible, wonder dog.
Ginger is a Hungarian Vizlas (or Hungarian pointer). She chases squirrels, begs for snacks and supervises us closely to make sure we don't misbehave.
Anyway, I decided to make a dedicated web page to measure distances on the field in yards using online mapping services. I started with Google maps and then did the same application with Bing maps. It is a good way to become familiar with the APIs.
Here are images of the final two maps:
Google:
Bing:
To start with online mapping services, you need to visit the respective websites and get a developers key.
I pared the code down to the minimum to make it easier to compare the APIs. Google maps required this CSS (or it wouldn't work):
<style type="text/css">
html
body
Here is how the map scripts are included. Google requires the developer Key when loading the JavaScript, Bing requires it when the map object is created:
Google:
<script type="text/javascript" src="" > </script>
Bing:
<script type="text/javascript" src=""> </script>
Note: I use jQuery to manipulate the DOM elements which may be overkill, but I may add more stuff to this application and I didn't want to have to add it later. Plus, I really like jQuery.
Here is how the maps are created:
Common Code (the same for both Google and Bing Maps):
var gTheMap;
var gMarker1;
var gMarker2;
$(document).ready(DocLoaded);
function DocLoaded()
// golf course coordinates
var StartLat = 44.924254;
var StartLng = -93.366859;
// what element to display the map in
var mapdiv = $("#map_div")[0];
// where on earth the map should display
var StartPoint = new google.maps.LatLng(StartLat, StartLng);
// create the map
gTheMap = new google.maps.Map(mapdiv,
{
center: StartPoint,
zoom: 18,
mapTypeId: google.maps.MapTypeId.SATELLITE
// place two markers
marker1 = PlaceMarker(new google.maps.LatLng(StartLat, StartLng + .0001));
marker2 = PlaceMarker(new google.maps.LatLng(StartLat, StartLng - .0001));
DragEnd(null);
Bing:
var StartPoint = new Microsoft.Maps.Location(StartLat, StartLng);
gTheMap = new Microsoft.Maps.Map(mapdiv,
credentials: 'XXXXXXXXXXXXXXXXXXX',
mapTypeId: Microsoft.Maps.MapTypeId.aerial
marker1 = PlaceMarker(new Microsoft.Maps.Location(StartLat, StartLng + .0001));
marker2 = PlaceMarker(new Microsoft.Maps.Location(StartLat, StartLng - .0001));
Note: In the Bing documentation, mapTypeId: was missing from the list of options even though the sample code included it.
Note: When creating the Bing map, use the developer Key for the credentials property.
I immediately place two markers/pins on the map which is simpler that creating them on the fly with mouse clicks (as I first tried). The markers/pins are draggable and I capture the DragEnd event to calculate and display the distance in yards and draw a line when the user finishes dragging.
Here is the code to place a marker:
// ---- PlaceMarker ------------------------------------
function PlaceMarker(location)
var marker = new google.maps.Marker(
position: location,
map: gTheMap,
draggable: true
});
marker.addListener('dragend', DragEnd);
return marker;
var marker = new Microsoft.Maps.Pushpin(location,
draggable : true
});
Microsoft.Maps.Events.addHandler(marker, 'dragend', DragEnd);
gTheMap.entities.push(marker);
Here is the code than runs when the user stops dragging a marker:
// ---- DragEnd -------------------------------------------
var gLine = null;
function DragEnd(Event)
var meters = google.maps.geometry.spherical.computeDistanceBetween(marker1.position, marker2.position);
var yards = meters * 1.0936133;
$("#message").text(yards.toFixed(1) + ' yards');
// draw a line connecting the points
var Endpoints = [marker1.position, marker2.position];
if (gLine == null)
gLine = new google.maps.Polyline({
path: Endpoints,
strokeColor: "#FFFF00",
strokeOpacity: 1.0,
strokeWeight: 2,
map: gTheMap
else
gLine.setPath(Endpoints);
function DragEnd(Args)
var Distance = CalculateDistance(marker1._location, marker2._location);
$("#message").text(Distance.toFixed(1) + ' yards');
// draw a line connecting the points
var Endpoints = [marker1._location, marker2._location];
if (gLine == null)
{
gLine = new Microsoft.Maps.Polyline(Endpoints,
{
strokeColor: new Microsoft.Maps.Color(0xFF, 0xFF, 0xFF, 0), // aRGB
strokeThickness : 2
});
gTheMap.entities.push(gLine);
}
else
gLine.setLocations(Endpoints);
}
Note: I couldn't find a function to calculate the distance between points in the Bing API, so I wrote my own (CalculateDistance). If you want to see the source for it, you can pick it off the web page.
Note: I was able to verify the accuracy of the measurements by using the golf hole next to the field. I put a pin/marker on the center of the green, and then by zooming in, I was able to see the 150 markers on the fairway and put the other pin/marker on one of them.
Final Notes:
All in all, the APIs are very similar. Both made it easy to accomplish a lot with a minimum amount of code.
In one aerial view, there are leaves on the tree, in the other, the trees are bare. I don't know which service has the newer data.
Here are links to working pages:
Bing Map Demo
Google Map Demo
I hope someone finds this useful.
Steve Wellens
"A Overflow('<div class="section">');}); and in the source element (this) to the collection $this = $this.nextUntil('.heading').andSelf(); // wrap the elements with a div $this.wrapAll('<div class="section" >'); });}')
The problem is we need a way to tell the nextUntil function when to stop. CSS selectors to the rescue!
nextUntil('.heading, a'));
Here's a link to a jsFiddle if you want to play with it.
I hope someone finds this useful
CSS:.
One of the great features of log4net is how easy it is to route the logs to multiple outputs. Here is an incomplete list of outputs…or 'Appenders' in the log4net vernacular:
Wow.
I've been doing a lot of work with jQuery/JavaScript and it dawned on me that seeing server side logging strings in a JavaScript Console could be useful.
So I wrote a log4net JavaScript Console Appender. Strings logged at the server will show up in the browser's console window. Note: For IE, you need to have the "Developer Tools" window active.
I'm not going to describe how to setup log4net in an Asp.Net web site; there are many step-by-step tutorials around. But I'll give you some hints:
I built the Appender and a test Asp.Net site in .Net Framework 4.0. Here's the jsConsoleAppender.cs file:
using System.Collections.Generic;
using System.Text;
using log4net;
using log4net.Core;
using log4net.Appender;
using log4net.Layout;
using System.Web;
using System.Web.UI;
namespace log4net.Appender
// log4net JSConsoleAppender
// Writes log strings to client's javascript console if available
public class JSConsoleAppender : AppenderSkeleton
// each JavaScript emitted requires a unique id, this counter provides it
private int m_IDCounter = 0;
// what to do if no HttpContext is found
private bool m_ExceptionOnNoHttpContext = true;
public bool ExceptionOnNoHttpContext
get { return m_ExceptionOnNoHttpContext; }
set { m_ExceptionOnNoHttpContext = value; }
// The meat of the Appender
override protected void Append(LoggingEvent loggingEvent)
// optional test for HttpContext, set in config file.
// default is true
if (ExceptionOnNoHttpContext == true)
if (HttpContext.Current == null)
{
ErrorHandler.Error("JSConsoleAppender: No HttpContext to write javascript to.");
return;
}
}
// newlines mess up JavaScript...check for them in the pattern
PatternLayout Layout = this.Layout as PatternLayout;
if (Layout.ConversionPattern.Contains("%newline"))
ErrorHandler.Error("JSConsoleAppender: Pattern may not contain %newline.");
return;
// format the Log string
String LogStr = this.RenderLoggingEvent(loggingEvent);
// single quotes in the log message will mess up our JavaScript
LogStr = LogStr.Replace("'", "\\'");
// Check if console exists before writing to it
String OutputScript = String.Format("if (window.console) console.log('{0}');", LogStr);
// This sends the script to the bottom of the page
Page page = HttpContext.Current.CurrentHandler as Page;
page.ClientScript.RegisterStartupScript(page.GetType(), m_IDCounter++.ToString(), OutputScript, true);
// There is no default layout
override protected bool RequiresLayout
get { return true; }
From the Asp.Net test application, here's the web.config file. In the pattern for the JSConsoleAppender, I added the word SERVER: to differentiate the lines from client logging. Note there are two other Appenders in the log…just for fun!
<?xml version="1.0"?>
<configuration>
<!--BEGIN log4net configuration-->
<configSections >
<section name="log4net"
type="log4net.Config.Log4NetConfigurationSectionHandler, log4net"/>
</configSections>
<log4net>
<appender name="LogFileAppender"
type="log4net.Appender.FileAppender">
<param name="File" value="C:\Log4Net.log"/>
<layout type="log4net.Layout.PatternLayout">
<param name="ConversionPattern" value="%d %-5p %c %m%n"/>
</layout>
</appender>
<appender name="TraceAppender"
type="log4net.Appender.TraceAppender">
<conversionPattern value="%date %-5level [%property{NDC}] - %message%newline" />
<appender name="JSConsoleAppender"
type="log4net.Appender.JSConsoleAppender">
<!--Note JSConsoleAppender cannot have %newline-->
<conversionPattern value="SERVER: %date %-5level %logger: %message SRC: %location" />
<logger name="MyLogger">
<level value="ALL" />
<appender-ref
<appender-ref
<appender-ref
</logger>
</log4net>
<!--END log4net configuration-->
<system.web>
<compilation debug="true"
targetFramework="4.0"/>
</system.web>
</configuration>
Here's the default.aspx file from the test program, I added a bit of JavaScript and jQuery:
<%@ Page Language="C#" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="_Default" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "">
<html xmlns="">
<head id="Head1" runat="server">
<title>Log4Net Test</title>
<script src="Scripts/jquery-1.7.js" type="text/javascript"></script>
$(document).ready(DocReady);
function DocReady()
if (window.console)
console.log("CLIENT: Doc Ready!");
<form id="form1" runat="server">
<div>
<h5>Log4Net Test</h5>
<asp:Button
</div>
</form>
Here's the default.cs file from the test program with some logging strings:
using System.Linq;
using System.Web.UI.WebControls;
public partial class _Default : System.Web.UI.Page
private static readonly ILog log = LogManager.GetLogger("MyLogger");
protected void Page_Load(object sender, EventArgs e)
log.Info("Page_Load!");
protected void ButtonLog_Click(object sender, EventArgs e)
log.Info("Soft kitty, warm kitty");
log.Warn("Little ball of fur");
log.Error("Happy kitty, sleepy kitty");
log.Fatal("Purr, purr, purr.");
And finally, here are the three output logs:
From the JSConsoleAppender (IE Developer Tools), it includes a client log call with the server log calls:
From the TraceAppender, the Visual Studio Output window:
From the LogFileAppender (using Notepad):
You can download the solution/projects/code here. | http://weblogs.asp.net/stevewellens/default.aspx | CC-MAIN-2014-15 | refinedweb | 1,758 | 51.95 |
Trouble with introductory programming with C program regarding division and modulus
modulus and division in java
modulus operator in c examples
integer division programming
mod c
division operator in c
c modulus negative
integer division calculator
I apologize for the poor title, and I wish to be more specific, however I am in an introductory programming course, and I don't have much programming experience. The problem I have been given states that I need to find the total amount of rope to climb a mountain. It has to accurately put out how many pieces of 100 feet rope are needed, and how many 10 feet pieces of rope are needed. The issue I'm facing is that when, say for example, the mountain is 611 feet, I'm unsure how to get the program to display that 6 pieces of 100 feet rope are needed and 2 pieces of 10 feet rope are needed. My code allows for simple figures like, 600, or 610, or 10 feet, but I don't know how to compensate for a figure that is in between tens. My code is placed below- Again, I apologize for not being able to make this more specific.
#include <stdio.h> int main() { // declaration of variables int total_height; int hundred_feet_rope; int ten_feet_rope; //prompt user for input information printf("How tall is the mountain?\n"); scanf("%d", &total_height); //calculations for how many skeins of 100 feet rope are needed, and how many 10 feet rope is needed hundred_feet_rope = total_height / 100; ten_feet_rope = total_height % 100 / 10; //output results printf("You will need %d skeins of 100 feet rope and %d skeins of 10 feet rope!\n", hundred_feet_rope, ten_feet_rope); return 0; }
Modulo Operator (%) in C/C++ with Examples, The modulo division operator produces the remainder of an integer division. Compilation Error in C code :- prog.c: In function 'main': prog.c:19:16: error: invalid The sign of the result for modulo operator is machine-dependent for negative Please write to us at contribute@geeksforgeeks.org to report any issue with the Trouble with introductory programming with C program regarding division and modulus I apologize for the poor title, and I wish to be more specific, however I am in an introductory programming course, and I don't have much programming experience.
To consume the additional 10-foot section of rope you require for anything between multiples of 10, you must push your existing number to account for that additional length of rope prior to the final division.
ten_feet_rope = (total_height % 100 + 9) / 10; // here ============================^^^
This is common in computer science. Ex: symmetric encryption of an arbitrary amount of data using a specified block size .
At least I think that's what you're asking. I leave the task of accounting for this potentially bleeding into a 100 foot multiple to you (it can happen. suppose you had a 95 foot height; in that case you would want one 100ft length, and no 10ft lengths).
An Introduction to C Programming for First-time Programmers, Let's begin by writing our first C program that prints the message "Hello, world!" on the On Text Editor with GNU GCC compiler, issue these command from CMD Shell Addition, subtraction, multiplication, division and remainder are binary For the Love of Physics - Walter Lewin - May 16, 2011 - Duration: 1:01:26. Lectures by Walter Lewin. They will make you ♥ Physics. Recommended for you
You could use the modulus operator to see if the remaining number divided by 10 is bigger than 0, if so, it adds one 10 feet rope:
if (total_height % 10 > 0) ten_feet_rope++;
Integer Division and Modulus – Programming Fundamentals, Practice: Introduction to Programming In integer division and modulus, the dividend is divided by the divisor into an integer quotient and a remainder. differently, depending on the programming language, and is called: integer division. cnx.org: Programming Fundamentals – A Modular Structured Approach using C++ The C and C++ language provides a built-in mechanism, the modulus operator (‘%’), that computes the remainder that results from performing integer division. Consider the following program which takes a number from user and calculates the remainder of the number with divided by 3.
A Natural Introduction to Computer Programming with C++, Program evenodd.cpp is another example of the use of an if-else construct. As there are only two possible values for the remainder in by-two divisions, the The modulus operator in C gives us the remainder. Thus, a%b gives us the remainder after dividing a by b. When we divide a smaller number by a larger number, the quotient is zero, while the remainder is the dividend i.e. the smaller number itself. So, the result is the smaller number itself, no matter how much larger the divisor is.
Modulus Operator in C and C++, How to use the modulus operator when in C and C++ to get remainders. Take a simple arithmetic problem: what's left over when you divide 11 by 3? But how would you compute this in a programming language like C or C++? It's not generated number and reduce that number to a random number on a smaller range, C PROGRAMMING: INTEGER DIVISION AND MODULO (%) Whentwointegersaredivided,theresultistruncated. Thatis,whenthecomputercalculates 23/4, instead of getting 5.75 it gets 5. The computer literally asks how many times 4 goes into 23, and doesn’t care anything about the remainder.
Introduction to Numerical Programming: A Practical Guide for , A Practical Guide for Scientists and Engineers Using Python and C/C++ Titus A. Beu According to the polynomial remainder theorem, by dividing Pn (x) by the while Listing 6.11 provides an example of a main program that makes use of Check out for more free engineering tutorials and math lessons! C++ Programming Tutorial: Integer division and remainders using
total_height % 10is zero when the height is divisible by 10 and not zero if the height isn’t divisible by 10. Does that help?
- 5 100 and 9 10 will not be enough anyway. Did you mean 5 100 and 10 10?
- @Broman: yes, good catch! I was fixing it as you were commenting :)
- This will fail an input on the form
x99where
xis zero or more digits.
- @Broman Did you read this answer? Specifically the last sentence ?
- Nope, I did not. I stopped reading when I saw the bug. Good enough then.
- This was it, thank you so much. I feel i overthought the problem a lot looking back on it. Again, i apologize if the question was worded poorly or if the code wasn't formatted accurately.
- @MorganHilton Read the last sentence. You still have work to do. But at least you know how to round up to a nearest block, which I think was the root of your question.This answer isn't intended to solve your homework; only show you how to do that round-up. | https://thetopsites.net/article/52141770.shtml | CC-MAIN-2021-25 | refinedweb | 1,145 | 53.51 |
This th...DWHEELER/Text-Markup-0.23 - 21 May 2015 06:24:15 GMT - Search in distribution
- Text::Markup::HTML - HTML parser for Text::Markup
- Text::Markup::Pod - Pod parser for Text::Markup
- Text::Markup::None - Turn a file with no known markup into HTML
- 9 more results from Text-Markup »
Text::Markup::Any is Common Lightweight Markup Language Interface. Currently supported modules are Text::Markdown, Text::MultiMarkdown, Text::Markdown::Discount, Text::Markdown::GitHubAPI, Text::Markdown::Hoedown, Text::Xatena and Text::Textile....SONGMU/Text-Markup-Any-0.04 - 23 Nov 2013 06:32:02 GMT - Search in distribution
TEI XML is a wonderful thing. The elements defined therein allow a transcriber to record and represent just about any feature of a text that he or she encounters. The problem is the transcription itself. When I am transcribing a manuscript, especiall...AURUM/Text-TEI-Markup-1.9 - 16 May 2014 14:48:17 GMT - Search in distribution
MetaMarkup was inspired by POD, Wiki and PerlMonks. I created it because I wanted a simple format to write documents for my site quickly. A document consists of paragraphs. Paragraphs are separated by blank lines, which may contain whitespace. A para...JUERD/Text-MetaMarkup-0.01 - 13 Jun 2003 07:40:24 GMT - Search in distribution
- Text::MetaMarkup::HTML - MM-to-HTML conversion
- Text::MetaMarkup::AddOn::Perl - Add-on for MM to support embedded Perl
- Text::MetaMarkup::AddOn::Raw - Add-on for MM to support raw code
- 2 more results from Text-MetaMarkup »
Provides formatting to HTML for the *Caffeinated Markup Language*. Implemented using the Text::CaffeinatedMarkup::PullParser. For details on the syntax that CML implements, please see the Github wiki < - 04 Jan 2014 22:55:24 GMT - Search in distribution
TODO...ABW/Kite-0.4 - 28 Feb 2001 15:12:52 GMT - Search in distribution
- Kite - collection of modules useful in Kite design and construction.
- Kite::XML::Parser - XML parser for kite related markup
Blatte is a very powerful text markup and transformation language with a very simple syntax. A Blatte document can be translated into a Perl program that, when executed, produces a transformed version of the input document. This module itself contain...BOBG/Blatte-0.9.4 - 28 Jul 2001 21:05:53
- Text::Smart::HTML - Smart text outputter for HTML
This module simply strips HTML-like markup from text rapidly and brutally. It could easily be used to strip XML or SGML markup instead; but as removing HTML is a much more common problem, this module lives in the HTML:: namespace. It is written in XS...KILINRAX/HTML-Strip-2.10 4.5 (3 reviews) - 22 Apr 2016 11:21:38 GMT - Search in distribution
ZOUL/Text-FindLinks-0.04 - 27 Sep 2009 07:45:44 GMT - Search in distribution
This module is a thin wrapper for John Gruber's SmartyPants plugin for various CMSs. SmartyPants is a web publishing utility that translates plain ASCII punctuation characters into "smart" typographic punctuation HTML entities. SmartyPants can perfor...TSIBLEY/Text-Typography-0.01 5 (1 review) - 10 Jan 2006 04:33:49 GMT - Search in distribution
Provides a simple means of parsing XML to return a selection of information based on a markup profile describing the XML structure and how the structure relates to a logical grouping of information ( a dataset )....SPURIN/XML-Dataset-0.006 - 04 Apr 2014 15:22
Spork lets you create HTML slideshow presentations easily. It comes with a sample slideshow. All you need is a text editor, a browser and a topic. Spork allows you create an entire slideshow by editing a single file called "Spork.slides" (by default)...INGY/Spork-0.21 - 10 Jun 2011 16:29:05 GMT - Search in distribution
This class represents an XPC request or response. It uses XML::Parser to parse XML passed to its constructor....GREGOR/XPC-0.2 - 13 Apr 2001 11:35:13 GMT - Search in distribution
As these items are completed, move them down into Recently Completed Items, make sure to date and initial. When we have a version release, all of the recently completed items should be moved into changelog.pod....TPEDERSE/WordNet-Similarity-2.07 - 04 Oct 2015 16:19:03 GMT - Search in distribution
This script uses "Pod::POM" to convert a Pod document into text, HTML, back into Pod (e.g. to normalise a document to fix any markup errors), or any other format for which you have a view module. If the viewer is not one of the viewers bundled with "...NEILB/Pod-POM-2.01 3 (2 reviews) - 07 Nov 2015 21:05:42 3.5 (10 reviews) - 18 Apr 2015 15:04:42 GMT - Search in distribution
DBR/App-PDoc-0.10.0 - 21 Mar 2013 18:09:39 GMT - Search in distribution | https://metacpan.org/search?q=Text-Markup | CC-MAIN-2016-18 | refinedweb | 793 | 55.24 |
java.lang.Object
org.netlib.lapack.Sgeesorg.netlib.lapack.Sgees
public class Sgees
Following is the description from the original Fortran source. For each array argument, the Java version will include an integer offset parameter, so the arguments may not match the description exactly. Contact seymour@cs.utk.edu with any questions.
* .. * *). * *) (input) INTEGER * The order of the matrix A. N >= 0. * * A (input/output) REAL array, dimension (LDA,N) * On entry, the N-by-N matrix A. * On exit, A has been) (output) REAL array, dimension (LDVS,N) * If JOBVS = 'V', VS contains the orthogonal matrix Z of Schur * vectors. * If JOBVS = 'N', VS is not referenced. * * LDVS (input) INTEGER * The leading dimension of the array VS. LDVS >= 1; if * JOBVS = 'V', LDVS >= N. * * WORK (workspace/output) REAL array, dimension (LWORK) * On exit, if INFO = 0, WORK(1) contains the optimal LWORK. * * LWORK (input) (workspace) LOGICAL array, dimension (N) * Not referenced if SORT = 'N'. * * INFO (output). * * ===================================================================== * * .. Parameters ..
public Sgees()
public static void sgees(java.lang.String jobvs, java.lang.String sort, java.lang.Object select, int n, float[] a, int _a_offset, int lda, intW sdim, float[] wr, int _wr_offset, float[] wi, int _wi_offset, float[] vs, int _vs_offset, int ldvs, float[] work, int _work_offset, int lwork, boolean[] bwork, int _bwork_offset, intW info) | http://icl.cs.utk.edu/projectsfiles/f2j/javadoc/org/netlib/lapack/Sgees.html | CC-MAIN-2017-51 | refinedweb | 212 | 50.33 |
*
A friendly place for programming greenhorns!
Big Moose Saloon
Search
|
Java FAQ
|
Recent Topics
Register / Login
JavaRanch
»
Java Forums
»
Frameworks
»
Struts
Author
Probably Easy Struts2 Question - SessionAware
Aaron Wilt
Ranch Hand
Joined: Sep 26, 2001
Posts: 49
posted
Jul 12, 2007 18:50:00
0
I'm not sure what I'm doing wrong... but I have a feeling I'm making a stupid error I cannot find. I'm trying to use the SessionAware interface to give access to the Session object in my Struts 2 action class.
When I inspect the session map in the execute() method, it's an empty Map (non-null). Shouldn't this map contain session context information in addition to the any session attributes, such as the host, server name, port, etc ? Or does it only hold attributes?
Thanks for any help you can provide.
Here's my struts.xml and package.xml:="arson" /> <include file="packages.xml"/> </struts> package.xml <?xml version="1.0" encoding="UTF-8" ?> <!DOCTYPE struts PUBLIC "-//Apache Software Foundation//DTD Struts Configuration 2.0//EN" ""> <struts> <package name="arson" namespace="/" extends="struts-default"> <interceptors> <interceptor name="servlet-config" class="org.apache.struts2.interceptor.ServletConfigInterceptor"/> </interceptors> <action name="Welcome" class="arson.action.WelcomeAction"> <result>pages/welcome.jsp</result> </action> <action name="ProductSearch" class="arson.action.ProductSearchAction"> <interceptor-ref <result>pages/productSearchResults.jsp</result> </action> </package> </struts>
Here's my Action class:
package arson.action; import java.util.List; import java.util.Map; import org.apache.struts2.interceptor.SessionAware; import com.opensymphony.xwork2.ActionSupport; import arson.biz.ProductManager; /** * Primary Action to perform a search for a List of Product entity objects */ public class ProductSearchAction extends ActionSupport implements SessionAware { private List products; private String criteria; public String execute() throws Exception { return performSearch(); } private Map session; public void setSession(Map map) { session = map; } public Map getSession() { return session; } /** * Performs a search of products based on the given criteria * @return SUCCESS if results are returned, ERROR otherwise */ public String performSearch() { System.out.println("Performing a search"); products = ProductManager.performProductSearch(criteria); return SUCCESS; } public List getProducts() { return products; } public void setProducts(List products) { this.products = products; } public String getCriteria() { return criteria; } public void setCriteria(String criteria) { this.criteria = criteria; } }
[ July 12, 2007: Message edited by: Aaron Wilt ]
[ July 12, 2007: Message edited by: Aaron Wilt ]
Aaron Wilt
Ranch Hand
Joined: Sep 26, 2001
Posts: 49
posted
Jul 13, 2007 04:21:00
0
I figured out what I was doing wrong... I was expecting too much from the session map. Somehow I got confused into thinking that the session map had request data in it. dumb mistake.
What I really should have done was implement ServletRequestAware, which allows me to get a hold of the
HttpServletRequest
, which of course is the same request object as
struts
1, which has the data that I needed.
Hopefully this post will save someone some trouble in the future. None of the docs I've read on struts2 spend much time talking about ServletRequestAware, which is why I probably made this oversight.
At least my code works now
Wes Wannemacher
Greenhorn
Joined: Jul 07, 2007
Posts: 13
posted
Jul 13, 2007 06:53:00
0
The problem with ServletRequestAware is that your action is now dependent on the JSP/Servlet API. Writing a
jUnit
test
is going to be very difficult. In your situation, I would ask the following - What is it exactly that you need out of the session object and is there another way to get it? Next, I would ask whether or not you might be better off placing this logic in an interceptor... It would seem that you may be trying to get this sort of info just to keep track of where a user came from, etc. By placing this logic in an interceptor, you could then configure it to work on any number of actions.
-Wes
Discussion of Java, Struts, Spring, Hibernate, etc. <a href="" target="_blank" rel="nofollow"></a>
Wes Wannemacher
Greenhorn
Joined: Jul 07, 2007
Posts: 13
posted
Jul 13, 2007 06:56:00
0
another thing...
I am pretty sure that servlet-config interceptor is in the default stack. It looks like your package extends "struts-default," so unless you changed the struts-default.xml in struts2-core-2.x.jar (which I would recommend against), then your attempt to insert the interceptor in your action configuration is redundant.
Nick Williamson
Ranch Hand
Joined: Jan 06, 2007
Posts: 73
posted
Jul 16, 2007 22:54:00
0
testing
wont be that hard, you can include spring and use their mockhttpservletresponse, they have a bunch of mock objects that are really good for junit testing with j2ee dependencies. And truthfully (not defending it, or saying you shouldn't), but how many people
unit
test their actions, much less anything they write. A
Vishal Sinha
Greenhorn
Joined: Jul 17, 2008
Posts: 6
posted
Jul 17, 2008 12:11:00
0
Hey,
I am really breaking my head to get session object in my action class.
When I login to a certain portal, the portal sets the user name to the session object. Now on the portal user clicks on my application which is written in struts2. When I get the session map using sessionaware interface, I get an empty map.
I also tried using ServeletrequestAware interfacae to get httprequest and called getSession() on it. Still I am getting empty session.
Can someone tell me how can I get the session which contains username set by the portal? Do I need to make any changes to struts-config.xml for getting the session.
I agree. Here's the link:
subject: Probably Easy Struts2 Question - SessionAware
Similar Threads
ModelDriver Interceptor + not getting my object
Struts2 Validation
values are not populated in browser
Struts 2 : Authentication & Authorization
Error DefaultActionInvocation
All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter
JForum
|
Paul Wheaton | http://www.coderanch.com/t/56607/Struts/Easy-Struts-SessionAware | CC-MAIN-2013-48 | refinedweb | 982 | 55.84 |
Announcements
5MinuteGamingMembers
Content count529
Joined
Last visited
Community Reputation274 Neutral
About 5MinuteGaming
- RankAdvanced Member
Need some advice to begin in gaming development..
5MinuteGaming replied to Odysseus's topic in For Beginners[quote name='Odysseus' timestamp='1337131023' post='4940554'] Right now I'm 22 and I would like to get in the game industry but I'm worried because I used to think that everything I needed I was going to learn it at university... now I've graduated after 4 years (engineering in software development). And realize that I don't know anything about developing games! I'm not even a really good programmer . I may know all the concepts but when it comes to start programming I find it very difficult. [/quote] I know exactly where your coming from with that. I too thought that I would learn all I needed to know that Uni, now I do website support and development. I have developed copious amounts of engine code but have yet to put things together into a game. The one thing I wish I had done more was to practice designing small games. I do believe that having finished games is the largest hurtle to being a successful game maker and ultimately making a living out of it. [quote name='Odysseus' timestamp='1337131023' post='4940554'] What I want is to get some experience, maybe work with people that develop games I don't even care if I don't get paid at moment. [/quote] Again I am in the same situation, I am focusing on creating and designing games now instead of on the coding aspect though it's currently really hard to switch gears after years and years of focusing on the endlessly evolving technology. That there is the crux it doesn't matter what the technology is games made in any language with any technology can still be enjoyable as long as the gameplay is good. Look at the classics for proof that this is true, pacman, tetris, mario, zelda. These games have stood the test of time the code has been ported and emulated on newer technology but the game design hasn't been altered. [quote name='Odysseus' timestamp='1337131023' post='4940554'] I've been reading other posts and I know that I need to choose a language (I want C#), read and practice a lot... I know that now (and you can be sure that I will start doing it). But I want more complete answers because what I want is to be in the game development enviroment to get experience! [/quote] Yes the general advice for beginners is to choose a technology, and I think everyone whose tried can agree that any language or engine can work even with limitations. I would say the best way to practice is to remake the classics use the existing design use freely posted artwork from the internet for your graphics or placeholder images or just render text. You want to get to the point where you've handled some of the coding problems related to these classic styles of gameplay. Even something as seemingly simple as pacman AI can be challenging if you've never coded it before. So my suggestion to you is to pick a classic, some that a lot of tutorials and beginners go to are Old School brickout - you will learn basic 2D collision detection how to design scoring systems and replayability asteroids - will give you more complex collision coding experience and how to handle a changing environment and a little free roam player movement. pacman - will teach you some AI and how to make a game more fun by tweaking those little ghosts top-down shooter - will give you a little bit of level design experience, and will teach you the values of a level editor or scripting support whether you use them or not. side-scrolling platformer - will teach you how to develop and create a realistic world and story elements, these can become complex but start with one weapon simple enemies that just move back and forth. puzzle - will teach you how to think about player difficulty and how to design game progression effectively. As well as get you thinking outside the box since puzzles games can create genres each in their own right. The good thing about these is that they have a lot of attention their basically the classical computer games and elements from these are seen in tons of AAA games. You can add other elements to these games and find a lot of information about them which is helpful when your stuck and could get disheartened. New age tower defense - these are simple and pretty set game designs which is good cause you don't have to design too much but still have some interesting choices to make. Making one of these can teach you a lot about balancing and how the little things effect gameplay in the long term (very important) [i](I can't think of any others right now, that don't fall into the above designs)[/i] The sad truth to me is that you can read and study all you want about making video games, you can even design complex and creative worlds, items, levels, and other game elements but until you've actually implemented the gameplay in code and made it work and polished a game with all the bells and whistles (saved games, sound, options menus, high scores, etc...) you really aren't a game programmer/designer. Having a portfolio of games you've created whether sold or just for personal use will be something that will attest to your abilities infinitely better than going to an interview or contacting a game company looking to break into the industry. I hope this helps steer you on the right path, good luck!
- Back to making games
Is OOP better for developing games?
5MinuteGaming replied to Mafioso's topic in General and Gameplay ProgrammingOne of the first usages and primary goals of OOP is encapsulation! The data-oriented programming in the article is more of a data oriented design and does not discard any OOP principles or design. The article is mainly about cache misses as a cause of bad performance. He blames it on OOP and wrongly so, as he states it's more of the general way OOP is taught and how a lot of new OO programmers tend to design using the classic desktop app approach and not focusing on performance. The article is more of a reminder that the underlining CPU architecture is important and should be taking into account the way your data is being stored in memory is important for performance. I am curious to see how he came to the conclusion that cache misses were the direct cause of all performance issues on projects he worked on over 10 years. There are other ways to improve performance in your design and managing memory localization in your design if cache misses is determined to be the root cause of performance issues, but usually it is something much different like non-optimized usage of the GPU which causes more performance problems than anything else I've ever seen. Concurrent programming comes with it's own perils, including deterministic functionality which is hard to do with multi-threaded applications and even more so with multi-cores as one tries to optimize code on a lower level. Threading in game design is the topic of a lot of experimentation and development, especially on multi-cores. OOP is both cost effective and productive for larger teams and projects. Proper abstraction can help make your code reusable, extensible, and easier to debug. Smaller projects like those most are familiar with here, may be easier to maintain, debug, and optimize with a functional language or a simpler OO design. @Antheus Simulations and time critical applications are not impossible to design effectively and efficiently using OO. I noticed your examples are referencing common design practices used in game OO programming wherein GameObjects are a base for everything in a game world and are responsible for updating itself. That is an issue in design not in any aspect of OOP. The GoF patterns can be successfully and effectively applied to Games and especially to large sets of related objects. Case and point, the Flyweight pattern is very effective and more efficient than common approaches to particle systems. But that is a matter of poor OO design for games. The state pattern is easily the most useful to switch between different game screens, also used in AI and makes extending them much easier during development. The mediator pattern can be used in place of each GameObject updating itself. The visitor pattern can make optimizing collision detection algorithms easier with no impact on any other code. The strategy pattern can be used for supporting different space-partitioning algorithms. These are only a few usages of OOP that are good things. OOP in my opinion can be hard to get right, easily over-designed, easily broken in common practices (such as getter and setter methods), require careful and experienced design. But its benefits do completely outweigh the functional programming paradigm or instances where one might BREAK OO principles. However, OOP in games is not the way to learn how to program games, without OOP Orge3D would not be able effectively abstract the rendering system (OpenGL, versus DirectX), support tons of plugins and many different plug-able functionality. It is beneficial for APIs because objects make it easier for programmers to understand how to use them, they are easier to visualize. Otherwise API and library writers have to write lots of documentation, case and point opengl. Very good for large APIs simpler ones are sometimes just as easy to understand with a small set of functions but the key there is small. For a triple-A commercial game title development time frames would double and risk would almost triple without OOP. Large programs are easier to develop with OOP and good design can reduce lots of overhead, performance issues, etc. I am talking hundreds of thousands of lines of code, especially the millions of lines of code in games today. It is absolutely necessary to decrease time to market, which is a constantly growing trend in the games industry along with abstracting out hardware specific code and cross-compiling, having the ability to release for multiple platforms, and reducing platform specific testing. Furthermore encapsulation reduces coding errors by simply preventing a programmer from calling any function that does anything at any time. Lisp solves that by defining each function as having a specific functionality similar to encapsulation. But this is my opinion on OOP, which is what the OP asked for, so I think some industry research would be necessary to say whether OOP is truly better for developing games. Obviously it is a trend for a reason, and games have only gotten more realistic and more detailed. I apologize for the long-winded post.
[java] Proper way to build GUI layout
5MinuteGaming replied to gretty's topic in General and Gameplay ProgrammingI have written a fairly large application with over 20 different forms. I use a combination of three things BorderLayouts, GridBagLayouts, and swing Borders. I would have used the GroupLayout, new as of jdk1.6 which is actually very versatile and provides an excellent dynamic sizing capability for your forms. I agree that GridBagLayout is hard to work with even for a veteran java programmer. Until you've done many different styles of forms with it it can be...quirky. If you are learning I would suggest staying away from GUI Builders my reasoning is that your initial instinct will be to make everything a static size, which will make the layout managers useless. One of the advantages of swing is it's ability to provide dynamic resizing of components. You won't find the same ease in Microsoft gui programming, with vb and c#, iirc. Borders are very important and with the GridBagLayout the insets are equally important to promote usability; you need spacing between components. As has been said you can use simple nested layouts. However, both GridBag and Group allow you to skip the nesting, making the behavior a bit simpler to manage, they allow you to put multiple sections of components in the same Panel without the need for nesting, but similar layouts can be achieved using nesting. The last alternative that I can suggest is rolling your own layout manager. It's extremely easy, of course you'll need to follow a few tutorials or examples, but it doesn't take much to make one. Good luck! As always feel free to ask more questions, layouts have become easier for me over the years so it doesn't take me much time to layout a form anymore.
[java] borderlayout ?question
5MinuteGaming replied to fantasyme's topic in General and Gameplay ProgrammingIt all depends on what kind of layout you are trying to go for. But in order to support more than 5 components using a BorderLayout you need to nest your objects (components) inside other containers (JPanel usually). GridBagLayout is a necessary layout for non-nested forms. If you use a null layout i.e. setLayout(null) you can position your components to absolute values meaning that where you setPosition(x,y) is where they will appear, but they won't move or resize if the window is resizable or if it's container is resized. A combination of FlowLayout, BorderLayout, CardLayout, and GridLayout is usually good for a start but eventually you will enjoy using GridBagLayout for flexibility and significantly shorter amount of code and nested components. BorderLayout will only allow a single component to occupy one direction, i.e. there can only be one component in the NORTH spot. see the LayoutManager interface java docs for a list of available layouts. I would familiarize yourself with as many as possible. It also isn't that difficult to make your own layout manager. Also you can get a good deal of information reading some of Sun's Java Tutorials they have them for just about anything, their not entirely geared toward professional level applications but they cover the basic usage of the sdk classes. They have one for layout managers Using Layout Managers it would definitely help to at least check out example code or read through it.
[java] Porting from C/C++ to Java: loading an 3d Object
5MinuteGaming replied to Ramon Wong's topic in General and Gameplay ProgrammingIt's hard to say without seeing the code you have so far. It may just be something really innocuous.
[java] Porting from C/C++ to Java: loading an 3d Object
5MinuteGaming replied to Ramon Wong's topic in General and Gameplay ProgrammingQuote:Original post by Ramon Wong question that I have is why do I have to divide the size with 8? The primitive type object wrappers provide constants for the size of the primitive type, but those are in bits, a short would usually be 16 bits, integer usually 32 bits. I guess this was for flexibility or maybe just completeness. I didn't think about it too much when I wrote the example but it would be more appropriate to do the following: Integer.SIZE/Byte.SIZE Since what you really are making is an array of Bytes. But I guess for file reading you need an exact number of bytes that your reading in, so I would say constants are just fine as long as the file format specifies the number of bytes for the field your trying to read. As a side note, when converting from C to Java the following would be very close. Where n is the size of the array. Cchar *buf = new char[n]; Javabyte buf = new byte[n]; You can do most of the same operations in a similar manor. E.g. Cmemcpy(dst, buf, n); JavaSystem.arrayCopy(buf, 0, dst, 0, buf.length); Feel free to post any other conversion questions you have.
[java] Porting from C/C++ to Java: loading an 3d Object
5MinuteGaming replied to Ramon Wong's topic in General and Gameplay ProgrammingString has a constructor that receives an array of bytes and converts that to a string. One thing to note however is that an array of bytes cannot be determined to only have 1 byte per character as it might be in wide character format. But there are constructors that allow for converting the type of ascii encoding if need be. To get the first 10 bytes from a stream or even the first n bytes, use the following. byte header[] = new byte[10]; is.read(header); String headerString = new String(header); Now to read in the integer version. byte versionBuf[] = new byte[Integer.SIZE/8]; is.read(versionBuf); ByteBuffer bb = ByteBuffer.wrap(versionBuf); IntBuffer ib = bb.asIntBuffer(); ib.rewind(); //must have to go back to the beginning of the buffer int version = ib.get(); Likewise there are multiple types of buffers in the java.nio package. Both are included in the android API so you should be good there. You may also be interested in DataInputStream which might make things easier for you if you are having trouble getting your head around how the ByteBuffer's work. Here would be one way to convert your function LoadHeader. private String id = null; private int version = -1; //default to an invalid version public boolean loadHeader(File file) throws IOException { //keep is a generic input stream, so that we can use any type of subclass of //InputStream in the future without needing to change the loading code. InputStream is = new FileInputStream(file); byte header[] = new byte[10]; is.read(header); id = new String(header); byte versionBuf[] = new byte[Integer.SIZE/8]; is.read(versionBuf); ByteBuffer bb = ByteBuffer.wrap(versionBuf); IntBuffer ib = bb.asIntBuffer(); ib.rewind(); version = ib.get(); if(!id.equals("MS3D000000")) { return false; } if(version != 3 && version != 4) { return false; } return true; //that whole thing can be shortened to, if the first is false it won't //evaluate the second and third expressions. //return !header.equals("MS3D000000") && version != 3 && version != 4; } C to Java caveat Java is a much more strongly typed language than C, so there is an abstraction between memory and your program. Unlike C where everything is just memory and you can manipulate it however you want and use neat pointer and memory address manipulations to do anything you want to it, in java there is a more strict contract between memory and the language everything essentially in java is an Object rather than memory. While java allows binary operations on primitive types an Array in java such as int[] array; is basically an object not like in C where it is just a sequence of bytes somewhere in memory. Thus you cannot guarantee the format of memory, so a by product of using java is to adhere to this strict contract. In java you must write code to specifically convert between types or properly use inheritance to solve your problems. Hope this helps, and also the reference documentation for Android is your friend, use it until you know every bit of it because it could save you a lot of time hunting down problems or writing low-level routines that have more stable and tested solutions. Just something to keep in mind as you convert your code. Skeletal Animation vs Keyframed on Android As far as Skeletal animation versus key framed. I am not sure what the programmable shader support is for the OpenGL ES specification. But if it is supported it might limit the amount of array size for matrices which would limit your Skeletal animation shader but it might still be doable. If there is no shader support then key framed is the way to go since doing skeletal animation without a shader would result in modifying the vertex buffer for an object and applying matrix transforms for the bones either every frame or just converting the skeletal animation to keyframed when loading the program, and applying the animation matrix transforms while loading.
[java] what is the proper way to create java gui?
5MinuteGaming replied to rajend3's topic in General and Gameplay Programming@Antheus - so I guess that makes my first bit of code there moot since the ActionListener is executed in the swing event thread. Although I have done a lot of JOptionPane.show*Dialog's in seperate threads while running concurrent code for doing certain tasks. As well as rendering things in separate threads. Usually setting JFrame.setIgnoreRepaint(true); and providing my own rendering thread, without any issues whatsoever. Since that is usually for rendering game specific things, I don't typically mix swing components into my games, thus I might see issues if I happen to use swing components in conjunction with game rendering.
[java] what is the proper way to create java gui?
5MinuteGaming replied to rajend3's topic in General and Gameplay ProgrammingI have never experienced a deadlock by instantiating and running a JFrame from the main thread. And I have run hundreds of java applications. However, just to be safe you can always use the invokeLater. If you creating and displaying JFrames from within your application it is probably preferable to use invokeLater to assure there are no threading issues. But like I said I have never ever run into a situation where a deadlock occurred from creating and displaying a JFrame from any thread. Even if that thread happens to be the Event thread. For instance JButton showWindow = new JButton("Show A Window"); showWindow.addActionListener(new ActionListener() { public void actionPerformed(ActionEvent e) { JFrame newFrame = new JFrame("Test Window"); newFrame.setDefaultCloseOperation(JFrame.DISPOSE_ON_CLOSE); newFrame.setSize(300,200); newFrame.setLocationRelativeTo(null); //center on screen newFrame.setVisible(true); //display the window } }); this.add(showWindow); If you run into issues, it should be extremely easy to change what thread is creating and displaying your windows, if you put the display code within the super class of JFrame which is how I typically like to do things. public class MyFrame extends JFrame { public MyFrame() { super("My Frame"); setLayout(new BorderLayout()); add(new JLabel("Test Window"), BorderLayout.CENTER); this.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); this.pack(); this.setLocationRelativeTo(null); this.setVisible(true); } public static void main(String...args) { new MyFrame(); // or /* save this only if you encounter issues with the other method SwingUtilities.invokeLater(new Runnable() { public void run() { new MyFrame(); } }); */ } } As far as the proper way to create a GUI, during school we never used invokeLater. I believe that you are splitting hairs on this it really is up to you to determine which method is best if you run into issues with one of the methods and not the other then that would be your answer. But like I said I have never run into any issue.
[java] Sending / receiving data in a datagram?
5MinuteGaming replied to The Rug's topic in General and Gameplay ProgrammingThe only way I can see to stream the incoming data packets to a DataInputStream without needing to create stream objects each time is to use PipedOutputStream and PipedInputStream. I'll see if I can work up an example for you.
[java] crazy java fps counter results
5MinuteGaming replied to Japheth's topic in General and Gameplay ProgrammingI'm guessing it has to do with your desktop being a dual core machine. Check your CPU usage while running the program, I'm pretty sure it will only use a single core thus your speed will greatly be reduced. The graphics card on your desktop won't actually increase the rendering speed unless you turn on hardware acceleration and opengl rendering support for java. According to the link below you can do that as a command line option: -Dsun.java2d.opengl=true I found the following link in a quick google search for java hardware acceleration, I'm sure there are more in depth explanations on the particular features that might help solve your problems. You also might want to try and put your rendering in a separate thread and do your own double buffering though I doubt this will make a difference. Just a thought though. Good luck with solving this issue. I would like to know of any progress you have with this and what method(s) you use.
[java] Java Team Required
5MinuteGaming replied to Funblocker's topic in General and Gameplay ProgrammingQuote:Original post by Stelimar Quote:Original post by Funblocker Unlike some (hack and Slash) mmo's written entirely in C++ the possibilities are endless when using Java as a jump off. I don't see how any of what you mentioned could be implemented in Java, but not C++... I would have to respectfully disagree with this. Java is a general purpose language, despite many unfounded assumptions that the VM is simply too slow it provides all that is necessary for such development. Quote:Original post by FunblockerThanks for the reply. I was hoping someone would point this out. What I wont mention at this stage are the extra features I have been working on for about 2 years that just wont work in any other language but Java. what I will mention though is the realism of this mmo both sight and sound will be unlike anything anyone has ever seen to date, including a well known Java based game called Wurm Online. Best regards Funblocker I would also have to question this statement since as I stated above Java is a general purpose programming language and C++ is as well. I fail to see exactly how any program can have features that would not be able to be implemented in any general purpose programming language. The only reason I can think of that you might say this is Java support for reflection and C/C++'s lack of support but it is not impossible to do reflection in C/C++. But all that aside I am interested in helping you along the way with your project. But at this time I can only devote a couple hours to help you get something up and running quickly. I believe that if you have at least something to start from then you may see more people willing to help you in your endeavor. I will PM you to discuss specifics.
Browser Game: Is it Entertaining Enough?
5MinuteGaming replied to Droopy's topic in Game Design and TheoryJust a quick thought that would improve the graphics quality by a ton, turn on antialiasing, it would help if you are drawing the balloons using the drawOval function in java.awt.Graphics. Do the following to turn on antialiasing in your paint method. Graphics2D g2d = (Graphics2D)g; g2d.setRenderingHint(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON); Then whenever you use the "g2d" object to draw primitives it performs antialiasing operations to give you smooth appearing primitives. Other than that, I'm still playing around with it but it seems like an interesting concept so far.
Best online game developers?
5MinuteGaming replied to beatniq's topic in GDNet LoungeQuote:Original post by beatniq You guys have got to be kidding me. Wow, ok... I see I've stepped into your little xenophobic world here. My apologies for not knowing secret passcode on the proper way to phrase my question. lol... blow me. ...and don't ask me for any favors when you leave your mothers' basement. Very unprofessional beatniq, all anyone who responded to this thread wanted was clarification. I would still like to know what you mean by "online" or "video game developers" at first I thought you were talking about individuals here at this site that are good developers. Then I thought browser based games. I'm afraid without clarification noone will be able to provide you with the answers you are seeking. In addition I am not familiar with the Amnesia company what do they do and do they have a website? | https://www.gamedev.net/profile/63201-5minutegaming/ | CC-MAIN-2017-34 | refinedweb | 4,674 | 59.13 |
// MovieGuide.java - This program allows each theater patron to enter a value from 0 to 4 // indicating the number of stars that the patron awards to the Guide's featured movie of the // week. The program executes continuously until the theater manager enters a negative number to // quit. At the end of the program, the average star rating for the movie is displayed. import javax.swing.JOptionPane; public class MovieGuide2 { public static void main(String args[]) { // Declare and initialize variables. double numStars; // star rating. String numStarsString; // string version of star rating double averageStars; // average star rating. double totalStars = 0; // total of star ratings. int numPatrons = 0; // keep track of number of patrons // This is the work done in the housekeeping() method { numStarsString = JOptionPane.showInputDialog ("Enter Number of Stars between 1 and 4: ");// Get input. // This is the work done in the detailLoop() method numStars = Double.parseDouble (numStarsString); // Convert to double. while (numStars > 0) // Write while loop here { totalStars =+ numStars; System.out.println ("Number of Patrons Responding: " + numPatrons); numPatrons ++; System.out.println ("Number of Stars Given: " + numStars); numStarsString = JOptionPane.showInputDialog ("Enter Number of Stars between 1 and 4: ");// Get input. { if (totalStars == -1) { averageStars = (totalStars/numPatrons); // Calculate average star rating System.out.println("Average Star Value: " + averageStars); } } } // This is the work done in the endOfJob() method System.out.println ("End of File"); } System.exit(0); } // End of main() method. } // End of MovieGuide class.
It's not missing a curly brace because it submitted when I was making some adjustments. - sorry about that.
Quote
It is NOW missing a curly brace because it submitted when I was making some adjustments. Sorry....
This post has been edited by jon.kiparsky: 12 October 2012 - 09:34 AM
Reason for edit:: fixed code tags - [/code], not [END CODE] | http://www.dreamincode.net/forums/topic/295344-endless-loop-will-not-stop-and-does-not-increment-correctly/ | CC-MAIN-2016-22 | refinedweb | 293 | 60.51 |
Let’s create a basic project scructure by invoking mix new from the command line. Type:
$ mix new greet
Where greet is our name project. You should see following out put.
* creating README.md * creating .gitignore * creating mix.exs * creating config * creating config/config.exs * creating lib * creating lib/greet.ex * creating test * creating test/test_helper.exs * creating test/greet_test.exs
There is a lib folder which contains all application code. mix.exs holds the metadata and dependencies of your application.
Now edit the file: lib/greet.ex and add this code:
defmodule Greet do def main(_args) do IO.puts "Hello World" end end
Elixir uses escript to build an executable. At first we need to set the main_module in mix.exs:
def project do [app: :greet, version: "0.0.1", elixir: "~> 1.0", escript: [main_module: Greet], # <- add this line build_embedded: Mix.env == :prod, start_permanent: Mix.env == :prod, deps: deps] end
Then create an executable and run it:
$ mix escript.build $ ./greet
Return to the main article | https://josdem.io/techtalk/elixir/elixir_application/ | CC-MAIN-2022-33 | refinedweb | 168 | 64.07 |
Odoo Help
This community is for beginners and experts willing to share their Odoo knowledge. It's not a forum to discuss ideas, but a knowledge base of questions and their answers.
How to open a popup view its active_id value?
Hey guys, I'm creating a button that opens a popup , but the view you want to open has different functions and the only way to differentiate is with the active_id attribute that appears in the URL. I mean, I want to open the stock.picking.form view, but this view has 3 different functions. It is the same view for three different functions. As shown in the figure below :
This is a URL of stock.picking.form active_id=2:
I want to open the view that has active_id = 2 . How I can open this view popup ?
This is the code of the function of the button:
@api.multi
def action_stock_picking(self):
self.ensure_one()
picking_form = self.env.ref('stock.view_picking_form', False)
ctx = dict(
default_model='stock.picking',
default_res_id=self.id,
default_composition_mode='comment',
mark_invoice_as_sent=True,
)
return {
'name': _('Formulario de Inventario: Recepciones'),
'type': 'ir.actions.act_window',
'view_type': 'form',
'view_mode': 'form',
'res_model': 'stock.picking',
'views': [(picking_form.id, 'form')],
'view_id': picking_form.id,
'target': 'new',
'context': ctx,
}
And this button code in the view.xml:
<button name="action_stock_picking" string="Inventario" type="object" icon="fa-arrow-right"/>
Thanks for all , I appreciate any help to solve this problem. Thanks.
add "res_id" entry in the returned dictionary, set value of this entry to the id of the record you want to open..
some_id = self.id # OR context.get('active_id'), OR whatever id you want to open
return {
'name': _('Formulario de Inventario: Recepciones'),
'type': 'ir.actions.act_window',
'view_type': 'form',
'view_mode': 'form',
'res_model': 'stock.picking',
'res_id': some_id,
'views': [(picking_form.id, 'form')],
'view_id': picking_form.id,
'target': 'new',
'context': c! | https://www.odoo.com/forum/help-1/question/how-to-open-a-popup-view-its-active-id-value-99647 | CC-MAIN-2016-50 | refinedweb | 301 | 53.27 |
.
Hey Everyone,
We try to very hard to avoid changes that might break 3rd party custom features but every now and then we can't avoid it if we want to move forward.
Some major improvements to the site search index were just completed and this resulted in a breaking change for anyone who has implemented search in their custom features building on top of our internal lucene.net search index.
Fixing your broken code is straightforward. Get the latest mojoPortal code and then compile your feature against it. Anywhere that it breaks (doesn't build) on search index related code just add:
using mojoPortal.SearchIndex;
and re-compile.
Other than that, in your indexbuilder you should assign new properties on IndexItem for CreatedUtc and LastModUtc to reflect those properties on your content. Also if your feature has a separate field for an excerpt, abstract, or summary, you should assign that on indexItem.ContentAbstract. There is also a new property for indexItem.Author that you can populate if it makes sense for your feature to show that in search results.
Sorry for the inconvenience but I think it will be worth it having these search index improvements for the long term.
Best,
Joe | https://www.mojoportal.com/Forums/Thread.aspx?pageid=5&t=11250~1 | CC-MAIN-2018-22 | refinedweb | 203 | 63.39 |
small test_set xgb predict - xgboost
i would like to ask a question about a problem that i have for the last couple days.
First of all i am a beginner in machine learning and this is my first time using the XGBoost algorithm so excuse me for any mistakes I have done.
I trained my model to predict whether a log file is malicious or not. After i save and reload my model on a different session i use the predict function which seems to be working normally ( with a few deviations in probabilities but that is another topic, I know I, have seen it in another topic )
The problem is this: Sometimes when i try to predict a "small" csv file after load it seems to be broken predicting only the Zero label, even for indexes that are categorized correct previously.
For example, i load a dataset containing 20.000 values , the predict() is working. I keep only the first 5 of these values using pandas drop, again its working. If i save the 5 values on a different csv and reload it its not working. The same error happens if i just remove by hand all indexes (19.995) and save file only with 5 remaining.
I would bet it is a size of file problem but when i drop the indexes on the dataframe through pandas it seems to be working
Also the number 5 ( of indexes ) is for example purpose the same happens if I delete a large portion of the dataset.
I first came up with this problem after trying to verify by hand some completely new logs, which seem to be classified correctly if thrown into the big csv file but not in a new file on their own.
Here is my load and predict code
##IMPORTS
df = pd.read_csv('big_test.csv')
df3 = pd.read_csv('small_test.csv')
#This one is necessary for the loaded_model')
loaded_model = joblib.load('finalized_model.sav')
result = loaded_model.predict(df)
print(result)
df2=df[:5]
result2 = loaded_model.predict(df2)
print(result2)
result3 = loaded_model.predict(df3)
print(result3)
The results i get are these:
[1 0 1 ... 0 0 0]
[1 0 1 0 1]
[0 0 0 0 0]
I can provide any code even from training or my dataset if necessary.
*EDIT: I use a pipeline for my data. I tried to reproduce the error after using xgb to fit the iris data and i could not. Maybe there is something wrong with my pipeline? the code is below :
df = pd.read_csv('big_test.csv')
# df.info()
# Split Dataset
attributes = ['uri','code','r_size','DT_sec','Method','http_version','PenTool','has_referer', 'Lang','LangProb','GibberFlag' ]
x_train, x_test, y_train, y_test = train_test_split(df[attributes], df['Scan'], test_size=0.2,
stratify=df['Scan'], random_state=0)
x_train, x_dev, y_train, y_dev = train_test_split(x_train, y_train, test_size=0.2,
stratify=y_train, random_state=0)
# print('Train:', len(y_train), 'Dev:', len(y_dev), 'Test:', len(y_test))
# set up graph function
def plot_precision_recall_curve(y_true, y_pred_scores):
precision, recall, thresholds = precision_recall_curve(y_true, y_pred_scores)
return ggplot(aes(x='recall', y='precision'),
data=pd.DataFrame({"precision": precision, "recall": recall})) + geom_line()
# XGBClassifier')
count_vectorizer = CountVectorizer(analyzer='char', ngram_range=(1, 2), min_df=10)
dict_vectorizer = DictVectorizer()
xgb = XGBClassifier(seed=0)
pipeline = Pipeline([
("feature_union", FeatureUnion([
('text_features', Pipeline([
('selector', ColumnSelector(['uri'])),
('count_vectorizer', count_vectorizer)
])),
('categorical_features', Pipeline([
('selector', ColumnSelector(['code','r_size','DT_sec','Method','http_version','PenTool','has_referer', 'Lang','LangProb','GibberFlag' ])),
('dict_vectorizer', dict_vectorizer)
]))
])),
('xgb', xgb)
])
pipeline.fit(x_train, y_train)
filename = 'finalized_model.sav'
joblib.dump(pipeline, filename)
Related
multiprocessing in python for images
I want to use multiprocessing in python. With func_process code, I extract patches from image and feed them to a trained network to predict the output: In main code, imagine I have an image, in a for loop using the rows, I select patches and make a matrix of these patches and feed this to network. In output in func_process, we get a vector of 0s and 1s like: pclass = [1 1 0 0 0 0 1 1 0 ... 0 0 1 0] as prediction results. I need to get all these vectors in each row of image and save them to outputimage_class to make the final mask. I think since rows are independent from each other, I can use multiprocessing . I have written the code. But the problem is that eventually I get a black image at the end while I see that I have nonzero values for pclass but the final result is 0!!!! Can you please tell me where is the problem with this code?? from joblib import Parallel, delayed import multiprocessing def func_process(outputimage_class,fname,image,hwsize,rowi): patches=[] #create a set of patches, oeprate on a per column basis for coli in xrange(33,1000): patches.append(image[rowi-32:rowi+32,coli-32:coli+32,:]) prediction = net.predict(patches) #predict the output pclass = prediction.argmax(axis=1) #get the argmax outputimage_class[rowi,hwsize+1:image.shape[1]-hwsize]=pclass % make the mask return outputimage_class if __name__ == "__main__": ... ... ... #load the trained network for fname in sorted(glob.glob(IMAGE_DIR+"*.tiff")): #get all of the files newfname_class = "%s/%s_class.png" % (OUTPUT_DIR,base_fname) #create the new files outputimage = np.zeros(shape=(10, 10)) scipy.misc.imsave(newfname_class, outputimage) #save a file to let potential other workers know that this file is being worked on and it should be skipped image = caffe.io.load_image(fname) #load the image to test outputimage_class = np.zeros(shape=(image.shape[0],image.shape[1])) % use multiprocessing num_cores = multiprocessing.cpu_count() outputimage_class = Parallel(n_jobs=num_cores)(delayed(func_process)(outputimage_class,fname,image,hwsize,rowi) for rowi in xrange(50,80)) outputimage_class = outputimage_class[hwsize:-hwsize, hwsize:-hwsize] scipy.misc.imsave(newfname,outputimage_class)
How do I get CSV files into an Estimator in Tensorflow 1.6
I am new to tensorflow (and my first question in StackOverflow) As a learning tool, I am trying to do something simple. (4 days later I am still confused) I have one CSV file with 36 columns (3500 records) with 0s and 1s. I am envisioning this file as a flattened 6x6 matrix. I have another CSV file with 1 columnn of ground truth 0 or 1 (3500 records) which indicates if at least 4 of the 6 of elements in the 6x6 matrix's diagonal are 1's. I am not sure I have processed the CSV files correctly. I am confused as to how I create the features dictionary and Labels and how that fits into the DNNClassifier I am using TensorFlow 1.6, Python 3.6 Below is the small amount of code I have so far. import tensorflow as tf import os def x_map(line): rDefaults = [[] for cl in range(36)] x_row = tf.decode_csv(line, record_defaults=rDefaults) return x_row def y_map(line): line = tf.string_to_number(line, out_type=tf.int32) y_row = tf.one_hot(line, depth=2) return y_row x_path_file = os.path.join('D:', 'Diag', '6x6_train.csv') y_path_file = os.path.join('D:', 'Diag', 'HasDiag_train.csv') filenames = [x_path_file] x_dataset = tf.data.TextLineDataset(filenames) x_dataset = x_dataset.map(x_map) x_dataset = x_dataset.batch(1) x_iter = x_dataset.make_one_shot_iterator() x_next_el = x_iter.get_next() filenames = [y_path_file] y_dataset = tf.data.TextLineDataset(filenames) y_dataset = y_dataset.map(y_map) y_dataset = y_dataset.batch(1) y_iter = y_dataset.make_one_shot_iterator() y_next_el = y_iter.get_next() init = tf.global_variables_initializer() with tf.Session() as sess: sess.run(init) x_el = (sess.run(x_next_el)) y_el = (sess.run(y_next_el)) The output for x_el is: (array([1.], dtype=float32), array([1.], dtype=float32), array([1.], dtype=float32), array([1.], dtype=float32), array([1.], dtype=float32), array([0.] ... it goes on... The output for y_el is: [[1. 0.]]
You're pretty much there for a minimal working model. The main issue I see is that tf.decode_csv returns a tuple of tensors, where as I expect you want a single tensor with all values. Easy fix: x_row = tf.stack(tf.decode_csv(line, record_defaults=rDefaults)) That should work... but it fails to take advantage of many of the awesome things the tf.data.Dataset API has to offer, like shuffling, parallel threading etc. For example, if you shuffle each dataset, those shuffling operations won't be consistent. This is because you've created two separate datasets and manipulated them independently. If you create them independently, zip them together then manipulate, those manipulations will be consistent. Try something along these lines: def get_inputs( count=None, shuffle=True, buffer_size=1000, batch_size=32, num_parallel_calls=8, x_paths=[x_path_file], y_paths=[y_path_file]): """ Get x, y inputs. Args: count: number of epochs. None indicates infinite epochs. shuffle: whether or not to shuffle the dataset buffer_size: used in shuffle batch_size: size of batch. See outputs below num_parallel_calls: used in map. Note if > 1, intra-batch ordering will be shuffled x_paths: list of paths to x-value files. y_paths: list of paths to y-value files. Returns: x: (batch_size, 6, 6) tensor y: (batch_size, 2) tensor of 1-hot labels """ def x_map(line): rDefaults = [[] for cl in range(n_dims**2)] x_row = tf.stack(tf.decode_csv(line, record_defaults=rDefaults)) return x_row def y_map(line): line = tf.string_to_number(line, out_type=tf.int32) y_row = tf.one_hot(line, depth=2) return y_row def xy_map(x, y): return x_map(x), y_map(y) x_ds = tf.data.TextLineDataset(x_paths) y_ds = tf.data.TextLineDataset(y_paths) combined = tf.data.Dataset.zip((x_ds, y_ds)) combined = combined.repeat(count=count) if shuffle: combined = combined.shuffle(buffer_size) combined = combined.map(xy_map, num_parallel_calls=num_parallel_calls) combined = combined.batch(batch_size) x, y = combined.make_one_shot_iterator().get_next() return x, y To experiment/debug, x, y = get_inputs() with tf.Session() as sess: xv, yv = sess.run((x, y)) print(xv.shape, yv.shape) For use in an estimator, pass the function itself. estimator.train(get_inputs, max_steps=10000) def get_eval_inputs(): return get_inputs( count=1, shuffle=False x_paths=[x_eval_paths], y_paths=[y_eval_paths]) estimator.eval(get_eval_inputs)
Creating a .CSV file from a Lua table
I am trying to create a .csv file from a lua table. I've read some of the documentation online and on this forum... but can't seem to get it. I think it's because of the format of the lua table - take a look for yourselves. This script is all from a great open-source software called NeuralTalk2. The main point of the software is to caption images. You can read about it more on that page. Anyways, let me introduce to you the first piece of code: a function that takes the lua table and writes it to a .json file. This is how it looks like: function utils.write_json(path, j) -- API reference cjson.encode_sparse_array(true, 2, 10) local text = cjson.encode(j) local file = io.open(path, 'w') file:write(text) file:close() end Once the code compiles, the .json file looks like this: [{"caption":"a view of a UNK UNK in a cloudy sky","image_id":"0001"},{"caption":"a view of a UNK UNK in a cloudy sky","image_id":"0002"}] It goes on much longer, but generally, there is a "caption" following by some text, and an "image_id" followed by the image id. When I print the table onto the terminal, it looks like this: { 1681 : { caption : "a person holding a cell phone in their hand" image_id : "1681" } 1682 : { caption : "a person is taking a picture of a mirror" image_id : "1682" } } It has things before it and after it... I am just showing you the general format of the table. You may wonder how the table is defined... I am not sure there is a very clear definition of it inside the script. I will share it just for you to see, the file where it is defined depends on so many other files, so it's messy. I am hoping from the terminal output, you can understand generally the structure of the table, and from that know how the table is structured. I want to output it to a .csv file that will look like this image_id captions 1 xxxx 2 xxxx 3 xxxx How can I do this..? Not sure, given the format of the lua table... Here is the script where it is defined. Specifically, it is define at the end, but again, not sure itll be too much help. require 'torch' require 'nn' require 'nngraph' -- exotics require 'loadcaffe' -- local imports local utils = require 'misc.utils' require 'misc.DataLoader' require 'misc.DataLoaderRaw' require 'misc.LanguageModel' local net_utils = require 'misc.net_utils' local csv_utils = require 'misc.csv_utils' ------------------------------------------------------------------------------- -- Input arguments and options ------------------------------------------------------------------------------- cmd = torch.CmdLine() cmd:text() cmd:text('Train an Image Captioning model') cmd:text() cmd:text('Options') -- Input paths cmd:option('-model','','path to model to evaluate') -- Basic options cmd:option('-batch_size', 1, 'if > 0 then overrule, otherwise load from checkpoint.') cmd:option('-num_images', 100, 'how many images to use when periodically evaluating the loss? (-1 = all)') cmd:option('-language_eval', 0, 'Evaluate language as well (1 = yes, 0 = no)? BLEU/CIDEr/METEOR/ROUGE_L? requires coco-caption code from Github.') cmd:option('-dump_images', 1, 'Dump images into vis/imgs folder for vis? (1=yes,0=no)') cmd:option('-dump_json', 1, 'Dump json with predictions into vis folder? (1=yes,0=no)') cmd:option('-dump_path', 0, 'Write image paths along with predictions into vis json? (1=yes,0=no)') -- Sampling options cmd:option('-sample_max', 1, '1 = sample argmax words. 0 = sample from distributions.') cmd:option('-beam_size', 2, 'used when sample_max = 1, indicates number of beams in beam search. Usually 2 or 3 works well. More is not better. Set this to 1 for faster runtime but a bit worse performance.') cmd:option('-temperature', 1.0, 'temperature when sampling from distributions (i.e. when sample_max = 0). Lower = "safer" predictions.') -- For evaluation on a folder of images: cmd:option('-image_folder', '', 'If this is nonempty then will predict on the images in this folder path') cmd:option('-image_root', '', 'In case the image paths have to be preprended with a root path to an image folder') -- For evaluation on MSCOCO images from some split: cmd:option('-input_h5','','path to the h5file containing the preprocessed dataset. empty = fetch from model checkpoint.') cmd:option('-input_json','','path to the json file containing additional info and vocab. empty = fetch from model checkpoint.') cmd:option('-split', 'test', 'if running on MSCOCO images, which split to use: val|test|train') cmd:option('-coco_json', '', 'if nonempty then use this file in DataLoaderRaw (see docs there). Used only in MSCOCO test evaluation, where we have a specific json file of only test set images.') -- misc cmd:option('-backend', 'cudnn', 'nn|cudnn') cmd:option('-id', 'evalscript', 'an id identifying this run/job. used only if language_eval = 1 for appending to intermediate files') cmd:option('-seed', 123, 'random number generator seed to use') cmd:option('-gpuid', 0, 'which gpu to use. -1 = use CPU') cmd:text() ------------------------------------------------------------------------------- -- Basic Torch initializations ------------------------------------------------------------------------------- local opt = cmd:parse(arg) torch.manualSeed(opt.seed) torch.setdefaulttensortype('torch.FloatTensor') -- for CPU if opt.gpuid >= 0 then require 'cutorch' require 'cunn' if opt.backend == 'cudnn' then require 'cudnn' end cutorch.manualSeed(opt.seed) cutorch.setDevice(opt.gpuid + 1) -- note +1 because lua is 1-indexed end ------------------------------------------------------------------------------- -- Load the model checkpoint to evaluate ------------------------------------------------------------------------------- assert(string.len(opt.model) > 0, 'must provide a model') local checkpoint = torch.load(opt.model) -- override and collect parameters if string.len(opt.input_h5) == 0 then opt.input_h5 = checkpoint.opt.input_h5 end if string.len(opt.input_json) == 0 then opt.input_json = checkpoint.opt.input_json end if opt.batch_size == 0 then opt.batch_size = checkpoint.opt.batch_size end local fetch = {'rnn_size', 'input_encoding_size', 'drop_prob_lm', 'cnn_proto', 'cnn_model', 'seq_per_img'} for k,v in pairs(fetch) do opt[v] = checkpoint.opt[v] -- copy over options from model end local vocab = checkpoint.vocab -- ix -> word mapping ------------------------------------------------------------------------------- -- Create the Data Loader instance ------------------------------------------------------------------------------- local loader if string.len(opt.image_folder) == 0 then loader = DataLoader{h5_file = opt.input_h5, json_file = opt.input_json} else loader = DataLoaderRaw{folder_path = opt.image_folder, coco_json = opt.coco_json} end ------------------------------------------------------------------------------- -- Load the networks from model checkpoint ------------------------------------------------------------------------------- local protos = checkpoint.protos protos.expander = nn.FeatExpander(opt.seq_per_img) protos.crit = nn.LanguageModelCriterion() protos.lm:createClones() -- reconstruct clones inside the language model if opt.gpuid >= 0 then for k,v in pairs(protos) do v:cuda() end end ------------------------------------------------------------------------------- -- Evaluation fun(ction) ------------------------------------------------------------------------------- local function eval_split(split, evalopt) local verbose = utils.getopt(evalopt, 'verbose', true) local num_images = utils.getopt(evalopt, 'num_images', true) protos.cnn:evaluate() protos.lm:evaluate() loader:resetIterator(split) -- rewind iteator back to first datapoint in the split local n = 0 local loss_sum = 0 local loss_evals = 0 local predictions = {} while true do -- fetch a batch of data local data = loader:getBatch{batch_size = opt.batch_size, split = split, seq_per_img = opt.seq_per_img} data.images = net_utils.prepro(data.images, false, opt.gpuid >= 0) -- preprocess in place, and don't augment n = n + data.images:size(1) -- forward the model to get loss local feats = protos.cnn:forward(data.images) -- evaluate loss if we have the labels local loss = 0 if data.labels then local expanded_feats = protos.expander:forward(feats) local logprobs = protos.lm:forward{expanded_feats, data.labels} loss = protos.crit:forward(logprobs, data.labels) loss_sum = loss_sum + loss loss_evals = loss_evals + 1 end -- forward the model to also get generated samples for each image local sample_opts = { sample_max = opt.sample_max, beam_size = opt.beam_size, temperature = opt.temperature } local seq = protos.lm:sample(feats, sample_opts) local sents = net_utils.decode_sequence(vocab, seq) for k=1,#sents do local entry = {image_id = data.infos[k].id, caption = sents[k]} if opt.dump_path == 1 then entry.file_name = data.infos[k].file_path end table.insert(predictions, entry) if opt.dump_images == 1 then -- dump the raw image to vis/ folder local cmd = 'cp "' .. path.join(opt.image_root, data.infos[k].file_path) .. '" vis/imgs/img' .. #predictions .. '.jpg' -- bit gross print(cmd) os.execute(cmd) -- dont think there is cleaner way in Lua end if verbose then print(string.format('image %s: %s', entry.image_id, entry.caption)) end end -- if we wrapped around the split or used up val imgs budget then bail local ix0 = data.bounds.it_pos_now local ix1 = math.min(data.bounds.it_max, num_images) if verbose then print(string.format('evaluating performance... %d/%d (%f)', ix0-1, ix1, loss)) end if data.bounds.wrapped then break end -- the split ran out of data, lets break out if num_images >= 0 and n >= num_images then break end -- we've used enough images end local lang_stats if opt.language_eval == 1 then lang_stats = net_utils.language_eval(predictions, opt.id) end return loss_sum/loss_evals, predictions, lang_stats end local loss, split_predictions, lang_stats = eval_split(opt.split, {num_images = opt.num_images}) print('loss: ', loss) if lang_stats then print(lang_stats) end if opt.dump_json == 1 then -- dump the json print(split_predictions) utils.write_json('vis/vis.json', split_predictions) csv_utils.write('vis/vis.csv', split_predictions, ";") end
{ 1681 : { caption : "a person holding a cell phone in their hand" image_id : "1681" } 1682 : { caption : "a person is taking a picture of a mirror" image_id : "1682" } } Every {} denotes a table. The number or text in front of the colon is a key and the stuff behind the colon is the value stored in the table under that key. Let's create a table structure that would result an output like that one above: local myTable = {} myTable[1681] = {caption = "a person holding a cell phone in their hand", image_id = "1681"} myTable[1682] = {caption = "a person is taking a picture of a mirror", image_id = "1682"} Not sure what your problem is here. I think creating the desired csv file is rather trivial. All you need is a loop that creates a new line for each table entry and add the respective value's image_id (or key) and caption one line could look like: local nextLine = myTable[1681].image_id .. "," .. myTable[1681].caption .. "\n" of course this is not very beautiful and you would use a loop to get all elements of that table but I think I should leave some work for you as well ;)
If anyone is wondering, I figured out the solution a long time ago. function nt2_write(path, data, sep) sep = sep or ',' local file = assert(io.open(path, "w")) file:write('Image ID' .. "," .. 'Caption') file:write('\n') for k, v in pairs(data) do file:write(v["image_id"] .. "," .. v["caption"]) file:write('\n') end file:close() end Of course, you may need to change the string values, but yeah. Happy programming.
How to combine FCNN and RNN in Tensorflow?
I want to make a Neural Network, which would have recurrency (for example, LSTM) at some layers and normal connections (FC) at others. I cannot find a way to do it in Tensorflow. It works, if I have only FC layers, but I don't see how to add just one recurrent layer properly. I create a network in a following way : with tf.variable_scope("autoencoder_variables", reuse=None) as scope: for i in xrange(self.__num_hidden_layers + 1): # Train weights name_w = self._weights_str.format(i + 1) w_shape = (self.__shape[i], self.__shape[i + 1]) a = tf.multiply(4.0, tf.sqrt(6.0 / (w_shape[0] + w_shape[1]))) w_init = tf.random_uniform(w_shape, -1 * a, a) self[name_w] = tf.Variable(w_init, name=name_w, trainable=True) # Train biases name_b = self._biases_str.format(i + 1) b_shape = (self.__shape[i + 1],) b_init = tf.zeros(b_shape) self[name_b] = tf.Variable(b_init, trainable=True, name=name_b) if i+1 == self.__recurrent_layer: # Create an LSTM cell lstm_size = self.__shape[self.__recurrent_layer] self['lstm'] = tf.contrib.rnn.BasicLSTMCell(lstm_size) It should process the batches in a sequential order. I have a function for processing just one time-step, which will be called later, by a function, which process the whole sequence : def single_run(self, input_pl, state, just_middle = False): """Get the output of the autoencoder for a single batch Args: input_pl: tf placeholder for ae input data of size [batch_size, DoF] state: current state of LSTM memory units just_middle : will indicate if we want to extract only the middle layer of the network Returns: Tensor of output """ last_output = input_pl # Pass through the network for i in xrange(self.num_hidden_layers+1): if(i!=self.__recurrent_layer): w = self._w(i + 1) b = self._b(i + 1) last_output = self._activate(last_output, w, b) else: last_output, state = self['lstm'](last_output,state) return last_output The following function should take sequence of batches as input and produce sequence of batches as an output: def process_sequences(self, input_seq_pl, dropout, just_middle = False): """Get the output of the autoencoder Args: input_seq_pl: input data of size [batch_size, sequence_length, DoF] dropout: dropout rate just_middle : indicate if we want to extract only the middle layer of the network Returns: Tensor of output """ if(~just_middle): # if not middle layer numb_layers = self.__num_hidden_layers+1 else: numb_layers = FLAGS.middle_layer with tf.variable_scope("process_sequence", reuse=None) as scope: # Initial state of the LSTM memory. state = initial_state = self['lstm'].zero_state(FLAGS.batch_size, tf.float32) tf.get_variable_scope().reuse_variables() # THIS IS IMPORTANT LINE # First - Apply Dropout the_whole_sequences = tf.nn.dropout(input_seq_pl, dropout) # Take batches for every time step and run them through the network # Stack all their outputs with tf.control_dependencies([tf.convert_to_tensor(state, name='state') ]): # do not let paralelize the loop stacked_outputs = tf.stack( [ self.single_run(the_whole_sequences[:,time_st,:], state, just_middle) for time_st in range(self.sequence_length) ]) # Transpose output from the shape [sequence_length, batch_size, DoF] into [batch_size, sequence_length, DoF] output = tf.transpose(stacked_outputs , perm=[1, 0, 2]) return output The issue is with a variable scopes and their property "reuse". If I run this code as it is I am getting the following error: ' Variable Train/process_sequence/basic_lstm_cell/weights does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=None in VarScope? ' If I comment out the line, which tell it to reuse variables ( tf.get_variable_scope().reuse_variables() ) I am getting the following error: 'Variable Train/process_sequence/basic_lstm_cell/weights already exists, disallowed. Did you mean to set reuse=True in VarScope?' It seems, that we need "reuse=None" for the weights of the LSTM cell to be initialized and we need "reuse=True" in order to call the LSTM cell. Please, help me to figure out the way to do it properly.
I think the problem is that you're creating variables with tf.Variable. Please, use tf.get_variable instead -- does this solve your issue?
It seems that I have solved this issue using the hack from the official Tensorflow RNN example () with the following code with tf.variable_scope("RNN"): for time_step in range(num_steps): if time_step > 0: tf.get_variable_scope().reuse_variables() (cell_output, state) = cell(inputs[:, time_step, :], state) outputs.append(cell_output) The hack is that when we run LSTM first time, tf.get_variable_scope().reuse is set to False, so that the new LSTM cell is created. When we run it next time, we set tf.get_variable_scope().reuse to True, so that we are using the LSTM, which was already created.
diverging results from weka training and java training
I'm trying to create an "automated trainning" using weka's java api but I guess I'm doing something wrong, whenever I test my ARFF file via weka's interface using MultiLayerPerceptron with 10 Cross Validation or 66% Percentage Split I get some satisfactory results (around 90%), but when I try to test the same file via weka's API every test returns basically a 0% match (every row returns false) here's the output from weka's gui: === Evaluation on test split === === Summary === Correctly Classified Instances 78 91.7647 % Incorrectly Classified Instances 7 8.2353 % Kappa statistic 0.8081 Mean absolute error 0.0817 Root mean squared error 0.24 Relative absolute error 17.742 % Root relative squared error 51.0603 % Total Number of Instances 85 === Detailed Accuracy By Class === TP Rate FP Rate Precision Recall F-Measure ROC Area Class 0.885 0.068 0.852 0.885 0.868 0.958 1 0.932 0.115 0.948 0.932 0.94 0.958 0 Weighted Avg. 0.918 0.101 0.919 0.918 0.918 0.958 === Confusion Matrix === a b <-- classified as 23 3 | a = 1 4 55 | b = 0 and here's the code I've using on java (actually it's on .NET using IKVM): var classifier = new weka.classifiers.functions.MultilayerPerceptron(); classifier.setOptions(weka.core.Utils.splitOptions("-L 0.7 -M 0.3 -N 75 -V 0 -S 0 -E 20 -H a")); //these are the same options (the default options) when the test is run under weka gui string trainingFile = Properties.Settings.Default.WekaTrainingFile; //the path to the same file I use to test on weka explorer weka.core.Instances data = null; data = new weka.core.Instances(new java.io.BufferedReader(new java.io.FileReader(trainingFile))); //loads the file data.setClassIndex(data.numAttributes() - 1); //set the last column as the class attribute cl.buildClassifier(data); var tmp = System.IO.Path.GetTempFileName(); //creates a temp file to create an arff file with a single row with the instance I want to test taken from the arff file loaded previously using (var f = System.IO.File.CreateText(tmp)) { //long code to read data from db and regenerate the line, simulating data coming from the source I really want to test } var dataToTest = new weka.core.Instances(new java.io.BufferedReader(new java.io.FileReader(tmp))); dataToTest.setClassIndex(dataToTest.numAttributes() - 1); double prediction = 0; for (int i = 0; i < dataToTest.numInstances(); i++) { weka.core.Instance curr = dataToTest.instance(i); weka.core.Instance inst = new weka.core.Instance(data.numAttributes()); inst.setDataset(data); for (int n = 0; n < data.numAttributes(); n++) { weka.core.Attribute att = dataToTest.attribute(data.attribute(n).name()); if (att != null) { if (att.isNominal()) { if ((data.attribute(n).numValues() > 0) && (att.numValues() > 0)) { String label = curr.stringValue(att); int index = data.attribute(n).indexOfValue(label); if (index != -1) inst.setValue(n, index); } } else if (att.isNumeric()) { inst.setValue(n, curr.value(att)); } else { throw new InvalidOperationException("Unhandled attribute type!"); } } } prediction += cl.classifyInstance(inst); } //prediction is always 0 here, my ARFF file has two classes: 0 and 1, 92 zeroes and 159 ones it's funny because if I change the classifier to let's say NaiveBayes the results match the test made via weka's gui
You are using a deprecated way of reading in ARFF files. See this documentation. Try this instead: import weka.core.converters.ConverterUtils.DataSource; ... DataSource source = new DataSource("/some/where/data.arff"); Instances data = source.getDataSet(); Note that that documentation also shows how to connect to a database directly, and bypass the creation of temporary ARFF files. You could, additionally, read from the database and manually create instances to populate the Instances object with. Finally, if simply changing the classifier type at the top of the code to NaiveBayes solved the problem, then check the options in your weka gui for MultilayerPerceptron, to see if they are different from the defaults (different settings can cause the same classifier type to produce different results). Update: it looks like you're using different test data in your code than in your weka GUI (from a database vs a fold of the original training file); it might also be the case that the particular data in your database actually does look like class 0 to the MLP classifier. To verify whether this is the case, you can use the weka interface to split your training arff into train/test sets, and then repeat the original experiment in your code. If the results are the same as the gui, there's a problem with your data. If the results are different, then we need to look more closely at the code. The function you would call is this (from the Doc): public Instances trainCV(int numFolds, int numFold)
I had the same Problem. Weka gave me different results in the Explorer compared to a cross-validation in Java. Something that helped: Instances dataSet = ...; dataSet.stratify(numOfFolds); // use this //before splitting the dataset into train and test set! | https://java.develop-bugs.com/article/10000595/small+test_set+xgb+predict | CC-MAIN-2021-21 | refinedweb | 4,881 | 51.34 |
Important: This document was written before 2012. The auth options described in this document (OAuth 1.0, AuthSub, and ClientLogin) have been officially deprecated as of April 20, 2012 and are no longer available. We encourage you to migrate to OAuth 2.0 as soon as possible.
The Google Sites Data API allows client applications to access, publish, and modify content within a Google Site. Your client application can also request a list of recent activity, fetch revision history, and download attachments.
In addition to providing some background on the capabilities of the Sites Data API, this guide provides examples for interacting with the API using the Python client library. For help setting up the client library, see Getting Started with the Google Data Python Client Library. If you're interested in understanding more about the underlying protocol used by the Python client library to interact with the Sites API, please see the protocol guide.
Audience
This document is intended for developers who want to write client applications that interact with Google Sites using the Google Data Python Client Library.
Getting started
To use the Python client library, you'll need Python 2.2+ and the modules listed on the DependencyModules wiki page. After downloading the client library, see Getting Started with the Google Data Python Library for help installing and using the client.
Running the sample
A full working sample is located in the
samples/sites subdirectory of the project's Mercurial repository
(/samples/sites/sites_example.py).
Run the example as follows:
python sites_example.py # or python sites_example.py --site [sitename] --domain [domain or "site"] --debug [prints debug info if set]
If the required flags are not provided, the app will prompt you to input those values. The sample allows the user to perform a number of operations which demonstrate how to use the Sites API. As such, you'll need to authenticate to perform certain operations (e.g. modifying content). The program will also prompt you to authenticate via AuthSub, OAuth, or ClientLogin.
To include the examples in this guide into your own code, you'll need the following
import statements:
import atom.data import gdata.sites.client import gdata.sites.data
You will also need to setup a
SitesClient object, which represents a client connection to the Sites API.
Pass in your application's name and the webspace name of the Site (from its URL):
client = gdata.sites.client.SitesClient(source='yourCo-yourAppName-v1', site='yourSiteName')
To work with a Site that is hosted on a Google Apps domain, set the domain using the
domain parameter:
client = gdata.sites.client.SitesClient(source='yourCo-yourAppName-v1', site='yourSiteName', domain='example.com')
In the above snippets, the
source argument is optional but is recommended for logging purposes. It should
follow the format:
company-applicationname-version
Note: The rest of the guide assumes you created a
SitesClient object in the variable
client.
Authenticating to the Sites API
The Python client library can be used to work with either public or private feeds. The Sites Data API provides access to private and public feeds, depending on a Site's permissions and the operation you're trying to perform. For example, you may be able to read the content feed of a public Site but not make updates to it - something that requires an authenticated client. This can be done via ClientLogin username/password authentication, AuthSub, or OAuth.
Please see the Google Data APIs Authentication Overview for more information on AuthSub, OAuth, and ClientLogin.
AuthSub for web applications
AuthSub Authentication for Web Applications should be used by client applications which need to authenticate their users to Google or Google Apps accounts. The operator does not need access to the username and password for the Google Sites user - only an AuthSub token is required.
View instructions for incorporating AuthSub into your web application
Request a single-use token
When the user first visits your application, they need to authenticate. Typically, developers print some text and a link directing the user
to the AuthSub approval page to authenticate the user and request access to their documents. The Google Data Python client library provides a function,
generate_auth_sub_url() to generate this URL. The code below sets up a link to the AuthSubRequest page.
import gdata.gauth def GetAuthSubUrl(): next = '' scopes = [''] secure = True session = True return gdata.gauth.generate_auth_sub_url(next, scopes, secure=secure, session=session) print '<a href="%s">Login to your Google account</a>' % GetAuthSubUrl()
If you want to authenticate users on a Google Apps hosted domain, pass in the domain name to
generate_auth_sub_url():
def GetAuthSubUrl(): domain = 'example.com' next = '' scopes = [''] secure = True session = True return gdata.gauth.generate_auth_sub_url(next, scopes, secure=secure, session=session, domain=domain)
The
generate_auth_sub_url() method takes several parameters (corresponding to the query parameters used by the
AuthSubRequest handler):
- the next URL — URL that Google will redirect to after the user logs into their account and grants access; the example above
- the scope —
- secure, a boolean to indicate whether the token will be used in secure and registered mode or not;
Truein the example above
- session, a second boolean to indicate whether the single-use token will later be exchanged for a session token or not;
Truein the example above
Upgrading to a session token
See Using AuthSub with the Google Data API Client Libraries.
Retrieving information about a session token
See Using AuthSub with the Google Data API Client Libraries.
Revoking a session token
See Using AuthSub with the Google Data API Client Libraries.
Tip: Once your application has successfully acquired a long lived sessions token,
store that token in your database to recall for later use. There's no need to send the user back to AuthSub on every run of your application.
Use
client.auth_token = gdata.gauth.AuthSubToken(TOKEN_STR) to set an existing token on the client.
OAuth for web or installed/mobile applications
OAuth can be used as an alternative to AuthSub, and is intended for web applications. OAuth is similar to using the secure and registered mode of AuthSub in that all data requests must be digitally signed and you must register your domain.
View instructions for incorporating OAuth into your installed application
Fetching a request token
See Using OAuth with the Google Data API Client Libraries.
Authorizing a request token
See Using OAuth with the Google Data API Client Libraries.
Upgrading to an access token
See Using OAuth with the Google Data API Client Libraries.
Tip: Once your application has successfully acquired an OAuth access token,
store that token in your database to recall for later use. There's no need to send the user back through OAuth on every run of your application.
Use
client.auth_token = gdata.oauth.OAuthToken(TOKEN_STR, TOKEN_SECRET) to set an existing token on the client.
ClientLogin for installed/mobile applications
ClientLogin should be used by installed or mobile applications which need to authenticate their users to Google accounts. On first run, your application prompts the user for their username/password. On subsequent requests, an authentication token is referenced.
View instructions for incorporating ClientLogin into your installed application
To use ClientLogin, invoke the
ClientLogin()
method of
SitesClient object, which is inherited from
GDClient. Specify the email address and
password of the user on whose behalf your client is making requests. For example:
client = gdata.sites.client.SitesClient(source='yourCo-yourAppName-v1') client.ClientLogin('user@gmail.com', 'pa$$word', client.source);
Tip: Once your application has successfully authenticated the user for the first time, store the auth token in your database to recall for later use. There's no need to prompt the user for his/her password on every run of your application. See Recalling an auth token for more information.
For more information on using ClientLogin in your Python applications, see the Using ClientLogin with the Google Data API Client Libraries.
Site Feed
The site feed can be used to list the Google Sites a user owns or has viewing permissions for. It can also be used to modify the name of an existing site. Lastly, for Google Apps domains, it can also be used to create and/or copy an entire site.
Listing sites
To list the sites a user has access to, use the client's
GetSiteFeed() method. The method takes an optional
argument,
uri, which you may use to specify an alternate site feed URI. By default, the
GetSiteFeed()
uses the site name and domain set on the client object. See the Getting Started section for
more information on setting these values on your client object.
Here is an example of fetching the authenticated user's list of sites:
feed = client.GetSiteFeed() for entry in feed.entry: print '%s (%s)' % (entry.title.text, entry.site_name.text) if entry.summary.text: print 'description: ' + entry.summary.text if entry.FindSourceLink(): print 'this site was copied from site: ' + entry.FindSourceLink() print 'acl feed: %s\n' % entry.FindAclLink() print 'theme: ' + entry.theme.text
The above snippet prints the site's title, site name, site it was copied from, and its acl feed URI.
Creating new sites
Note: This feature is only available to Google Apps domains.
New sites can be provisioned by calling the library's
CreateSite() method.
Similar to the
GetSiteFeed() helper,
CreateSite() also accepts an
optional argument,
uri, which you may use to specify an alternate site feed URI (in the case of creating
the site under a different domain other than the one that's set on your
SitesClient object).
Here is an example of creating a new site with the theme 'slate' and providing a title and (optional) description:
client.domain = 'example2.com' # demonstrates creating a site under a different domain. entry = client.CreateSite('Title For My Site', description='Site to hold precious memories', theme='slate') print 'Site created! View it at: ' + entry.GetAlternateLink().href
The above request would create a new site under the Google Apps domain
example2.com.
Thus, the site's URL would be.
If the site is successfully created, the server will respond with a
gdata.sites.data.SiteEntry
object, populated with elements added by the server: a link to the site, a link to the site's acl feed,
the site name, the title, summary, and so forth.
Copying a site
Note: This feature is only available to Google Apps domains.
CreateSite() can also be used to copy an existing site. To do this, pass in the
source_site keyword argument.
Any site that's been copied will have this link, which is accessible via
entry.FindSourceLink(). Here is an example of duplicating the site
created in the Creating new sites section:
copied_site = client.CreateSite('Copy of Title For My Site', description='My Copy', source_site=entry.FindSourceLink()) print 'Site copied! View it at: ' + copied_site.GetAlternateLink().href
Important points:
- Only sites and site templates that the authenticated user owns can be copied.
- A site template can also be copied. A site is a template if the "Publish this site as a template" setting is checked in the Google Sites settings page.
- You can copy a site from another domain, pending you are listed as an owner on the source site.
Updating a site's metadata
To update the title or summary of a site, you'll need a
SiteEntry containing the site in question. This
example uses the
GetEntry() method to first fetch a
SiteEntry, and then change its title, description and category tag:
uri = '' site_entry = client.GetEntry(uri, desired_class=gdata.sites.data.SiteEntry) site_entry.title.text = 'Better Title' site_entry.summary.text = 'Better Description' category_name = 'My Category' category = atom.data.Category( scheme=gdata.sites.data.TAG_KIND_TERM, term=category_name) site_entry.category.append(category) updated_site_entry = client.Update(site_entry) # To force the update, even if you do not have the latest changes to the entry: # updated_site_entry = client.Update(site_entry, force=True)
Fetching the Activity Feed
Note: Access to this feed requires that you are a collaborator or owner of the Site. Your client must authenticate by using an AuthSub, OAuth, or ClientLogin token. See Authenticating to the Sites service.
You can fetch a Site's recent activity (changes) by fetching the activity feed.
The lib's
GetActivityFeed() method provides access to this feed:
print "Fetching activity feed of '%s'...\n" % client.site feed = client.GetActivityFeed() for entry in feed.entry: print '%s [%s on %s]' % (entry.title.text, entry.Kind(), entry.updated.text)
Calling
GetActivityFeed() returns a
gdata.sites.data.ActivityFeed object containing a list of
gdata.sites.data.ActivityEntry. Each activity entry contains information on
a change that was made to the Site.
Fetching Revision History
Note: Access to this feed requires that you are a collaborator or owner of the Site. Your client must authenticate by using an AuthSub, OAuth, or ClientLogin token. See Authenticating to the Sites service.
The revision feed provides information on the revision history for any content entry. The
GetRevisionFeed()
method can be used to fetch the revisions for a given content entry. The method takes an optional
uri
parameter that accepts a
gdata.sites.data.ContentEntry, a full URI of a content entry, or a content entry id.
This example queries the content feed, and fetches the revision feed for the first content entry:
print "Fetching content feed of '%s'...\n" % client.site content_feed = client.GetContentFeed() content_entry = content_feed.entry[0] print "Fetching revision feed of '%s'...\n" % content_entry.title.text revision_feed = client.GetRevisionFeed(content_entry) for entry in revision_feed.entry: print entry.title.text print ' new version on:\t%s' % entry.updated.text print ' view changes:\t%s' % entry.GetAlternateLink().href print ' current version:\t%s...\n' % str(entry.content.html)[0:100]
Calling
GetRevisionFeed() returns a
gdata.sites.data.RevisionFeed object containing a list of
gdata.sites.data.RevisionEntry. Each revision entry contains information such as the content
at that revision, the version number, and when the new version was created.
Content feed
Retrieving the content feed
Note: The content feed may or may not require authentication; depending on the Site's sharing permissions. If the Site is non-public, your client must authenticate by using an AuthSub, OAuth, or ClientLogin token. See Authenticating to the Sites service.
The content feed returns a Site's latest content. It can be accessed by calling the lib's
GetContentFeed() method, which takes an optional
uri string parameter for passing
a customized query.
Here is an example of fetching the entire content feed and printing out some interesting elements:
print "Fetching content feed of '%s'...\n" % client.site feed = client.GetContentFeed() for entry in feed.entry: print '%s [%s]' % (entry.title.text, entry.Kind()) # Common properties of all entry kinds. print ' content entry id: ' + entry.GetNodeId() print ' revision:\t%s' % entry.revision.text print ' updated:\t%s' % entry.updated.text if entry.page_name: print ' page name:\t%s' % entry.page_name.text if entry.content: print ' content\t%s...' % str(entry.content.html)[0:100] # Subpages/items will have a parent link. parent_link = entry.FindParentLink() if parent_link: print ' parent link:\t%s' % parent_link # The alternate link is the URL pointing to Google Sites. if entry.GetAlternateLink(): print ' view in Sites:\t%s' % entry.GetAlternateLink().href # If this entry is a filecabinet, announcementpage, etc., it will have a feed of children. if entry.feed_link: print ' feed of items:\t%s' % entry.feed_link.href print
Tip: The
entry.Kind() can be used to determine an entry's type.
The resulting
feed object is a
gdata.sites.data.ContentFeed containing a list
of
gdata.sites.data.ContentEntry. Each entry represents a different page/item within
the user's Site and has elements specific to the kind of entry it is. See the sample application for a better idea
of some of the properties available in each entry kind.
Content feed query examples
You can search the content feed using some of the standard Google Data API query parameters and those specific to the Sites API. For more detailed information and a full list of supported parameters, see the Reference Guide.
Note: The examples in this section make use of the
gdata.sites.client.MakeContentFeedUri() helper method
for constructing the base URI of the content feed.
Retrieving specific entry kinds
To fetch only a particular type of entry, use the
kind parameter. As an example, this snippet returns just
attachment entries:
kind = 'webpage' print 'Fetching only %s entries' % kind uri = '%s?kind=%s' % (client.MakeContentFeedUri(), kind) feed = client.GetContentFeed(uri=uri)
To return more than one type, separate each
kind with a comma. For example, this snippet returns
filecabinet and
listpage entries:
kind = ','.join(['filecabinet', 'listpage']) print 'Fetching only %s entries' % kind uri = '%s?kind=%s' % (client.MakeContentFeedUri(), kind) feed = client.GetContentFeed(uri=uri)
Retrieving a page by path
If you know the relative path of a page within the Google Site, you can use the
path parameter to fetch that particular page.
This example would return the page located at:
path = '/path/to/the/page' print 'Fetching page by its path: ' + path uri = '%s?path=%s' % (client.MakeContentFeedUri(), path) feed = client.GetContentFeed(uri=uri)
Retrieving all entries under a parent page
If you know the content entry id of a page (e.g. "1234567890" in the example below), you can use the
parent parameter
to fetch all of its child entries (if any):
parent = '1234567890' print 'Fetching all children of parent entry: ' + parent uri = '%s?parent=%s' % (client.MakeContentFeedUri(), parent) feed = client.GetContentFeed(uri=uri)
For additional parameters, see the Reference Guide.
Creating Content
Note: Before creating content for a site, ensure that you have set your site in the client.
client.site = "siteName"
New content (webpages, listpages, filecabinets, announcementpages, etc.) can be created by using
CreatePage().
The first argument to this method should be the kind of page to create, followed by the title, and its HTML content.
For a list of supported node types, see the
kind parameter in the Reference Guide.
Creating new items / pages
This example creates a new
webpage under the top-level, includes some XHTML for the page body,
and sets the heading title to 'New WebPage Title':
entry = client.CreatePage('webpage', 'New WebPage Title', html='<b>HTML content</b>') print 'Created. View it at: %s' % entry.GetAlternateLink().href
If the request is successful,
entry will contain a copy of the entry created on the server, as a
gdata.sites.gdata.ContentEntry.
To create more complex entry kind that are populated on creation (e.g. a
listpage with column headings), you'll need to create
the
gdata.sites.data.ContentEntry manually, fill in the properties of interest, and call
client.Post().
Creating items/pages under custom URL paths
By default, the previous example would be created under the URL and
have a page heading of 'New Webpage Title'. That is, the title is normalized to
new-webpage-title for the URL.
To customize a page's URL path, you can set the
page_name property on the content entry. The
CreatePage() helper
provides this as an optional keyword argument.
This example creates a new
filecabinet page with a heading of 'File Storage', but creates the page
under the URL
(instead of)
by specifying the
page_name property.
entry = client.CreatePage('filecabinet', 'File Storage', html='<b>HTML content</b>', page_name='files') print 'Created. View it at: ' + entry.GetAlternateLink().href
The server uses the following precedence rules for naming a page's URL path:
page_name, if present. Must satisfy
a-z, A-Z, 0-9, -, _.
title, must not be null if page name is not present. Normalization is to trim + collapse whitespace to '-' and remove chars not matching
a-z, A-Z, 0-9, -, _.
Creating subpages
To create subpages (children) under a parent page, use
CreatePage()'s
parent keyword argument.
The
parent can either be a
gdata.sites.gdata.ContentEntry or a string representing the
content's entry's full self id.
This example queries the content feed for
announcementpages and creates a new
announcement under the first one that is found:
uri = '%s?kind=%s' % (client.MakeContentFeedUri(), 'announcementpage') feed = client.GetContentFeed(uri=uri) entry = client.CreatePage('announcement', 'Party!!', html='My place, this weekend', parent=feed.entry[0]) print 'Posted!'
Uploading files
Just as in Google Sites, the API supports attachment uploads to a file cabinet page or parent page. Attachments must be uploaded
to a parent page. Therefore, you must set a parent link on the
ContentEntry you're trying to upload. See Creating subpages for more information.
The client library's
UploadAttachment() method provides the interface for uploading attachments.
Uploading attachments
This example uploads a PDF file to the first
filecabinet found in the user's content feed.
The attachment is created with a title of 'New Employee Handbook' and a (optional) description, 'HR packet'.
uri = '%s?kind=%s' % (client.MakeContentFeedUri(),'filecabinet') feed = client.GetContentFeed(uri=uri) attachment = client.UploadAttachment('/path/to/file.pdf', feed.entry[0], content_type='application/pdf', title='New Employee Handbook', description='HR Packet') print 'Uploaded. View it at: %s' % attachment.GetAlternateLink().href
If the upload is successful,
attachment will contain a copy of the created attachment on the server.
Uploading an attachment to a folder
Filecabinets in Google Sites support folders. The
UploadAttachment() provides an additional keyword
argument,
folder_name that you can use to upload an attachment into a
filecabinet folder. Simply specify that folder's name:
import gdata.data ms = gdata.data.MediaSource(file_path='/path/to/file.pdf', content_type='application/pdf') attachment = client.UploadAttachment(ms, feed.entry[0], title='New Employee Handbook', description='HR Packet', folder_name='My Folder')
Notice that this example passes a
gdata.data.MediaSource object to
UploadAttachment() instead
of a filepath. It also does not pass a content type. Instead, the content type is specified on the MediaSource object.
Web attachments
Web attachments are special kinds of attachments. Essentially, they're links to other files on the web
that you can add to your
filecabinet listings. This feature is analogous to the 'Add file by URL' upload method in the Google Sites UI.
Note: Web attachments can only be created under a
filecabinet. They cannot be uploaded to other types of pages.
This example creates a web attachment under the first
filecabinet found in the user's content feed.
Its title and (optional) description are set to 'GoogleLogo' and 'nice colors', respectively.
uri = '%s?kind=%s' % (client.MakeContentFeedUri(),'filecabinet') feed = client.GetContentFeed(uri=uri) parent_entry = feed.entry[0] image_url = '' web_attachment = client.CreateWebAttachment(image_url, 'image/gif', 'GoogleLogo', parent_entry, description='nice colors') print 'Created!'
The call creates a link pointing to the image at '' in the
filecabinet.
Updating Content
Updating a page's metadata and/or html content
The metadata (title, pageName, etc.) and page content of any entry kind can be edited by
using the client's
Update() method.
Below is an example of updating a
listpage with the following changes:
- The title is modified to 'Updated Title'
- The page's HTML content is updated to 'Updated HTML Content'
- The first column heading of the list is changed to "Owner"
uri = '%s?kind=%s' % (client.MakeContentFeedUri(),'listpage') feed = client.GetContentFeed(uri=uri) old_entry = feed.entry[0] # Update the listpage's title, html content, and first column's name. old_entry.title.text = 'Updated Title' old_entry.content.html = 'Updated HTML Content' old_entry.data.column[0].name = 'Owner' # You can also change the page's webspace page name on an update. # old_entry.page_name = 'new-page-path' updated_entry = client.Update(old_entry) print 'List page updated!'
Replacing an attachment's content + metadata
You can replace an attachment's file content by creating a new
MediaSource object
with the new file content and calling the client's
Update() method. The attachment's
metadata (such as title and description) can also be updated, or simply just the metadata.
This example demonstrates updating file content and metadata at the same time:
import gdata.data # Load the replacement content in a MediaSource. Also change the attachment's title and description. ms = gdata.data.MediaSource(file_path='/path/to/replacementContent.doc', content_type='application/msword') existing_attachment.title.text = 'Updated Document Title' existing_attachment.summary.text = 'version 2.0' updated_attachment = client.Update(existing_attachment, media_source=ms) print "Attachment '%s' changed to '%s'" % (existing_attachment.title.text, updated_attachment.title.text)
Deleting Content
To remove a page or item from a Google Site, first retrieve the content entry, then call the client's
Delete() method.
client.Delete(content_entry)
You can also pass the
Delete() method the content entry's
edit link and/or force the deletion:
# force=True sets the If-Match: * header instead of using the entry's ETag. client.Delete(content_entry.GetEditLink().href, force=True)
For more information about ETags, see the Google Data APIs reference guide.
Downloading Attachments
Each
attachment entry contains a content
src link which can be used to download the file contents.
The Sites client contains a helper method for accessing and downloading the file from this link:
DownloadAttachment().
It accepts a
gdata.sites.data.ContentEntry or download URI for its first argument, and a filepath to save the attachment
to as the second.
This example fetches a particular attachment entry (by querying it's
self link) and downloads the file to the specified path:
uri = '' attachment = client.GetEntry(uri, desired_class=gdata.sites.data.ContentEntry) print "Downloading '%s', a %s file" % (attachment.title.text, attachment.content.type) client.DownloadAttachment(attachment, '/path/to/save/test.pdf') print 'Downloaded!'
It is up to the app developer to specify a file extension that makes sense for the attachment's content type. The content type
can be found in
entry.content.type.
In some cases you may not be able to download the file to disk (e.g. if your app is running in Google App Engine).
For these situations, use
_GetFileContent() to fetch the file content and store it in memory.
This example download's an attachment to memory.
try: file_contents = client._GetFileContent(attachment.content.src) # TODO: Do something with the file contents except gdata.client.RequestError, e: raise e
ACL Feed
Overview of Sharing Permissions (ACLs)
Each ACL entry in the ACL feed represents an access role of a particular entity, either a user, a group of users, a domain, or the default access (which is a public site). Entries will only be shown for entities with explicit access - one entry will be shown for each e-mail address in the "People with Access" panel in the sharing screen of the Google Sites UI. Thus, domain admins will not be shown, even though they have implicit access to a site.
Roles
The role element represents an access level an entity can have. There are four possible values of the
gAcl:role element:
- reader — a viewer (equivalent to read-only access).
- writer — a collaborator (equivalent to read/write access).
- owner — typically the site admin (equivalent to read/write access).
Scopes
The scope element represents the entity that has this access level. There are four possible types of the
gAcl:scope element:
- user — an e-mail address value, e.g "user@gmail.com".
- group — a Google Group e-mail address, e.g "group@domain.com".
- domain — a Google Apps domain name, e.g "domain.com".
- default — There is only one possible scope of type "default", which has no value (e.g
<gAcl:scope). This particular scope controls the access that any user has by default on a public site.
Note: Domains cannot have a
gAcl:role value
set to "owner" access, they can only be readers or writers.
Retrieving the ACL feed
The ACL feed can be used to control a site's sharing permissions and can be fetched using the
GetAclFeed() method.
The following example fetches the ACL feed for the site currently set on the
SitesClient object,
and prints out the permission entries:
print "Fetching acl permissions of site '%s'...\n" % client.site feed = client.GetAclFeed() for entry in feed.entry: print '%s (%s) - %s' % (entry.scope.value, entry.scope.type, entry.role.value)
After a successful query,
feed will be a
gdata.sites.data.AclFeed object containing
a listing of
gdata.sites.data.AclEntry.
If you're working with entries in the SiteFeed, each
SiteEntry contains a link to its ACL feed.
For example, this snippet fetches the first site in the user's Site feed and queries its ACL feed:
feed = client.GetSiteFeed() site_entry = feed.entry[0] print "Fetching acl permissions of site '%s'...\n" % site_entry.site_name.text feed = client.GetAclFeed(uri=site_entry.FindAclLink())
Sharing a site
Note: Certain sharing ACLs may only be possible if the domain is configured to allow such permissions (e.g. if sharing outside of the domain for Google Apps domains is enabled, etc).
To share a Google Site using the API, create an
gdata.sites.gdata.AclEntry with the desired
gdata.acl.data.AclScope and
gdata.acl.data.AclRole values. See the
ACL feed Overview section for the possible
AclScope
and
AclRoles values.
This example grants read permissions on the Site to user 'user@example.com':
import gdata.acl.data scope = gdata.acl.data.AclScope(value='user@example.com', type='user') role = gdata.acl.data.AclRole(value='reader') acl = gdata.sites.gdata.AclEntry(scope=scope, role=role) acl_entry = client.Post(acl, client.MakeAclFeedUri()) print "%s %s added as a %s" % (acl_entry.scope.type, acl_entry.scope.value, acl_entry.role.value)
Group and Domain level sharing
Similar to sharing a site with a single user, you can share a site across a
Google group or Google Apps domain. The necessary
scope values are listed below.
Sharing to a group email address:
scope = gdata.acl.data.AclScope(value='group_name@example.com', type='group')
Sharing to an entire domain:
scope = gdata.acl.data.AclScope(value='example.com', type='domain')
Sharing at the domain level is only supported for Google Apps domains, and only for the domain that the site is hosted at. For example can only share the entire Site with domain1.com, not domain2.com. Sites that are not hosted on a Google Apps domain (e.g.) cannot invite domains.
Modifying sharing permissions
To an existing sharing permission on a Site, first fetch the
AclEntry in question, modify the permission
as desired, and then call the client's
Update() method to modify the ACL on the server.
This example modifies our previous
acl_entry from the Sharing a site section,
by updating 'user@example.com' to be a writer (collaborator):
acl_entry.role.value = 'writer' updated_acl = client.Update(acl_entry) # To force the update, even if you do not have the latest changes to the entry: # updated_acl = client.Update(acl_entrys, force=True)
For more information about ETags, see the Google Data APIs reference guide.
Removing sharing permissions
To remove a sharing permission, first retrieve the
AclEntry, then call the client's
Delete() method.
client.Delete(acl_entry)
You can also pass the
Delete() method the acl entry's
edit link and/or force the deletion:
# force=True sets the If-Match: * header instead of using the entry's ETag. client.Delete(acl_entry.GetEditLink().href, force=True)
For more information about ETags, see the Google Data APIs reference guide.
Special Topics
Retrieving a feed or entry again
If you want to retrieve a feed or entry that you've retrieved before, you can improve efficiency by telling the server to send the list or entry only if it has changed since the last time you retrieved it.
To do this sort of conditional retrieval, pass in an ETag value to the
GetEntry(). For example, if you had an existing
entry object:
import gdata.client try: entry = client.GetEntry(entry.GetSelfLink().href, desired_class=gdata.sites.data.ContentEntry, etag=entry.etag) except gdata.client.NotModified, error: print 'You have the latest copy of this entry' print error
If
GetEntry() throws the
gdata.client.NotModified exception, the entry's
ETag matches the version on the server, meaning you have the most up-to-date copy.
However, if another client/user has made modifications, the new entry will be returned in
entry
and no exception will be thrown.
For more information about ETags, see the Google Data APIs reference guide. | https://developers.google.com/google-apps/sites/docs/1.0/developers_guide_python | CC-MAIN-2017-09 | refinedweb | 5,287 | 50.12 |
Your Account
by Jeremy Jones
.
`-- mproject
|-- __init__.py
|-- bar_app
| |-- __init__.py
| |-- models.py
| |-- templates
| | `-- bar_main.html
| |-- urls.py
| `-- views.py
|-- foo_app
| |-- __init__.py
| |-- models.py
| |-- templates
| | `-- foo_main.html
| |-- urls.py
| `-- views.py
|-- main_app
| |-- __init__.py
| |-- models.py
| |-- templates
| | `-- base.html
| `-- views.py
|-- manage.py
|-- settings.py
`-- urls.py
'mproject.main_app',
'mproject.foo_app',
'mproject.bar_app',
(r'^foo/', include('mproject.foo_app.urls')),
(r'^bar/', include('mproject.bar_app.urls')),
<html>
<head>
<title>{% block title %}Main Base Title{% endblock %}</title>
</head>
<body>
{% block content %}
Unset Content
{% endblock %}
</body>
</html>
urlpatterns = patterns('',
# Example:
# (r'^mproject/', include('mproject.apps.foo.urls.foo')),
# Uncomment this for admin:
# (r'^admin/', include('django.contrib.admin.urls')),
(r'^main/', 'mproject.bar_app.views.main'),
)
from django.shortcuts import render_to_response
def main(request):
return render_to_response('bar_main.html', {})
{% extends "base.html" %}
{% block title %}Bar Title{% endblock %}
{% block content %}Bar Content{% endblock %}
urlpatterns = patterns('',
# Example:
# (r'^mproject/', include('mproject.apps.foo.urls.foo')),
# Uncomment this for admin:
# (r'^admin/', include('django.contrib.admin.urls')),
(r'^main/', 'mproject.foo_app.views.main'),
)
from django.shortcuts import render_to_response
def main(request):
return render_to_response('foo_main.html', {})
{% extends "base.html" %}
{% block title %}Foo Title{% endblock %}
{% block content %}Foo Content{% endblock %}.
And I should have said that after going through this whole procedure, you'd have a common name for any project you're working on. In this case, "jeremymjones.apps.gallery" would be the app.
The fun part comes when you have a module that uses other app modules. If you were thinking that your package was going to be a stand alone app, you'll have alot of retooling to do. But that's an issue of the programmer not the framework.
Oh and django's template system is the best out there, by far! OMG
Philipp von Weiterhausen recently posted a good roundup of some of the components available for Zope3.
I tinkered around with Zope a number of years ago back when I was starting out with Python. I have really only glanced occasionally since then at the current state of things. I'll have to check it out.
© 2015, O’Reilly Media, Inc.
(707) 827-7019
(800) 889-8969
All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners. | http://archive.oreilly.com/pub/post/django_pluggable_apps_and_code.html | CC-MAIN-2015-14 | refinedweb | 370 | 54.9 |
tkcon: Getting Started tkcon: Getting Started Documentation Purpose & Features Limitations To Do Online Demo (requires Tk plugin) Using TkCon with other Tk Languages Getting Started Special Bindings Procedures Screenshot dump tkcon idebug observe Resource File: TkCon will search for a resource file in "$env(HOME)/.tkconrc" (Unix), "$env(HOME)/tkcon.cfg" (Windows) or "$env(PREF_FOLDER)/tkcon.cfg" (Macintosh). On DOS machines, "$env(HOME)" usually refers to "C:\". TkCon never sources the "~/.wishrc" file. The resource file is sourced by each new instance of the console. An example resource file is provided below. Command Line Arguments the Variables section for the recognized <color> names. -eval . Needed when attaching to non-Tcl interpreters. -package package_name (also -load) Packages to automatically load into the slave interpreters (ie - "Tk"). -rcfile filename Specify an alternate tkcon resource file name. -root widgetname Makes the named widget the root name of all consoles (ie - .tkcon). -slave tcl_script A tcl script to eval in each slave interpreter. This will append the one specified in the tkcon resource file, if any. Some examples of tkcon command line startup situations: megawish tkcon.tcl tkcon.tcl -font "Courier 12" -load Tk Use the courier font for tkcon and always load Tk in slave interpreters at startup. tkcon.tcl -rcfile ~/.wishrc -color,bg white Use the ~/.wishrc file as the resource file, and a white background for tkcon's text widgets. Variables: Certain variables in TkCon can be modified to suit your needs. It's easiest to do this in the resource file, but you can do it when time the program is running (and some can be changed via the Prefs menu). All these are part of the master interpreter's ::tkcon namespace. The modifiable array variables are ::tkcon::COLOR and ::tkcon::OPT. You can call 'tkcon set ::tkcon::COLOR' when the program is running to check its state. Here is an explanation of certain variables you might change or use: ::tkcon::COLOR(bg) The background color for tkcon text widgets. Defaults to the operating system default (determined at startup). ::tkcon::COLOR(blink) The background color of the electric brace highlighting, if on. Defaults to yellow. ::tkcon::COLOR(cursor) The background color for the insertion cursor in tkcon. Defaults to black. ::tkcon::COLOR(disabled) The foreground color for disabled menu items. Defaults to dark grey. ::tkcon::COLOR(proc) The foreground color of a recognized proc, if command highlighting is on. Defaults to dark green. ::tkcon::COLOR(var) The background color of a recognized var, if command highlighting is on. Defaults to pink. ::tkcon::COLOR(prompt) The foreground color of the prompt as output in the console. Defaults to brown. ::tkcon::COLOR(stdin) The foreground color of the stdin for the console. Defaults to black. ::tkcon::COLOR(stdout) The foreground color of the stdout as output in the console. Defaults to blue. ::tkcon::COLOR(stderr) The foreground color of stderr as output in the console. Defaults to red. ::tkcon::OPT(autoload) Packages to automatically load into the slave interpreter (ie - 'Tk'). This is a list. Defaults to {} (none). ::tkcon::OPT(blinktime) The amount of time (in millisecs) that braced sections should blink for. Defaults to 500 (.5 secs), must be at least 100. ::tkcon::OPT(blinkrange) Whether to blink the entire range for electric brace matching or to just blink the actual matching braces (respectively 1 or 0, defaults to 1). ::tkcon::OPT(buffer) The size of the console scroll buffer (in lines). Defaults to 512. ::tkcon::OPT(calcmode) Whether to allow expr commands to be run at the command line without prefixing them with expr (just a convenience). ::tkcon::OPT(cols) Number of columns for the console to start out with. Defaults to 80. ::tkcon::OPT(dead) What to do with dead connected interpreters. If dead is leave, TkCon automatically exits the dead interpreter. If dead is ignore then it remains attached waiting for the interpreter to reappear. Otherwise TkCon will prompt you. ::tkcon::OPT(exec) This corresponds to the -exec option above ::tkcon::OPT(font) Font to use for tkcon text widgets (also specified with -font). Defaults to the system default, or a fixed width equivalent. ::tkcon::OPT(gets) Controls whether tkcon will overload the gets command to work with tkcon. The valid values are: congets (the default), which will redirect stdin requests to the tkcon window; gets, which will pop up a dialog to get input; and {} (empty string) which tells tkcon not to overload gets. This value must be set at startup to alter tkcon's behavior. ::tkcon::OPT(history) The size of the history list to keep. Defaults to 48. ::tkcon::OPT(hoterrors) Whether hot errors are enabled or not. When enabled, errors that are returned to the console are marked with a link to the error info that will pop up in an minimal editor. This requires more memory because each error that occurs will maintain bindings for this feature, as long as the error is in the text widget. Defaults to on. ::tkcon::OPT(library) The path to any tcl library directories (these are appended to the auto_path when the after the resource file is loaded in). ::tkcon::OPT(lightbrace) Whether to use the brace highlighting feature or not (respectively 1 or 0, defaults to 1). ::tkcon::OPT(lightcmd) Whether to use the command highlighting feature or not (respectively 1 or 0, defaults to 1). ::tkcon::OPT(maineval) A tcl script to execute in the main interpreter after the slave interpreter is created and the user interface is initialized. ::tkcon::OPT(maxlinelen) A number that specifies the limit of long result lines. True result is still captured in $_ (and 'puts $_' works). Defaults to 0 (unlimited). ::tkcon::OPT(maxmenu) A number that specifies the maximum number of packages to show vertically in the Interp->Packages menu before breaking into another column. Defaults to 15. ::tkcon::OPT(nontcl) For those who might be using non-Tcl based Tk attachments, set this to 1. It prevents TkCon from trying to evaluate its own Tcl code in an attached interpreter. Also see my notes for non-Tcl based Tk interpreters. ::tkcon::OPT(prompt1) Like tcl_prompt1, except it doesn't require you use 'puts'. No equivalent for tcl_prompt2 is available (it's unnecessary IMHO). Defaults to {([file tail [pwd]]) [history nextid] % }. ::tkcon::OPT(rows) Number of rows for the console to start out with. Defaults to 20. ::tkcon::OPT(scollypos) Y scrollbar position. Valid values are left or right. Defaults to left. ::tkcon::OPT(showmenu) Show the menubar on startup (1 or 0, defaults to 1). ::tkcon::OPT(showmultiple) Show multiple matches for path/proc/var name expansion (1 or 0, defaults to 1). ::tkcon::OPT(slaveeval) A tcl script to execute in each slave interpreter right after it's created. This allows the user to have user defined info always available in a slave. Example: set ::tkcon::OPT(slaveeval) { proc foo args { puts $args } lappend auto_path . } ::tkcon::OPT(slaveexit) Allows the prevention of exit in slaves from exitting the entire application. If it is equal to exit, exit will exit as usual, otherwise it will just close down that interpreter (and any children). Defaults to close. ::tkcon::OPT(subhistory) Allow history substitution to occur (0 or 1, defaults to 1). The history list is maintained in a single interpreter per TkCon console instance. Thus you have history which can range over a series of attached interpreters. An example TkCon resource file might look like: ###################################################### ## My TkCon Resource File # Use a fixed default font #tkcon font fixed; # valid on unix #tkcon font systemfixed; # valid on win tkcon font Courier 12; # valid everywhere # Keep 50 commands in history set ::tkcon::OPT(history) 50 # Use a pink prompt set ::tkcon::COLOR(prompt) pink ###################################################### © Jeffrey Hobbs | http://docs.activestate.com/activetcl/8.5/tcl/tkcon/start.html | CC-MAIN-2018-43 | refinedweb | 1,285 | 59.8 |
I took the map of Winnipeg's Zoo and brought it into Blender, it came in fine using the plug-in but there is no height information. All I see are red flat images of buildings, no 3d models. What do I need to do to get 3d models from map?
asked
17 Apr '17, 20:31
darlala
11●1●1●2
accept rate:
0%
There are two issues:
Random example
The good thing about OSM is that you can rectify both issues easily (with a bit of time) yourself. Create yourself an account and start editing
Adding building outlines is quite straight forward, for building heights see:
answered
17 Apr '17, 21:00
SimonPoole ♦
33.1k●13●255●526
accept rate:
18%
Once you sign in you will be able to subscribe for any updates here
Answers
Answers and Comments
Markdown Basics
learn more about Markdown
This is the support site for OpenStreetMap.
Question tags:
import ×150
3d ×25
blender ×2
flatimages ×1
heightmaps ×1
question asked: 17 Apr '17, 20:31
question was seen: 554 times
last updated: 17 Apr '17, 21:00
Error importing tiger 2011 data into nominatim
Restarting osm2pqsql full planet import because of too low --cache setting? (import runs for 6 days already)
números de portal en SHP como los subo todos de una en addr:housenumber
Can I use KML files from Google Maps in OSM?
Issue importing tiger data into nominatim
import OSM data to PostGIS database via imposm3
trying to import a Topomap into Basecamp (Mac OSX) but it does not show up
How can I import my buildings (mapped by me) out of google earth into OSM?
How much RAM does osm2pgsql required in case the progress won't be killed? (full planet import)
Osm to postgresql import and basemaps problem
First time here? Check out the FAQ! | https://help.openstreetmap.org/questions/55666/my-citys-map-winnipeg-doesnt-export-with-height-maps | CC-MAIN-2018-22 | refinedweb | 308 | 63.63 |
That sounds easy: take two collection of identifiers, put them in sets, determine the intersection, done. Sadly, each collection uses identifiers from different databases. Worse, within one set identifiers from multiple databases. Mind you, I'm not going full monty, though some chemistry will be involved at some point. Instead, this post is really based on identifiers.
The example
Data set 1:
Data set 2: all metabolites from WikiPathways. This set has many different data sources, and seven provide more than 100 unique identifiers. The full list of metabolite identifiers is here.
The goal
Determine the interaction of two collections of identifiers from arbitrary databases, ultimately using scientific lenses. I will develop at least two solutions: one based on Bioclipse (this post) and one based on R (later).
Needs
First of all, we need something that links IDs in the first place. Not surprisingly, I will be using BridgeDb (doi:10.1186/1471-2105-11-5) for that, but for small molecules alternatives exist, like the Open PHACTS IMS based on BridgeDb, the Chemical Translation Service (doi:10.1093/bioinformatics/btq476) or UniChem (doi:10.1186/s13321-014-0043-5, doi:10.1186/1758-2946-5-3).
The Bioclipse implementation
The first thing we need to do is read the files. I have them saved as CSV even though it is a tab-separated file. Bioclipse will now open it in it's matrix editor (yes, I think .tsv needs to be linked to that editor, which does not seem to be the case yet). Reading the human metabolites from WikiPathways is done with this code (using Groovy as scripting language):
file1 = new File(
bioclipse.fullPath(
"/Compare Identifiers/human_metabolite_identifiers.csv"
)
)
set1 = new java.util.HashSet();
file1.eachLine { line ->
fields = line.split(/\t/)
def syscode;
def id;
if (fields.size() >= 2) {
(syscode, id) = line.split(/\t/)
}
if (syscode != "syscode") { // ok, not the first line
set1.add(bridgedb.xref(id, syscode))
}
}
Reading the other identifier set is a bit trickier. First, I manually changed the second column, to use the BridgeDb system codes. The list is short, and saves me from making mappings in the source code. One thing I decide to do in the source code is normalize the ChEBI identifiers (something that many of you will recognize):
file2 = new File(
bioclipse.fullPath("/Compare Identifiers/set.csv")
)
set2 = new java.util.HashSet();
file2.eachLine { line ->
fields = line.split(/\t/)
def name;
def syscode;
def id;
if (fields.size() >= 3) {
(name, syscode, id) = line.split(/\t/)
}
if (syscode != "syscode") { // ok, not the first line
if (syscode == "Ce") {
if (!id.startsWith("CHEBI:")) {
id = "CHEBI:" + id
}
}
set2.add(bridgedb.xref(id, syscode))
}
}
Then, the naive approach that does not take into account identifier equivalence makes it easy to list the number of identifiers in both sets:
intersection = new java.util.HashSet();
intersection.addAll(set1);
intersection.retainAll(set2)
println "set1: " + set1.size()
println "set2: " + set2.size()
println "intersection: " + intersection.size()
set1: 2584
set2: 6
intersection: 3
With the following identifiers in common:
[Ce:CHEBI:30089, Ce:CHEBI:15904, Ca:25513-46-6]
Of course, we want to use the identifier mapping itself. So, we first compare identifiers directly, and if not matching, use BridgeDb and an metabolite identifier mapping database (get one here):
mbMapper = bridgedb.loadRelationalDatabase(
bioclipse.fullPath(
"/VOC/hmdb_chebi_wikidata_metabolites.bridge"
)
)
intersection = new java.util.HashSet();
for (id2 in set2) {
if (set1.contains(id2)) {
// OK, direct match
intersection.add(id2)
} else {
mappings = bridgedb.map(mbMapper, id2)
for (mapped in mappings) {
if (set1.contains(mapped)) {
// OK, direct match
intersection.add(id2)
}
}
}
}
This gives five matches:
[Ch:HMDB00042, Cs:5775, Ce:CHEBI:15904, Ca:25513-46-6, Ce:CHEBI:30089]
The only metabolite it did not find in any pathway is the KEGG identified metabolite, homocystine. I just added this compound to Wikidata. That means that in the next metabolite mapping database, it will recognize this compound too.
The R and JavaScript implementations
I will soon write up the R version in a follow up post (but got to finish grading student reports first). | http://chem-bla-ics.blogspot.nl/2016_05_01_archive.html | CC-MAIN-2018-22 | refinedweb | 667 | 60.61 |
Home | Order Online | Downloads | Contact Us | Software Knowledgebase
it | es | pt | fr | de | jp | kr | cn | ru | nl | gr
Accessing the Elements
To access an individual element in the array, the index number follows the variable name in square brackets. The variable can then be treated like any other variable in C. The following example assigns a value to the first element in the array.
x[0] = 16;
The following example prints the value of the third element in an array.
printf("%d\n", x[2]);
The following example uses the scanf function to read a value from the keyboard into the last element of an array with ten elements.
scanf("%d", &x[9]);
Initializing Array Elements
Arrays can be initialized like any other variables by assignment. As an array contains more than one value, the individual values are placed in curly braces, and separated with commas. The following example initializes a ten dimensional array with the first ten values of the three times table.
int x[10] = {3, 6, 9, 12, 15, 18, 21, 24, 27, 30};
This saves assigning the values individually as in the following example.
int x[10];
x[0] = 3;
x[1] = 6;
x[2] = 9;
x[3] = 12;
x[4] = 15;
x[5] = 18;
x[6] = 21;
x[7] = 24;
x[8] = 27;
x[9] = 30;
Looping through an Array
As the array is indexed sequentially, we can use the for loop to display all the values of an array. The following example displays all the values of an array:
#include <stdio.h>
int main()
{
int x[10];
int counter;
/* Randomise the random number generator */
srand((unsigned)time(NULL));
/* Assign random values to the variable */
for (counter=0; counter<10; counter++)
x[counter] = rand();
/* Display the contents of the array */
for (counter=0; counter<10; counter++)
printf("element %d has the value %d\n", counter, x[counter]);
return 0;
}
though the output will print the different values every time, result will be displayed something like this: | https://www.datadoctor.biz/data_recovery_programming_book_chapter5-page30.html | CC-MAIN-2019-22 | refinedweb | 332 | 54.46 |
I am teaching myself C++ and amd now trying to simple programs related to data structures. I wrote this code for a simple stack. It is not working. I am frustrated and tired so I cannot think through the issue.
There are two main problems with which I need help.
1.
While my code compiles and runs, there are two warnings:
Loaded 'C:\WINDOWS\system32\ntdll.dll', Cannot find or open the PDB file
Loaded 'C:\WINDOWS\system32\kernel32.dll', Cannot find or open the PDB file
I don't know what those warnings mean. I am using MS VC++ 2010 Express.
2.
The code does not run correctly. Option 1, (list the stack), causes the program to immediately stop and the output window to close. Option 3, (remove an item), does the same thing. Option 2, (add an item), correctly asks for a name to add but closes the output window after the name is input. Only option 4, exit the program, works correctly.
Here is my code. Thanks in advance.
#include "../../std_lib_facilities.h" int top; int act; string s; string names [6] = {"James", "John", "Jerrold", "Jennifer", "", ""}; void list_items() { for (int i = 0; i < 6; i++) { cout << i + 1 <<", " << names[i] << endl; } } void add_item() { if (names [5] != "") {cout << "The stack is full.\n" << endl;} else { cout << "Enter the name to add to the stack.\n" << endl; cin >> s; int i = 0; while (names [i] != "") { top = i; i++; } names [top + 1] = s; cout << "The name " << s << " has been successfully added to the stack.\n" << endl; } } void remove_item() { if (names [0] == "") {cout << "The stack is empty.\n" << endl;} else { int i = 0; while (names [i] != "") { top = i; i++; } s = names [top]; names [top] = ""; cout << "The name " << s << " has been successfully removed from the stack.\n" << endl; } } void start() { cout << "Enter the desired activity.\n" << "1 for listing the stack items.\n" << "2 for adding an item to the stack.\n" << "3 for removing an item from the stack.\n" << "4 to exit the program.\n" << endl; cin >> act; while (act != 1 && act != 2 && act!= 3 && act != 4) { cout << "Please enter a choice 1 through 4.\n" << endl; cin >> act; } if (act == 1) {list_items();} if (act == 2) {add_item();} if (act == 3) {remove_item();} if (act == 4) { keep_window_open(); } } int main() { start(); return 0; }
Edited by Nathaniel10: code tags | https://www.daniweb.com/programming/software-development/threads/419029/data-structure-stack-problem | CC-MAIN-2017-34 | refinedweb | 384 | 85.39 |
Dear All,
I am trying to use fastjet with Root. I am including following header files of fasjet in my macros code:
#include "fastjet/PseudoJet.hh" #include "fastjet/ClusterSequence.hh" #include "fastjet/Selector.hh"
along with
gSystem->Load("~/Programs/fastjet-install/lib/libfastjet.so"); gSystem->Load("~/Programs/fastjet-install/lib/libfastjettools.so"); gSystem->AddIncludePath("~/Programs/fastjet-install/include");
but this is not making root access the above mentioned fastjet header files. (I also tried putting entire path for the header files in #include , still doesn’t work ).
I have fastjet environment variable and library defined in my shell, and it works well with other programs like Pythia8.
I also tried copying fastjet header files to the root header files folder (not the systematic way though) which makes it read the header files, but then i get an interpreter error as:
Error: Missing one of ' \/' expected at or after line 40. Error: Unexpected end of file (G__fgetstream():2) /usr/local/include/root/fastjet/internal/base.hh:44: *** Interpreter error recovered ***
base.hh (1.59 KB)
Here, base.hh is a read-only fastjet header file, hence can’t be edited. But this should not be the problem I think.
Kindly please help me how can I make fastjet work with my root. I will be highly grateful for your very kind help.
Regards,
Swasti. | https://root-forum.cern.ch/t/using-fasjet-header-files-in-root/17254 | CC-MAIN-2022-27 | refinedweb | 223 | 51.44 |
If you’re like me, you may have those moments where you’re at the terminal, hands hovering over your keyboard, and … nothing. I always seem to freeze up and probably rely too much on bash history. (The up arrow is my friend.)
While learning Kubernetes, I ended up posting 14 or 15 sticky notes on my monitor to help me in those moments — but after a while, I could barely read what was on the screen. So finally, I created one small, easy-to-read piece of paper to reference when I, or you, get stuck. This Kubernetes Cheat Sheet is meant to get you started performing commands in Kubernetes and provide you with all the basic commands at a quick glance. (Check out the downloadable asset below!)
Command Results
Some of the commands on this cheat sheet might not return any results, but have no fear! See below for some resources you can create, then quickly turn around and run the commands in your cheat sheet to alter your resources any way you wish!
Let’s start with pods. Here is the YAML for a basic busybox pod:
apiVersion: v1 kind: Pod metadata: name: busybox spec: containers: - image: busybox:1.28.4 command: - sleep - "3600" name: busybox restartPolicy: Always
Learn more about YAML here.
Create the pod with this command:
kubectl create -f busybox.yaml
Use this command to create a deployment:
kubectl run nginx --image=nginx
Use this command to create a service from the deployment above:
kubectl expose deployment nginx --port=80 --type=NodePort
Sign up for a free community account today and check out these free courses: Kubernetes Essentials and Beginner’s Guide to Containers and Orchestration to learn more about Kubernetes!
Here is the YAML for a simple persistent volume using local storage from the node:
apiVersion: v1 kind: PersistentVolume metadata: name: data-pv namespace: web spec: storageClassName: local-storage capacity: storage: 1Gi accessModes: - ReadWriteOnce hostPath: path: /mnt/data
Use the following command to create the persistent volume:
kubectl apply -f my-pv.yaml
Here is the YAML for a simple ConfigMap:
apiVersion: v1 kind: ConfigMap metadata: name: my-config-map data: myKey: myValue anotherKey: anotherValue
Use the following command to create the ConfigMap:
kubectl apply -f configmap.yaml
Here is the YAML for a secret:
apiVersion: v1 kind: Secret metadata: name: my-secret stringData: myKey: myPassword
Use this command to create the secret:
kubectl apply -f secret.yaml
Here is the YAML for a service account:
apiVersion: v1 kind: ServiceAccount metadata: name: acr namespace: default secrets: - name: acr
Use this command to create the service account:
kubectl apply -f serviceaccount.yaml
Download Now!
This should be enough to get you started! I’ve also created this PDF for you to download and keep next to you for reference!
If you enjoyed these exercises and following along with the commands, check out the brand new Cloud Native Certified Kubernetes Administrator (CKA) course on Linux Academy to dig deeper into Kubernetes! Or check out these additional free resources:
Very helpful, thank you!
Thanks a lot
Many thanks | https://linuxacademy.com/blog/containers/kubernetes-cheat-sheet/ | CC-MAIN-2019-30 | refinedweb | 512 | 59.33 |
Pointer is a variable which stores address of another variable(memory location which has certain value).When we declare a variable ,a specific memory is allocated to the variable depending on the data type.Consider the following statements :
int age = 25; //Fig 1
float percentage = 90.6; //Fig 2
A specific memory location(Say 200) is allocated to variable age(Fig 1).2 bytes will be given as it is of int type and the value 25 will be stored and 4 bytes will be allocated to percentage (Fig 2) as it of float type at say 300 location.
In order to store these locations/addresses we use pointers.
Declaration of Pointers
datatype *pointer_name;
- datatype is the type of data(char,int,float) associated with the pointer.
- * Â specifies that the variable used is a pointer(stores address of another variable).It refers to the value of variable to which it points(for which it stores the address).
- pointer_name could be any name of pointer variable.
Initialization of Pointers
datatype variable_name = value;
datatype  *pointer_name = &variable_name;
Example 1
#include<stdio.h> int main() { int age =25; int *ptr =&age; printf("The value of variable is %d\n",age); printf("The address of variable is %d \n",ptr); printf("The value of variable is %d",*ptr); }
Example 2
Multiplication,Addition and Division using Pointers
#include <stdio.h> int main() { int a,b; int sum =0,mul=1,div=1; int *ptr1,*ptr2; ptr1 = &a; ptr2= &b; //printf("Enter two numbers\n"); scanf("%d%d",&a,&b); sum = *ptr1 + *ptr2; mul = *ptr1 * *ptr2; div = *ptr1/ *ptr2; printf("The sum is %d\n",sum); printf("The multiplication is %d\n",mul); printf("The division is %d\n",div); return 0; }
We can add integers to or subtract integers from pointers as well as subtract one pointer from the other.We can add integers to or subtract integers from pointers as well as subtract one pointer from the other.
We can compare pointers by using relational operators in the expressions. For e.g. p1>p2,p1==p2 and p1!=p2 are all valid in C.We can compare pointers by using relational operators in the expressions. For e.g. p1>p2,p1==p2 and p1!=p2 are all valid in C.
Generic Pointers
Program
#include<stdio.h> int main() { int age=10; char name='c'; void *ptr; ptr=&age; int typecast1 = *(int*)ptr; printf("Generic Pointer points to integer value %d\n",typecast1); ptr = &name; char typecast2 = *(char*)ptr; printf("Generic Pointer points to character name %c",typecast2); return 0; }
Illustration of Program
- ptr is a generic pointer (void as the data type).
- ptr stores the address of variable age.
- typecast1 variable of int type is used to store *(int*)ptr to convert void to int type and is printed using printf.
- Now ptr stores the address of variable name which is of char type.
- typecast2 variable of char type is used to convert void to char type and is printed on screen using printf.
If typecasting is not done and pointer value is printed as such(*ptr) then error message would be displayed on screen.If typecasting is not done and pointer value is printed as such(*ptr) then error message would be displayed on screen.
Null Pointers
Null Pointers does not point to any valid memory address(Fig 3).In order to declare a null pointer,a predefined constant NULL is used.They are used to represent conditions such as the end of a list of unknown length or the failure to perform certain action.
Syntax
int *ptr = NULL;
Short URL: | http://letslearncs.com/pointers-in-c/ | CC-MAIN-2017-30 | refinedweb | 599 | 63.39 |
Ligeti: Désordre¶
Note
Explore the
abjad/demos/desordre/ directory for the complete code to
this example, or import it into your Python session directly with
from
abjad.demos import desordre.
This example demonstrates the power of exploiting redundancy to model musical structure. The piece that concerns us here is Ligeti’s Désordre: the first piano study from Book I. Specifically, we will focus on modeling the first section of the piece:
The redundancy is immediately evident in the repeating pattern found in both staves. The pattern is hierarchical. At the smallest level we have what we will here call a cell:
There are two of these cells per measure. Notice that the cells are strictly contained within the measure (i.e., there are no cells crossing a bar line). So, the next level in the hierarchy is the measure. Notice that the measure sizes (the meters) change and that these changes occur independently for each staff, so that each staff carries it’s own sequence of measures. Thus, the staff is the next level in the hierarchy. Finally there’s the piano staff, which is composed of the right hand and left hand staves.
In what follows we will model this structure in this order (cell, measure, staff, piano staff), from bottom to top.
The cell¶
Before plunging into the code, observe the following characteristic of the cell:
1. It is composed of two layers: the top one which is an octave “chord” and the bottom one which is a straight eighth note run.
2. The total duration of the cell can vary, and is always the sum of the eight note funs.
3. The eight note runs are always stem down while the octave “chord” is always stem up.
4. The eight note runs are always beamed together and slurred, and the first two notes always have the dynamic markings ‘f’ ‘p’.
The two “layers” of the cell we will model with two Voices inside a simultaneous Container. The top Voice will hold the octave “chord” while the lower Voice will hold the eighth note run. First the eighth notes:
>>> pitches = [1,2,3] >>> notes = scoretools.make_notes(pitches, [(1, 8)]) >>> beam = Beam() >>> attach(beam, notes) >>> slur = Slur() >>> attach(slur, notes) >>> dynamic = Dynamic('f') >>> attach(dynamic, notes[0]) >>> dynamic = Dynamic('p') >>> attach(dynamic, notes[1])
>>> voice_lower = Voice(notes) >>> voice_lower.>> command = indicatortools.LilyPondCommand('voiceTwo') >>> attach(command, voice_lower)
The notes belonging to the eighth note run are first beamed and slurred. Then
we add the dynamics to the first two notes, and finally we put them inside
a Voice. After naming the voice we number it
2 so that the stems of the
notes point down.
Now we construct the octave:
>>> import math >>> n = int(math.ceil(len(pitches) / 2.)) >>> chord = Chord([pitches[0], pitches[0] + 12], (n, 8)) >>> articulation = Articulation('>') >>> attach(articulation, chord)
>>> voice_higher = Voice([chord]) >>> voice_higher.>> command = indicatortools.LilyPondCommand('voiceOne') >>> attach(command, voice_higher)
The duration of the chord is half the duration of the running eighth notes if the duration of the running notes is divisible by two. Otherwise the duration of the chord is the next integer greater than this half. We add the articulation marking and finally ad the Chord to a Voice, to which we set the number to 1, forcing the stem to always point up.
Finally we combine the two voices in a simultaneous container:
>>> container = Container([voice_lower, voice_higher]) >>> container.is_simultaneous = True
This results in the complete Désordre cell:
>>> cell = Staff([container]) >>> show(cell)
Because this cell appears over and over again, we want to reuse this code to generate any number of these cells. We here encapsulate it in a function that will take only a list of pitches:
def make_desordre_cell(pitches): '''The function constructs and returns a *Désordre cell*. `pitches` is a list of numbers or, more generally, pitch tokens. ''' notes = [scoretools.Note(pitch, (1, 8)) for pitch in pitches] beam = spannertools.Beam() attach(beam, notes) slur = spannertools.Slur() attach(slur, notes) clef = indicatortools.Dynamic('f') attach(clef, notes[0]) dynamic = indicatortools.Dynamic('p') attach(dynamic, notes[1]) # make the lower voice lower_voice = scoretools.Voice(notes) lower_voice.') attach(articulation, chord) # make the upper voice upper_voice = scoretools.Voice([chord]) upper_voice.name = 'RH Upper Voice' command = indicatortools.LilyPondCommand('voiceOne') attach(command, upper_voice) # combine them together container = scoretools.Container([lower_voice, upper_voice]) container.is_simultaneous = True # make all 1/8 beats breakable leaves = select(lower_voice).by_leaf() for leaf in leaves[:-1]: bar_line = indicatortools.BarLine('') attach(bar_line, leaf) return container
Now we can call this function to create any number of cells. That was actually the hardest part of reconstructing the opening of Ligeti’s Désordre. Because the repetition of patters occurs also at the level of measures and staves, we will now define functions to create these other higher level constructs.
The measure¶
We define a function to create a measure from a list of lists of numbers:
def make_desordre_measure(pitches): '''Makes a measure composed of *Désordre cells*. `pitches` is a list of lists of number (e.g., [[1, 2, 3], [2, 3, 4]]) The function returns a measure. ''' for sequence in pitches: container = abjad.demos.desordre.make_desordre_cell(sequence) time_signature = inspect_(container).get_duration() time_signature = mathtools.NonreducedFraction(time_signature) time_signature = time_signature.with_denominator(8) measure = scoretools.Measure(time_signature, [container]) return measure
The function is very simple. It simply creates a DynamicMeasure and then
populates it with cells that are created internally with the function
previously defined. The function takes a list pitches which is actually a
list of lists of pitches (e.g.,
[[1,2,3], [2,3,4]]. The list of lists of
pitches is iterated to create each of the cells to be appended to the
DynamicMeasures. We could have defined the function to take ready made cells
directly, but we are building the hierarchy of functions so that we can pass
simple lists of lists of numbers to generate the full structure. To construct
a Ligeti measure we would call the function like so:
>>> pitches = [[0, 4, 7], [0, 4, 7, 9], [4, 7, 9, 11]] >>> measure = make_desordre_measure(pitches) >>> staff = Staff([measure]) >>> show(staff)
The staff¶
Now we move up to the next level, the staff:
def make_desordre_staff(pitches): r'''Makes Désordre staff. ''' staff = scoretools.Staff() for sequence in pitches: measure = abjad.demos.desordre.make_desordre_measure(sequence) staff.append(measure) return staff
The function again takes a plain list as argument. The list must be a list of lists (for measures) of lists (for cells) of pitches. The function simply constructs the Ligeti measures internally by calling our previously defined function and puts them inside a Staff. As with measures, we can now create full measure sequences with this new function:
>>> pitches = [[[-1, 4, 5], [-1, 4, 5, 7, 9]], [[0, 7, 9], [-1, 4, 5, 7, 9]]] >>> staff = make_desordre_staff(pitches) >>> show(staff)
The score¶
Finally a function that will generate the whole opening section of the piece Désordre:
def make_desordre_score(pitches): '''Makes Désordre score. ''' assert len(pitches) == 2 staff_group = scoretools.StaffGroup() staff_group.context_name = 'PianoStaff' # build the music for hand in pitches: staff = abjad.demos.desordre.make_desordre_staff(hand) staff_group.append(staff) # set clef and key signature to left hand staff clef = indicatortools.Clef('bass') attach(clef, staff_group[1]) key_signature = indicatortools.KeySignature('b', 'major') attach(key_signature, staff_group[1]) # wrap the piano staff in a score score = scoretools.Score([staff_group]) return score
The function creates a PianoStaff, constructs Staves with Ligeti music and appends these to the empty PianoStaff. Finally it sets the clef and key signature of the lower staff to match the original score. The argument of the function is a list of length 2, depth 3. The first element in the list corresponds to the upper staff, the second to the lower staff.
The final result:
>>> top = [ ... [[-1, 4, 5], [-1, 4, 5, 7, 9]], ... [[0, 7, 9], [-1, 4, 5, 7, 9]], ... [[2, 4, 5, 7, 9], [0, 5, 7]], ... [[-3, -1, 0, 2, 4, 5, 7]], ... [[-3, 2, 4], [-3, 2, 4, 5, 7]], ... [[2, 5, 7], [-3, 9, 11, 12, 14]], ... [[4, 5, 7, 9, 11], [2, 4, 5]], ... [[-5, 4, 5, 7, 9, 11, 12]], ... [[2, 9, 11], [2, 9, 11, 12, 14]], ... ]
>>> bottom = [ ... [[-9, -4, -2], [-9, -4, -2, 1, 3]], ... [[-6, -2, 1], [-9, -4, -2, 1, 3]], ... [[-4, -2, 1, 3, 6], [-4, -2, 1]], ... [[-9, -6, -4, -2, 1, 3, 6, 1]], ... [[-6, -2, 1], [-6, -2, 1, 3, -2]], ... [[-4, 1, 3], [-6, 3, 6, -6, -4]], ... [[-14, -11, -9, -6, -4], [-14, -11, -9]], ... [[-11, -2, 1, -6, -4, -2, 1, 3]], ... [[-6, 1, 3], [-6, -4, -2, 1, 3]], ... ]
>>> score = make_desordre_score([top, bottom])
>>> lilypond_file = documentationtools.make_ligeti_example_lilypond_file(score)
>>> show(lilypond_file)
Now that we have the redundant aspect of the piece compactly expressed and encapsulated, we can play around with it by changing the sequence of pitches.
In order for each staff to carry its own sequence of independent measure
changes, LilyPond requires some special setup prior to rendering. Specifically,
one must move the LilyPond
Timing_translator out from the score context and
into the staff context.
(You can refer to the LilyPond documentation on Polymetric notation to learn all about how this works.)
In this example we a custom
documentationtools function to set up our
LilyPond file automatically. | http://abjad.mbrsi.org/literature_examples/ligeti.html | CC-MAIN-2017-09 | refinedweb | 1,529 | 57.57 |
I read Havocs post on embeddable languages with great joy. I very much agree that this is a good model for writing a larger application. I.E. you write the core of your application in C/C++ and then do the higher levels in an embedded dynamic scripting language. All the reasons Havoc lists are important, especially for the Gnome platform (a swarm-of-processes model where we want to avoid duplicating the platform in each language used).
There are a lot of dynamic scripting languages around, but only a few of them really qualify as embeddable languages in the form Havoc proposes. I would say the list is basically just: lua, JavaScript and some of the lisp dialects.
I’m far from a JavaScript lover, but it seems pretty obvious to me that from these alternatives it is the best choice. For a variety of reasons:
- Its a modern dynamic language
Its dynamically typed, it has precise gargabe collection, lambda functions, generators, array comprehension, etc
- Its object oriented (sorta)
We want to wrap the GObject type system which is object oriented, so this is important. The JavaScript prototype-based model is a bit weird, but its simple and it works.
- It has no “platform” of its own
If we inject the Gnome platform (Gtk+, Gio, etc) into JavaScript it doesn’t have to fight with any “native” versions of the same functionallity, and we won’t be duplicating anything causing bloat and conversions. (JS does have a Date object, but that is basically all.)
- Lots of people know it, and if not there are great docs
Chances of finding someone who knows JS is far higher than finding someone who knows e.g. lua or scheme, and having a large set of potential developers is important for a free software project.
- Lots of activity around the language and implementations
The language continually evolves, and there is a standards body trying to shephard this. There is also a multitude of competetive implementations, each trying to be best. This leads to things like the huge performance increases with TraceMoney and Google v8. Such performance work and focus is unlikely to happen in smaller and less used languages.
To experiment with this I started working on GScript (gscript module in gnome svn). It is two things. First of all its an API for easily embedding JavaScript (using SpiderMonkey atm) in a Gtk+ application. Secondly its a binding of GObject and GObject-based libraries to JavaScript.
Here is a quick example:
GScriptEngine *engine;
GScriptValue *res;
engine = g_script_engine_new ();
res = g_script_engine_evaluate_script (engine, "5+5");
g_print ("script result: %s", g_script_value_to_string (res));
g_object_unref (res);
If you want to expose a gobject (such as a GtkLabel) as a global property in JS you can do:
GtkLabel *label = gtk_label_new ("Test");
GScriptValue *val = g_script_value_new_from_gobject (G_OBJECT (label));
g_script_value_set_property (g_script_engine_get_global (engine), "the_label", val);
Then you can access the label from javascript by the name “the_label”. You can read and set the object properties, connect and emit the signals and call all the methods that are availible via introspection.
You can also easily call from javascript into native code.
static GScriptValue *native_function (int n_args, GScriptValue **args);
GScriptValue *f = g_script_value_new_from_function (native_function, num_args);
g_script_value_set_property (g_script_engine_get_global (engine), "function", f);
The JavaScript wrappers are fully automatic, and lazily binds objects/classes as they are used in JS. The object properties and signal information are extracted from the GType machinery. Method calls are done using the new GObject-Introspection system.
More work is clearly needed on the details of the JS bindings, but this is already a usable piece of code. I’m very interested in feedback about interest in something like this, and discussions about how the JS bindings should look.
Hear hear, totally agree that JS is a good option to tie things together with.
GScript looks awesome!
Hmm, I think we need some sort of standard query language and DOM to hide all of the complexities. As I have said before the beauty of JS is not the language but the environment it works in. If for instance we used GtkBuilder as the DOM the API could be such that you just register the glade/gtkbuilder tree and can query by name (even using libraries such as jQuery which makes the job even easier).
J5 - should be possible to look up GtkWidgets by name; the GtkBuildable interface exposes an _get_name call for all buildable widgets. Also, gobject-introspection gives you bindings to Hippo Canvas, GooCanvas as well as WebKit.
Neat
But I giust think it should be called giavascript.
Ciao
John (J5) Palmieri: What a nice idea.
window.getObjectsByType(”GtkLabel”);
window.getObjectByName(”location-entry”); # s/Name/Id/ ?
notebook = window.getObjectByName(”document-notebook”);
notebook.set_tab_pos(notebook.POS_LEFT);
John: Introducing a dom and query language is interesting. Using gobject introspection we could easily expose getters and other properties to t the query language, eg so you can easily do “child.children[0].child.label”.
properties, getters, fields (to be deprecated in 3.0), widget hierarchies are possible.
@John, sheesh… if that is doable, it’d be awesome!
I really love the idea of having a scriptable “platform” - Python is the current gnomish of doing that. As a longtime Python user (1.5.x
I loved it.
(Random scripting language thoughts ahead
But I would love a language that I can use for both scripting and to build applications out of without the overhead of an interpreter. Javascript is popular, but is a horrible language in its current form. So while I applaud gscript and Miguel’s new csharp scripting, I still yearn for a decent language that can be run as a script, and compiled efficiently when needed.
Maybe a form of Vala scripting? Or maybe seed7 -? Or a compiler for the Blue language?
For the moment I’m learning Seed7 and using both its interpreter and compiler.
I don’t think a query language is quite the right thing. You really want to get the precise thing you want, not something that “matches the query”. However, it should be very easy to use the wrapped version of:
GObject *gtk_builder_get_object (GtkBuilder *builder, const gchar *name);
To look up an object from the UI you built with GtkBuilder and then connect up the signals and whatnot.
Even more interesting would be if you could define some script snippets in the gtkbuilder files themselves. Is the format extensible enough to support that?
Conrad:
You should look up the work done in TraceMonkey and Google V8. There is some serious performance work happening around JavaScript.
I don’t mind python, but its not really a great embeddable language, because it comes with too much baggage. It has tons and tons of libraries, most that “conflict” with the stuff in the Gtk/Gnome development platform. Its also more heavyweight than javascript I believe.
alexl: IMO the library that comes with Python is a big advantage. Even for simple plugin-like scripts, it’s useful to have all those utility functions for strings, regexps, path names, date/time… For Javascript, the Gnome platform would have to offer all that stuff on its own.
Andy - ‘giavascript’ doesn’t look quite right to me. How about ‘guavascript’? Begins with G, rhymes with javascript, and ‘guava’ immediately offers a visual theme for the project…
You might want to look at the JavaScriptCore C API
Personally I would like to see a general framework for working with scripts. I.e. We have a single frontend (GScript) and plugins (lua-plugin, js-plugin, schema-plugin, python-plugin etc.). In such way every languaguage is/can be included.
oliver:
When writing an application in python the python standard libs are great. In fact, they are often the reason you’re using python in the first place. However, if you’re writing your application in C using the gtk+/gnome platform and just using scripting to tie together the highlevel behaviour of your app its not so good.
For instance, none of the python objects can be passed into the lower levels of your app. The python libs don’t follow the behavior of the platform (gconf settings, desktop specified http proxy, etc, etc). And you get double implementations of all the things as most stuff exists in both set of libaries.
Maciej:
Thats seems like a very bad idea to me. Everything using different languages is a total pain in the ass for maintainability. It will also lead to multiple languages used in the same desktop/app leading to much bloat. Its also technically very hard as these languages behave very differently wrt how they look for the embedder and how they manage memory and other resources.
Would be great to, say, have this go into glib proper eventually, and the API be fully scripting-language-independent and you install modules to use individual languages for scripting. Then all apps (gedit, totem, …) can use scripts in any language you have a scripting module installed for…
behdad:
I can’t really see that. The API isn’t really language agnostic and cannot be, because it depends on the language you’re embedding. For instance, python does not have a global object like JS does, and the native values spec:ed by GScriptValue are very much different between different languages. So are details like how calls to native functions work, how objects/derivation work, etc, etc.
I fail to see how MONO is never mentioned in these discussions?
1. A large part of the GNOME community is also involved with MONO
2. MONO can run scripting languages such as IronPython and IronRuby
3. Once MONO can fully host the dynamic language runtime (DLR) which from what I can understand shouldn’t be too soon, even more languages will be added to that list (including JavaScript).
4. A lot of developers (outside of the GNOME/Linux community) is working with .NET programming, the threshold for them to begin GNOME development will be lowered.
5. The philosophy of .NET/MONO is very similar to that of glib, taken one step further. glib is written in C, in part to make it easier for people to create bindings for other languages (as we all can see, this has succeded), CIL is a bytecode standard in .NET and MONO which all languages compile to, it is designed to support a variety of higher level languages. glib is also written with multiplatform ability in mind, so also for CIL.
Markus:
It is exactly your nr 5 that makes mono unsuitable for this kind of embedded scripting. Its not a compliment to the Gnome stack but largely a replacement. So, you get double implementations of most things that don’t interchange well. Mono is also rather heavy-weight.
I’m not saying mono is bad in general. Its an excellent way to write an app, but not in the C core plus highlevel behaviour scripting model. If anything I would rather use C# as the base of the app and then script it in a DLR based .net scripting language like IronPython.
Yeah, I see your point, but should one make (power-)users suffer because one love ones own code too much? Maybe the way to go is the mono way, it is IMHO a superior technology.
Markus:
To each his own. Some people like .net, some not. I have no interest in arguing about that.
Anyway, i don’t think the exact language picked is really that important for the users. One can write good or bad programs in all languages, and arguing that you must use a specific language to make the user happy is just lame.
Hi Alexl, I was also thinking about javascript + GObject for some time. Nice to see that you made such big progress on it! recently I also came up with some ideas about bridging Javascript with gobject at
BTW: in the svn rev 4, js-gtk doesn’t work as expected because you’ve prefixed the signals with “on_”.
[...] Javascript + GObject is something that is going to happen. When I read my jammed VALA mail list, I found a post related to GScript, an javascript + GObject implementation. It was announced in [...]
alexl
you have my full support on this. Zoo of languages is very uneasy environment to live in. And JS must-know today, because of web. Also it is impossable to incorporate useful scripts done by users(can’t imagine firefox exentions written in 15+ scripting langs) if there are no standard lang.
>vaScript prototype-based model is a bit weird,
if i’m not mistaken, next version of standard has separate classes and objects, not prototype based…
sorry for bad english
alxel,
i added a factory function to create some gtk objects with GScript.
Just a prove-idea thing. as shown in the screenshot.
also, the js engine has a standard name ‘libjs’ (at least on fedora, try search js-devel package), in configure.ac a PKG_CHECK libjs is sufficient.
alxel,
Do you have any ideas on how to invoke the function members of a namespace and the constructors of an object-klass?
rainwoodman:
You can already create object with e.g.:
new GType.GtkLabel(”label”, “some text”)
There is no code atm to import a full namespace, so the functions of a namespace are not visible. However, the actual call is very similar to the one for method calls (they are both just c functions really).
Some work is happening on getting this running atm. More info later.
Hi,
This is an area I’ve been thinking about quite a lot recently and I’ve been considering development of an all-GLib implementation of ECMA-262. A GLib implementation would avoid duplication of code and hopefully be easier for Gnome developers to maintain. Perhaps a collaboration is in order? What do you think?
Matt.
Matt:
Eh, i’m not sure what you mean, but a (re-)implementation of javascript just to use glib is both duplication of code (you’ll be running two js implementations if you also run firefox) and harder to maintain (spidermonkey is already maintained, the new implementation is not).
Not to mention that we wont’ get any of the advantages of all the important performance work done on the mainline js engines.
Hi Alex,
By duplication of code, I mean Spidermonkey implementing its own data structures, data manipulation functions, unicode handling etc when Glib already provides such facilities. This might not be a problem on the server or desktop, but on memory constrained embedded devices (where Gnome is really starting to gain some momentum), this is an issue.
If one is to use an existing Javascript engine, perhaps JavaScriptCore would be a better choice seeing as WebKit seems to be winning the embedded browser wars, otherwise systems are going to end up with all of GLib’s facilities, some duplicated by SpiderMonkey and some duplicated WebKit simultaneously. That is a really messy overall solution. Of course, bringing in a 3rd javascript engine and running that alongside webkit is not ideal either, but unless someone comes up with a grade A 100% Gtk+ and Glib-based web browser (which lets face it, isn’t likely to happen anytime soon), there is not much choice about that.
I don’t yet have a convincing argument for implementing yet another Javascript engine, as I said in my original post, I’m just considering things at the moment. Such a task is hardly trivial, so there are lots of reasons not to do it. I feel uneasy about using SpiderMonkey and JavaScriptCore because they’ve both been hacked around and are pretty messy inside (e.g. one minute the engine is designed to be standalone, the next minute is integrated into a browser, then it is hacked out again). My comment about a Glib-based Javascript engine being easier for Gnome developers to maintain meant that the code would be more in fitting with the GTK+ way of doing things from the inside out. Of course, another project is going to involve more work than using what is already available. There are pros and cons either way.
Matt. | http://blogs.gnome.org/alexl/2008/09/09/embeddable-languages-an-implementation/ | crawl-002 | refinedweb | 2,683 | 62.58 |
Preferences subsystem
While reading this file, it's advisable to consult the Doxygen documentation for the Inkscape::Preferences class.
Contents
Where preferences are stored
Preferences are currently stored in an XML file called
~/.inkscape/preferences.xml (0.46 and earlier) ~/.config/Inkscape/preferences.xml (0.47 and later)
on Linux or
%APPDATA%\inkscape\preferences.xml
on Windows. (See Default Values on Microsoft Windows for the location of %APPDATA%)
This file stores the hierarchy of values, much like a GConf database or the Windows Registry. In this file, element nodes correspond to keys (folders) and attributes to entries.
In future, there will be no guarantee that preferences are stored in an XML file.
Find a place for your value
When creating a new preference value, start by examining this file and finding a logical place for the new value. For example, if I want to add a value for some option related to the Select tool, I find the group element with. Now this element is empty, but I can store my value in a new attribute. If you have added a new object to the program, such as a new tool or dialog, add a new element in an appropriate top-level element. If you want to store something entirely new and not yet taken care of, you can even add a new top-level element.
Here is a positive example:
<group id="tools"> <eventcontext id="select" foo="1" bar="1.0" /> </group>
Do NOT do something like this:
<group id="options"> <group id="nudgedistance" value="2.8346457"/> </group>
This is because if you think about element nodes as folders and attributes as entries, you would get a path like "/options/nudgedistance/value", which is redundant. It's better to place your preference as an attribute of the options node. However, it would be ideal if you found a better place for your preference than the generic "options" hierarchy.
Some points of interest:
- Element names are irrelevant. What matters for finding the value is the id attribute values. So you can use group elements or any others, whatever seems more logical.
- Values are stored in attributes. No text within elements is allowed (except whitespace).
Add the value to the skeleton
When no preferences file exists, a new one is created based on src/preferences-skeleton.h. Moreover, if the preferences file lacks some values, the defaults are taken from the same file. So, I edit this file adding
"<group id=\"tools\">\n" " <eventcontext id=\"select\"\n" " foo=\"1\"\n" " bar=\"1.0\" />\n" "</group>\n"
before the closing
"</inkscape>";
Don't forget to escape quotes. The value given in this file is the default; if the user has changed it, his/her value in preferences.xml takes precedence over src/preferences-skeleton.h.
Now when you recompile, run, and exit Inkscape, the new elements with the default values will be added to your local preferences.xml (without affecting other values there).
Access the value in the program
Now for the interesting part. If you want to access your value in the program, start by adding
#include "preferences.h"
This header includes a singleton class that you can use to access the preferences. To get at the value of the preference, use its members methods. To receive an instance of this class, call the static function get(). Always name the pointer to this object "prefs", for consistency.
Inkscape::Preferences *prefs = Inkscape::Preferences::get(); double val = prefs->getDouble("/tools/select/bar", 1.0);
What this call does is it reads the value from the memory representation of preferences.xml. This representation was created on program start, may be used or modified by changing any values in it, and will be written back into preferences.xml when the program exits. Preferences are also saved in the crash handler, so changes to them should be preserved even if Inkscape crashes.
To pinpoint the value we need, you pass to this function a text string in a syntax that is vaguely reminiscent of XPath, except that it uses id attribute values instead of element names. That is, /tools/select/bar means, find top-level element with id="tools", within it, an element with id="select", and finally get the value from the bar attribute of the select node. The second argument is the default value which will be returned if no such value is stored in the preferences.
It is important that this prefs->getDouble call is done before each use of the value, and not just once upon program launch, because the preferences may be edited by the user via a dialog. If your preference is directly tied to some user interface element, it's best to use observers to update it (more on them later).
Similarly, to write a new value back into the preferences' memory representation, use
prefs->setDouble("/tools/select/bar", new_bar);
In addition to double values, you can also store and retrieve Glib::ustrings, ints, bools, CSS styles and a few more (check up-to-date Doxygen documentation for a list). Always choose the right type for your value.
Preference observers
If your preference is directly tied to something and you don't want to retrieve its value every time, you can use observers. You do this by deriving from the Inkscape::Preferences::Observer inner class. Here is an example on how to do it:
class MyObserver: public Inkscape::Preferences::Observer { public: MyObserver() : Inkscape::Preferences::Observer("/tools/select/bar") {} virtual void notify(Inkscape::Preferences::Entry const &value) { double new_value = value.getDouble(); // ... // do something with you value here // ... } };
You can set observers on any point of the hierarchy - both keys and individual entries. This way, the notify method will be called every time your preference is updated. If you want to change the preference from its handler, be sure to guard against infinite recursion (or ideally don't do this, because it's not a good practice). Inkscape::Preferences::Entry has more public methods that essentially mirror those of the Preferences class, but don't take a path parameter. To get the preference's path, use getPath(). To get the preference's base name (i.e. the last element of its path), use getEntryName().
NOTE: In future, this mechanism may be obsoleted in favor of sigc++ signals.
Guard against screw-ups
You should not assume that the values in the prefs file were written there by your code - they may have been very well directly tweaked by the user. If your code will crash when an out-of-bounds value is returned, you should use the methods getIntLimited and getDoubleLimited. Do not use these to store boolean preferences as integers limited to 0 and 1 - use getBool and setBool for that.
Add UI
In ui/dialogs/inkscape-preferences.cpp is the code that attaches a GUI to any given preference. If your option doesn't already have a logical place to go, add it under the "Misc" section. For example, to add an integer-selecting spin-button (sb) you could add to inkscape-preferences.h the line
PrefSpinButton _steps_arrow,
And add to inkscape-preferences.cpp the lines:
_steps_arrow.init ( "/tools/select/bar", 0.0, 3000.0, 0.01, 1.0, 2.0, false, false); _page_steps.add_line( false, _("Arrow keys move by:"), _steps_arrow, _("px"), _("Pressing an arrow key moves selected object(s) or node(s) by this distance (in px units)"), false);
The next section of this tutorial will be devoted to how a preference value can be edited in an options dialog. See related page: PreferencesDialog.
Discuss below:
I also think we should consider having a global preferences file in /etc/inkscape/preferences.xml, and similarly with markers.svg, that are installed during program installation, and merged with or overwritten by the user's settings. This is standard UNIX config file strategy. - bryce
Njh wonders whether you would be better off using gconf?
Maybe, but I'm not in a position to decide. Right now I'm interested in how I can use the existing preferences system to store and retrieve a value. -- bb
Looking further, it appears that gconf is indeed a similar XML system, so at some point it should be easy enough to switch over. --njh
Mental and I were discussing GConf, and he brought up the issue of Win32 compatibility. It would be nice if we could abstract a preferences interface that would work with both the windows registry and GConf. I think they have similar interfaces - so it shouldn't be impossible --ted
- Please do not use the Windows registry. The beauty of having the preferences file is that Inkscape does not need an install or admin rights, and does not litter the OS (at least, i think that is what the registry does). It is very easy for a user that has problems to delete his preferences.xml and download a new one; much easier than fiddling in the registry. - Johan
If this is going to be done, why not also look into implementing Apple's .plist support, it's a simple xml format defined: These files are stored in ~/Library/Preferences/tld.<company>.<product>.plist - tom
Well, if we're looking around at existing approaches, the discussion probably wouldn't be complete without taking a look at the new Java preferences API added in 1.4. They currently store to a dir/dir/xml tree on Linux, and to the registry on Windows, so they do that exact abstraction we're looking at. (funny thing is, for some apps, I wrote a layer to abstract that for running in Java VM's prior to 1.4 - double indirection). --jon
The prefs have been extensively refactored, so much of the above discussion is no longer relevant. Coding in plist support should be easier now if desired. GConf is also possible as a backend but it would be a pain for users that need to switch between KDE and Gnome, so on the mailing list we decided against it. On top of that, it seems like GConf will be obsoleted in favor of the (hopefully desktop-neutral) dconf in some near future. -- Krzysztof Kosiński (tweenk) | https://wiki.inkscape.org/wiki/index.php?title=Preferences_subsystem&oldid=76004 | CC-MAIN-2020-10 | refinedweb | 1,692 | 64.1 |
Query:
I have a dictionary of values read from two fields in a database: a string field and a numeric field. The string field is unique, so that is the key of the dictionary.
I can sort on the keys, but how can I sort based on the values?
How to sort a dictionary by value? Answer #1:
Python(sorted(x.items(), key=lambda item: item[1])) {0: 0, 2: 1, 1: 2, 4: 3, 3: 4}
Older Python))
In Python3 since unpacking is not allowed, we can use
x = {1: 2, 3: 4, 4: 3, 2: 1, 0: 0} sorted_x = sorted(x.items(), key=lambda kv: kv[1])
If you want the output as a dict, you can use
collections.OrderedDict:
import collections sorted_dict = collections.OrderedDict(sorted_x)
You might also want to go through this article for more information:
Answer #2:
As simple as:
sorted(dict1, key=dict1.get)
Well, it is actually possible to do a “sort by dictionary values”. original post was trying to address such an issue. And the solution is to do sort of list of the keys, based on the values, as shown above.
Answer #3:
You could use:
sorted(d.items(), key=lambda x: x[1]):
[('one', 1), ('two', 2), ('three', 3), ('four', 4), ('five', 5)]
Answer #4:
Dicts can’t be sorted, but you can build a sorted list from them.
A sorted list of dict values:
sorted(d.values())
A list of (key, value) pairs, sorted by value:
from operator import itemgetter sorted(d.items(), key=itemgetter(1))
Answer #5:
In recent Python 2.7, we have the new OrderedDict type, which remembers the order in which the items were added.
>>> d = {"third": 3, "first": 1, "fourth": 4, "second": 2} >>> for k, v in d.items(): ... print "%s: %s" % (k, v) ... second: 2 fourth: 4 third: 3 first: 1 >>> d {'second': 2, 'fourth': 4, 'third': 3, 'first': 1}
To make a new ordered dictionary from the original, sorting by the values:
>>> from collections import OrderedDict >>> d_sorted_by_value = OrderedDict(sorted(d.items(), key=lambda x: x[1]))
The OrderedDict behaves like a normal dict:
>>> for k, v in d_sorted_by_value.items(): ... print "%s: %s" % (k, v) ... first: 1 second: 2 third: 3 fourth: 4 >>> d_sorted_by_value OrderedDict([('first': 1), ('second': 2), ('third': 3), ('fourth': 4)])
Answer #6:
UPDATE: 5 DECEMBER 2015 using Python 3.5
Whilst I found the accepted answer useful, I was also surprised that it hasn’t been updated to reference OrderedDict from the standard library collections module as a viable, modern alternative – designed to solve exactly this type of problem.
from operator import itemgetter from collections import OrderedDict x = {1: 2, 3: 4, 4: 3, 2: 1, 0: 0} sorted_x = OrderedDict(sorted(x.items(), key=itemgetter(1))) # OrderedDict([(0, 0), (2, 1), (1, 2), (4, 3), (3, 4)])
The official OrderedDict documentation offers a very similar example too, but using a lambda for the sort function:
# regular unsorted dictionary d = {'banana': 3, 'apple':4, 'pear': 1, 'orange': 2} # dictionary sorted by value OrderedDict(sorted(d.items(), key=lambda t: t[1])) # OrderedDict([('pear', 1), ('orange', 2), ('banana', 3), ('apple', 4)])
Answer #7:
It can often be very handy to use namedtuple. For example, you have a dictionary of ‘name’ as keys and ‘score’ as values and you want to sort on ‘score’:
import collections Player = collections.namedtuple('Player', 'score name') d = {'John':5, 'Alex':10, 'Richard': 7}
sorting with lowest score first:
worst = sorted(Player(v,k) for (k,v) in d.items())
sorting with highest score first:
best = sorted([Player(v,k) for (k,v) in d.items()], reverse=True)
Now you can get the name and score of, let’s say the second-best player (index=1) very Pythonically like this:
player = best[1] player.name 'Richard' player.score 7
Answer #8:
This is the code:
import operator origin_list = [ {"name": "foo", "rank": 0, "rofl": 20000}, {"name": "Silly", "rank": 15, "rofl": 1000}, {"name": "Baa", "rank": 300, "rofl": 20}, {"name": "Zoo", "rank": 10, "rofl": 200}, {"name": "Penguin", "rank": -1, "rofl": 10000} ] print ">> Original >>" for foo in origin_list: print foo print "\n>> Rofl sort >>" for foo in sorted(origin_list, key=operator.itemgetter("rofl")): print foo print "\n>> Rank sort >>" for foo in sorted(origin_list, key=operator.itemgetter("rank")): print foo
Here are the results:
Original
{'name': 'foo', 'rank': 0, 'rofl': 20000} {'name': 'Silly', 'rank': 15, 'rofl': 1000} {'name': 'Baa', 'rank': 300, 'rofl': 20} {'name': 'Zoo', 'rank': 10, 'rofl': 200} {'name': 'Penguin', 'rank': -1, 'rofl': 10000}
Rofl
{'name': 'Baa', 'rank': 300, 'rofl': 20} {'name': 'Zoo', 'rank': 10, 'rofl': 200} {'name': 'Silly', 'rank': 15, 'rofl': 1000} {'name': 'Penguin', 'rank': -1, 'rofl': 10000} {'name': 'foo', 'rank': 0, 'rofl': 20000}
Rank
{'name': 'Penguin', 'rank': -1, 'rofl': 10000} {'name': 'foo', 'rank': 0, 'rofl': 20000} {'name': 'Zoo', 'rank': 10, 'rofl': 200} {'name': 'Silly', 'rank': 15, 'rofl': 1000} {'name': 'Baa', 'rank': 300, 'rofl': 20}
Answer #9:
Try the following approach. Let us define a dictionary called mydict with the following data:
mydict = {'carl':40, 'alan':2, 'bob':1, 'danny':3}
If one wanted to sort the dictionary by keys, one could do something like:
for key in sorted(mydict.iterkeys()): print "%s: %s" % (key, mydict[key])
This should return the following output:
alan: 2 bob: 1 carl: 40 danny: 3
On the other hand, if one wanted to sort a dictionary by value (as is asked in the question), one could do the following:
for key, value in sorted(mydict.iteritems(), key=lambda (k,v): (v,k)): print "%s: %s" % (key, value)
The result of this command (sorting the dictionary by value) should return the following:
bob: 1 alan: 2 danny: 3 carl: 40
Hope you learned something from this post.
Follow Programming Articles for more! | https://programming-articles.com/how-to-sort-a-dictionary-by-value-in-python-answered/ | CC-MAIN-2022-40 | refinedweb | 947 | 66.67 |
updated copyright year
1: \ environmental queries 2: 3: \ Copyright (C) 1995,1996,1997,1998,2000,2003,2007: [IFUNDEF] cell/ : cell/ 1 cells / ; [THEN] 21: [IFUNDEF] float/ : float/ 1 floats / ; [THEN] 22: 23: \ wordlist constant environment-wordlist 24: 25: vocabulary environment ( -- ) \ gforth 26: \ for win32forth compatibility 27: 28: ' environment >body constant environment-wordlist ( -- wid ) \ gforth 29: \G @i{wid} identifies the word list that is searched by environmental 30: \G queries. 31: 32: 33: : environment? ( c-addr u -- false / ... true ) \ core environment-query 34: \G @i{c-addr, u} specify a counted string. If the string is not 35: \G recognised, return a @code{false} flag. Otherwise return a 36: \G @code{true} flag and some (string-specific) information about 37: \G the queried string. 38: environment-wordlist search-wordlist if 39: execute true 40: else 41: false 42: endif ; 43: 44: : e? name environment? 0= ABORT" environmental dependency not existing" ; 45: 46: : $has? environment? 0= IF false THEN ; 47: 48: : has? name $has? ; 49:. 57: 8 constant ADDRESS-UNIT-BITS ( -- n ) \ environment 58: \G Size of one address unit, in bits. 59: 60: 1 ADDRESS-UNIT-BITS chars lshift 1- constant MAX-CHAR ( -- u ) \ environment 61: \G Maximum value of any character in the character set 62: 63: MAX-CHAR constant /COUNTED-STRING ( -- n ) \ environment 64: \G Maximum size of a counted string, in characters. 65: 79: \G True if @code{/} etc. perform floored division 94: \G Counted string representing a version string for this version of 95: \G Gforth (for versions>0.3.0). The version strings of the various 96: \G versions are guaranteed to be ordered lexicographically. 97: 98: : return-stack-cells ( -- n ) \ environment 99: \G Maximum size of the return stack, in cells. 100: [ forthstart 6 cells + ] literal @ cell/ ; 101: 102: : stack-cells ( -- n ) \ environment 103: \G Maximum size of the data stack, in cells. 104: [ forthstart 4 cells + ] literal @ cell/ ; 105: 106: : floating-stack ( -- n ) \ environment 107: \G @var{n} is non-zero, showing that Gforth maintains a separate 108: \G floating-point stack of depth @var{n}. 109: [ forthstart 5 cells + ] literal @ 110: [IFDEF] float/ float/ [ELSE] [ 1 floats ] Literal / [THEN] ; 111: 112: 15 constant #locals \ 1000 64 / 113: \ One local can take up to 64 bytes, the size of locals-buffer is 1000 114: maxvp constant wordlists 115: 116: forth definitions 117: previous 118: | https://www.complang.tuwien.ac.at/cvsweb/cgi-bin/cvsweb/gforth/environ.fs?hideattic=0;sortby=rev;f=h;only_with_tag=MAIN;content-type=text%2Fx-cvsweb-markup;ln=1;rev=1.36 | CC-MAIN-2022-21 | refinedweb | 391 | 51.48 |
The XML Diff and Patch GUI Tool
Amol Kher
Microsoft Corporation
July 2004
Applies to:
the XML Diff and Patch GUI tool
Summary: This article shows how to use the XmlDiff class to compare two XML files and show these differences as an HTML document in a .NET Framework 1.1 application. The article also shows how to build a WinForms application for comparing XML files.
Contents
Introduction
An Overview of the XML Diff and Patch API
XML Diff and Patch Meets Winforms
Working with XML DiffGrams
Other Features of the XML Diff and Patch Tool
Introduction
There is no good command line tool that can be used to compare two XML files and view the differences. There is an online tool called XML Diff and Patch that's available on the GotDotNet website under the XML Tools section. For those who have not, you can find it at Microsoft XML Diff and Patch 1.0. It is a very convenient tool for those who want to compare the difference between two XML files. Comparing XML files is different from comparing regular text files because one wants to compare logical differences in the XML nodes not just differences in text. For example one may want to compare XML documents and ignore white space between elements, comments or processing instructions. The XML Diff and Patch tool allows one to perform such comparisons but it is primarily available as an online web application. We cannot take this tool and use it from command line.
This article focuses on developing a command-line tool by reusing code from the XML Diff and Patch installation and samples. The tool works very similar to the WinDiff utility; it presents the differences in a separate window and highlights them.
The XML Diff and Patch tool contains a library that contains an XmlDiff class, which can be used to compare two XML documents. The Compare method on this class takes two files and either returns true, if the files are equal, or generates an output file called an XML diffgram containing a list of differences between the files. The XmlDiff class can be supplied an options class XmlDiffOptions that can be used to set the various options for comparing files.
An Overview of the XML Diff and Patch API
The XmlDiff class implements a Compare method.
This method is the one we use, though there are other overloads that take the filepath directly. The XmlDiffOptions enumeration has all the Ignore{*} options. You can set this enumeration on the XmlDiff class using the Options property.
So much for a quick primer! I think we are ready to understand our simple app. To understand this article you should have an idea of the XmlDiff class and XmlDiffOptions
XML Diff and Patch Meets Winforms
We built a small Windows application, which comprises two forms. One form prompts the user to specify two files, and the other form hosts an Internet Explorer control, which displays the highlighted differences side-by-side between the two files, similar to any other file compare tool we know.
The UI design is kept very simple and hence usable, since that's not the part we are focusing in this article. You can always download the code and make it more usable for yourself. The idea of this article is to demo the XmlDiff code and show the differences in a nice IE control. Figure 1 shows what our main screen looks like.
Figure 1. The main screen
The File menu has an Exit command.
The Diff Options menu allows the user to select the options that will directly be passed on to the Compare method, which uses the XmlDiffOptions enumeration. This keeps the utility simple and easy to understand. The following screen shot shows the options available. These are directly mapped to the XmlDiffOptions object.
Figure 2. Available options
- Here's a quick primer on what some of the options look like and what they mean. For more detailed information visit the Diff and Patch Overview.
- Ignore Processing instructions: Do not compare Processing instructions. Thus <a></a> and <a><?somepi?></a> are both considered equal.
- Ignore white spaces (normalize text values): Do not compare white space. This means insignificant white space. White space marked by xml:space="preserve" will be compared. But white space after element tags or any such possibly insignificant white space will be ignored. Thus <root><a/></root> and <root>\n<a/>\n</root> are both equal.
- Ignore prefixes: The prefixes of element and attribute names are not compared. When this option is selected then two names that have the same local name and namespace URI but with a different prefix are treated as the same names. The following two XML would be considered equal when this option is set. <a xmlns:<ns1:child/></a> and <a xmlns:<ns2:child/></a> are equal.
- Ignore Namespaces: The namespace URIs of the element and attribute names are not compared. This option also implies that the name prefixes are ignored. When this option is selected then two names with the same local name but a different namespace URI and prefix are treated as the same names. Thus <a xmlns:<ns1:child/></a> and <a xmlns:<ns2:child/></a> are equal under this option.
- Ignore Child Order: The order of child nodes of each element is ignored. When this option is selected then two nodes with the same value that differ only by their position among sibling child nodes are treated as the same nodes. Thus <a><b/><c/></a> and <a><c/><b/></a> are equal.
The following is the basic control flow of the application.
When the user clicks the Compare button the following actions take place.
- Both the input files are verified to exist, since they could have been entered by hand and hence the path may be wrong.
- The XmlDiffOptions enumeration is set using the values of the checked items on the Diff Options Menu drop-down. This is done using a SetDiffOptions method.
- DoCompare is called which compares two files.
- The two files are compared and the diffgram is written out to a temporary file (vxd.out). This file is used to figure out the differences.
- The samples code we mentioned earlier is called to figure out the differences. This code takes the original file and the diffgram file as inputs and generates the output, which consists of rows (HTML encoded) that show the side by side differences of the two files compared.
- HTML is written out to a temporary file and displayed in the IE Control in a separate window. This HTML shows the Diff in the desired manner.
Working with XML DiffGrams
Before we move on to the samples code that gives us our HTML, we should discuss what the diffgram looks like. DiffGram doesn't really tell us the visual differences; it isn't the actual differences file. What it does tell us is that given a file A and a diffgram file, you can get to file B by applying the patches specified in the diffgram. In other words, the diffgram shows us how to incrementally build the target file, which is the file we compared against originally. The diffgram itself is written in XML, which can be parsed and used to apply on the original file to get the target file. The diffgram code consists of tags such as add, remove, and change. For more information on the diffgram tags look at this Diff Language page. See the following sample taken from the XML Diff Patch site. The concept would be similar to XPath users. Every tag has a match attribute which works like a select operation. It allows you to move to a specific location in the original file. The other tags then work relative to the position you are placed at. So for instance, match="2" would mean go to the second child node from this location. An add tag adds specific text or markup while a remove tag removes specific text or markup. There are other helper tags such as change, which is used to update the contents.
<?xml version="1.0" encoding="utf-16"?> <xd:xmldiff <xd:node <xd:change <xd:node <xd:add> <e>Some text 4</e> <f>Some text 5</f> </xd:add> <xd:node <xd:changeChanged text</xd:change> <xd:remove </xd:node> <xd:node <xd:remove <xd:addnew value</xd:add> <xd:changechanged attribute value</xd:change> </xd:node> <xd:remove <xd:add <xd:add <xd:add </xd:add> </xd:add> </xd:node> <xd:descriptor </xd:xmldiff>
As you can see, parsing this code and applying the changes specified in the diffgram is not trivial. However, thankfully we don't have to do all that ourselves. The XmlDiff and Patch utility ships with samples code that does all this work for us. It can be found in the Samples\XmlDiffView directory. We compiled that source code and then copied the generated library (XmlDiffPath.View.dll) out to our directory to reuse and link to it. It contains one class called XmlDiffView. XmlDiffView has a method called Load, which takes the original XML file and the DiffGram file. Load internally loads the original file and applies the diffgram patches to it to reach the target file. While doing so, it also stores the HTML required to show the differences in two columns for each line that was read. The desired output HTML is got by invoking the GetHTML method, which takes a TextWriter to write the HTML.
For the interested reader, the bulk of parsing work is done in a private method found in XmlDiffView.cs file called ApplyDiffgram. I am quoting it here to see what's going on.
private void ApplyDiffgram( XmlNode diffgramParent, XmlDiffViewParentNode sourceParent ) { sourceParent.CreateSourceNodesIndex(); XmlDiffViewNode currentPosition = null; IEnumerator diffgramChildren=diffgramParent.ChildNodes.GetEnumerator(); while ( diffgramChildren.MoveNext() ) { XmlNode diffgramNode = (XmlNode)diffgramChildren.Current; if ( diffgramNode.NodeType == XmlNodeType.Comment ) continue; XmlElement diffgramElement = diffgramChildren.Current as XmlElement; if ( diffgramElement == null ) throw new Exception( "Invalid node in diffgram." ); if ( diffgramElement.NamespaceURI != XmlDiff.NamespaceUri ) throw new Exception( "Invalid element in diffgram." ); string matchAttr = diffgramElement.GetAttribute( "match" ); XmlDiffPathNodeList matchNodes = null; if ( matchAttr != string.Empty ) matchNodes = XmlDiffPath.SelectNodes( _doc, sourceParent, matchAttr ); switch ( diffgramElement.LocalName ) { case "node": if ( matchNodes.Count != 1 ) throw new Exception( "The 'match' attribute of 'node' element must select a single node." ); matchNodes.MoveNext(); if ( diffgramElement.ChildNodes.Count > 0 ) ApplyDiffgram( diffgramElement, (XmlDiffViewParentNode)matchNodes.Current ); currentPosition = matchNodes.Current; break; case "add": if ( matchAttr != string.Empty ) { OnAddMatch( diffgramElement, matchNodes, sourceParent, ref currentPosition ); } else { string typeAttr = diffgramElement.GetAttribute( "type" ); if ( typeAttr != string.Empty ) { OnAddNode( diffgramElement, typeAttr, sourceParent, ref currentPosition ); } else { OnAddFragment( diffgramElement, sourceParent, ref currentPosition ); } } break; case "remove": OnRemove( diffgramElement, matchNodes, sourceParent, ref currentPosition ); break; case "change": OnChange( diffgramElement, matchNodes, sourceParent, ref currentPosition ); break; } } }
The main objective here is to get the current node from the diffgram, and based on the action specified we do, either add, remove, or change operation which is called out by the different case statements.
And that's it. Believe it or not, all we did was piece all of these things together much the same way it was done on the online tool to generate the output file. There remained the small matter of displaying the HTML in an IE Control.
Given below is the code that we overviewed earlier to generate the diffgram and then the output file. The source code download attached to this article contains the full implementation.
public void DoCompare(string file1, string file2) { Random r = new Random(); //to randomize the output files and hence allow //us to generate multiple files for the same pair //of comparisons. string startupPath = Application.StartupPath; //output diff file. diffFile = startupPath + Path.DirectorySeparatorChar + "vxd.out"; XmlTextWriter tw=new XmlTextWriter(new StreamWriter(diffFile) ); tw.Formatting = Formatting.Indented; //This method sets the diff.Options property. SetDiffOptions(); bool isEqual = false; //Now compare the two files. try { isEqual = diff.Compare( file1, file2, compareFragments, tw); } catch ( XmlException xe ) { MessageBox.Show( "An exception occured while comparing\n" + xe.StackTrace ); } finally { tw.Close(); } if ( isEqual ) { //This means the files were identical for given options. MessageBox.Show ( "Files Identical for the given options"); return; //dont need to show the differences. } //Files were not equal, so construct XmlDiffView. XmlDiffView dv = new XmlDiffView(); //Load the original file again and the diff file. XmlTextReader orig = new XmlTextReader( file1 ); XmlTextReader diffGram = new XmlTextReader( diffFile ); dv.Load( orig, diffGram ); //Wrap the HTML file with necessary html and //body tags and prepare it before passing it to //the GetHtml method. string tempFile = startupPath + Path.DirectorySeparatorChar + "diff" + r.Next() + ".htm"; StreamWriter sw1 = new StreamWriter( tempFile ); //Wrapping sw1.Write("<html><body><table>"); sw1.Write("<tr><td><b>"); sw1.Write(textBox1.Text); sw1.Write("</b></td><td><b>"); sw1.Write(textBox2.Text); sw1.Write("</b></td></tr>"); //This gets the differences but just has the //rows and columns of an HTML table dv.GetHtml( sw1 ); //Finish wrapping up the generated HTML and //complete the file by putting legend in the end just like the //online tool. sw1.Write("<tr><td><b>Legend:</b> <font style='background-color: yellow'" + " color='black'>added</font> <font style='background-color: red'"+ "color='black'>removed</font> <font style='background-color: "+ "lightgreen' color='black'>changed</font> "+ "<font style='background-color: red' color='blue'>moved from</font>"+ " <font style='background-color: yellow' color='blue'>moved to"+ "</font> <font style='background-color: white' color='#AAAAAA'>"+ "ignored</font></td></tr>"); sw1.Write("</table></body></html>"); //HouseKeeping...close everything we dont want to lock. sw1.Close(); dv = null; orig.Close(); diffGram.Close(); File.Delete ( diffFile ); //Open the IE Control window and pass it //the HTML file we created. Browser b = new Browser( tempFile ); b.Show(); //Display it! //Done! }
As you can see, we use the XmlDiff object to compare the two files (try catch block). XmlDiff takes a StreamWriter to write out the diffgram text. This diff file and the original file are then loaded by the Load method into the XmlDiffView object if the files are not equal (isEqual flag). We preformat the output HTML with the required leading HTML tags. The HTML returned by GetHTML contains only the rows and columns of the two files. So we wrap that HTML with the complete and correct html tags that can be loaded in any Web browser.
Other Features of the XML Diff and Patch Tool
Since the tool is built out of modules, modules can be easily replaced and recompiled. If you think of a more efficient way of parsing the diffgram, you can plug that in and use it to generate the output. Also the output currently is directly put to an IE Plug-in through a temporary file. If required, this can be stored out to a permanent file.
I hope you find this utility useful for comparing XML files and working with XmlDiff easier in future. | https://msdn.microsoft.com/en-us/library/aa302295.aspx | CC-MAIN-2015-18 | refinedweb | 2,465 | 57.16 |
What is Recursion?
A recursion is a function which calls itself directly or indirectly over a defined boundary to generate user expected results.
Some common problems for recursion are Fibonacci series, Factorial of an integer and Tower of Hanoi
Recursion Example
#include <stdio.h> // T(n) = Θ(n) // Aux space = Θ(n) int getFactorial(int n) { if(n==0 || n==1) return 1; return n*getFactorial(n-1); } int main() { int n, res; scanf("%d", &n); res = getFactorial(n); printf("%d", res); return 0; }
Test case
Input 4 Output 24
Head recursion
If a recursion has code statements to be executed after function call then it is a Head recursion. Head recursions are generally hard to convert into loop statements.
Example
void fun(int n) { if(n==0) return 0; fun(n-1); printf("%d", n); // Post recursive operation }
Tail recursion
Tail recursions will not have any code statements after function calls and are generally at the end of the function declaration. Tail recursions are easy to convert into loop statements.
Example
void fun(int n) { if(n==0) return 0; printf("%d", n); fun(n-1); }
Which is better?
Generally, tail recursions are always better. Even though they both have same time complexity and Auxiliary space, tail recursions takes an edge in terms of memory in function stack. Head recursions will wait in function stack memory until the post recursion code statements are executed which causes a latency in overall results, whereas tail recursions will be terminated in function stack over execution.
That's it
Thanks for reading!! If you have any questions about the post feel free to leave a comment below.
Follow me on twitter: @soorya_54
Discussion (6)
@soorya54 Does
return n*getFactorial(n-1);make it a head recursion?
Loved it.
Yes, @imthedarkclown . A better optimization for this will be
How about discussing trampolines for languages that do not have tail-call optimisation?
Sure, will add that to my bucket list of topics for future posts.
Explaining things in a better way is harder than the things we actually explain. Got to know more about recursions.
Great read | https://practicaldev-herokuapp-com.global.ssl.fastly.net/soorya54/head-recursion-vs-tail-recursion-22o3 | CC-MAIN-2021-25 | refinedweb | 352 | 53.41 |
.
The expression of groups as first-class program objects improves software composition: collective functions can take an explicit argument representing the group of participating threads. Consider a library function that imposes requirements on its caller. Explicit groups make these requirements explicit, reducing the chances of misusing the library function. Explicit groups and synchronization help make code less brittle, reduce restrictions on compiler optimization, and improve forward compatibility.
The Cooperative Groups programming model consists of the following elements:
- Data types representing groups of cooperating threads and their properties;
- Intrinsic groups defined by the CUDA launch API (e.g., thread blocks);
- Group partitioning operations;
- A group barrier synchronization operation;
- Group-specific collectives.
Cooperative Groups Fundamentals
At its simplest, Cooperative Groups is an API for defining and synchronizing groups of threads in a CUDA program. Much of the Cooperative Groups (in fact everything in this post) works on any CUDA-capable GPU compatible with CUDA 9. Specifically, that means Kepler and later GPUs (Compute Capability 3.0+).
To use Cooperative Groups, include its header file.
#include <cooperative_groups.h>
Cooperative Groups types and interfaces are defined in the
cooperative_groups C++ namespace, so you can either prefix all names and functions with
cooperative_groups::, or load the namespace or its types with
using directives.
using namespace cooperative_groups; // or... using cooperative_groups::thread_group; // etc.
It’s not uncommon to alias it to something shorter. Assume the following namespace alias exists in the examples in this post.
namespace cg = cooperative_groups;
Code containing any intra-block Cooperative Groups functionality can be compiled in the normal way using
nvcc (note that many of the examples in this post use C++11 features so you need to add the
--std=c++11 option to the compilation command line).
Thread Groups
The fundamental type in Cooperative Groups is
thread_group, which is a handle to a group of threads. The handle is only accessible to members of the group it represents. Thread groups expose a simple interface. You can get the size (total number of threads) of a group with the
size() method:
unsigned size();
To find the index of the calling thread (between
0 and
size()-1) within the group, use the
thread_rank() method:
unsigned thread_rank();
Finally, you can check the validity of a group using the
is_valid() method.
bool is_valid();
Thread Group Collective Operations
Thread groups provide the ability to perform collective operations among all threads in a group. Collective operations, or simply collectives, are operations that need to synchronize or otherwise communicate amongst a specified set of threads. Because of the need for synchronization, every thread that is identified as participating in a collective must make a matching call to that collective operation. The simplest collective is a barrier, which transfers no data and merely synchronizes the threads in the group. Synchronization is supported by all thread groups. As you’ll learn later in this post, some group types support other collectives.
You can synchronize a group by calling its collective
sync() method, or by calling the
cooperative_groups::sync() function. These perform barrier synchronization among all threads in the group (Figure 2).
g.sync(); // synchronize group g cg::synchronize(g); // an equivalent way to synchronize g
Here’s a simple example of a parallel reduction device function written using Cooperative Groups. When the threads of a group call it, they cooperatively compute the sum of the values passed by each thread in the group (through the
val argument).
using namespace cooperative_groups; __device__ int reduce_sum(thread_group g, int *temp, int val) { int lane = g.thread_rank(); // Each iteration halves the number of active threads // Each thread adds its partial sum[i] to sum[lane+i] for (int i = g.size() / 2; i > 0; i /= 2) { temp[lane] = val; g.sync(); // wait for all threads to store if(lane<i) val += temp[lane + i]; g.sync(); // wait for all threads to load } return val; // note: only thread 0 will return full sum }
Now let’s look at how to create thread groups.
Thread Blocks
If you have programmed with CUDA before, you are familiar with thread blocks, the fundamental unit of parallelism in a CUDA program. Cooperative Groups introduces a new datatype,
thread_block, to explicitly represent this concept within the kernel. An instance of
thread_block is a handle to the group of threads in a CUDA thread block that you initialize as follows.
thread_block block = this_thread_block();
As with any CUDA program, every thread that executes that line has its own instance of the variable
block. Threads with the same value of the CUDA built-in variable
blockIdx are part of the same thread block group.
Synchronizing a
thread_block group is much like calling
__syncthreads(). The following lines of code all do the same thing (assuming all threads of the thread block reach them).
__syncthreads(); block.sync(); cg::synchronize(block); this_thread_block().sync(); cg::synchronize(this_thread_block());
The
thread_block data type extends the
thread_group interface with the following block-specific methods.
dim3 group_index(); // 3-dimensional block index within the grid dim3 thread_index(); // 3-dimensional thread index within the block
These are equivalent to CUDA’s
blockIdx and
threadIdx, respectively.
Here’s a simple kernel that uses the
reduce_sum() device function to compute the sum of all values in an input array. It starts by computing many partial sums in parallel in
thread_sum(), where each thread strides through the array computing a partial sum (and uses vector loads for higher memory access efficiency). The kernel then uses
thread_block groups for cooperative summation, and
atomicAdd() to combine the block sums.
__device__ int thread_sum(int *input, int n) { int sum = 0; for(int i = blockIdx.x * blockDim.x + threadIdx.x; i < n / 4; i += blockDim.x * gridDim.x) { int4 in = ((int4*)input)[i]; sum += in.x + in.y + in.z + in.w; } return sum; } __global__ void sum_kernel_block(int *sum, int *input, int n) { int my_sum = thread_sum(input, n); extern __shared__ int temp[]; auto g = this_thread_block(); int block_sum = reduce_sum(g, temp, my_sum); if (g.thread_rank() == 0) atomicAdd(sum, block_sum); }
We can launch this function to compute the sum of a 16M-element array like this.
int n = 1<<24; int blockSize = 256; int nBlocks = (n + blockSize - 1) / blockSize; int sharedBytes = blockSize * sizeof(int); int *sum, *data; cudaMallocManaged(&sum, sizeof(int)); cudaMallocManaged(&data, n * sizeof(int)); std::fill_n(data, n, 1); // initialize data cudaMemset(sum, 0, sizeof(int)); sum_kernel_block<<<nBlocks, blockSize, sharedBytes>>>(sum, data, n);
Partitioning Groups
Cooperative Groups provides you the flexibility to create new groups by partitioning existing groups. This enables cooperation and synchronization at finer granularity. The
cg::tiled_partition() function partitions a thread block into multiple “tiles”. Here’s an example that partitions each whole thread block into tiles of 32 threads.
thread_group tile32 = cg::partition(this_thread_block(), 32);
Each thread that executes the partition will get a handle (in
tile32) to one 32-thread group. 32 is a common choice, because it corresponds to a warp: the unit of threads that are scheduled concurrently on a GPU streaming multiprocessor (SM).
Here’s another example where we partition into groups of four threads.
thread_group tile4 = tiled_partition(tile32, 4);
The
thread_group objects returned by
tiled_partition() are just like any thread group. So, for example, we can do things like this:
if (tile4.thread_rank()==0) printf("Hello from tile4 rank 0: %d\n", this_thread_block().thread_rank());
Every fourth thread will print, as in the following.
Hello from tile4 rank 0: 0 Hello from tile4 rank 0: 4 Hello from tile4 rank 0: 8 Hello from tile4 rank 0: 12 ...
Modularity
The real power of Cooperative Groups lies in the modularity that arises when you can pass a group as an explicit parameter to a function and depend on a consistent interface across a variety of thread group sizes. This makes it harder to inadvertently cause race conditions and deadlock situations by making invalid assumptions about which threads will call a function concurrently. Let’s me show you an example.
__device__ int sum(int *x, int n) { ... __syncthreads(); ... return total; } __global__ void parallel_kernel(float *x, int n) { if (threadIdx.x < blockDim.x / 2) sum(x, count); // error: half of threads in block skip // __syncthreads() => deadlock }
In the preceding code example, a portion of the threads of each block call
sum(), which calls
__syncthreads(). Since not all threads in the block reach the
__syncthreads(), there is a deadlock situation, since
__syncthreads() invokes a barrier which waits until all threads of the block reach it. Without knowing the details of the implementation of a library function like
sum(), this is an easy mistake to make.
The following code uses Cooperative Groups to require that a thread block group be passed into the call. This makes that mistake much harder to make.
// Now much clearer that a whole thread block is expected to call __device__ int sum(thread_block block, int *x, int n) { ... block.sync(); ... return total; } __global__ void parallel_kernel(float *x, int n) { sum(this_thread_block(), x, count); // no divergence around call }
In the first (incorrect) example, the caller wanted to use fewer threads to compute
sum(). The modularity enabled by Cooperative Groups means that we can apply the same reduction function to a variety of group sizes. Here’s another version of our sum kernel that uses tiles of 32 threads instead of whole thread blocks. Each tile does a parallel reduction—using the same
reduce_sum() function as before—and then atomically adds its result to the total.
__global__ void sum_kernel_32(int *sum, int *input, int n) { int my_sum = thread_sum(input, n); extern __shared__ int temp[]; auto g = this_thread_block(); auto tileIdx = g.thread_rank() / 32; int* t = &temp[32 * tileIdx]; auto tile32 = tiled_partition(g, 32); int tile_sum = reduce_sum(tile32, t, my_sum); if (tile32.thread_rank() == 0) atomicAdd(sum, tile_sum); }
Optimizing for the GPU Warp Size
Cooperative Groups provides an alternative version of
cg::tiled_partition() that takes the tile size as a template parameter, returning a statically sized group called a
thread_block_tile. Knowing the tile size at compile time provides the opportunity for better optimization. Here are two static tiled partitions that match the two examples given previously.
thread_block_tile<32> tile32 = tiled_partition<32>(this_thread_block()); thread_block_tile<4> tile4 = tiled_partition<4> (this_thread_block());
We can use this to slightly optimize our tiled reduction so that when passed a statically sized
thread_block_tile the inner loop will be unrolled.
template <typename group_t> __device__ int reduce_sum(group_t g, int *temp, int val) { int lane = g.thread_rank(); // Each iteration halves the number of active threads // Each thread adds its partial sum[i] to sum[lane+i] #pragma unroll for (int i = g.size() / 2; i > 0; i /= 2) { temp[lane] = val; g.sync(); // wait for all threads to store if (lane < i) val += temp[lane + i]; g.sync(); // wait for all threads to load } return val; // note: only thread 0 will return full sum }
Also, when the tile size matches the hardware warp size, the compiler can elide the synchronization while still ensuring correct memory instruction ordering to avoid race conditions. Intentionally removing synchronizations is an unsafe technique (known as implicit warp synchronous programming) that expert CUDA programmers have often used to achieve higher performance for warp-level cooperative operations. Always explicitly synchronize your thread groups, because implicitly synchronized programs have race conditions.
For parallel reduction, which is bandwidth bound, this code is not significantly faster on recent architectures than the non-static
tiled_partition version. But it demonstrates the mechanics of statically sized tiles which can be beneficial in more computationally intensive uses.
Warp-Level Collectives
Thread Block Tiles also provide an API for the following warp-level collective functions:
.shfl()
.shfl_down()
.shfl_up()
.shfl_xor()
.any()
.all()
.ballot()
.match_any()
.match_all()
These operations are all primitive operations provided by NVIDIA GPUs starting with the Kepler architecture (Compute Capability 3.x), except for match_any() and match_all(), which are new in the Volta architecture (Compute Capability 7.x).
Using
thread_block_tile::shfl_down() to simplify our warp-level reduction does benefit our code: it simplifies it and eliminates the need for shared memory.
template <int tile_sz> __device__ int reduce_sum_tile_shfl(thread_block_tile<tile_sz> g, int val) { // Each iteration halves the number of active threads // Each thread adds its partial sum[i] to sum[lane+i] for (int i = g.size() / 2; i > 0; i /= 2) { val += g.shfl_down(val, i); } return val; // note: only thread 0 will return full sum } template<int tile_sz> __global__ void sum_kernel_tile_shfl(int *sum, int *input, int n) { int my_sum = thread_sum(input, n); auto tile = tiled_partition<tile_sz>(this_thread_block()); int tile_sum = reduce_sum_tile_shfl<tile_sz>(tile, my_sum); if (tile.thread_rank() == 0) atomicAdd(sum, tile_sum); }
Discovering Thread Concurrency
In the GPU’s SIMT (Single Instruction Multiple Thread) architecture, the GPU streaming multiprocessors (SM) execute thread instructions in groups of 32 called warps. The threads in a SIMT warp are all of the same type and begin at the same program address, but they are free to branch and execute independently. At each instruction issue time, the instruction unit selects a warp that is ready to execute and issues its next instruction to the warp’s active threads. The instruction unit applies an active mask to the warp to ensure that only threads that are active issue the instruction. Individual threads in a warp may be inactive due to independent branching in the program.
Thus, when data-dependent conditional branches in the code cause threads within a warp to diverge, the SM disables threads that don’t take the branch. The threads that remain active on the path are referred to as coalesced.
Cooperative Groups provides the function
coalesced_threads() to create a group comprising all coalesced threads:
coalesced_group active = coalesced_threads();
As an example, consider the following thread that creates a
coalesced_group inside a divergent branch taken only by odd-numbered threads. Odd-numbered threads within a warp will be part of the same
coalesced_group, and thus they can be synchronized by calling
active.sync().
auto block = this_thread_block(); if (block.thread_rank() % 2) { coalesced_group active = coalesced_threads(); ... active.sync(); }
Keep in mind that since threads from different warps are never coalesced, the largest group that
coalesced_threads() can return is a full warp.
It’s common to need to work with the current active set of threads, without making assumptions about which threads are present. This is necessary to ensure modularity of utility functions that may be called in different situations but which still want to coordinate the activities of whatever threads happen to be active.
A good example is “warp-aggregated atomics”., it can be used as a drop-in replacement for
atomicAdd(). You can see the full details of warp-aggregated atomics in this NVIDIA Developer Blog post.
The key to correct operation of warp aggregation is in electing a thread from the warp to perform the atomic add. To enable use inside divergent branches, warp aggregation can’t just pick thread zero of the warp because it might be inactive in the current branch. Instead, as the blog post explains, warp intrinsics can be used to elect the first active thread in the warp.
The Cooperative Groups
coalesced_group type makes this trivial, since its
thread_rank() method ranks only threads that are part of the group. This enables a simple implementation of warp-aggregated atomics that is robust and safe to use on any GPU architecture. The
coalesced_group type also supports warp intrinsics like
shfl(), use in the following code to broadcast to all threads in the group.
__device__ int atomicAggInc(int *ptr) { cg::coalesced_group g = cg::coalesced_threads(); int prev; // elect the first active thread to perform atomic add if (g.thread_rank() == 0) { prev = atomicAdd(ptr, g.size()); } // broadcast previous value within the warp // and add each active thread’s rank to it prev = g.thread_rank() + g.shfl(prev, 0); return prev; }
We hope that after reading this introduction you are as excited as we are about the possibilities of flexible and explicit groups of cooperating threads for sophisticated GPU algorithms. To get started with Cooperative Groups today, download the CUDA Toolkit version 9 or higher from. The toolkit includes various examples that use Cooperative Groups.
But we haven’t covered everything yet! New features in Pascal and Volta GPUs help Cooperative Groups go farther, by enabling creation and synchronization of thread groups that span an entire kernel launch running on one or even multiple GPUs. In a follow-up post we plan to cover the
grid_group and
multi_grid_group types and the
cudaLaunchCooperative* APIs that enable them. Stay tuned. | https://devblogs.nvidia.com/cooperative-groups/ | CC-MAIN-2020-24 | refinedweb | 2,702 | 52.29 |
Dynamic Graphs
Published on January 11, 2019 under the tag haskell
TL;DR: Alex Lang and I recently “finished” a library for dealing with dynamic graphs. The library focuses on the dynamic connectivity problem in particular, although it can do some other things as well.
The story of this library began with last year’s ICFP contest. For this contest, the goal was to build a program that orchestrates a number of nanobots to build a specific minecraft-like structure, as efficiently as possible. I was in Japan at the time, working remotely from the Tsuru Capital office, and a group of them decided to take part in this contest.
I had taken part in the 2017 ICFP contest with them, but this year I was not able to work on this at all since the ICFP contest took place in the same weekend as my girlfriends’ birthday. We went to Fujikawaguchiko instead – which I would recommend to anyone interested in visiting the Fuji region. I ended up liking it more than Hakone, where I was a year or two ago.
Anyway, after the contest we were discussing how it went and Alex thought a key missing piece for them was a specific algorithm called dynamic connectivity. Because this is not a trivial algorithm to put together, we ended up using a less optimal version which still contained some bugs. In the weeks after the contest ended Alex decided to continue looking into this problem and we ended up putting this library together.
The dynamic connectivity problem is very simply explained to anyone who is at least a little familiar with graphs. It comes down to building a datastructure that allows adding and removing edges to a graph, and being able to answer the question “are these two vertices (transitively) connected” at any point in time.
This might remind you of the union-find problem. Union-find, after all, is a good solution to incremental dynamic connectivity. In this context, incremental means that edges may only be added, not removed. A situation where edges may be added and removed is sometimes referred to as fully dynamic connectivity.
Like union-find, there is unfortunately no known persistent version of this algorithm without sacrificing some performance. An attempt was made [to create a fast, persistent union find] but I don’t think we can consider this successful in the Haskell sense of purity since the structure proposed in that paper is inherently not thread-safe; which is one of the reasons to pursue persistence in the first place.
Anyway, this is why the library currently only provides a mutable interface. The library uses the
PrimMonad from the primitive library to ensure you can use our code both in
IO and
ST, where the latter lets us reclaim purity.
Let’s walk through a simple example of using the library in plain
IO.
import qualified Data.Graph.Dynamic.Levels as GD import qualified Data.Tree as T main :: IO () main = do graph <- GD.empty'
Let’s consider a fictional map of Hawaiian islands.
mapM_ (GD.insert_ graph) ["Akanu", "Kanoa", "Kekoa", "Kaiwi", "Onakea"] GD.link_ graph "Akanu" "Kanoa" GD.link_ graph "Akanu" "Kaiwi" GD.link_ graph "Akanu" "Onakea" GD.link_ graph "Kaiwi" "Onakea" GD.link_ graph "Onakea" "Kanoa" GD.link_ graph "Kanoa" "Kekoa"
The way the algorithm works is by keeping a spanning forest at all times. That way we can quickly answer connectivity questions: if two vertices belong to the same tree (i.e., they share the same root), they are connected.
For example, can we take ferries from Kaiwi to Kekoa? The following statement prints
True.
Such a question, however, could have been answered by a simpler algorithm such as union find which we mentioned before. Union find is more than appropriate if edges can only be added to a graph, but it cannot handle cases where we want to delete edges. Let’s do just so:
In a case such as the one above, where the deleted edge is not part of the spanning forest, not much interesting happens, and the overall connectivity is not affected in any way.
However, it gets interesting when we delete an edge that is part of the spanning tree. When that happens, we kick off a search to find a “replacement edge” in the graph that can restore the spanning tree.
In our example, we can replace the deleted Akanu - Onakea edge with the Kanoa - Onakea edge. Finding a replacement edge is unsurprisingly the hardest part of the problem, and a sufficiently effecient algorithm was only described in 1998 by Holm, de Lichtenberg and Thorup in this paper.
The algorithm is a little complex, but the paper is well-written, so I’ll just stick with a very informal and hand-wavey explanation here:
If an edge is cut from the spanning forest, then this turns one spanning tree in the forest into two components.
The algorithm must consider all edges in between these two components to find a replacement edge. This can be done be looking at the all the edges adjacent to the smaller of the two components.
Reasonable amortized complexity, O(log² n), is achieved by “punishing” edges that are considered but not taken, so we will consider them less frequently in subsequent calls.
Back to our example. When we go on to delete the Onakea - Kanoa edge, we cannot find a replacement edge, and we are left with a spanning forest with two components.
We can confirm this by asking the library for the spanningforest and then using the very handy
drawForest from
Data.Tree to visualize it:
This prints:
Kanoa | +- Akanu | `- Kekoa Onakea | `- Kaiwi
Let’s restore connectivity to leave things in proper working order for the residents of our fictional island group, before closing the blogpost.
For finishing words, what are some future directions for this library? One of the authors of the original paper, M. Thorup, wrote a follow-up that improves the theoretical space and time complexity a little. This seems to punish us with bad constant factors in terms of time performance – but it is probably still worth finishing because it uses significantly less memory. Contributions, as always, are welcome. :-) | https://jaspervdj.be/posts/2019-01-11-dynamic-graphs.html | CC-MAIN-2019-13 | refinedweb | 1,034 | 62.07 |
Clusters in EC2 and Hadoop
GraphLab create allows for the execution of jobs on the Turi Distributed platform, which runs on EC2 as well as Hadoop YARN clusters. While Turi Distributed is set up automatically on EC2 on-demand, it needs to be deployed on Hadoop.
In this section, we will walk through the concept of a Cluster in the GraphLab Create API and how it can be used to execute jobs remotely, either in Hadoop or in EC2.
The Cluster
The GraphLab Create API includes the notion of a Cluster, which serves as a logical environment to host the distributed execution of jobs (as opposed to the local host environment, which can be asynchonous, but not distributed). GraphLab Create clusters can be created either in EC2 or on Hadoop YARN; while they can equally be used as environments for running jobs, their behavior is slightly different; hence they are represented by two different types:
graphlab.deploy.Ec2Cluster and
graphlab.deploy.HadoopCluster. After creating a cluster object once, it can be retrieved at a later time to continue working with an existing cluster. Below we will elaborate on the specifics of each environment.
Creating a Cluster in EC2
In EC2 a cluster is created in two steps: first, a
graphlab.deploy.Ec2Config object is created, describing the cluster and how to access AWS. The cluster description includes the properties for EC2 instances that are going to be used to form the cluster, like instance type and region, the security group name, etc. Second, the cluster is launched by calling
ec2_cluster.create. When creating your EC2 cluster, you must also specify a name, and an S3 path where the EC2 cluster maintains its state and logs.
import graphlab as gl # Define your EC2 environment. In this example we use the default settings. ec2config = gl.deploy.Ec2Config() ec2 = gl.deploy.ec2_cluster.create(name='my-cluster', s3_path='s3://my-bucket', ec2_config=ec2config, num_hosts=4)
At this point you can use the object
ec2 for remote and distributed job execution.
It is important to note that the
create call will already start the hosts in EC2, so costs will be incurred at that point. They will be shutdown after an idle period, which is 30 minutes by default or set as parameter (in seconds) in the create method. Setting the timeout to a negative value will cause the cluster to run indefinitely or until explicitly stopped. For example, if you wanted to extend the timeout to one hour you would create the cluster like so:
ec2 = gl.deploy.ec2_cluster.create(name='my-cluster', s3_path='s3://my-bucket', ec2_config=ec2config, num_hosts=4, idle_shutdown_timeout=3600)
You can retrieve the properties of a cluster by printing the cluster object:
print ec2
S3 State Path: s3://my-bucket EC2 Config : [instance_type: m3.xlarge, region: us-west-2, aws_access_key: ABCDEFG] Num Hosts : 4 Status : Running
Creating a Cluster in Hadoop
In order to work with a Hadoop cluster, Turi Distributed needs to be set up on the Hadoop nodes. For instructions on how to obtain and install DD please refer to the Hadoop setup chapter
When you installed Turi Distributed your provided an installation path that you need to refer to when creating a
HadoopCluster Object through
graphlab.deploy.hadoop_cluster.create object. Essentially this path is your client-side handle to the Hadoop cluster within the GraphLab Create API. Moreover, when creating your Hadoop cluster object, you must specify a name, which you can later use to retrieve an existing cluster form your workbench. You can also specify hadoop_conf_dir, which is the directory of your custom Hadoop configuration path. If
hadoop_conf_dir is not specified, GraphLab Create uses your default Hadoop configuration path on your machine.
import graphlab as gl # Define your Hadoop environment td-deployment = 'hdfs://our.cluster.com:8040/user/name/turi-dist-folder' hd = gl.deploy.hadoop_cluster.create(name='hadoop-cluster', turi_dist_path=td-deployment)
You can retrieve the properties of a cluster by printing the cluster object:
print hd
Hadoop Cluster: Name: : hadoop-cluster Cluster path : hdfs://our.hadoop-cluster.com:8040/user/name/turi-distributed-folder Hadoop conf dir : /Users/name/yarn-conf Number of Containers: : 3 Container Size (in mb) : 4096 Container num of vcores : 2 Port range : 9100 - 9200 Additional packages : ['names']
(See Section Dependencies for more information about additional packages.)
Unlike a
HadoopCluster, once an
Ec2Cluster is created, it is physically running in AWS. This cluster can be loaded at a later time and/or a separate Python session:
c = gl.deploy.ec2_cluster.load('s3://my-bucket')
Executing Jobs in a Cluster
In order to execute a job in a cluster, you pass the cluster object to the
graphlab.deploy.job.create API, independently of whether it is a Hadoop or an EC2 cluster. While the job is running, the client machine can be shutdown and the job will continue to run. In the event that the client process terminates, you can reload the job and check its status.
def add(x, y): return x + y # c has been created or loaded before job = gl.deploy.job.create(add, environment=c, x=1, y=2)
Note that the parameter names in the kwargs of the
job.create call need to match the parameter names in the definition of your method (
x and
y in this example).
The syntax for getting job status, metrics, and results are the same for all jobs. You can invoke
job.get_status
to get the status,
job.get_metrics to get job metrics, and
job.get_results to get job results.
For example, to get the results:
print job.get_results()
2
Jobs can be cancelled using
job.cancel; note that for an EC2 cluster this does not stop the EC2 hosts.
job.cancel()
For Hadoop-specific job failures (for instance, preemption), you can use the
job.get_error API.
It is possible that a job succeeds, but tasks inside a job fail. To debug this, use the
job.get_metrics API.
EC2 Notes
- Once the execution is complete, the idle timeout period will start, after which the EC2 instance(s) started will be terminated. Launching another job will reset the idle timeout period.
- A set of packages to be installed in addition to graphlab and its dependencies can be specified as a list of strings in the
createcall.
- Execution logs will be maintained in S3 (using the
s3_pathparameter in the cluster creation call).
Hadoop Notes
- Job status is also available through normal Hadoop monitoring, as GraphLab Create submits jobs using a GraphLab YARN application. Logs for executions are available using Yarn logs.
- The location of the logs is available in the job summary, which can be viewed by calling
print job. You can also use
job.get_log_file_pathto get the location of the logs.
- If you are using Hadoop in Cloudera HA mode, you need to include conf.cloudera.hdfs in your CLASSPATH environment variable. | https://turi.com/learn/userguide/deployment/pipeline-ec2-hadoop.html | CC-MAIN-2017-09 | refinedweb | 1,141 | 54.83 |
09 May 2012 07:09 [Source: ICIS news]
SINGAPORE (ICIS)--?xml:namespace>
“We also plan to start-up the No 2 unit two months after starting up the No 1 unit,” the source added.
He dispelled an impression that there was going to be a delay in the start up.
The No 2 PTA unit also has a design capacity to produce 2.2m tonnes/year and is located in
The company continues to be on the look-out for competitively priced feedstock paraxylene (PX) shipments to build inventory levels ahead of the targeted start-up, the source said.
“Taking into consideration the prevailing PTA prices, I will be interested to buy PX cargoes at $1,500/tonne CFR (€1,155/tonne) (cost and freight)
Other bids for spot PX June shipments were at $1,525/tonne CFR Taiwan/Ningbo against offers at $1,535/tonne CFR Taiwan/Ning | http://www.icis.com/Articles/2012/05/09/9557550/chinas-hengli-group-targets-end-july-start-up-for-pta-unit.html | CC-MAIN-2014-41 | refinedweb | 149 | 57.81 |
CS50x Cash
From problem set 1
Cash is the easier project we work on the entire course (provided you’re using the CS50 library). It basically asks us to count how many coins we need to use to return a change. Here the cashier wants to use the least amount of coins possible, not mattering how many coins of each are in the cashier’s drawer.
To begin, we open CS50 IDE through the Menu in the course page or by typing ide.cs50.io on your browser. Then procced to create our cash project folder.
cd ~/pset1
mkdir cash
cd cash
touch cash.c
You’ll notice that now we have a folder called cash with a file called cash.c inside. Let’s open it.
open cash.c
This is where we’ll start to code! Step by step:
// Prompt user for a change value
// Make sure it's positive
// Round it and store as cents in a variable
// Count the least amount of coins needed for the change
// Print result
Reading the Implementation Details in the problem set we learn that we’ll need at least two libraries to make this work: stdio and math. I wanted so much to not use CS50 library, for my own motives (maybe I’ll explain further on), so I added ctype for running manually all the checks cs50 would’ve run for me to make sure the user enters a float. I used scanf(), passed every test, excluding re-prompting after the user entered a char, I could only terminate the program… SO, back to the training wheels, CS50 library included in the header.
So this is what the base of our code should look like:
#include <stdio.h>
#include <math.h>
#include <cs50.h>int main(void)
{
}
The next step is to prompt the user for a value, his owed change, and make sure that value is valid. For that I’m creating a function. Inside it, I chose a do/while loop for this case, because it makes sure we prompt the user at least one time before moving on, and then re-prompts if condition isn’t met.
Remember that main should always be the first function in the program, by convention, so any other we add need to go after main.
float getValue(void)
{
float change;
do
{
change = get_float("Change owed: ");
}
while (change < 0.00);return change;
}
As we now have a function we should remember to add it’s prototype to the header, otherwise the compiler is going to check an error when we call it in main. As I’m also using a custom function to count the coins, let’s add it now.
float getValue(void);
int coins(int cents);
Moving on, time to call our getValue function in main, store the value it returns and then round our hard earned dollars into cents. Math library has the perfect function to do it! It’s called (surprise) round!
float owed = getValue();
int cents = round(owed * 100);
From here you can start checking the code to see if it’s working properly. Compile it using make on terminal and run some tests.
make cash
An example of intended behavior:
$ ./cash
Change owed: -0.41
Change owed: foo
Change owed: 0.41
We can see that the program keeps asking the user for a valid value until it gets it. What do we do now? Let’s count coins.
int coins(int cents)
{
int count = 0;
while (cents > 0)
{
if (cents >= 25)
{
cents -= 25;
count++;
}
else if (cents >= 10)
{
cents -= 10;
count++;
}
else if (cents >= 5)
{
cents -= 5;
count++;
}
else
{
cents -= 1;
count++;
}
}
return count;
Our coins function takes the rounded cents value we prepared before. Then it uses a loop to check if this value is bigger than each one of our coins, decreasing the value of cents by the value of the coin and adding one to our count every time it’s true. In other words, if the change owed was $1.16, we transform the number to 116 cents. What the code does is to check if 116 is bigger than 25, if true, decreases it to 91 and add a coin to the counter. A 25 cents coin, but that doesn’t matter for us. It repeats until we have decreased 116 to 16, then it jumps to the next coin value and so on. At the end, we’ll have four 25 cents coins, one ten cents, one five cents, and one one cent. Our count then is 7. That’s what the function will return.
Due to the parameters of the print function required to our last step, we can now finish our code adding one single line to main.
printf("%d\n", coins(cents));
Printf is going to print an integer followed by a new line on the terminal, as declared by “%d\n”. What this integer will be is defined by the second parameter, which is our coins function. So the integer is going to be the integer returned by the coins function.
Let’s run the staff checks on the code now.
~/pset1/cash/ $ style50 cash.c
Results generated by style50 v2.7.4
Looks good!~/pset1/cash/ $ check50 cs50/problems/2020/x/cash
...
Results for cs50/problems/2020/x/cash generated by check50 v3.1.2
:) cash.c exists
:) cash.c compiles
:) input of 0.41 yields output of 4
:) input of 0.01 yields output of 1
:) input of 0.15 yields output of 2
:) input of 1.6 yields output of 7
:) input of 23 yields output of 92
:) input of 4.2 yields output of 18
:) rejects a negative input like -1
:) rejects a non-numeric input of "foo"
:) rejects a non-numeric input of ""
As we can see from the happy faces, everything ok!
Here follows the full code with comments. | https://guilherme-pirani.medium.com/cs50x-cash-dc4f0c7ef584 | CC-MAIN-2022-40 | refinedweb | 981 | 82.04 |
bloc makes building applications for the Ethereum blockchain as easy. Bloc uses the blockapps api and provides:
The easiest way to get started is to install
bloc from NPM:
npm install -g blockapps-bloc
You can also check out the github repo and build it by running
git clone bloc; npm install -g
You can use
bloc init to create a sample app.
bloc init
bloc init builds a base structure for your blockchain app as well as sets some default parameters values for creating transactions. These can be edited in the
config.yaml file in your app directory.
The
config.yaml file also holds the app's
apiURL. This can be configured to point to an isolated test network, or the real Ethereum network. You can change this link, which will allow you to build and test in a sandboxed environment, and later re-deploy on the real Ethereum blockchain.
You will find the following files in your newly created app directory:
/app/components/contracts/lib/meta/routes/static/usersapp.jsbower.jsonconfig.yamlgulpfile.jsmarko-taglib.jsonnode_modulespackage.json
The "contracts" directory holds Ethereum blockchain code, written in the Solidity language, which you can learn about here-. This is the code that will run on the blockchain. Samples contracts have been provided to get you started.
Key management to handle account keys for users and signing transactions with bloc.
Once contracts are deployed
bloc provides a RESTful interface for interacting with deployed contracts. Simply call contract methods with an address and pass the password to decrypt your key.
After initing your app, run the following to download the dependencies for the app:
npm install
Once this is finished run
bloc genkey
This generates a new user with name
admin as well as a private key and fills it with test-ether (note- free sample Ether is only available on the test network, of course). You can view the address information in the newly created
app/users/admin/<address>.json file. Also, beware that this file contains your private key, so if you intend to use this address on the live network, make sure you keep this file secure and hidden.
The new account has also been created on the blockchain, and you can view account information by using our REST API directly in a browser by visiting < fill in your address here >
An example output is:
curl ""
Getting a contract live on the blockchain is a two step process
To compile a smartcontract, run
bloc compile <ContractName>
If there are any bugs in your contract code, this is where you will be allowed to fix them.
Upload a contract using
bloc upload <ContractName>
You will now see that Ether has been deducted from your account
curl ""
Also, the newly created contract has been given its own address, which you can view in the data in the
app/users/<username> folder. Viewing contract information, including compiled bytecode for your Solidity contract can be done using the same URL that you use to view your own account information.
curl ""
Bloc ships with a node server. To get the server up and running
bloc start
Now you can visit one of the contracts in your application, for example. Note
that the local webserver relies on dynamically generated templates, founds in the
app/components directory.
bloc will run through three contract status checks
This will be reflected in the application as well as at the terminal
Once you have a deployed contract
bloc will provide a simple REST API for interacting with the contract. The API has routes for viewing contract methods, symbols, calling contract methods. The keyserver and contract API documentation can be viewed here
Usage: /usr/local/bin/bloc <command> (options)Commands:init [appName] start a new projectcompile [contract] compile contract in contract folderupload contract upload contract to blockchaingenkey [user] generate a new private key and fill it at the faucet,namespaced by usersend start prompt, transfer (amount*unit) to (address)start start bloc as a webserver with live reloadOptions:-u [default: "admin"]
bloc uses blockapps-js, our library for interfacing with the blockchain in a simple way.
Smart contracts that are written in javascript-like language called Solidity. A good place to start playing around with Solidity is the online compiler. | https://www.npmjs.com/package/blockapps-bloc | CC-MAIN-2017-30 | refinedweb | 709 | 56.49 |
Duplexing the sponge: single-pass authenticated encryption and other applications
- Ashlie Lamb
- 1 years ago
- Views:
Transcription
1 Duplexing the sponge: single-pass authenticated encryption and other applications Guido Bertoni 1, Joan Daemen 1, Michaël Peeters 2, and Gilles Van Assche 1 1 STMicroelectronics 2 NXP Semiconductors acks.. Keywords: sponge functions, duplex construction, authenticated encryption, key wrapping, provable security, pseudo-random bit sequence generator, Keccak 1 Introduction While most symmetric-key modes of operations are based on a block cipher or a stream cipher, there exist modes using a fixed permutation as underlying primitive. Designing a cryptographically strong permutation suitable for such purposes is similar to designing a block cipher without a key schedule and this design approach was followed for several recent hash functions, see, e.g., [19]. The sponge construction is an example of such a mode. With its arbitrarily long input and output sizes, it allows building various primitives such as a stream cipher or a hash function [7]. In the former, the input is short (typically the key and a nonce) while the output is as long as the message to encrypt. In contrast, the la er takes a message of any length at input and produces a digest of small length. Some applications can take advantage of both a long input and a long output size. For instance, authenticated encryption combines the encryption of a message and the generation of a message authentication code (MAC) on it. It could be implemented with one sponge function call to generate a key stream (long output) for the encryption and another call to generate the MAC (long input). However, in this case, encryption and authentication are separate processes without any synergy. The duplex construction is a novel way to use a fixed permutation (or transformation) to allow the alternation of input and output blocks at the same rate as the sponge construction, like a full-duplex communication. In fact, the duplex construction can be seen as a particular way to use the sponge construction, hence it inherits its security properties. By using the duplex construction, authenticated encryption requires only one call to the underlying permutation (or transformation) per message block. In a nutshell, the input blocks of the duplex are used to input the key and the message blocks, while the intermediate output blocks are used as key stream and the last one as a MAC. Authenticated encryption (AE) has been extensively studied in the last ten years. Block cipher modes clearly are a popular way to provide simultaneously both integrity and confidentiality. Many block cipher modes have been proposed and most of these come with a
2 security proof against generic a acks, e.g., [3,21,28,45,38,5,30,43,32,24,46,39,25,27,26,31]. Interestingly, there have also been a empts at designing dedicated hybrid primitives offering efficient simultaneous stream encryption and MAC computation, e.g., Helix and Phelix [20,48]. However, these primitives were shown to be weak [36,40,49]. Another example of hybrid primitive is the Grain-128 stream cipher to which optional built-in authentication was recently added [50]. Our proposed mode shares with these hybrid primitives that it offers efficient simultaneous stream encryption and MAC computation. It shares with the block cipher modes that it has provable security against generic a acks. However, it is the first such construction that (directly) relies on a permutation rather than a block cipher and that proves its security based on this type of primitive. An important efficiency parameter of an AE mode is the number of calls to the block cipher or to the permutation per block. While encryption or authentication alone requires one call per block, some AE modes only require one call per block for both functions. The duplex construction naturally provides a good basis for building such an efficient AE mode. Also, the AE mode we propose natively supports intermediate tags and the authenticated encryption of a sequence of messages. Authenticated encryption can also be used to transport secret keys in a confidential way and to ensure their integrity. This task, called key wrapping, is very important in key management and can be implemented with our construction if each key has a unique identifier. Finally, the duplex construction can be used for other modes as well, such as a reseedable pseudo-random bit sequence generator (PRG) or to prove the security of an overwrite mode where the input block overwrites part of the state instead of XORing it in. These modes can readily be used by the concrete sponge function K [11] and the members of a recent wave of lightweight hash functions that are in fact sponge functions: Quark [2], Photon [23] and Spongent [14]. For these, and for the small-width instances of K, our security bound against generic a acks beyond the birthday bound published in [10] allows constructing solutions that are at the same time compact, efficient and potentially secure. The remainder of this paper is organized as follows. First, we propose a model for authenticated encryption in Section 2. Then in Section 3, we review the sponge construction. The core concept of this paper, namely the duplex construction, is defined in Section 4. Its use for authenticated encryption is given in Section 5 and for other applications in Section 6. Finally, Section 7 discusses the use of a flexible and compact padding. 2 Modeling authenticated encryption We consider authenticated encryption as a process that takes as input a key K, a data header A and a data body B and that returns a cryptogram C and a tag T. We denote this operation by the term wrapping and the operation of taking a data header A, a cryptogram C and a tag T and returning the data body B if the tag T is correct by the term unwrapping. The cryptogram is the data body enciphered under the key K and the tag is a MAC computed under the same key K over both header A and body B. So here the header A can play the role of associated data as described in [42]. We assume the wrapping and unwrapping operations as such to be deterministic. Hence two equal inputs (A, B) = (A, B ) will give rise to the same output (C, T) under the same key K. If this is a problem, it can be tackled by expanding A with a nonce. Formally, for a given key length k and tag length t, we consider a pair of algorithms W and U, with W : Z2 k (Z2) 2 Z2 Z2 t : (K, A, B) (C, T) = W(K, A, B), and U : Z k 2 (Z 2) 2 Z t 2 Z 2 {error} : (K, A, C, T) B or error. 2
3 The algorithms are such that if (C, T) = W(K, A, B) then U(K, A, C, T) = B. As we consider only the case of non-expanding encryption, we assume from now on that C = B. 2.1 Intermediate tags and authenticated encryption of a sequence So far, we have only considered the case of the authentication and encryption of a single message, i.e., a header and body pair (A, B). It can also be interesting to authenticate and encrypt a sequence of messages in such a way that the authenticity is guaranteed not only on each (A, B) pair but also on the sequence received so far. Intermediate tags can also be useful in practice to be able to catch fraudulent transactions early. Let (A, B) = (A (1), B (1), A (2),..., A (n), B (n) ) be a sequence of header-body pairs. We extend the function of wrapping and unwrapping as providing encryption over the last body B (n) and authentication over the whole sequence (A, B). Formally, W and U are defined as: W : Z k 2 (Z 2) 2+ Z 2 Z t 2 : (K, A, B) (C(last), T (last) ) = W(K, A, B), and U : Z k 2 (Z 2) 2+ Z t 2 Z 2 {error} : (K, A, C, T (last) ) B (last) or error. Here, (Z 2 )2+ means any sequence of binary strings, with an even number of such strings and at least two. To wrap a sequence of header-body pairs, the sender calls W(K, A (1), B (1) ) with the first header-body pair to get (C (1), T (1) ), then W(K, A (1), B (1), A (2), B (2) ) with the second one to get (C (2), T (2) ), and so on. To unwrap, the receiver first calls U(K, A (1), C (1), T (1) ) to retrieve the first body B (1), then U(K, A (1), C (1), A (2), C (2), T (2) ) to retrieve the second body, and so on. As we consider only the case of non-expanding encryption, we assume that C (i) = B (i) for all i. 2.2 Security requirements We consider two security notions from [45] and works cited therein, called privacy and authenticity. Together, these notions are central to the security of authenticated encryption [3]. Privacy is defined in Eq. (1) below. Informally, it means that the output of the wrapping function looks like uniformly chosen random bits to an observer who does not know the key. Adv priv (A) = Pr[K $ Z2 k : A[W(K,, )] = 1] Pr[A[R(, )] = 1], (1) with R(A, B) = RO(A, B) B (n) +t where B(n) is the last body in A, B, x is the bitlength of string x, l indicates truncation to l bits and K $ Z2 k means that K is chosen randomly and uniformly among the set Z2 k. In this definition, we use a random oracle RO as defined in [4], but allowing sequences of one or more binary strings as input (instead of a single binary string). Here, a random oracle is a map from (Z2 )+ to Z2, chosen by selecting each bit of RO(x) uniformly and independently, for every input. The original definition can still be used by defining an injective mapping from (Z2 )+ to Z2. For privacy, we consider only adversaries who respect the nonce requirement. For a single header-body pair, it means that, for any two queries (A, B) and (A, B ), we have A = A B = B. In general, the nonce requirement specifies that for any two queries (A, B) and (A, B ) of equal length n, we have pre(a, B) = pre(a, B ) B (n) = B (n), with pre(a, B) = (A (1), B (1), A (2),..., B (n 1), A (n) ) the sequence with the last body omitted. As for a stream cipher, not respecting the nonce requirement means that the adversary can learn the bitwise difference between two plaintext bodies. 3
4 Authenticity is defined in Eq. (2) below. Informally, it quantifies the probability of the adversary successfully generating a forged ciphertext-tag pair. Adv auth (A) = Pr[K $ Z k 2 : A[W(K,, )] outputs a forgery]. (2) Here a forgery is a sequence (A, C, T) such that U(K, A, C, T) = error and that the adversary made no query to W with input (A, B) returning (C (n), T), with C (n) the last ciphertext body of A, C. Note that authenticity does not need the nonce requirement. 2.3 An ideal system We can define an ideal system using a pair of independent random oracles (RO C, RO T ). For a single header-body pair, encryption and tag computation are implemented as follows. The ciphertext C is produced by XORing B with a key stream. This key stream is the output of RO C (K, A). If (K, A) is a nonce, key streams for different data inputs are the result of calls to RO C with different inputs and hence one key stream gives no information on another. The tag T is the output of RO T (K, A, B). Tags computed over different header-body pairs will be the result of calls to RO T with different inputs. Key stream sequences give no information on tags and vice versa as they are obtained by calls to different random oracles. Let us define the ideal system in the general case, which we call RO. Wrapping is defined as W(K, A, B) = (C (n), T (n) ), if A, B contains n header-body pairs, with C (n) = RO C (K, pre(a, B)) B (n) B(n), T (n) = RO T (K, A, B) t. The unwrapping algorithm U first checks that T (n) = RO T (K, A, B) t and if so decrypts each body B (i) = RO C (K, A (1), B (1), A (2),..., A (i) ) C (i) C(i) from the first one to the last one and finally returns the last one B (n) = RO C (K, pre(a, B)) C (n) C(n). The security of RO is captured by Lemmas 1 and 2. Lemma 1. Let A[RO C, RO T ] be an adversary having access to RO C and RO T and respecting the nonce requirement. Then, Adv priv RO (A) q2 k if the adversary makes no more than q queries to RO C or RO T. Proof. For any fixed last body B (n), the output of RO is indistinguishable from that of RO used in Eq. (1), unless A makes a query to RO C or RO T with the correct key K as first argument. This last event has probability q2 k among q queries and the advantage can be bounded following [34, Theorem 1]. The conclusion is still valid for a variable B (n), as a different B (n) implies a different pre(a, B). Lemma 2. Let A[RO C, RO T ] be an adversary having access to RO C and RO T. Then, RO satisfies Adv auth RO (A) q2 k + 2 t if the adversary makes no more than q queries to RO C or RO T. Proof. A similar argument as in Lemma 1 can be applied here. In addition, the adversary can just be lucky and output the correct tag T with probability 2 t. 3 The sponge construction The sponge construction [7] builds a function [ f, pad, r] with variable-length input and arbitrary output length using a fixed-length permutation (or transformation) f, a padding rule pad and a parameter bitrate r. 4
5 For the padding rule we use the following notation: the padding of a message M to a sequence of x-bit blocks is denoted by M pad[x]( M ), where M is the length of M. This notation highlights that we only consider padding rules that append a bitstring that is fully determined by the length of M and the block length x. We may omit [x], M or both if their value is clear from the context. Definition 1. A padding rule is sponge-compliant if it never results in the empty string and if it satisfies following criterion: n 0, M, M Z 2 : M = M M pad[r]( M ) = M pad[r]( M ) 0 nr (3) For the sponge construction to be secure (see Section 3.2), the padding rule pad must be sponge-compliant. As a sufficient condition, a padding rule that is reversible, non-empty and such that the last block must be non-zero, is sponge-compliant [7]. 3.1 Definition The permutation f operates on a fixed number of bits, the width b. The sponge construction has a state of b bits. First, all the bits of the state are initialized to zero. The input message is padded with the function pad[r] and cut into r-bits blocks. Then it proceeds in two phases: the absorbing phase followed by the squeezing phase: Absorbing phase The r-bit input message blocks are XORed into the first r bits of the state, interleaved with applications of the function f. When all message blocks are processed, the sponge construction switches to the squeezing phase. Squeezing phase The first r bits of the state are returned as output blocks, interleaved with applications of the function f. The number of iterations is determined by the requested number of bits. Finally the output is truncated to the requested length. The sponge construction is illustrated in Figure 1, and Algorithm 1 provides a formal definition. Fig. 1. The sponge construction The value c = b r is called the capacity. The last c bits of the state are never directly affected by the input blocks and are never output during the squeezing phase. The capacity c actually determines the a ainable security level of the construction [8,10]. 5
6 Algorithm 1 The sponge construction [ f, pad, r] Require: r < b Interface: Z = sponge(m, l) with M Z 2, integer l > 0 and Z Zl 2 P = M pad[r]( M ) Let P = P 0 P 1... P w with P i = r s = 0 b for i = 0 to w do s = s (P i 0 b r ) s = f (s) end for Z = s r while Z < l do s = f (s) Z = Z s r end while return Z l 3.2 Security Cryptographic functions are o en designed in two steps. In the first step, one chooses a construction that uses a cryptographic primitive with fixed input and output size (e.g., a compression function or a permutation) and builds a function that can take inputs and or generate outputs of arbitrary size. If the security of this construction can be proven, for instance as in this case using the indifferentiability framework, it reduces the scope of cryptanalysis to that of the underlying primitive and guarantees the absence of singlestage generic a acks (e.g., preimage, second preimage and collision a acks) [35]. However, generic security in the multi-stage se ing using the indifferentiability framework is currently an open problem [41]. It is shown in [8] that the success probability of any single-stage generic a ack for differentiating the sponge construction calling a random permutation or transformation from a random oracle is upper bounded by 2 (c+1) N 2. Here N is the number of calls to the underlying permutation or its inverse. This implies that any single-stage generic a ack on a sponge function has success probability of at most 2 (c+1) N 2 plus the success probability of this a ack on a random oracle. In [10], we address the security of the sponge construction when the message is prefixed with a key, as it will be done in the mode of Section 5. In this specific case, the security proof goes beyond the 2 c/2 complexity if the number of input or output blocks for which the key is used (data complexity) is upper bounded by M < 2 c/2 1. In that case, distinguishing the keyed sponge from a random oracle has time complexity of at least 2 c 1 /M > 2 c/2. Hence, for keyed modes, one can reduce the capacity c for the same targeted security level. 3.3 Implementing authenticated encryption The simplest way to build an actual system that behaves as RO would be to replace the random oracles RO C and RO T by a sponge function with domain separation. The indifferentiability proof in [8] guarantees the result is secure if the permutation f of the sponge function has no structural distinguishers. However, such a solution requires two sponge function executions: one for the generation of the key stream and one for the generation of the tag, while we aim for a single-pass solution. To achieve this, we define a variant where the key stream blocks and tag are the responses of a sponge function to input sequences that are each other s prefix. This introduces a new construction that is closely related to the sponge construction: the duplex construction. Subsequently, we build an authenticated encryption mode on top of that. 6
7 4 The duplex construction Like the sponge construction, the duplex construction [ f, pad, r] uses a fixed-length transformation (or permutation) f, a padding rule pad and a parameter bitrate r. Unlike a sponge function that is stateless in between calls, the duplex construction accepts calls that take an input string and return an output string depending on all inputs received so far. We call an instance of the duplex construction a duplex object, which we denote D in our descriptions. We prefix the calls made to a specific duplex object D by its name D and a dot. Fig. 2. The duplex construction The duplex construction works as follows. A duplex object D has a state of b bits. Upon initialization all the bits of the state are set to zero. From then on one can send to it D.duplexing(σ, l) calls, with σ an input string and l the requested number of bits. Algorithm 2 The duplex construction [ f, pad, r] Require: r < b Require: ρ max (pad, r) > 0 Require: s Z2 b (maintained across calls) Interface: D.initialize() s = 0 b Interface: Z = D.duplexing(σ, l) with l r, σ ρ max (pad,r) n=0 Z n 2, and Z Zl 2 P = σ pad[r]( σ ) s = s (P 0 b r ) s = f (s) return s l The maximum number of bits l one can request is r and the input string σ shall be short enough such that a er padding it results in a single r-bit block. We call the maximum length of σ the maximum duplex rate and denote it by ρ max (pad, r). Formally: ρ max (pad, r) = min{x : x + pad[r](x) > r} 1. (4) Upon receipt of a D.duplexing(σ, l) call, the duplex object pads the input string σ and XORs it into the first r bits of the state. Then it applies f to the state and returns the first 7
8 Fig. 3. Generating the output of a duplexing call with a sponge l bits of the state at the output. We call a blank call a call with σ the empty string, and a mute call a call without output, l = 0. The duplex construction is illustrated in Figure 2, and Algorithm 2 provides a formal definition. The following lemma links the security of the duplex construction [ f, pad, r] to that of the sponge construction [ f, pad, r]. Generating the output of a D.duplexing() call using a sponge function is illustrated in Figure 3. Lemma 3. [Duplexing-sponge lemma] If we denote the input to the i-th call to a duplex object by (σ i, l i ) and the corresponding output by Z i we have: Z i = D.duplexing(σ i, l i ) = sponge(σ 0 pad 0 σ 1 pad 1... σ i, l i ) with pad i a shortcut notation for pad[r]( σ i ). Proof. The proof is by induction on the number of input strings σ i. First consider the case i = 0. We must prove D.duplexing(σ 0, l 0 ) = sponge(σ 0, l 0 ). The state of the duplex object before the call has value 0 b, the same as the initial state of the sponge function. Both in the case of the sponge function and the duplex object the input string is padded with pad resulting in a single r-bit block P. Then, in both cases P is XORed to the first r bits of the state and f is applied to the state. At this point the sponge function and the duplex object have the same state and both return the first l r bits of the state as output string. Since the sponge function does not do any additional iterations of f on the state, the state of the duplex object a er the call D.duplexing(σ 0, l 0 ) is equal to the state of the sponge construction a er absorbing a single block σ 0 pad 0. Now assume that a er the call D.duplexing(σ i 1, l i 1 ) the duplex object has the same state as the sponge function a er absorbing σ 0 pad 0 σ 1 pad 1... σ i 1 pad i 1. During the call D.duplexing(σ i, l i ), the block σ i pad i is XORed into the first r bits of the state and subsequently f is applied to the state. It follows that the state of the duplex object D a er the call D.duplexing(σ i, l i ) is equal to the state of the sponge function a er absorbing σ 0 pad 0 σ 1 pad 1... σ i pad i. As the output just consists of the first l i bits of the state, this proves Lemma 3. The output of a duplexing call is thus the output of a sponge function with an input σ 0 pad 0 σ 1 pad 1... σ i and from this input the exact sequence σ 0, σ 1,..., σ i can be recovered as shown in Lemma 4 below. As such, the duplex construction is as secure as 8
9 the sponge construction with the same parameters. In particular, it inherits its resistance against (single-stage) generic a acks. The reference point in this case is a random oracle whose input is the sequence of inputs to the duplexing calls since the initialization. Lemma 4. Let pad and r be fixed. Then, the mapping from a sequence (σ 0, σ 1,..., σ n ) of binary strings with σ i ρ max (pad, r) i to the binary string s = σ 0 pad 0 σ 1 pad 1... pad n 1 σ n is injective. Proof. The length of σ n can be determined as σ n = s mod r; this allows recovering σ n from s. Then, if n > 0, pad n 1 can be removed and the process continues recursively with s = σ 0 pad 0 σ 1 pad 1... σ n 1. In the following sections we will show that the duplex construction is a powerful tool for building modes of use. 5 The authenticated encryption mode S W We propose an authenticated encryption mode S W that realizes the authenticated encryption process defined in Section 2. Similarly to the duplex construction, we call an instance of the authenticated encryption mode a S W object. Upon initialization of a S W object, it loads the key K. From then on one can send requests to it for wrapping and/or unwrapping data. The key stream blocks used for encryption and the tags depend on the key K and the data sent in all previous requests. The authenticated encryption of a sequence of header-body pairs, as described in Section 2.1, can be performed with a sequence of wrap or unwrap requests to a S W object. 5.1 Definition A S W object W internally uses a duplex object D with parameters f, pad and r. Upon initialization of a S W object, it initializes D and forwards the (padded) key blocks K to D using mute D.duplexing() calls. When receiving a W.wrap(A, B, l) request, it forwards the blocks of the (padded) header A and the (padded) body B to D. It generates the cryptogram C block by block C i = B i Z i with Z i the response of D to the previous D.duplexing() call. The l-bit tag T is the response of D to the last body block (possibly extended with the response to additional blank D.duplexing() calls in case l > ρ). Finally it returns the cryptogram C and the tag T. When receiving a W.unwrap(A, C, T) request, it forwards the blocks of the (padded) header A to D. It decrypts the data body B block by block B i = C i Z i with Z i the response of D to the previous D.duplexing() call. The response of D to the last body block (possibly extended) is compared with the tag T received as input. If the tag is valid, it returns the data body B; otherwise, it returns an error. Note that in implementations one may impose additional constraints, such as S W objects dedicated to either wrapping or unwrapping. Additionally, the S W object should impose a minimum length t for the tag received before unwrapping and could break the entire session as soon as an incorrect tag is received. Before being forwarded to D, every key, header, data or cryptogram block is extended with a so-called frame bit. The rate ρ of the S W mode determines the size of the blocks and hence the maximum number of bits processed per call to f. Its upper bound is ρ max (pad, r) 1 due to the inclusion of one frame bit per block. A formal definition of S W is given in Algorithm 3. 9
10 Algorithm 3 The authenticated encryption mode S W [ f, pad, r, ρ]. Require: ρ ρ max (pad, r) 1 Require: D = [ f, pad, r] 1: Interface: W.initialize(K) with K Z 2 2: Let K = K 0 K 1... K u with K i = ρ for i < u, K u ρ and K u > 0 if u > 0 3: D.initialize() 4: for i = 0 to u 1 do 5: D.duplexing(K i 1, 0) 6: end for 7: D.duplexing(K u 0, 0) 8: Interface: (C, T) = W.wrap(A, B, l) with A, B Z2, l 0, C Z B 2 and T Z2 l 9: Let A = A 0 A 1... A v with A i = ρ for i < v, A v ρ and A v > 0 if v > 0 10: Let B = B 0 B 1... B w with B i = ρ for i < w, B w ρ and B w > 0 if w > 0 11: for i = 0 to v 1 do 12: D.duplexing(A i 0, 0) 13: end for 14: Z = D.duplexing(A v 1, B 0 ) 15: C = B 0 Z 16: for i = 0 to w 1 do 17: Z = D.duplexing(B i 1, B i+1 ) 18: C = C (B i+1 Z) 19: end for 20: Z = D.duplexing(B w 0, ρ) 21: while Z < l do 22: Z = Z D.duplexing(0, ρ) 23: end while 24: T = Z l 25: return (C, T) 26: Interface: B = W.unwrap(A, C, T) with A, C, T Z 2, B Z C 2 {error} 27: Let A = A 0 A 1... A v with A i = ρ for i < v, A v ρ and A v > 0 if v > 0 28: Let C = C 0 C 1... C w with C i = ρ for i < w, C w ρ and C w > 0 if w > 0 29: Let T = T 0 T 1... T x with T i = ρ for i < x, C x ρ and C x > 0 if x > 0 30: for i = 0 to v 1 do 31: D.duplexing(A i 0, 0) 32: end for 33: Z = D.duplexing(A v 1, C 0 ) 34: B 0 = C 0 Z 35: for i = 0 to w 1 do 36: Z = D.duplexing(B i 1, C i+1 ) 37: B i+1 = C i+1 Z 38: end for 39: Z = D.duplexing(B w 0, ρ) 40: while Z < l do 41: Z = Z D.duplexing(0, ρ) 42: end while 43: if T = Z l then 44: return B 0 B 1... B w 45: else 46: return Error 47: end if 10
11 5.2 Security In this section, we show the security of S W against generic a acks. To do so, we proceed in two steps. First, we define a variant of RO for which the key stream depends not only on A but also on previous blocks of B. Then, we quantify the increase in the adversary advantage when trading the random oracles RO C and RO T with a random sponge function and appropriate input mappings. For a fixed block length ρ, let pre i (A, B) = (A (1), B (1), A (2),..., B (n 1), A (n), B (n) iρ ), i.e., the last body B (n) is truncated to its first i blocks of ρ bits. We define RO [ρ] identically to RO, except that in the wrapping algorithm, we have C (n) = RO C (K, pre 0 (A, B)) (n) B 0 B(n) 0 RO C (K, pre 1 (A, B)) (n) B 1 B(n) 1... RO C (K, pre w (A, B)) (n) B w B(n) w for B (n) = B (n) 0 B (n) 1... B (n) w with B (n) i = ρ for i < w, B (n) w ρ and B (n) w > 0 if w > 0. The unwrap algorithm U is defined accordingly. The scheme RO [ρ] is as secure as RO, as expressed in the following two lemmas. We omit the proofs, as they are very similar to those of Lemma 1 and 2. Lemma 5. Let A[RO C, RO T ] be an adversary having access to RO C and RO T and respecting the nonce requirement. Then, Adv priv RO [ρ] (A) q2 k if the adversary makes no more than q queries to RO C or RO T. Lemma 6. Let A[RO C, RO T ] be an adversary having access to RO C and RO T. Then, RO satisfies Adv auth RO [ρ] (A) q2 k + 2 t if the adversary makes no more than q queries to RO C or RO T. Clearly, RO and RO [ρ] are equally secure if we implement RO C and RO T using a single random oracle with domain separation: RO C (x) = RO(x 1) and RO T (x) = RO(x 0). Notice that S W uses the same domain separation technique: the last bit of the input of the last duplexing call is always a 1 (resp. 0) to produce key stream bits (resp. to produce the tag). With this change, S W now works like RO [ρ], except that the input is forma ed differently and that a sponge function replaces RO. The next lemma focuses on the former aspect. Lemma 7. Let (K, A, B) be a sequence of strings composed by a key followed by header-body pairs. Then, the mapping from (K, A, B) to the corresponding sequence of inputs (σ 0, σ 1,..., σ n ) to the duplexing calls in Algorithm 3 is injective. Proof. We show that from (σ 0, σ 1,..., σ n ) we can always recover (K, A, B). The convention is that, when cu ing input strings into blocks of ρ bits, there is always at least one block (see, e.g., line 2 of Algorithm 3). Consequently, any (possibly empty) input string causes at least one duplexing call (e.g., see lines 7, 14 and 20) or equivalently at least one element σ i. The key K can be found by looking for the first block σ i that ends with frame bit 0; the key K is concatenation of the blocks σ j, j i, with their last bit removed. Then we look for the first block σ i, i > i, that ends with a frame bit 1; blocks from σ i+1 to σ i are concatenated with their last bit removed to give the first header A (1). To find the first body B (1), we follow the same procedure, except that we look for the first block σ i, i > i, that 11
12 ends with a bit 0. This operation is repeated to find the next header A (2) and the next body B (2). And so on. Note that the blocks σ produced by line 22 of Algorithm 3 do not contribute to neither a header nor a body as they contain only one bit, which is removed in the above procedure. We now have all the ingredients to prove the following theorem. Theorem 1. The authenticated encryption mode S W [ f, pad, r, ρ] defined in Algorithm 3 satisfies Adv priv S W [ f,pad,r,ρ] (A) < N(N + 1) q2 k + 2 c+1 and Adv auth S W [ f,pad,r,ρ] (A) < q2 k + 2 t + N(N + 1) 2 c+1, against any single adversary A if K $ Z2 k, tags of l t bits are used, f is a randomly chosen permutation, q is the number of queries and N is the number of times f is called. Proof. The scheme S W [ f, pad, r, ρ] uses a [ f, pad, r] object. Combining Lemmas 3, 4 and 7, we see that S W [ f, pad, r, ρ] works like RO [ρ] with a random oracle replaced by the sponge function [ f, pad, r] and an injective input function from (Z2 )+ to Z2. Compared to the expressions in Lemmas 5 and 6, the extra term in the advantages above accounts for the adversary being able to differentiate a random sponge from a random oracle. This follows from [35], formalized in [1, Theorem 2], and from the value of the RO-differentiating advantage of a random sponge [8]. Note that all the outputs of S W are equivalent to calls to a sponge function with the secret key blocks as a prefix. So the results of [10] can also be applied to S W as explained in Section Advantages and limitations The authenticated encryption mode S W has the following unique combination of advantages: While most other authenticated encryption modes are described in terms of a block cipher, S W only requires on a fixed-length permutation. It supports the alternation of strings that require authenticated encryption and strings that only require authentication. It can provide intermediate tags a er each W.wrap(A, B, l) request. It has a strong security bound against generic a acks with a very simple proof. It is single-pass and requires only a single call to the permutation f per ρ-bit block. It is flexible as the bitrate can be freely chosen as long as the capacity is larger than some lower bound. The encryption is not expanding. As compared to some block cipher based authenticated encryption modes, it has some limitations. First, the mode as such is serial and cannot be parallelized at algorithmic level. Some block cipher based modes do actually allow parallelization, for instance, the offset codebook (OCB) mode [44]. Yet, S W variants could be defined to support parallel streams in a fashion similar to tree hashing, but with some overhead. Second, if a system does not impose the nonce requirement on A, an a acker may send two requests (A, B) and (A, B ) with B = B. In this case, the first differing blocks 12
13 of B and B, say B i and B i, will be enciphered with the same key stream, making their bitwise XOR available to the a acker. Some block cipher based modes are misuse resistant, i.e., they are designed in such a way that in case the nonce requirement is not fulfilled, the only information an a acker can find out is whether B and B are equal or not [46]. Yet, many applications already provide a nonce, such as a packet number or a key ID, and can put it in A. 5.4 An application: key wrapping Key wrapping is the process of ensuring the secrecy and integrity of cryptographic keys in transport or storage, e.g., [37,18]. A payload key is wrapped with a key-encrypting key (KEK). We can use the S W mode with K equal to the KEK and let the data body be the payload key value. In a sound key management system every key has a unique identifier. It is sufficient to include the identifier of the payload key in the header A and two different payload keys will never be enciphered with the same key stream. When wrapping a private key, the corresponding public key or a digest computed from it can serve as identifier. 6 Other applications of the duplex construction Authenticated encryption is just one application of the duplex construction. In this section we illustrate it by providing two more examples: a pseudo-random bit sequence generator and a sponge-like construction that overwrites part of the state with the input block rather than to XOR it in. 6.1 A reseedable pseudo-random bit sequence generator In various cryptographic applications and protocols, random bits are used to generate keys or unpredictable challenges. While randomness can be extracted from a physical source, it is o en necessary to provide many more bits than the entropy of the physical source. A pseudo-random bit sequence generator (PRG) is initialized with a seed, generated in a secret or truly random way, and it then expands the seed into a sequence of bits. For cryptographic purposes, it is required that the generated bits cannot be predicted, even if subsets of the sequence are revealed. In this context, a PRG is similar to a stream cipher. A PRG is also similar to a cryptographic hash function when gathering entropy coming from different sources. Finally, some applications require a pseudo-random bit sequence generator to support forward security: The compromise of the current state does not enable the a acker to determine the previously generated pseudo-random bits [6,17]. Conveniently, a pseudo-random bit sequence generator can be reseedable, i.e., one can bring an additional source of entropy a er pseudo-random bits have been generated. Instead of throwing away the current state of the PRG, reseeding combines the current state of the generator with the new seed material. In [9] a reseedable PRG was defined based on the sponge construction that implements the required functionality. The ideas behind that PRG are very similar to the duplex construction. We however show that such a PRG can be defined on top of the duplex construction. A duplex object can readily be used as a reseedable PRG. Seed material can be fed via the σ inputs in D.duplexing() call and the responses can be used as pseudo-random bits. If pseudo-random bits are required and there is no seed available, one can simply send blank D.duplexing() calls. The only limitation of this is that the user must split his seed material in strings of at most ρ max bits and that at most r bits can be requested in a single call. 13
14 As a next step, we propose a reseedable pseudo-random bit sequence generator mode called S PRG. This mode is similar to the one proposed in [9] in that it minimizes the number of calls to f, although explicitly based on the duplex construction. Internally it makes use of a duplex object D and it has two buffers: an input buffer B in and an output buffer B out. During feed requests it accumulates seed material in B in and, if it has received at least ρ bits, it forwards them to D in a D.duplexing() call. Any surplus seed string is kept in the input buffer. Upon a fetch request, if the input buffer is not empty, it empties it by forwarding any remaining seed to D and returns the requested number of bits, performing more duplexing calls if necessary, each requesting ρ bits. The surplus of produced bits are kept in B out, which will be returned first upon the next fetch request. Note that at any moment, one of B in and B out is empty. As such, the operation of a S PRG object is based on a permutation and revealing the state allows the a acker to backtrack the generation back to the most recent unknown seed fed into it. Nevertheless, reseeding regularly with sufficient entropy already prevents the a acker from going backwards. Also, an embedded security device such as a smartcard in which such a PRG would be used is designed to protect the secrecy of keys and therefore reading out the state is expected to be difficult. Still, forward security can be explicitly enforced by means of a P.forget() request. The effect of this request is the rese ing to zero of the first ρ bits of the state, an application of the padding and a subsequent application of f. Under the condition that ρ c, guessing the state before this operation given the state a erwards requires guessing at least c bits and hence is infeasible for reasonable values of c. On a PC, which might be more vulnerable to a memory recovery a ack, this condition that ρ c can easily be satisfied by a suitable sponge function; e.g., this is the case for K [] with its default parameters. The S PRG mode is defined in Algorithm 4. Note that the buffers do not require separate storage but can be implemented merely as pointers to the state: The input buffer requires a pointer to the state indicating from where on new bits must be XORed into the state, while the output buffer pointer points in the state where the next output bit must be taken. The storage is thus limited to the b-bit state and two integers. It is clear that every bit returned by P.fetch() is part of the output of the sponge presented with a string that contains all seed material presented so far. The S PRG mode does not allow reconstructing the individual blocks σ i but does allow reconstructing their concatenation. 6.2 The mode O In [22] sponge-like constructions were proposed and cryptanalyzed. In some of these constructions, absorbing is done by overwriting part of the state by the message block rather than XORing it in, e.g., as in the hash function Grindahl [29]. These overwrite functions have the advantage over sponge functions that between calls to f, only c bits must be kept instead of b. This may not be useful when hashing in a continuous fashion, as b bits must be processed by f anyway. However, when hashing a partial message, then pu ing it aside to continue later on, storing only c bits may be useful on some platforms. The mode O differs from the sponge construction in that it overwrites part of the state with an input block instead of XORing it in. Such a mode can be analyzed by building it on top of the duplex construction. If the first ρ bits of the state are known to be Z, overwriting them with a message block P i is equivalent to XORing in Z P i. Note that this idea is also used in the forget call of the S PRG mode and is formally implemented in Algorithm 5. In practice, of course, the implementation can just overwrite the first ρ bits of the state by a message block. As a ma er of fact, Algorithm 5 can be rewri en to call f directly, similar to the sponge construction. We leave this as an exercise for the reader. 14
15 Algorithm 4 Pseudo-random bit sequence generator mode S PRG[ f, pad, r, ρ] Require: ρ ρ max (pad, r) Require: D = [ f, pad, r] Interface: P.initialize() D.initialize() B in = empty string B out = empty string Interface: P.feed(σ) with σ Z 2 M = B in σ Let M = M 0 M 1... M w with M i = ρ for i < w and 0 M w < ρ for i = 0 to w 1 do D.duplexing(M i, 0) end for B in = M w B out = empty string Interface: Z = P.fetch(l) with integer l 0 and Z Z l 2 while B out < l do B out = B out D.duplexing(B in, ρ) B in = empty string end while Z = B out l B out = last ( B out l) bits of B out return Z Interface: Z = P.forget() requiring ρ c Z = D.duplexing(B in, ρ) B in = empty string D.duplexing(Z, ρ) B out = empty string We define the mode O on top of the duplex construction. An O function internally uses a duplex object D. It pads the message M and splits it in ρ-bit blocks. Then it makes a sequence of D.duplexing() calls, each time with a message block XORed with the response of the previous D.duplexing() call and with a frame bit appended to it. This frame bit is equal to 1 for the last block and 0 for all other blocks. If the requested number of output bits l is larger than ρ, additional D.duplexing() calls are done where each time the response of the previous D.duplexing() call is fed back to D. Theorem 2. The construction O [ f, pad, r, ρ] is as secure as [ f, pad, r]. Proof. The construction O [ f, pad, r, ρ] is defined in terms of calls to [ f, pad, r]. From the sponge-duplexing lemma, the output of such a call is the output to [ f, pad, r] for a specific input. Hence, the theorem comes down to showing that the input M to O can be recovered from the inputs to the duplexing calls. The coding using the frame bits in Algorithm 5 allows, for any input sequence of D, finding the last block (P w Z) and the length of the original input M. To recover the message M from the input sequence, one can start with the first block. Since Z = 0 ρ in the first block, the first block in the D.duplexing() call allows recovering the first block of M. Then, this block allows determining the output Z that was XORed into the next block, and so on. We have thus proven that the security of O is equivalent to that of the sponge construction with the same parameter, but at a cost of 2 bits of bitrate (or equivalently, of capacity): one for the padding rule (assuming pad10 is used) and one for the frame bit. 15
16 Algorithm 5 The construction O [ f, pad, r, ρ] Require: ρ ρ max (pad, r) 1 Require: D = [ f, pad, r] Interface: Z = O (M, l) with M Z 2, integer l > 0 and Z Zl 2 P = M pad[ρ]( M ) Let P = P 0 P 1... P w with P i = ρ for i w D.initialize() Z = 0 ρ for i = 0 to w 1 do Z = D.duplexing((P i Z) 0, ρ) end for Z = D.duplexing((P w Z) 1, ρ) B out = Z while B out < l do Z = D.duplexing(Z 1, ρ) B out = B out Z end while return B out l 7 A flexible and compact padding rule Sponge functions and duplex objects feature the nice property of allowing a range of security-performance trade-offs, via capacity-rate pairs, using the same fixed permutation f. To be able to fully exploit this property in the scope of the duplex construction, and for performance reasons, the padding rule should be compact and should be suitable for a family of sponge functions with different rates. In this section, we introduce the multi-rate padding and prove that it is suitable for such a family. For a given capacity and width, the padding reduces the maximum bitrate of the duplex construction, as in Eq. (4). To minimize this effect, especially when the width of the permutation is relatively small, one should look for the most compact padding rule. The sponge-compliant padding scheme (see Section 3) with the smallest overhead is the wellknown simple reversible padding, which appends a single 1 and the smallest number of zeroes such that the length of the result is a multiple of the required block length. We denote it by pad10 [r](m). It satisfies ρ max (pad10, r) = r 1 and hence has only one bit of overhead. When considering the security of a set of sponge functions that make use of the same permutation f but with different bitrates, simple reversible padding is not sufficient. The indifferentiability proof of [8] actually only covers the indifferentiability of a single sponge function instance from a random oracle. As a solution, we propose the multirate padding, denoted pad10 1[r]( M ), which returns a bitstring 10 q 1 with q = ( M 2) mod r. This padding is sponge-compliant and has ρ max (pad10 1, r) = r 2. Hence, this padding scheme is compact as the duplex-level maximum rate differs from the spongelevel rate by only two bits. Furthermore, in Theorem 3 we will show it is sufficient for the indifferentiability of a set of sponge functions. The intuitive idea behind this is that, with the pad10 1 padding scheme, the last block absorbed has a bit with value 1 at position r 1, while any other function of the family with r < r this bit has value 0. Besides having a compact padding rule, it is also useful to allow the sponge function to have specific bitrate values. In many applications one prefers to have block lengths that are a multiple of 8 or even higher powers of two to avoid bit shi ing or misalignment issues. With modes using the duplex construction, one has to distinguish between the modelevel block size and the bitrate of the underlying sponge function. For instance in the authenticated encryption mode S W, the block size is at most ρ max (pad, r) 1. To have a block size with the desired value, it suffices to take a slightly higher value as 16
17 bitrate r; hence, the sponge-level bitrate may no longer be a multiple of 8 or of a higher power of two. Therefore it is meaningful to consider the security of a set of sponge functions with common f and different bitrates, including bitrates that are not multiples of 8 or of a higher power of two. For instance, the mode S W could be based on K [r = 1027, c = 573] so as to process application-level blocks of ρ max (pad10 1, 1027) 1 = 1024 bits [11]. Regarding the indifferentiability of a set of sponge functions, it is clear that the best one can achieve is bounded by the strength of the sponge construction with the lowest capacity (or, equivalently, the highest bitrate), as an adversary can always just try to differentiate the weakest construction from a random oracle. The next theorem states that we achieve this bound by using the multi-rate padding. Theorem 3. Given a random permutation (or transformation) f, differentiating the array of sponge functions [ f, pad10 1, r] with 0 < r r max from an array of independent random oracles (RO r ) has the same advantage as differentiating [ f, pad10, r max ] from a random oracle. Proof. We can implement the array of sponge functions [ f, pad10 1, r] using a single sponge function sponge max = [ f, pad10, r max ], a bitrate-dependent input preprocessing function I[r, r max ] and a bitrate-dependent output post-processing function O[r, r max ]. So we have: [ f, pad10 1, r] = O[r, r max ] [ f, pad10, r max ] I[r, r max ]. (5) The input pre-processing function M = I[r, r max ](M) consists of the following steps: 1. Construct Q by padding M with multi-rate padding: Q = M pad10 1[r]( M ). 2. Construct Q by spli ing Q in r-bit blocks, extending each block with 0 r max r and concatenating the blocks again. 3. Construct M by unpadding Q according to the padding rule pad10. Note that the third step removes the trailing r max r bits with value 0 and the bit with value 1 just before that. It follows that the length of M modulo r max is r 1, hence this pre-processing implements domain separation between the different r values for a given value of r max. Moreover, it is straightforward to extract M from I[r, r max ](M) and hence the pre-processing function is injective: (M 1, r 1 ) = (M 2, r 2 ) I[r 1, r max ](M 1 ) = I[r 2, r max ](M 2 ). The output post-processing function Z = O[r, r max ](Z ) consists of spli ing Z in r max - bit blocks Z i, truncating each block to its first r bits Z i = Z i r and concatenating the blocks again: Z = Z 0 Z 1... It is easy to verify that with these pre- and post-processing functions Eq. (5) holds. Any a ack that can differentiate the set of sponge functions [ f, pad10 1, r] from a set of random oracles with an advantage ϵ can be converted into an a ack on sponge max with the same advantage. Namely, the response Z (i) to a query M (i) to [ f, pad 1, r] can be obtained from sponge max by querying it with I[r, r max ](M (i) ) and applying O[r, r max ] to its response Z (i). Hence, differentiating the array [ f, pad10 1, r] from the array (RO r ) comes down to differentiating sponge max from RO, where sponge max has capacity c min = b r max. 17
18 8 Duplexing iterated functions in general The duplex construction can be seen as a way to use a sponge function in a cascaded way. The central idea is that a duplex object keeps a state equal to that of a sponge function that has absorbed the combination of all inputs to the duplex object so far. Clearly, the same principle can be applied to most other sequential hash function constructions that consist of the iterated application of a compression function or permutation f. In general, a duplex-like object corresponding to such a hash function would work as follows. Its state is the chaining value resulting from hashing all previous inputs and possibly a counter (e.g., if the hash function requires the message length for the padding or as input in the compression function). Upon presentation of an input σ, it performs two tasks. First, it generates an output: It pads σ with the padding rule of the hash function, applies the final compression function f or an output transformation g, and returns the result. Second, it updates its state by padding σ with reversible padding, applying f and updating the counter. The disadvantage of this method is that, in general, a single duplexing call to the object requires two calls to f, or in case of an output transformation g, one call to f and one to g. In contrast, for a sponge function, the generation of the output and the update of the state can be done in a single call to f. Three main obstacles may hinder the efficiency of duplexing. First, as already mentioned, the special processing done a er the last block prevents to update the state and produce output at the same time. For instance, some constructions have an output transformation, which must be applied before producing output, while the main compression function is applied to update the state. The same problem occurs in the HAIFA framework [12], which enforces domain separation between the final call to f and the previous ones. In some constructions, blank iterations are applied at the end, which must be performed every time output is requested. Second, the overhead due to the padding reduces the number of bits that can be input in a duplexing call. If the input block size is fixed to a power of two (or a small multiple of it), the place taken by the padding can break the alignment of input blocks. Flexibility on the input block size is thus an advantage in this respect, as it can restore their alignment. Third, the output length of the hash function may be smaller than the input block size. This can be another slowdown factor, as in the case of the S W mode, since as many output bits are needed as input bits. The last compression function, output transformation or blank iterations have then to be performed several times to produce output bits like in a mask generating function. Another possible solution is just to use shorter input blocks. The chop-md construction [16,15] is a good candidate for duplexing. Producing output and updating the state can be made in the same operation. However, for the duplexing to be as fast as hashing, the output length should be as large as the message block and the padding should be as compact as possible. 9 Conclusions We have defined a new construction, namely the duplex construction, and showed that its security is equivalent to that of a sponge function with the same parameters. This construction was then used to give an efficient (single-pass) authenticated encryption mode. We proposed a reseedable pseudo-random bit sequence generator as another application of the duplex construction and to use it to prove the security of a mode overwriting input blocks instead of XORing them in. We have showed that the duplex construction inherits 18
19 the flexibility of the sponge construction in terms of security-speed trade-offs. Finally, we have argued that duplexing with other hash function constructions is in most cases not as efficient as with the sponge construction. References 1. E. Andreeva, B. Mennink, and B. Preneel, Security reductions of the second round SHA-3 candidates, Cryptology eprint Archive, Report 2010/381, 2010, 2. J.-P. Aumasson, L. Henzen, W. Meier, and M. Naya-Plasencia, Quark: A lightweight hash, in Mangard and Standaert [33], pp M. Bellare and C. Namprempre, Authenticated encryption: Relations among notions and analysis of the generic composition paradigm, Asiacrypt (T. Okamoto, ed.), Lecture Notes in Computer Science, vol. 1976, Springer, 2000, pp M. Bellare and P. Rogaway, Random oracles are practical: A paradigm for designing efficient protocols, ACM Conference on Computer and Communications Security 1993 (ACM, ed.), 1993, pp M. Bellare, P. Rogaway, and D. Wagner, The EAX mode of operation, in Roy and Meier [47], pp M. Bellare and B. Yee, Forward-security in private-key cryptography, Cryptology eprint Archive, Report 2001/035, 2001, 7. G. Bertoni, J. Daemen, M. Peeters, and G. Van Assche, Sponge functions, Ecrypt Hash Workshop 2007, May 2007, also available as public comment to NIST from Public_Comments/2007_May.html. 8., On the indifferentiability of the sponge construction, Advances in Cryptology Eurocrypt 2008 (N. P. Smart, ed.), Lecture Notes in Computer Science, vol. 4965, Springer, 2008, pp , Sponge-based pseudo-random number generators, in Mangard and Standaert [33], pp , On the security of the keyed sponge construction, Symmetric Key Encryption Workshop (SKEW), February , The K reference, January 2011, 12. E. Biham and O. Dunkelman, A framework for iterative hash functions HAIFA, Second Cryptographic Hash Workshop, Santa Barbara, August A. Biryukov (ed.), Fast so ware encryption, 14th international workshop, FSE 2007, Luxembourg, Luxembourg, march 26-28, 2007, revised selected papers, Lecture Notes in Computer Science, vol. 4593, Springer, A. Bogdanov, M. Knezevic, G. Leander, D. Toz, K. Varici, and I. Verbauwhede, SPONGENT: A lightweight hash function, CHES (U. Parampalli and P. Hawkes, eds.), Lecture Notes in Computer Science, Springer, 2011, to appear. 15. D. Chang and M. Nandi, Improved indifferentiability security analysis of chopmd hash function, Fast So ware Encryption (K. Nyberg, ed.), Lecture Notes in Computer Science, vol. 5086, Springer, 2008, pp J. Coron, Y. Dodis, C. Malinaud, and P. Puniya, Merkle-Damgård revisited: How to construct a hash function, Advances in Cryptology Crypto 2005 (V. Shoup, ed.), LNCS, no. 3621, Springer-Verlag, 2005, pp A. Desai, A. Hevia, and Y. L. Yin, A practice-oriented treatment of pseudorandom number generators, Advances in Cryptology Eurocrypt 2002 (L. R. Knudsen, ed.), Lecture Notes in Computer Science, vol. 2332, Springer, 2002, pp M. Dworkin, Request for review of key wrap algorithms, Cryptology eprint Archive, Report 2004/340, 2004, 19. ECRYPT Network of excellence, The SHA-3 Zoo, 2011, SHA-3_Zoo. 20. N. Ferguson, D. Whiting, B. Schneier, J. Kelsey, S. Lucks, and T. Kohno, Helix: Fast encryption and authentication in a single cryptographic primitive, Fast So ware Encryption (T. Johansson, ed.), Lecture Notes in Computer Science, vol. 2887, Springer, 2003, pp V. D. Gligor and P. Donescu, Fast encryption and authentication: XCBC encryption and XECB authentication modes, Fast So ware Encryption 2001 (M. Matsui, ed.), Lecture Notes in Computer Science, vol. 2355, Springer, 2001, pp M. Gorski, S. Lucks, and T. Peyrin, Slide a acks on a class of hash functions, Asiacrypt ( J. Pieprzyk, ed.), Lecture Notes in Computer Science, vol. 5350, Springer, 2008, pp J. Guo, T. Peyrin, and A. Poschman, The PHOTON family of lightweight hash functions, Advances in Cryptology Crypto 2011 (P. Rogaway and R. Safavi-Naini, eds.), Lecture Notes in Computer Science, Springer, 2011, to appear. 24. T. Iwata, New blockcipher modes of operation with beyond the birthday bound security, Fast So ware Encryption 2006 (M. J. B. Robshaw, ed.), Lecture Notes in Computer Science, vol. 4047, Springer, 2006, pp , Authenticated encryption mode for beyond the birthday bound security, Africacrypt (S. Vaudenay, ed.), Lecture Notes in Computer Science, vol. 5023, Springer, 2008, pp
20 26. T. Iwata and K. Yasuda, BTM: A single-key, inverse-cipher-free mode for deterministic authenticated encryption, Selected Areas in Cryptography (M. J. Jacobson Jr., V. Rijmen, and R. Safavi-Naini, eds.), Lecture Notes in Computer Science, vol. 5867, Springer, 2009, pp , HBS: A single-key mode of operation for deterministic authenticated encryption, Fast So ware Encryption 2009 (O. Dunkelman, ed.), Lecture Notes in Computer Science, vol. 5665, Springer, 2009, pp C. S. Jutla, Encryption modes with almost free message integrity, Advances in Cryptology Eurocrypt 2001 (B. Pfitzmann, ed.), Lecture Notes in Computer Science, vol. 2045, Springer, 2001, pp L. Knudsen, C. Rechberger, and S. Thomsen, The Grindahl hash functions, in Biryukov [13], pp T. Kohno, J. Viega, and D. Whiting, CWC: A high-performance conventional authenticated encryption mode, in Roy and Meier [47], pp T. Krovetz and P. Rogaway, The so ware performance of authenticated-encryption modes, Fast So ware Encryption 2011, S. Lucks, Two-pass authenticated encryption faster than generic composition, Fast So ware Encryption (H. Gilbert and H. Handschuh, eds.), Lecture Notes in Computer Science, vol. 3557, Springer, 2005, pp S. Mangard and F.-X. Standaert (eds.), Cryptographic hardware and embedded systems, CHES 2010, 12th international workshop, Santa Barbara, CA, USA, August 17-20, 2010, Lecture Notes in Computer Science, vol. 6225, Springer, U. Maurer, Indistinguishability of random systems, Advances in Cryptology Eurocrypt 2002 (L. Knudsen, ed.), Lecture Notes in Computer Science, vol. 2332, Springer-Verlag, May 2002, pp U. Maurer, R. Renner, and C. Holenstein, Indifferentiability, impossibility results on reductions, and applications to the random oracle methodology, Theory of Cryptography - TCC 2004 (M. Naor, ed.), Lecture Notes in Computer Science, no. 2951, Springer-Verlag, 2004, pp F. Muller, Differential a acks against the Helix stream cipher, in Roy and Meier [47], pp NIST, AES key wrap specification, November , NIST special publication C, recommendation for block cipher modes of operation: The CCM mode for authentication and confidentiality, July , NIST special publication D, recommendation for block cipher modes of operation: Galois/counter mode (GCM) and GMAC, November S. Paul and B. Preneel, Solving systems of differential equations of addition, ACISP (C. Boyd and J. M. González Nieto, eds.), Lecture Notes in Computer Science, vol. 3574, Springer, 2005, pp T. Ristenpart, H. Shacham, and T. Shrimpton, Careful with composition: Limitations of the indifferentiability framework, Eurocrypt 2011 (K. G. Paterson, ed.), Lecture Notes in Computer Science, vol. 6632, Springer, 2011, pp P. Rogaway, Authenticated-encryption with associated-data, ACM Conference on Computer and Communications Security 2002 (CCS 02), ACM Press, 2002, pp , Efficient instantiations of tweakable blockciphers and refinements to modes OCB and PMAC, Asiacrypt (Pil Joong Lee, ed.), Lecture Notes in Computer Science, vol. 3329, Springer, 2004, pp P. Rogaway, M. Bellare, and J. Black, OCB: A block-cipher mode of operation for efficient authenticated encryption, ACM Trans. Inf. Syst. Secur. 6 (2003), no. 3, P. Rogaway, M. Bellare, J. Black, and T. Krovetz, OCB: A block-cipher mode of operation for efficient authenticated encryption, CCS 01: Proceedings of the 8th ACM conference on Computer and Communications Security (New York, NY, USA), ACM, 2001, pp P. Rogaway and T. Shrimpton, A provable-security treatment of the key-wrap problem, Eurocrypt (S. Vaudenay, ed.), Lecture Notes in Computer Science, vol. 4004, Springer, 2006, pp B. K. Roy and W. Meier (eds.), Fast so ware encryption, 11th international workshop, FSE 2004, Delhi, India, February 5-7, 2004, revised papers, Lecture Notes in Computer Science, vol. 3017, Springer, D. Whiting, B. Schneier, S. Lucks, and F. Muller, Fast encryption and authentication in a single cryptographic primitive, ECRYPT Stream Cipher Project Report 2005/027, 2005, phelixp2.html. 49. H. Wu and B. Preneel, Differential-linear a acks against the stream cipher Phelix, in Biryukov [13], pp M. Ågren, M. Hell, T. Johansson, and W. Meier, A new version of Grain-128 with authentication, Symmetric Key Encryption Workshop (SKEW), February.
Permutation Based Cryptography for IoT
Permutation Based Cryptography for IoT Guido Bertoni 1 Joint work with Joan Daemen 1, Michaël Peeters 2 and Gilles Van Assche 1 1 STMicroelectronics 2 NXP Semiconductors CIoT 2012, Antwerp, November,
Remotely Keyed Encryption Using Non-Encrypting Smart Cards
THE ADVANCED COMPUTING SYSTEMS ASSOCIATION The following paper was originally published in the USENIX Workshop on Smartcard Technology Chicago, Illinois, USA, May 10 11, 1999 Remotely Keyed Encryption
Lecture 9 - Message Authentication Codes
Lecture 9 - Message Authentication Codes Boaz Barak March 1, 2010 Reading: Boneh-Shoup chapter 6, Sections 9.1 9.3. Data integrity Until now we ve only been interested in protecting secrecy of data. However, Construction of CCA-secure encryption
CSCI 5440: Cryptography Lecture 5 The Chinese University of Hong Kong 10 October 2012 1 Construction of -secure encryption We now show how the MAC can be applied to obtain a -secure encryption scheme.
Message Authentication Code
Message Authentication Code Ali El Kaafarani Mathematical Institute Oxford University 1 of 44 Outline 1 CBC-MAC 2 Authenticated Encryption 3 Padding Oracle Attacks 4 Information Theoretic MACs 2 of 44,
S : a flexible coding for tree hashing
S : a flexible coding for tree hashing Guido Bertoni 1, Joan Daemen 1, Michaël Peeters 2, and Gilles Van Assche 1 1 STMicroelectronics 2 NXP Semiconductors Abstract. We propose a flexible, fairly general,
Authentication requirement Authentication function MAC Hash function Security of
UNIT 3 AUTHENTICATION Authentication requirement Authentication function MAC Hash function Security of hash function and MAC SHA HMAC CMAC Digital signature and authentication protocols DSS Slides Courtesy
Network Security Technology Network Management
COMPUTER NETWORKS Network Security Technology Network Management Source Encryption E(K,P) Decryption D(K,C) Destination The author of these slides is Dr. Mark Pullen of George Mason University. Permission
Specification of Cryptographic Technique PC-MAC-AES. NEC Corporation
Specification of Cryptographic Technique PC-MAC-AS NC Corporation Contents 1 Contents 1 Design Criteria 2 2 Specification 2 2.1 Notations............................................. 2 2.2 Basic Functions..........................................
CIS 6930 Emerging Topics in Network Security. Topic 2. Network Security Primitives
CIS 6930 Emerging Topics in Network Security Topic 2. Network Security Primitives 1 Outline Absolute basics Encryption/Decryption; Digital signatures; D-H key exchange; Hash functions; Application of hash
Submission to the CAESAR competition Affiliation: 1 Dept.
Message Authentication
Message Authentication message authentication is concerned with: protecting the integrity of a message validating identity of originator non-repudiation of origin (dispute resolution) will consider
MACs Message authentication and integrity. Table of contents
MACs Message authentication and integrity Foundations of Cryptography Computer Science Department Wellesley College Table of contents Introduction MACs Constructing Secure MACs Secure communication
FIPS 202 on the CSRC FIPS publications page:
The attached DRAFT FIPS 202 document (provided here for historical purposes) has been superseded by the following publication: Publication Number: FIPS 202 Title: SHA-3 Standard: Permutation-Based Hash
Cryptography Lecture 8. Digital signatures, hash functions
Cryptography Lecture 8 Digital signatures, hash functions A Message Authentication Code is what you get from symmetric cryptography A MAC is used to prevent Eve from creating a new message and inserting-
6.857 Computer and Network Security Fall Term, 1997 Lecture 4 : 16 September 1997 Lecturer: Ron Rivest Scribe: Michelle Goldberg 1 Conditionally Secure Cryptography Conditionally (or computationally)
HASH CODE BASED SECURITY IN CLOUD COMPUTING
ABSTRACT HASH CODE BASED SECURITY IN CLOUD COMPUTING Kaleem Ur Rehman M.Tech student (CSE), College of Engineering, TMU Moradabad (India) The Hash functions describe as a phenomenon of information security
Network Security. Chapter 6 Random Number Generation
Network Security Chapter 6 Random Number Generation 1 Tasks of Key Management (1)! Generation:! It is crucial to security, that keys are generated with a truly random or at least a pseudo-random generation.
CAESAR candidate PiCipher
CAESAR candidate PiCipher Danilo Gligoroski, ITEM, NTNU, Norway Hristina Mihajloska, FCSE, UKIM, Macedonia Simona Samardjiska, ITEM, NTNU, Norway and FCSE, UKIM, Macedonia Håkon Jacobsen, ITEM, NTNU, Norway
Network Security. Chapter 6 Random Number Generation. Prof. Dr.-Ing. Georg Carle
Network Security Chapter 6 Random Number Generation Prof. Dr.-Ing. Georg Carle Chair for Computer Networks & Internet Wilhelm-Schickard-Institute for Computer Science University of Tübingen
Cryptographic Hash Functions Message Authentication Digital Signatures
Cryptographic Hash Functions Message Authentication Digital Signatures Abstract We will discuss Cryptographic hash functions Message authentication codes HMAC and CBC-MAC Digital signatures 2 Encryption/Decryption
Indifferentiability Security of the Fast Wide Pipe Hash: Breaking the Birthday Barrier
Indifferentiability Security of the Fast Wide Pipe Hash: Breaking the Birthday Barrier Dustin Moody Souradyuti Paul Daniel Smith-Tone Abstract A hash function secure in the indifferentiability framework
1 Message Authentication
Theoretical Foundations of Cryptography Lecture Georgia Tech, Spring 200 Message Authentication Message Authentication Instructor: Chris Peikert Scribe: Daniel Dadush We start with some simple questions,
One-Way Encryption and Message Authentication
One-Way Encryption and Message Authentication Cryptographic Hash Functions Johannes Mittmann mittmann@in.tum.de Zentrum Mathematik Technische Universität München (TUM) 3 rd Joint Advanced Student School Symmetric Encryption
CS 361S Overview of Symmetric Encryption Vitaly Shmatikov Reading Assignment Read Kaufman 2.1-4 and 4.2 slide 2 Basic Problem ----- ----- -----? Given: both parties already know the same secret Goal: send
Introduction to SHA-3 and Keccak
Introduction to SHA-3 and Keccak Joan Daemen STMicroelectronics and Radboud University Crypto summer school 2015 Šibenik, Croatia, May 31 - June 5, 2015 1 / 45 Outline 1 The SHA-3 competition 2 The sponge,
A NEW HASH ALGORITHM: Khichidi-1
A NEW HASH ALGORITHM: Khichidi-1 Abstract This is a technical document describing a new hash algorithm called Khichidi-1 and has been written in response to a Hash competition (SHA-3) called by National
The Skein Hash Function Family
The Skein Hash Function Family Version 1.3 1 Oct 2010 Niels Ferguson Stefan Lucks Bruce Schneier Doug Whiting Mihir Bellare Tadayoshi Kohno Jon Callas Jesse Walker Microsoft Corp., niels@microsoft.com?
Hash Function of Finalist SHA-3: Analysis Study
International Journal of Advanced Computer Science and Information Technology (IJACSIT) Vol. 2, No. 2, April 2013, Page: 1-12, ISSN: 2296-1739 Helvetic Editions LTD, Switzerland Hash Function
Implementation and Comparison of Various Digital Signature Algorithms. -Nazia Sarang Boise State University
Implementation and Comparison of Various Digital Signature Algorithms -Nazia Sarang Boise State University What is a Digital Signature? A digital signature is used as a tool to authenticate the information
SECURITY EVALUATION OF EMAIL ENCRYPTION USING RANDOM NOISE GENERATED BY LCG
SECURITY EVALUATION OF EMAIL ENCRYPTION USING RANDOM NOISE GENERATED BY LCG Chung-Chih Li, Hema Sagar R. Kandati, Bo Sun Dept. of Computer Science, Lamar University, Beaumont, Texas, USA 409-880-8748,
CS 758: Cryptography / Network Security
CS 758: Cryptography / Network Security offered in the Fall Semester, 2003, by Doug Stinson my office: DC 3122 my email address: dstinson@uwaterloo.ca my web page:
Authentication, digital signatures, PRNG
Multimedia Security Authentication, digital signatures, PRNG Mauro Barni University of Siena Beyond confidentiality Up to now, we have been concerned with protecting message content (i.e. confidentiality)
Message Authentication Codes. Lecture Outline
Message Authentication Codes Murat Kantarcioglu Based on Prof. Ninghui Li s Slides Message Authentication Code Lecture Outline 1 Limitation of Using Hash Functions for Authentication Require an authentic
Recommendation for Applications Using Approved Hash Algorithms
NIST Special Publication 800-107 Revision 1 Recommendation for Applications Using Approved Hash Algorithms Quynh Dang Computer Security Division Information Technology Laboratory C O M P U T E R S E C...
Applying Symmetric Encryption
Applying Symmetric Encryption Technical Report Falko Strenzke cryptosource GmbH, Darmstadt fstrenzke@cryptosource.de February 13, 2015 This work, which is intended to close a gap left | http://docplayer.net/77429-Duplexing-the-sponge-single-pass-authenticated-encryption-and-other-applications.html | CC-MAIN-2017-09 | refinedweb | 12,875 | 60.35 |
The purpose of a HUD window is to create composited presentations, based on the use of the Desktop Window Manager (DWM), that is an important part of the OS, since Vista. It is based on the same concept that was used to create DreamScenes on Vista Pro.
It has been inspired by movies like Avatar, Iron man, and Oblivion, that are using plainty of transparent display, and like those HUD applications that can be found in modern jet fighter cockpit.
This article is closely related to the use of third party addons (BASS.dll, GDImage.dll, WinLIFT.dll), however I shall try to explain as much as i can the concept, because i am not aware of other tools able to do this using only the core API, but if there are some, i would gladly learn about them.
It is a mix of several languages: C++, PowerBASIC, OpenGL, and the whole application result in a cooperative work between 32 and 64-bit modules.
The desktop composition feature, introduced in Windows Vista,.
In Windows 8, Desktop Window Manager (DWM) is always ON and cannot be disabled by end users and apps. As in Windows 7, DWM is used to compose the desktop. In addition to experiences enabled in Windows 7, now DWM desktop composition enables desktop composition for all themes, support for Stereoscopic 3D, and management, separation, and protection of the experience with Windows Store apps.
In Windows Vista and Windows 7, desktop composition is enabled only with the AERO Glass Theme.
When using DWM, everything is rendered ultimately onto the DirectDraw surface, using the GPU rather than the CPU, this opens a wealth of capabilities for a graphic engine that is able to use it. To illustrate this article i am using the free trial version of my GDImage 7.00 and WinLIFT 5.00, because they are both designed to use the GPU rather than the CPU, and they are able to combine most of the graphic technologies altogether.
In order to work in composited mode, all drawing must be done in 32-bit, because the use of the alpha channel is a mandatory to handle individual variable opacity of child controls (we do not use a layered window, because we want to preserve each of the child control variable opacity).
In Windows 8:
DwmEnableComposition
In Vista and Seven, here are a few functions to enable/disable composition:
#include <Dwmapi.h>
#define long_proc typedef long (__stdcall *zProc)
// Check for the DWMAPI
HMODULE LoadDWM () {
static HMODULE hLib;
static long nChecked;
if (nChecked == 0) { nChecked = -1; hLib = LoadLibrary (L"DWMAPI"); }
return hLib;
}
// Enable DWM composition
long zDwmEnableComposition (IN long uCompositionAction) {
long nRet = 0;
HMODULE hLib = LoadDWM();
if (hLib) {
long_proc (long);
zProc hProc = (zProc) GetProcAddress(hLib, "DwmEnableComposition");
if (hProc) { nRet = hProc(uCompositionAction); }
}
return nRet;
}
// Check if DWM composition is enabled
long zDwmIsCompositionEnabled () {
long nRet = 0;
HMODULE hLib = LoadDWM();
if (hLib) {
long_proc (BOOL*);
zProc hProc = (zProc) GetProcAddress(hLib, "DwmIsCompositionEnabled");
if (hProc) {
BOOL bAero = FALSE;
if (hProc(&bAero) == 0) {
if (bAero) { nRet = -1; }
}
}
}
return nRet;
}
// This is the most IMPORTANT API to create a transparent window
// We fool DWM with a fake region, because we want to use the client, and the non-client area,
// just like with a transparent layered window.
// This one must also be used with Windows 8+
void zSetCrystalBehindMode (IN HWND hWnd, IN BOOL nBOOL) {
static FARPROC hProc;
HMODULE hLib = LoadDWM ();
if (hLib) {
if (zDwmIsCompositionEnabled()) {
long_proc (HWND, DWM_BLURBEHIND*);
zProc hProc = (zProc) GetProcAddress(hLib,
"DwmEnableBlurBehindWindow");
if (hProc) {
LRESULT nRet = S_OK;
HRGN hRgnBlur = 0;
DWM_BLURBEHIND bb = {0};
// Create and populate the BlurBehind structure.
// Set Blur Behind and Blur Region.
bb.fEnable = nBOOL;
bb.dwFlags = DWM_BB_ENABLE | DWM_BB_BLURREGION;
bb.fTransitionOnMaximized = 0;
// Fool DWM with a fake region
if (nBOOL) { hRgnBlur = CreateRectRgn(-1, -1, 0, 0); }
bb.hRgnBlur = hRgnBlur;
// Set Blur Behind mode.
nRet = hProc(hWnd, &bb);
}
}
}
}
In order to draw in composited mode on a DirectDraw surface, you could not use anymore GDI32, mainly because DirectDraw (and also OpenGL) do not store the 32-bit ARGB components in the same order, and also because RGB(0,0,0) doesn't produce a solid black color brush, but a transparent one.
The workaround is to use GDIPLUS rather than GDI32, and perform a Red and Green permutation when drawing directly onto the DD surface (like when creating OpenGL textures).
Another caveat is to draw all controls and objects from bottom to top with strict respect of the z-order, to render correctly the variable opacity.
This means you should not let Windows itself do the default painting process, but render
everything into a DIB memory then blit the resulting composited image onto the DD surface once all the
overlapping layers have been drawn in cached memory (this is the hard part that should be delegate to a SkinEngine).
In order to take full control of a popup window, I have written a tutorial that explains the basic steps to create a simple SkinEngine.
The tutorial is written in BASIC to ease the reading of the code, however because it uses only the low level SDK API, it is easy to translate from one language to another. The full tutorial can be found here:.
For those wanting to figure what could be done with a Skin Engine that is DWM compatible, without retyping/translating all the code, use the link at the end of this article to download the C++ demo project that is provided with a copy of the WinLIFT/GDImage C++ 64-bit trial version.
The demo is totaly atypical because it does use a mix of 32-bit and 64-bit inside of the same application altogether with multiple languages (C++ and PowerBASIC), the low level coding part is written in PowerBASIC 32-bit, while the main EXE is written in C++ 64-bit.
The purpose of keeping the low level coding part in 32-bit, is that the same code can be used with a final EXE written either in 32 or 64-bit.
In order to perform the intra process communication between the 64 and 32-bit coding sections, we use a runtime named ZWP.exe that is in charge to monitor and display the (BassBox) OpenGL plugins used to perform the visual background animations.
ZWP.exe itself, is written in PowerBASIC (source code in the ZIP file) to produce very small EXE (25 Kb) with no run-time requirement and no extra dependencies.
It is using a popup window with the WS_EX_TOOLWINDOW extension style, to avoid showing itself on the Windows task bar.
WS_EX_TOOLWINDOW
The ZWP {child popup} must always stay under the main HUD window and it must use the same size and location, than its parent window, this part of the code is handled inside of the ZWP_SizeMove procedure below:
void ZWP_SizeMove() {
RECT lpw;
int x, y, w, h;
HWND hWnd, hPopup;
GetWindowRect(gh_Main, &lpw);
x = lpw.left + skGetSystemMetrics(SK_DWM_LEFT);
y = lpw.top + skGetSystemMetrics(SK_DWM_TOP);
w = lpw.right - skGetSystemMetrics(SK_DWM_RIGHT) - x;
h = lpw.bottom - skGetSystemMetrics(SK_DWM_BOTTOM) - y;
// Use this to move the window in case of AERO shake
// because setting the new location with SetWindowPos
// may fail!
hWnd = ZWP_Handle();
if (hWnd) {
hPopup = skGetDWMregion(gh_Main); // This is the region drawn behind the child controls
if (hPopup == 0) { hPopup = gh_Main; }
SetWindowPos(hWnd, GetWindow(hPopup, GW_HWNDNEXT), 0, 0, 0, 0,
SWP_NOMOVE | SWP_NOSIZE | SWP_NOOWNERZORDER | SWP_NOACTIVATE | SWP_ASYNCWINDOWPOS);
MoveWindow(hWnd, x, y, w, h, 0);
}
}
skGetDWMregion is a special procedure of the SkinEngine that draws a black brush region behind the edit part of the Combo, because Windows still use the old edit control when creating a ComboBox using a GDI32 brush to paint the background, thus making it transparent in composited mode.
So far I didn't find an easy way to solve this behavior when redrawing the whole scene. Thus i choose to redraw the region only when the visual plugins are hidden.
skGetDWMregion
In order to communicate with ZWP.exe, we define our own private message
WM_ZWPNOTIFICATION, and we use SendMessageA altogether with the
WM_SETTEXT to exchange parameters, with this set of API (see details in the C++ project source code):
WM_ZWPNOTIFICATION
SendMessageA
WM_SETTEXT
ZWP.exe, itself is using a subset of the freeware BassBox audio player engine, that works closely with the BASS.dll from Ian Luck ().
Its purpose is to play the visual plugins and the audio altogether, in full
concurrent mode of the main window application, rather than like a child thread. I am doing this, because Windows does not handle a thread, and a distinct process, with the same level of priority.
Note: The complete PowerBASIC source code of ZWP.bas is provided altogether with the C++ project inside of the ZIP file.
They are all pieces of true artwork programming, Matrix is probably one of the best version you have ever seen, same for Attractor that never produces twice the same display. All the plugins are small individual 32-bit DLLs, their purpose is to render the animation in real time, based on the analyze of the audio signal. There are more visual plugins
I could share with those interested, and i could also explain how to create them if there is a need for it. You can already learn about the BassBox visual plugin concept here.
Using visual plugins and audio files altogether:
The demo is provided with a set of audio files stored in the "\Audio" subfolder, where you can add your own set of audio files, in either sound tracker music (xm, it, mod, s3m, mtm) or audio stream format (mp3, ogg, wav).
In order to play or stop audio, you just have to click on the "Use visual plugin" check box.
You can select a song from the first combo, or if you want you can use drag & drop from the Windows Explorer, to play a specific audio file.
To change the visual plugin, select a new one from the second combo.
In order to play the animations, i had to use GDImage version 7.00, because it is able to play a static animations (same as gif animation) in full 32-bit composited mode.
All the animations are draggable with the mouse inside of
the graphic container. Each of the graphic components could be anchored to a specific location, just like child controls in a window. The text "DWM composition" is using a private font, to ensure that it always look the same on all computers, without the need to install first the missing font. The main centered animation could be rotated at any angle from any of the round knob shown on the bottom of the window (click with the right mouse button on any of the knob control, to restore the initial angle). The vertical slider, on the right side of the window, can be used to resize the main animation on the fly. (Note: some of the GDImage features are disabled in the 64-bit trial version).
Is a SkinEngine designed specifically to cooperate with GDImage to render the whole interface in composited mode.
The fundamental of creating your own SkinEngine is explained in the serie of articles named "Take control of your window" here.
The SkinEngine is able to use multiple layered composited backgrounds, they are a few provided within the "\Background" subfolder.
To change them, click either with the left or the right mouse button on the Iron Man icon located on the top left corner.
To better see the change of the subtile layered background, it is better to use a dark wallpaper as main Windows background.
When this option is checked, the composited layered background is not applied behind the graphic container, to better see the visual plugin animation playing at the bottom of the z-order.
The project uses static animations, named "spinner", their initial purpose is to display a layered animation that works just like GIF animation, except that it is based on transparent .PNG or .SKI image (proprietary GDImage format) that could be used in full composited mode.
You can learn more about the use of a Spinner control, to inform the user that a lengthy process or critical task is running, here.
Here is an example of PNG file that could be used to perform a static animation:
Important: each of the frame must fit exactly within a square size, and use an horizontal
alignment from left to right, to ease computation.
Due to the multimedia nature of the demo, it is intended to run on I7 multi-core with a good graphic card (nVidia or AMD/ATI).
When used on modern hardware, and because of the use of the GPU, HUDplus.exe should use less than 2% of the CPU, and ZWP.exe 0%, except when using the Matrix plugin that is the most demanding and then it could rise up to 10 % of the CPU resource.
These values are from an ASUS N76V CORE i7 with a GEFORCE GT 650M and 2 GB of graphic ram.
If your hardware config matches the above specs, then you can use the built-in thread animation, instead of a timer, because this allow smooth dragging of the sprite while keeping the animation running.
long StartAnimation (IN long Delay) {
long nRet = LB_ERR;
gn_Delay = Delay;
DWORD dwThreadId = 0;
HANDLE hThread = CreateThread(NULL, // default security attributes
0, // use default stack size
(LPTHREAD_START_ROUTINE ) Animate, // thread function name
(LPVOID) Delay, // argument to thread function
0, // use default creation flags
&dwThreadId); // returns the thread identifier
if (hThread) { nRet = 0; Sleep(100); }
CloseHandle(hThread);
return nRet;
}
The project source code is using UNICODE, it is provided in pure SDK coding style like documented into Charles Petzold 5th edition (the SDK coder Bible).
To be cooperative with the other running applications, it doesn't not use a game loop, but a dedicated thread on multicore i7, that could be replaced with a simple timer on older machine.
For the purpose of inter-language compatibility, this project uses only procedural coding style and the core API.
This screen shot is just a limited subset preview of the DWM composition.
This kind of HUD window is the only way to mix easily 2D and 3D altogether, and it could be used to create a complex game interface.
If there is an interest for it, i could post another demo project using VIDEO instead of OpenGL plugins, and working exactly like the DreamScenes in Vista Pro, but less intruisive and specific to a single application, rather than using the whole desktop.
It does use the same API that the one used in the C# project here.
Why I am using it:
This is version 1.00, from January 2,. | http://www.codeproject.com/Articles/705243/HUD-window-bit-DWM-composition | CC-MAIN-2016-44 | refinedweb | 2,428 | 54.76 |
: ProcessorVersion.java,v 1.3 2004/02/16 20:54:58 minchau Exp $18 */19 20 package org.apache.xalan.xsltc;21 22 23 /**24 * Admin class that assigns a version number to the XSLTC software.25 * The version number is made up from three fields as in:26 * MAJOR.MINOR[.DELTA]. Fields are incremented based on the following:27 * DELTA field: changes for each bug fix, developer fixing the bug should28 * increment this field.29 * MINOR field: API changes or a milestone culminating from several30 * bug fixes. DELTA field goes to zero and MINOR is31 * incremented such as: {1.0,1.0.1,1.0.2,1.0.3,...1.0.18,1.1}32 * MAJOR field: milestone culminating in fundamental API changes or 33 * architectural changes. MINOR field goes to zero34 * and MAJOR is incremented such as: {...,1.1.14,1.2,2.0}35 * Stability of a release follows: X.0 > X.X > X.X.X 36 * @author G. Todd Miller 37 */38 public class ProcessorVersion {39 private static int MAJOR = 1;40 private static int MINOR = 0;41 private static int DELTA = 0;42 43 public static void main(String [] args) {44 System.out.println("XSLTC version " + MAJOR + "." + MINOR +45 ((DELTA > 0) ? ("."+DELTA) : ("")));46 }47 }48
Java API By Example, From Geeks To Geeks. | Our Blog | Conditions of Use | About Us_ | | http://kickjava.com/src/org/apache/xalan/xsltc/ProcessorVersion.java.htm | CC-MAIN-2017-04 | refinedweb | 225 | 68.16 |
Opened 15 months ago
Closed 12 months ago
Last modified 12 months ago
#11334 closed bug (fixed)
GHC panic when calling typeOf on a promoted data constructor
Description)
Change History (23)
comment:1 Changed 15 months ago by
comment:2 Changed 15 months ago by
comment:3 Changed 15 months ago by
I dug a little more into the issue, and it looks like the issue is specifically with
typeOf, not, say,
typeRep. Compare:
$ /opt/ghc/head/bin/ghci GHCi, version 8.1.20160108: :? for help λ> :set -XDataKinds λ> :m Data.Proxy Data.Functor.Compose Data.Typeable λ> typeRep (Proxy :: Proxy 'Compose) 'Compose λ> typeRep (Proxy :: Proxy 'Just) 'Just λ> typeRep (Proxy :: Proxy 'Proxy) 'Proxy
versus
λ> typeOf (Proxy :: Proxy 'Compose) ghc: panic! (the 'impossible' happened) (GHC version 8.1.20160108 for x86_64-unknown-linux): piResultTy * TYPE 'Lifted * Please report this as a GHC bug: λ> typeOf (Proxy :: Proxy 'Just) Proxy (TYPE 'Lifted -> Maybe (TYPE 'Lifted)) 'Just λ> typeOf (Proxy :: Proxy 'Proxy) Proxy (Proxy (TYPE 'Lifted) (TYPE 'Lifted)) 'Proxy
Those last two results definitely look funny. I'm guessing that the issue pertains to levity polymorphism. I'll try to see what I can find about how
typeOf works.
comment:4 Changed 15 months ago by
I still haven't figured out what's going on with
typeOf (Proxy :: Proxy 'Compose), but we can at least be comforted by the fact that executing it wasn't possible before GHC 8.0, since
Compose is poly-kinded. The "regression" here is that
* now shows up as
TYPE 'Lifted, which I don't think is desirable. I've opened Phab:D1757 to fix that.
comment:5 Changed 15 months ago by
comment:6 Changed 14 months ago by
In 65b810b/ghc:
comment:7 Changed 14 months ago by
comment:8 Changed 14 months ago by
This fails,
{-# LANGUAGE DataKinds #-} import Data.Typeable import Data.Functor.Compose main :: IO () main = print $ typeOf (undefined :: Proxy 'Compose)
with a compiler panic due to
TcInteract.mk_typeable_pred calling
typeKind on an ill-kinded type,
TYPE 'Lifted (TYPE 'Lifted *) -> Compose * * *. Note the application
TYPE 'Lifted (TYPE 'Lifted *), which oversaturates
TYPE. The problem here is apparently that the kind variables of
Compose are all instantiated with
* with
-XNoPolyKinds I haven't yet determined where this occurs.
comment:9 Changed 14 months ago by
I believe this is happening in
TcMType.quantifyTyVars.
comment:10 Changed 14 months ago by
Oh dear.
Something seems terribly terribly wrong.
newtype Compose (f :: k -> *) (g :: k1 -> k) (a :: k1) = Compose {getCompose :: f (g a)}
This gives
Compose :: forall k k1. (k -> *) -> (k1 -> k) -> k1 -> *. But your code above has three
*s passed to
Compose. Somehow the third one is wrong. (I think the first two are correct, as set in
quantifyTyVars. I'm unconvinced that
quantifyTyVars is to blame.) GHC is getting duped into thinking
* :: * -> *, which would, actually, make the
TYPE 'Lifted (TYPE 'Lifted *) bit well kinded.
I've no clue where this is going wrong, but it's going wrong rather spectacularly. My next step would be
-ddump-tc-trace -fprint-explicit-kinds and search for the first occurrence of
Compose * * *.
comment:11 Changed 14 months ago by
Disclaimer: The following commentary is dangerously ignorant. I've glanced at a few papers and read a bit of code but otherwise have vanishingly little knowledge about the type checker.
I'm looking at this slightly easier example (which still replicates the failure),
{-# LANGUAGE PolyKinds #-} module Other where data Other (f :: k -> *) (a :: k) = Other (f a)
{-# LANGUAGE DataKinds #-} module Main where import Data.Typeable import Other main :: IO () main = let a = typeOf (undefined :: Proxy 'Other) in return ()
As before enabling
PolyKinds in
Main results in the expected insoluble
Typeable error.
-ddump-tc-trace -fprint-explicit-kinds produces the following suspicious output,
decideKindGeneralisationPlan type: (Proxy k1_aJt[tau:5] ('Other k_aJB[tau:5] f_aJC[tau:5] a_aJD[tau:5] |> <f_aJC[tau:5] a_aJD[tau:5] -> Other k_aJB[tau:5] f_aJC[tau:5] a_aJD[tau:5]>_N) |> <*>_N) ftvs: [k1_aJt[tau:5], f_aJC[tau:5], a_aJD[tau:5], k_aJB[tau:5]] should gen? True writeMetaTyVar k_aJB[tau:5] :: * := * writeMetaTyVar f_aJC[tau:5] :: k_aJB[tau:5] -> * := * writeMetaTyVar a_aJD[tau:5] :: k_aJB[tau:5] := *
If I'm interpreting this correctly
f (which is of kind
k -> *) is being instantiated as
*.
I'm going to eat some food; more later.
comment:12 Changed 14 months ago by
Easier example is good (comment:11).
Hint: always debug a compiler built with
-DDEBUG. My build shows
decideKindGeneralisationPlan type: (Proxy k1_aCf[tau:5] ('Other k_aCn[tau:5] f_aCo[tau:5] a_aCp[tau:5] |> <f_aCo[tau:5] a_aCp[tau:5] -> Other k_aCn[tau:5] f_aCo[tau:5] a_aCp[tau:5]>_N) |> <*>_N) ftvs: [k1_aCf[tau:5], f_aCo[tau:5], a_aCp[tau:5], k_aCn[tau:5]] should gen? True writeMetaTyVar k_aCn[tau:5] := * writeMetaTyVar f_aCo[tau:5] := * WARNING: file compiler\typecheck\TcMType.hs, line 553 Ill-kinded update to meta tyvar f_aCo[tau:5] :: k_aCn[tau:5] -> * * -> * := * :: * * writeMetaTyVar a_aCp[tau:5] := * quantifyTyVars globals: [] nondep: [] dep: [k_aCn[tau:5], f_aCo[tau:5], a_aCp[tau:5]] dep2: [] quant_vars: []
Note the
WARNING.
The bug is this code in
quantifyTyVars:
-- In the non-PolyKinds case, default the kind variables -- to *, and zonk the tyvars as usual. Notice that this -- may make quantifyTyVars return a shorter list -- than it was passed, but that's ok ; poly_kinds <- xoptM LangExt.PolyKinds ; dep_vars2 <- if poly_kinds then return dep_kvs else do { let (meta_kvs, skolem_kvs) = partition is_meta dep_kvs is_meta kv = isTcTyVar kv && isMetaTyVar kv ; mapM_ defaultKindVar meta_kvs ; return skolem_kvs } -- should be empty
But in this case the variables we are quantifying over (I hardly know whether to call them type or kind variables now) are:
k_aCn :: * f_aCo :: k_aCn -> * a_aCp :: k_aCn
These appear in the user-written type signature which elaborates to:
Proxy ... (Other k_aCn f_aCo a_aCp)
It's clearly NOT RIGHT in the non-poly-kind case to default the "kind" variables to
*. I guess we have to use
Any in the same way that we do for types.
Can't do more tonight. Richard how are you set?
comment:13 Changed 14 months ago by
Before I forget there is a bug in
decideKindGeneralisationPlan, which is taking the free vars of an unzonked type. But I don't understand that function really.
comment:14 Changed 14 months ago by
Yech.
First off, let's rename the data constructor and type constructor differently. I got quite confused about that!
data Ty (f :: k -> *) (a :: k) = Con (f a)
This produces
Ty :: forall k. (k -> *) -> k -> * Con :: forall k (f :: k -> *) (a :: k). (f a) -> Ty k f a
The question is: how should we interpret
Proxy 'Con with
-XNoPolyKinds?
The old rule for
-XNoPolyKinds was "set all kind variables to
*", which is still what
quantifyTyVars is doing. This used to make sense, because all kind variables used to have sort
BOX, never something like
BOX -> BOX. But those halcyon days are now gone. without having more direction. For example, the user could say
Proxy ('Con :: Maybe Bool -> Ty * Maybe Bool) to get the right instantiation.
I actually prefer the "issue an error" option. The user could use a kind signature, but more likely should just enable
-XPolyKinds. Using a promoted data constructor taken from a polykinded datatype without
-XPolyKinds is asking for trouble.
I'm happy to put this change in when I get to rounding up these tickets.
comment:15 Changed 14 months ago by
I'm very confused. Why would we error in the case of calling
print (typeOf (Proxy :: Proxy 'Comp)) whenever
-XNoPolyKinds is enabled, whereas something like
print (typeOf (Proxy :: Proxy 'Proxy)) is OK? The latter currently yields
"Proxy (Proxy * *) 'Proxy" when
-XNoPolyKinds is on (or is this a bug?).
comment:16 Changed 14 months ago by
Proxy :: Proxy 'Proxy should also be an error in my "issue an error" option. It's a use of a promoted data constructor of a datatype whose type parameters are not all
*.
To be honest,
-XNoPolyKinds always confuses me now. :)
comment:17 Changed 14 months ago by
Indeed I have also wondered how
NoPolyKinds is supposed to behave.
In fact, this is something that could probably use more explanation in the users guide. We discuss the behavior of
-XPolyKinds in great depth but hardly mention how poly-kinded types should behave in modules with
-XNoPolyKinds.
comment:18 Changed 14 months ago bywithout having more direction
I think I favour:
- Default kind vars of kind
*to
*
- Don't default others; instead error.
But note that if we have
{k1:k2, k2:*} then defaulting
k2 to
* might mean we could then default
k1.
We do need to do deafulting somehow because even without
PolyKinds consider
data T f a = MkT (f a)
We get
{f :: k -> *, a :: k} and we can't reject the program.
Anyway, Richard, thanks for saying you'll get to it.
Simon
comment:19 Changed 13 months ago by
There is a patch for this in goldfire's branch which he'll hopefully submit soon.
comment:20 Changed 12 months ago by
In 84c773e/ghc:
comment:21 Changed 12 months ago by
comment:22 Changed 12 months ago by
comment:23 Changed 12 months ago by
In f8ab575/ghc:
I'm going to set the milestone to 8.0.1, since this affects the upcoming GHC 8.0 and I'd hate to see so much
Typeable-related code break. Please change the milestone if you disagree. | https://ghc.haskell.org/trac/ghc/ticket/11334 | CC-MAIN-2017-13 | refinedweb | 1,566 | 63.19 |
03 February 2012 10:05 [Source: ICIS news]
SINGAPORE (ICIS)—China is targeting its fertilizer output to reach 69.1m tonnes in 2015, up by 2.9m or 4.4% from 2010, according to a report by the Ministry of Industry and Information Technology (MIIT) on Friday.
The country is targeting higher output over the next few years to secure its agricultural development since 90% of the country’s fertilizer is used in this sector, the ministry said.
?xml:namespace>
The country’s urea capacity is expected to reach 36m tonnes/year in 2015, while its nitrogen fertilizer capacity will reach 51.1m tonnes/year | http://www.icis.com/Articles/2012/02/03/9529102/chinas-fertilizer-output-to-rise-by-4.4-in-2015.html | CC-MAIN-2014-49 | refinedweb | 105 | 55.54 |
what - Why is reading lines from stdin much slower in C++ than Python?
stack overflow python (7)
I wanted to compare reading lines of string input from stdin using Python and C++ and was shocked to see my C++ code run an order of magnitude slower than the equivalent Python code. Since my C++ is rusty and I'm not yet an expert Pythonista, please tell me if I'm doing something wrong or if I'm misunderstanding something.
(TLDR answer: include the statement:
cin.sync_with_stdio(false) or just use
fgets instead.
TLDR results: scroll all the way down to the bottom of my question and look at the table.)
C++ code:
#include <iostream> #include <time.h> using namespace std; int main() { string input_line; long line_count = 0; time_t start = time(NULL); int sec; int lps; while (cin) { getline(cin, input_line); if (!cin.eof()) line_count++; }; sec = (int) time(NULL) - start; cerr << "Read " << line_count << " lines in " << sec << " seconds."; if (sec > 0) { lps = line_count / sec; cerr << " LPS: " << lps << endl; } else cerr << endl; return 0; } // Compiled with: // g++ -O3 -o readline_test_cpp foo.cpp
Python Equivalent:
#!/usr/bin/env python import time import sys count = 0 start = time.time() for line in sys.stdin: count += 1 delta_sec = int(time.time() - start_time) if delta_sec >= 0: lines_per_sec = int(round(count/delta_sec)) print("Read {0} lines in {1} seconds. LPS: {2}".format(count, delta_sec, lines_per_sec))
Here are my results:
$ cat test_lines | ./readline_test_cpp Read 5570000 lines in 9 seconds. LPS: 618889 $cat test_lines | ./readline_test.py Read 5570000 lines in 1 seconds. LPS: 5570000
I should note that I tried this both under Mac OS X v10.6.8 (Snow Leopard) and Linux 2.6.32 (Red Hat Linux 6.2). The former is a MacBook Pro, and the latter is a very beefy server, not that this is too pertinent.
$ for i in {1..5}; do echo "Test run $i at `date`"; echo -n "CPP:"; cat test_lines | ./readline_test_cpp ; echo -n "Python:"; cat test_lines | ./readline_test.py ; done Test run 1 at Mon Feb 20 21:29:28 EST 2012 CPP: Read 5570001 lines in 9 seconds. LPS: 618889 Python:Read 5570000 lines in 1 seconds. LPS: 5570000 Test run 2 at Mon Feb 20 21:29:39 EST 2012 CPP: Read 5570001 lines in 9 seconds. LPS: 618889 Python:Read 5570000 lines in 1 seconds. LPS: 5570000 Test run 3 at Mon Feb 20 21:29:50 EST 2012 CPP: Read 5570001 lines in 9 seconds. LPS: 618889 Python:Read 5570000 lines in 1 seconds. LPS: 5570000 Test run 4 at Mon Feb 20 21:30:01 EST 2012 CPP: Read 5570001 lines in 9 seconds. LPS: 618889 Python:Read 5570000 lines in 1 seconds. LPS: 5570000 Test run 5 at Mon Feb 20 21:30:11 EST 2012 CPP: Read 5570001 lines in 10 seconds. LPS: 557000 Python:Read 5570000 lines in 1 seconds. LPS: 5570000
Tiny benchmark addendum and recap
For completeness, I thought I'd update the read speed for the same file on the same box with the original (synced) C++ code. Again, this is for a 100M line file on a fast disk. Here's the comparison, with several solutions/approaches:
Implementation Lines per second python (default) 3,571,428 cin (default/naive) 819,672 cin (no sync) 12,500,000 fgets 14,285,714 wc (not fair comparison) 54,644,808
getline, stream operators,
scanf, can be convenient if you don't care about file loading time or if you are loading small text files. But, if the performance is something you care about, you should really just buffer the entire file into memory (assuming it will fit).
Here's an example:
//open file in binary mode std::fstream file( filename, std::ios::in|::std::ios::binary ); if( !file ) return NULL; //read the size... file.seekg(0, std::ios::end); size_t length = (size_t)file.tellg(); file.seekg(0, std::ios::beg); //read into memory buffer, then close it. char *filebuf = new char[length+1]; file.read(filebuf, length); filebuf[length] = '\0'; //make it null-terminated file.close();
If you want, you can wrap a stream around that buffer for more convenient access like this:
std::istrstream header(&filebuf[0], length);
Also, if you are in control of the file, consider using a flat binary data format instead of text. It's more reliable to read and write because you don't have to deal with all the ambiguities of whitespace. It's also smaller and much faster to parse.
A first element of an answer:
<iostream> is slow. Damn slow. I get a huge performance boost with
scanf as in the below, but it is still two times slower than Python.
#include <iostream> #include <time.h> #include <cstdio> using namespace std; int main() { char buffer[10000]; long line_count = 0; time_t start = time(NULL); int sec; int lps; int read = 1; while(read > 0) { read = scanf("%s", buffer); line_count++; }; sec = (int) time(NULL) - start; line_count--; cerr << "Saw " << line_count << " lines in " << sec << " seconds." ; if (sec > 0) { lps = line_count / sec; cerr << " Crunch speed: " << lps << endl; } else cerr << endl; return 0; }
By the way, the reason the line count for the C++ version is one greater than the count for the Python version is that the eof flag only gets set when an attempt is made to read beyond eof. So the correct loop would be:
while (cin) { getline(cin, input_line); if (!cin.eof()) line_count++; };
I reproduced the original result on my computer using g++ on a Mac.
Adding the following statements to the C++ version just before the
while loop brings it inline with the Python version:
std::ios_base::sync_with_stdio(false); char buffer[1048576]; std::cin.rdbuf()->pubsetbuf(buffer, sizeof(buffer));
sync_with_stdio improved speed to 2 seconds, and setting a larger buffer brought it down to 1 second.
In your second example (with scanf()) reason why this is still slower might be because scanf("%s") parses string and looks for any space char (space, tab, newline).
Also, yes, CPython does some caching to avoid harddisk reads.
Just out of curiosity I've taken a look at what happens under the hood, and I've used dtruss/strace on each test.
C++
./a.out < in Saw 6512403 lines in 8 seconds. Crunch speed: 814050
syscalls
sudo dtruss -c ./a.out < in
CALL COUNT __mac_syscall 1 <snip> open 6 pread 8 mprotect 17 mmap 22 stat64 30 read_nocancel 25958
Python
./a.py < in Read 6512402 lines in 1 seconds. LPS: 6512402
syscalls
sudo dtruss -c ./a.py < in
CALL COUNT __mac_syscall 1 <snip> open 5 pread 8 mprotect 17 mmap 21 stat64 29
Well, I see that in your second solution you switched from
cin to
scanf, which was the first suggestion I was going to make you (cin is sloooooooooooow). Now, if you switch from
scanf to
fgets, you would see another boost in performance:
fgets is the fastest C++ function for string input.
BTW, didn't know about that sync thing, nice. But you should still try
fgets. | https://code.i-harness.com/en/q/8efe66 | CC-MAIN-2021-39 | refinedweb | 1,161 | 81.73 |
Serial communications is needed in several types of applications, but the Win32 API isn't a very easy to use API to implement it. Things get even more complicated when you want to use serial communication in an MFC based program. The classes provided in the library try to make life a little easier. Its documentation is extensive, because I want to give you a good background. Serial communication is hard and good knowledge of its implementation saves you a lot of work, both now and in the future...
First I'll briefly discuss why serial communications is hard. After reading that chapter you'll probably be convinced as well that you need a class, which deals with serial communication. The classes provided in the library are not the only classes, which handle the serial communication. Many other programmers wrote their own classes, but I found many of them too inefficient or they weren't robust, scalable or suitable for non-MFC programs. I tried to make these classes as efficient, reliable and robust as possible, without sacrificing ease of use too much.
The library has been developed as a public domain library some time ago, but it has been used in several commercial applications. I think most bugs have been solved, but unfortunately I cannot guarantee that there are no bugs left. If you find one (or correct a bug), please inform me so I can update the library.
Serial communication in Win32 uses the standard
ReadFile/
WriteFile
functions to receive and transmit data, so why should serial
communication be any harder then just plain file I/O? There are
several reasons, which I'll try to explain. Some problems are solved
in this library, but some others cannot be solved by a library.
Serial communication uses different formats to transmit data on the wire. If both endpoints doesn't use the same setting you get garbled data. Unfortunately, no class can help you with these problems. The only way to cope with this is that you understand what these settings are all about. Baudrate, parity, databits and stopbits are often quite easy to find out, because when they match with the other endpoint, you won't have any problems (if your computer is fast enough to handle the amount of data at higher baudrates).
Handshaking is much more difficult, because it's more difficult to detect problems in this area. Handshaking is being used to control the amount of data that can be transmitted. If the sending machine can send data more quickly then the receiving machine can process we get more and more data in the receiver's buffer, which will overflow at a certain time. It would be nice when the receiving machine could tell the sending machine to stop sending data for a while, so it won't overflow the receiver's buffers. This process of controlling the transmission of data is called handshaking and there are basically three forms of handshaking:
Problems with handshaking are pretty hard to find, because it will often only fail in cases where buffers overflow. These situations are hard to reproduce so make sure that you did setup handshaking correctly and that the used cable is working correct (if you're using hardware handshaking) before you continue.
The Win32 API provides more handshaking options, which aren't directly supported by this library. These types of handshaking are rarely used, so it would probably only complicate the classes. If you do need these handshaking options, then you can use the Win32 API to do that and still use the classes provided by the library.
File I/O is relatively fast so if the call blocks for a while, this will probably only be a few milliseconds, which is acceptable for most programs. Serial I/O is much slower, which causes unacceptable delays in your program. Another problem is that you don't know when the data arrives and often you don't even know how much data will arrive.
Win32 provides asynchronous function calls (also known as overlapped operations) to circumvent these problems. Asynchronous programming is often an excellent way to increase performance, but it certainly increases complexity as well. This complexity is the reason that a lot of programs have bugs in their serial communication routines. This library solves some asynchronous I/O problems by allowing the programmer to use overlapped and non-overlapped operations mixed throughout the code, which is often quite convenient.
Things get even more complex in GUI applications, which uses the event
driven model that they're used to. This programming model is a
heritage of the old 16-bit days and it isn't even that bad. The basic
rule is simple... All events are send using a windows message, so you
need at least one window to receive the events. Most GUI applications
are single-threaded (which is often the best solution to avoid a lot
of complexity) and they use the following piece of code in the
WinMain function to process all messages:
// Start the message-pump until a WM_QUIT is received MSG msg; while (::GetMessage(&msg,0,0,0)) { ::TranslateMessage(&msg); ::DispatchMessage(&msg); }
Because the
GetMessage function blocks until there is a
message in the message queue, there's no way to wake up when a serial
event occurs. Of course you can set a timer and check the ports
there, but this kind of polling is bad design and certainly doesn't
scale well. Unfortunately the Win32 serial communication API doesn't
fit in this event driven model. It would be easier for GUI
applications that the Win32 API posted a message to a window when a
communication event occurred (this is exactly what the 16-bit
implementation looked like).
If you implement your own message-pump, you can use the
MsgWaitForMultipleObjects to wait for a windows message
or a windows object to become signaled. The following piece of code
demonstrates how to do this (it assumes that the event handle that is
being used for asynchronous events is stored in the variable
hevtCommEvent):
bool fQuit = false; while (!fQuit) { // Wait for a communication event or windows message switch (::MsgWaitForMultipleObjects(1,&hevtCommEvent,FALSE,INFINITE,QS_ALLEVENTS)) { case WAIT_OBJECT_0: { // There is a serial communication event, handle it... HandleSerialEvent(); } break; case WAIT_OBJECT_0+1: { // There is a windows message, handle it... MSG msg; while (::PeekMessage(&msg,0,0,0,PM_REMOVE)) { // Abort on a WM_QUIT message if (msg.message == WM_QUIT) { fQuit = true; break; } // Translate and dispatch the message ::TranslateMessage(&msg); ::DispatchMessage(&msg); } } break; default: { // Error handling... } break; } }
This code is much more complex then the simple message pump displayed above. This isn't that bad, but there is another problem with this code, which is much more serious. The message pump is normally in one of the main modules of your program. You don't want to pollute that piece of code with serial communication from a completely different module. The handle is probably not even valid at all times, which can cause problems of its own. This solution is therefore not recommended. MFC and OWL programmers cannot implement this at all, because these frameworks already their own message pumps. You might be able to override that message pump, but it probably requires a lot of tricky code and undocumented tricks.
Using serial communications in a single-threaded event-driven program
is difficult as I've just explained, but you probably found that out
yourself. How can we solve this problem for these types of
applications? The answer is in the
CSerialWnd class,
which posts a message to a window (both the message and window can be
specified by the programmer) whenever a serial event occurs. This
makes using a serial port in GUI based applications much easier.
There is also a very thin MFC wrapper class, which is called
CSerialMFC but it's that thin, that it's hardly worth
mentioning.
This library cannot perform magic, so how can it send messages without blocking the message pump? The answer is pretty simple. It uses a separate thread, which waits on communication events. If such an event occurs, it will notify the appropriate window. This is a very common approach, which is used by a lot of other (serial) libraries. It's not the best solution (in terms of performance), but it is suitable for 99% of the GUI based communication applications. The communication thread is entirely hidden for the programmer and doesn't need to affect your architecture in any way.
The current implementation contains four different classes, which all have their own purpose. The following three classes are available.
CSerialis the base serial class, which provides a wrapper around the Win32 API. It is a lot easier to use, because it combines all relevant calls in one single class. It allows the programmer to mix overlapped and non-overlapped calls, provides reasonable default settings, better readability, etc, etc.
CSerialExadds an additional thread to the serial class, which can be used to handle the serial events. This releases the main GUI thread from the serial burden. The main disadvantage of this class is that it introduces threading to your architecture, which might be hard for some people.
CSerialWndfits in the Windows event driven model. Whenever a communication event occurs a message is posted to the owner window, which can process the event.
CSerialMFCis an MFC wrapper around
CSerialWnd, which make the serial classes fit better in MFC based programs.
If you're not using a message pump in the thread that performs the
serial communication, then you should use the
CSerial
or
CSerialEx classes. You can use blocking calls (the
easiest solution) or one of the synchronization functions (i.e.
WaitForMultipleObjects) to wait for communication events.
This approach is also used in most Unix programs, which has a similar
function as
WaitForMultipleObjects called 'select'. This
approach is often the best solution in non-GUI applications, such as
NT services.
The
CSerialEx adds another thread to the serial object.
This frees the main thread from blocking, when waiting for serial
events. These events are received in the context of this worker thread,
so the programmer needs to know the impact of multi-threading. If all
processing can be done in this thread, then this is a pretty efficient
solution. You need some kind of thread synchronization, when you need
to communicate with the main GUI thread (i.e. for progress indication).
If you need to communicate a lot with the main GUI thread, then it is
probably better to use the
CSerialWnd class. However, if
you don't communicate a lot with the main thread, then this class can
be a good alternative.
GUI applications, which want to use the event-driven programming model
for serial communications should use
CSerialWnd. It is a
little less efficient, but the performance degradation is minimal
if you read the port efficiently. Because it fits perfectly in the
event-driven paradigm the slight performance degradation is a minimal
sacrifice. Note that you can use
CSerial in GUI based
applications (even MFC/WTL based), but then you might block the
message pump. This is, of course, bad practice in in a commercial
application (blocking the message pump hangs the application from the
user's point of view for a certain time). As long as you know what the
impact is of blocking the message pump, you can decide for yourself if
it is acceptable in your case (could be fine for testing).
MFC application should use the
CSerialMFC wrapper if
they want to pass
CWnd pointers instead of handles. Because this
wrapper is very thin you can also choose to use
CSerialWnd directly.
Using the serial classes can be divided into several parts. First you need to open the serial port, then you set the appropriate baudrate, databits, handshaking, etc... This is pretty straightforward. The tricky part is actually transmitting and receiving the data, which will probably cause the most time to implement. At last you need to close the serial port and as a bonus if you don't then the library will do it for you.
Let's start with a classic example from K&R and be polite and say hello. The implementation is very straightforward and looks like this (there is no error checking here for simplicity, it is there in the actual project):
#define STRICT #include <tchar.h> #include <windows.h> #include "Serial.h" int WINAPI _tWinMain ( HINSTANCE /*hInst*/, HINSTANCE /*hInstPrev*/, LPTSTR /*lptszCmdLine*/, int /*nCmdShow*/ ) { CSerial serial; // Attempt to open the serial port (COM1) serial.Open(_T("COM1")); // Setup the serial port (9600,N81) using hardware handshaking serial.Setup(CSerial::EBaud9600,CSerial::EData8,CSerial::EParNone,CSerial::EStop1); serial.SetupHandshaking(CSerial::EHandshakeHardware); // The serial port is now ready and we can send/receive data. If // the following call blocks, then the other side doesn't support // hardware handshaking. serial.Write("Hello world"); // Close the port again serial.Close(); return 0; }
Of course you need to include the serial class' header-file. Make sure that the header-files of this library are in your compiler's include path. All classes depend on the Win32 API, so make sure that you have included them as well. I try to make all of my programs ANSI and Unicode compatible, so that's why the tchar stuff is in there. So far about the header-files.
The interesting part is inside the main routine. At the top we declare
the
serial variable, which represents exactly one COM
port. Before you can use it, you need to open the port. Of course
there should be some error handling in the code, but that's left as an
exercise for the reader. Besides specifying the COM port, you can
also specify the input and output buffer sizes. If you don't specify
anything, then the default OS buffer sizes are being used (older
versions of the library used 2KB as the default buffer size, but this
has been changed). If you need larger buffers, then specify them
yourself.
Setting up the serial port is also pretty straightforward. The
settings from the control panel (or Device Manager) are being used as
the port's default settings. Call
Setup if these settings
do not apply for your application. If you prefer to use integers
instead of the enumerated types then just cast the integer to the
required type. So the following two initializations are equivalent:
Setup(CSerial::EBaud9600,CSerial::EData8,CSerial::EParNone,CSerial::EStop1); Setup(CSerial::EBaudrate(9600), CSerial::EDataBits(8), CSerial::EParity(NOPARITY), CSerial::EStopBits(ONESTOPBIT));
In the latter case, the types are not validated. So make sure that you
specify the appropriate values. Once you know which type of
handshaking you need, then just call
SetupHandshaking
with one of the appropriate handshaking.
Writing data is also very easy. Just call the
Write
method and supply a string. The
Write routine will detect
how long the string is and send these bytes across the cable. If you
have written Unicode applications (like this one) then you might have
noticed that I didn't send a Unicode string. I think that it's pretty
useless to send Unicode strings, so you need to send them as binary or
convert them back to ANSI yourself. Because we are using hardware
handshaking and the operation is non-overlapped, the
Write method won't return until all bytes have been sent
to the receiver. If there is no other side, then you might block
forever at this point.
Finally, the port is closed and the program exits. This program is nice to display how easy it is to open and setup the serial communication, but it's not really useful. The more interesting programs will be discussed later.
Like in real life it's easier to tell something what to do then listening to another and take appropriate actions. The same holds for serial communication. As we saw in the Hello world example writing to the port is just as straightforward as writing to a file. Receiving data is a little more difficult. Reading the data is not that hard, but knowing that there is data and how much makes it more difficult. You'll have to wait until data arrives and when you're waiting you cannot do something else. That is exactly what causes problems in single-threaded applications. There are three common approaches to solve this.
The first solution is easy. Just block until some data arrives on the
serial port. Just call
WaitEvent without specifying the
overlapped structure. This function blocks until a communication event
occurs (or an optional time-out expires). Easy, but the thread is
blocked and only wakes up for communication events or a time-out.
The second solution is to use the synchronization objects of Win32.
Whenever something happens, the appropriate event handles are signaled
and you can take appropriate action to handle the event. This method
is available in most modern operating systems, but the details vary.
Unix systems use the
select call, where Win32
applications mostly use
WaitForMultipleObjects or one of
the related functions. The trick is to call the
WaitEvent
function asynchronously by supplying an overlapped structure, which
contains a handle which will be signaled when an event occurred. Using
WaitForMultipleObjects you can wait until one of the
handles become signaled. I think this is the most suitable for most
non-GUI applications. It's definitely the most efficient option
available. When you choose to use this option, you'll notice that the
serial classes are only a thin layer around the Win32 API.
The last solution is one which will be appreciated by most Windows GUI
programmers. Whenever something happens a message is posted to the
application's message queue indicating what happened. Using the
standard message dispatching this message will be processed
eventually. This solution fits perfect in the event-driven programming
environment and is therefore useful for most GUI (both non-MFC and
MFC) applications. Unfortunately, the Win32 API offers no support to
accomplish this, which is the primary reasons why the serial classes
were created. The old Win16 API uses the
SetCommEventMask
and
EnableCommNotification to do exactly this, but these
were dropped from the Win32 API.
Blocking is the easiest way to wait for data and will therefore be
discussed first. The
CSerial class exposes a method
called
WaitEvent, which will block until an event has
been received. You can (optionally) specify a time-out for this call
(if overlapped I/O is enabled), so it won't block forever if no data
arrives anymore. The
WaitEvent method can wait for
several events, which must be registered during setup. The following
events can occur on a COM port:
EEventBreakis sent whenever a break was detected on input.
EEventCTSmeans that the CTS (clear to sent) signal has changed.
EEventDSRmeans that the DSR (data set ready) signal has changed.
EEventErrorindicates that a line-status error has occurred.
EEventRingindicates that the ring indicator was set high. Only transitions from low to high will generate this event.
EEventRLSDmeans that the RLSD (receive line signal detect) signal has changed. Note that this signal is often called CD (carrier detect).
EEventRecvis probably one of the most important events, because it signals that data has been received on the COM-port.
EEventRcvEvindicates that a certain character (the event character) has been received. This character can be set using the
SetEventCharmethod.
EEventSendindicates that the entire output buffer has been sent to the other side.
When a serial port is opened, then the
EEventBreak,
EEventError and
EEventRecv are being
registered. If you would like to receive the other events then you
have to register them using the
SetMask method.
Now you can use the
WaitEvent method to wait for an
event. You can then call
GetEventType to obtain the
actual event. This function will reset the event, so make sure you
call it only once after each
WaitEvent call. Multiple
events can be received simultaneously (i.e. when the event character
is being received, then
(EEventRecv|EEventRcvEv) is
returned. Never use the
== operator to check for events,
but use the
& operator instead.
Reading can be done using the
Read method, but reading is
trickier then you might think at first. You get only an event that
there is some data, but not how much. It could be a single byte, but
it can also be several kilobytes. There is only one way to deal with
this. Just read as much as you can handle (efficiently) and process
it.
First make sure that the port is in
EReadTimeoutNonblocking mode by issuing the following
call:
// Use 'non-blocking' reads, because we don't know how many bytes // will be received. This is normally the most convenient mode // (and also the default mode for reading data). serial.SetupReadTimeouts(CSerial::EReadTimeoutNonblocking);
The
Read method will now read as much as possible, but
will never block. If you would like
Read to block, then
specify
EReadTimeoutBlocking.
Read always
returns the number of bytes read, so you can determine whether you have
read the entire buffer. Make sure you always read the entire buffer
after receiving the
EEventRecv event to avoid you lose
data. A typical
EEventRecv will look something like this:
// Read data, until there is nothing left DWORD dwBytesRead = 0; BYTE abBuffer[100]; do { // Read data from the COM-port serial.Read(abBuffer,sizeof(abBuffer),&dwBytesRead); if (dwBytesRead > 0) { // TODO: Process the data } } while (dwBytesRead == sizeof(abBuffer));
The Listener sample (included in the ZIP-file) demonstrates the technique as described above. The entire sample code isn't listed in this document, because it would take too much space.
In most cases, blocking for a single event (as described above) isn't appropriate. When the application blocks, then it is completely out of your control. Suppose you have created a service which listens on multiple COM-ports and also monitors a Win32 event (used to indicate that the service should stop). In such a case, you'll need multithreading, message queues or the Win32 function for synchronization. The synchronization objects are the most efficient method to implement this, so I'll try to explain them. Before you continue reading I assume you're a bit familiar with the use of the synchronization objects and overlapped operations. If you're not, then first read the section about Synchronization in the Win32 API.
The only call that blocks for a fairly long time is the
WaitEvent method. In the next paragraphs, I will show you
how to implement this call using the Win32 synchronization objects
(all other overlapped calls work identical). A complete implementation
can be found in the Overlapped project, which is quite similar to the
Listener project, but it now uses overlapped I/O.
First the the COM-port needs to be initialized. This works identical
as in the Listener sample. Then two events are created. The first
event will be used in the overlapped structure. Note that it should
be a manual reset event, which is initially not signaled. The second
one is an external event, which is used to stop the program. The first
event will be stored inside the
OVERLAPPED structure.
// Create a handle for the overlapped operations HANDLE hevtOverlapped = ::CreateEvent(0,TRUE,FALSE,0);; // Open the "STOP" handle HANDLE hevtStop = ::CreateEvent(0,TRUE,FALSE,_T("Overlapped_Stop_Event")); // Setup the overlapped structure OVERLAPPED ov = {0}; ov.hEvent = hevtOverlapped;
All events have been setup correctly and the overlapped structure has
been initialized. We can now call the
WaitEvent method
in overlapped mode.
// Wait for an event serial.WaitEvent(&ov);
The overlapped I/O operation is now in progress and whenever an event occurs, that would normally unblock this call, the event handle in the overlapped structure will become signalled. It is not allowed to perform an I/O operation on this port, before it has completed, so we will wait until the event arrives or the stop event has been set.
// Setup array of handles in which we are interested HANDLE ahWait[2]; ahWait[0] = hevtOverlapped; ahWait[1] = hevtStop; // Wait until something happens switch (::WaitForMultipleObjects(2,ahWait,FALSE,INFINITE)) { case WAIT_OBJECT_0: // Serial port event occurred ... case WAIT_OBJECT_0+1: // Stop event raised ... }
That's all you need to do, when you want to use the serial class in overlapped I/O mode. N
Most Windows developers are used to receive a Windows message,
whenever a certain event occurs. This fits perfectly in the Windows
event-driven model, but the Win32 API doesn't provide such a
mechanism for serial communication. This library includes a class
called
CSerialWnd, which will send a special message
whenever a serial event occurs. It is pretty simple, when you are
already familiar with the event-driven programming model of Windows.
Instead of using the
CSerial class, you must use the
CSerialWnd class (which is in fact derived from
CSerial).
CSerialWnd works just like
CSerial, but there are some tiny differences in opening
the port and waiting on its events. Note that the
CSerialWnd doesn't have a window itself and neither
should you derive from it, when you want to use it. Just define a
member variable and use that from within your window.
Because
CSerialWnd posts its messages to a window, it
requires additional information. Therefore the
Open
method accepts three additional parameters, which specify the
window handle, message and optional argument. The prototype
looks like:
LONG Open ( LPCTSTR lpszDevice, HWND hwndDest, UINT nComMsg = WM_NULL, LPARAM lParam = 0, DWORD dwInQueue = 0, DWORD dwOutQueue = 0 )
The
lpszDevice,
dwInQueue and
dwOutQueue are used as in
CSerial. The
hwndDest argument specifies the window, where the message
should be sent to. The library registers a default message during
startup, which can be used in most cases. Simply pass
WM_NULL to use this message. The value of this message
is stored in the
CSerialWnd::mg_nDefaultComMsg variable,
which is a static member variable of
CSerialWnd. If
you prefer one of your own messages, then you can use that instead.
The optional
lParam argument is sent as the second
parameter (
lParam) in each message that is being sent
by
CSerial. The serial library doesn't do anything with
this value, so be free to use it as you like.
Sending data and setting up the serial port is exactly the same as
with
CSerial, so I won't discuss that again anymore. The
biggest difference is the way you receive the events, but that is
exactly why you want to use this class anyway.
If everything is fine, then you have registered all interesting events
with the
SetMask method. Whenever one of these events
occur, the specified message will be sent to the window you have
registered before. The
wParam will contain the event and
error-code. The
lParam contains whatever you passed to
CSerialWnd::Open, when you have opened the port. A
typical handler for these messages looks like:
LRESULT CALLBACK MyWndProc (HWND hwnd, UINT nMsg, WPARAM wParam, LPARAM lParam) { if (nMsg ==: // TODO: Read data from the port break; ... } // Return successful return 0; } // Perform other window processing ... }
The methods
WaitEvent,
GetEventType and
GetError from
CSerial are hidden in the
CSerialWnd class, because they cannot be used anymore.
All the information is passed in the window message, so it shouldn't
be necessary to use them anymore.
Personally, I don't like MFC, but I know many people out there use it
so there is also support in this library for MFC. Instead of using
CSerialWnd, you can use
CSerialMFC. It works
exactly the same, but it can also handle a
CWnd pointer
and it provides a macro, which can be used in the message map for
better readability. The message map of a window, which can receive
events from
CSerialMFC should look like this:
BEGIN_MESSAGE_MAP(CMyClass,CWnd) //{{AFX_MSG_MAP(CMyClass) ... //}}AFX_MSG_MAP ... ON_WM_SERIAL(OnSerialMsg) ... END_MESSAGE_MAP()
Note that the
ON_WM_SERIAL macro is placed outside the
AFX_MSG_MAP block, otherwise the MFC Class Wizard
becomes confused. The handler itself looks something like this:
afx_msg LRESULT CMyClass::OnSerialMsg (WPARAM wParam, LPARAM lParam) { const CSerialMFC::EEvent eEvent = CSerialMFC::EEvent(LOWORD(wParam)); const CSerialMFC::EError eError = CSerialMFC::EError(HIWORD(wParam)); switch (eEvent) { case CSerialMFC::EEventRecv: // TODO: Read data from the port break; ... } // Return successful return 0; }
A complete sample, including property sheets for setting up the COM-port, is shipped with this library. Look for the SerialTestMFC project for an example how to use this library in your MFC programs.
This library is very lightweight, so it can easily be integrated into
your application without using a separate DLL. I used a static library
for the
CSerial (and derived) classes, because I think
that is exactly where a library is meant for. Just insert the Serial
project into your workspace and make a dependency to it. The linker
will then automatically compile and link the serial classes to your
application. Some people don't like libraries. In that case you can
just add the Serial files to your project and recompile.
If you use precompiled headers, then you need to remove the following lines from both Serial.cpp and SerialWnd.cpp:
#define STRICT #include <crtdbg.h> #include <tchar.h> #include <windows.h>
Replace these lines with the following line:
#include "StdAfx.h"
The Serial library comes with some sample programs, which can all be compiled from the Serial.dsw workspace. Make sure that you always open the Serial.dsw file, when building the library and/or sample applications. Opening the individual .dsp files won't work, because these files don't have the dependencies, which results in unresolved externals.
Note that all samples can be compiled as ANSI or Unicode versions. The Unicode versions don't run on the Windows 95/98/ME platforms, because they don't support Unicode. The SerialTestMFC sample uses the Unicode version of the MFC library (when compiled with Unicode enabled). The default installation options of Visual C++ v6.0 don't install the Unicode libraries of MFC, so you might get an error that mfc42ud.dll or mfc42u.dll cannot be found.
A lot of people still need to support the Windows 95 environment,
which doesn't support the
CancelIo function. When an
overlapped operation times out, then the pending call need to be
cancelled with the
CancelIo call. Therefore time-outs
(other then 0 and INFINTE) cannot be used when Windows 95
compatibility is enabled. Fortunately, the
CSerialEx,
CSerialWnd and
CSerialMFC don't rely on
this call, so these classes can be used on Windows 95 without any
restrictions. If you define the
SERIAL_NO_CANCELIO
symbol, then I/O cancellation is disabled, so you should always
define this symbol when you target Windows 95.
Other problems, specific to Windows 95/98/ME are the queue sizes. If the buffer size is far too small then you might get a blue screen, when receiving or transmitting data. The default settings should be fine for normal serial communication.
I have got a lot of requests to implement a windows CE version of the
library. The old version of this library always used overlapped
I/O, so it didn't work on Windows CE. Due to the huge amount of
requests I started working on this issue. I have rewritten the
CSerialEx class, so it doesn't rely on overlapped I/O
internally. Canceling the
WaitCommEvent now uses a
documented trick, which sets the event mask to its current value. This
effectively cancels the
WaitCommEvent method and makes
the use of overlapped I/O unnecessary in
CSerialEx.
<code>SetCommMask call blocks, when it is already waiting for
an event (using
WaitCommEvent). This would render this
method useless on the Windows CE platform, because it doesn't support
overlapped I/O. Fortunately, the serial driver is implemented as an
overlapped driver internally on Windows CE, so it allows multiple
calls (described in the KB, article Q175551). Using this Windows CE
feature it is also possible to use all the classes on the Windows CE
platform.
To include this library into you Windows CE project, just add the Serial.dsp project file to your own workspace. If you use
eMbedded Visual C++, then the project file is automatically
converted. Make sure you define the
SERIAL_NO_OVERLAPPED
symbol to avoid the use of overlapped I/O.
This paragraph describes some porting issues that you should be aware
of when porting from an older version to a newer version. Always
retest your application, when using a different version of the serial
library. I always try to keep the code source-level compatible, but if
you use a new library version, then make sure you recompile your code
completely with the same
SERIAL_xxx symbols
defined as were used to compile the library itself.
CSerialExclass has been added, which might better fit your needs.
Some people use virtual COM-ports to emulate an ordinary serial port (i.e. USB serial dongle, Bluetooth dongle, ...). These so-called virtual COM ports use a different driver then ordinary serial ports. Unfortunately, most drivers are pretty buggy. In most cases normal communication using the default settings works pretty good, but you run into problems when you need more sophisticated communication.
Most virtual COM ports have difficulties with the
CancelIo function. In some cases the process freezes
completely (and cannot even be killed by the task manager) and I
have even seen Windows XP systems reboot after calling CancelIo. Some
drivers can handle
CancelIo for an overlapped
ReadFile call, but fail when canceling a
WaitCommEvent request. Terminating a thread with a
pending I/O request also implies an implicit call of
CancelIo, so if you have problems terminating a thread
then you might have pending requests that cannot be cancelled. The
CSerialEx (and derived classes) effectively cancels the
pending
WaitEvent, so you should be pretty safe when you
use these classes.
Receiving spurious events is another problem, which is quite common when using virtual COM ports. Even events that have been masked out can be sent by some virtual COM ports. For this reason the events are filtered inside the library, but you might get some empty events in your code (you can simply ignore these events).
Some virtual COM ports also have problems when you don't use the common 8 databit, no parity and 1 stopbit settings. Sending a break is also problematical for some virtual COM ports. There is no standard workaround possible for these bugs, so make sure you test these features, when you intend to use your application with a virtual COM port.
Some people claim that the dongle and its driver is fine, because
HyperTerminal works fine. This isn't correct in most cases.
HyperTerminal only uses a small subset of the Win32 API for serial
communication and doesn't seem to use
CancelIo very
much. The most important lesson from the last few paragraphs is that
you need to test your application very thorough with each driver that
can be used. Although the Win32 API is the same for each (virtual) COM
port, the implementation below the API might be completely different.
This serial library utilizes the overlapped I/O mechanism, which should be supported by each Win32 driver. Unfortunately, a lot of drivers don't implement this feature correct. However, I keep trying to improve this library so if you have any suggestions, please contact me.
Of course the first place to look for information about serial communications is in the Platform SDK section "Windows Base Services, Files and I/O, Communications". There's probably enough in there to implement your own serial communications, but for a better explanation read Allen Denver's article called "Serial Communications in Win32", which is in the MSDN's Technical Articles.
WaitCommEventmethod when using CSerialEx (or derived class). Thanks to Mike_WX88 and Spainman for reporting the problem and pointing out where the problem was.
SERIAL_NO_OVERLAPPEDsymbol to disable overlapped I/O support. This reduces the memory footprint of the library on systems that don't support overlapped I/O (Windows CE).
CSerialExcannot utilize this functionality on non-Windows CE platforms. It seems that there is no way to cancel the
WaitEventWin32 API call on these platforms, which is required for a proper termination of the
CSerialExworker thread.
CSerialEx(and derived) classes use an alternative way to indicate that the listener thread should be closed (setting the event mask forces the
WaitCommEventto return).
CSerialEx::Openmethod now accepts an additional parameter to automatically start the listener thread. The default is not to start the listener thread (this would break existing code).
CSerialExclass. This has simplified the
CSerialWndclass, because all threading has been moved to the
CSerialExclass.
SendBreakmethod to the
CSerialclass. The SerialTestMFC application can now also send a break.
EV_PERR,
EV_RX80FULL,
EV_EVENT1and
EV_EVENT2events.
SERIAL_NO_CANCELIOmacro is defined. Now the library can be used on Windows 95 as well.
CSerialWndderived classes). This fixes a handle leak in the application.
Flushfunction to the name
Purge, which is a better name for that function.
ON_WM_SERIALmacro should be outside the MFC comment blocks.
MsgWaitForMultipleObjectssample. It used an
ifinstead of a
whileconstruction to dispatch messages from the message queue. This was wrong and has been corrected.
CheckPortmethod to enable/disable the available ports. If a port cannot be opened, the user will now receive an error message.
FAILEDmacro in
CSerialWnd.cppand replaced it with a generic
ERROR_SUCCESScomparison. The
FAILEDmacro should only be used with COM functions. Thanks to Toni Bezjak for pointing this out.
CheckPorta static method (as it was always intended to be).
DECLARE_MESSAGE_MAPto
BEGIN_MESSAGE_MAPin the documentation.
If you have any comments or questions about these classes, then you can reach me using email at Ramon.de.Klein@ict.nl. This library is free for both commercial and non-commercial use (if you insist, you can send me money though). Unfortunately, I cannot guarantee any support for these classes. I cannot test every situation and the use of these classes is at your own risk.
Because this library is distributed with full source-code included, I cannot stop you from changing the code. I don't mind if you change the code, but I don't want to be blamed for your bugs. So please mark your changes and keep my name in the copyrights as well. If you added a cool feature, then please let me know so I might integrate it with a new version of this library. The library is released under the conditions of the Lesser GNU Public License (LGPL). The sample programs are distributed under the terms of the GNU Public License (GPL).
Please don't mirror this code or documentation on another website or removable media (such as a CD-ROM) with the intent to redistribute it. I don't want to have old versions floating around, which might contain bugs that are solved in later versions. Just mention the URL where users can download the archive and documentation.
I would like to thank my friend and ex-colleague Remon Spekreijse for pointing out some problems, adding some features in this library and encouraging me to put this library on the net. I also want to thank all other people who have shown their appreciation one way or another.
General
News
Question
Answer
Joke
Rant
Admin | http://www.codeproject.com/KB/system/serial.aspx | crawl-002 | refinedweb | 6,514 | 55.44 |
Algorithm, Code, Quiz, Math, Simple, Programming, Easy Questions
Budget $30-250 USD
Hello,
I am looking for some one who can code excellent algorithms, have clear concepts in programming, good with data structures and can code with good design.
Who has good understanding about Big O notations and time/space complexity. If you can answer below questions, and provide support (online) for an hour. You will be awarded this project and good money, excellent feedback and bonus on good work. I am 5.0/5.0 employer. I will create 100% Milestone Money. It will be fun, exciting to work together.
****** Please Refer to the attached File.
BASIC ALGORITHMS
1. What is the best data structure to implement priority queue?
2. What are the worst case time complexity of the following algorithms, performed on containers of size N:
(a) Locating a number in an unsorted array.
(b) Locating a number in a sorted array.
(c) Inserting a number in a balanced binary tree
(f) Deleting a number from an unbalanced binary tree
(g) Building a heap
(h) Adding a number to a hash table
(i) Sorting an array
3. What is the relation (less, greater, equal) between O(n) and O(2n)? O(log2 N) and O(log10 N)?
CODING
EXPECTED TIME TO COMPLETE: 20-30 minutes
1. In a two dimensional array of integers of size m x n, for each element which value is zero, set to zero the entire row and
the column where this element is located, and leave the rest of the elements untouched.
2. Write a function that takes an integer argument and returns the corresponding Excel column name.
For instance 1 would return 'A', 2 would return 'B', ...., 27 would return 'AA' and so on
3. Write code to merge 3 sorted arrays of integers (all different sizes) into a single sorted array.
DEBUGGING
1. Find the bug in the following code:
// Find the first break in monotonically increasing sequence Returns c if there is no break.
int r(int *p, size_t c)
{
int i = 0;
while(p[i + 1] >= p[i] && i < c - 1)
++i;
return i + 1;
}
MATH, PROBABILITY, COMPLEXITY
1. A rare disease afflicts 1% of the population. A medical test for this disease has 1% false positive rate (if a person is healthy, there is a 1% probability that the test will show that the person is ill), and 1% false negative rate (if a person is ill, there is a 1% probability that the test will that the person is healthy). A person tests as having the disease. What is the probability that the person actually has the disease?
2. An evil dictator captured you and made you play a game. You are in front of three glasses of wine. Two of them are poisoned; one is not. You must pick one can and drink it. If you survive, the evil dictator will release you. When you pick one of the glasses, the dictator reveals which one of the other two is poisoned, and offers you to stay with your original choice, or switch. Should you switch?
3. In front of you there is a black box. The box can perform two operations: push(N) adds a number to its internal storage; pop-min() extracts the current minimum of all numbers that are currently stored, and makes the box forget it. The numbers are mathematical objects: there is no upper bound. Both push(N) and pop-min() execute in O(1) time. Design and algorithm that could be used to implement such a box.
4. You are asked to design a plotter. A plotter is a computer-controlled device that picks a pen, carries it to a point on paper using mechanical maniplator, lowers it so that it touches the paper, and drags it to the next point drawing a line. In your plotter there will be 3 pens, red, green, and blue. Computer uploads a picture to the plotter which consists of list of segments and colors in which these segments must be drawn. You are asked to reorder the segments such that the work performed by mechanical manipulator is optimal.
Can you design an algorithm that would do so?
5. What is the time complexity of the following algorithm:
unsigned int Rabbits(unsigned int r)
{
return (r < 2) ? r : Rabbits(r - 1) + Rabbits(r - 2);
}
Awarded to:
14 freelancers are bidding on average $225 for this job
Hi, I am Algorithm expert and can surely help you with this project. Please let me know if you are interested. Thank you
plz discuss....................................................................................................................................................................
Hi, I already finished few such projects. I am expert in algorithms. I can do it. If you have question you can ask me | https://www.freelancer.com/projects/Mathematics-Algorithm/Algorithm-Code-Quiz-Math-Simple/ | CC-MAIN-2017-43 | refinedweb | 790 | 73.68 |
Converting Ionic 3 Push/Pop Navigation to Angular Routing in Ionic 4
By Josh Morony
Most of the changes required for upgrading from Ionic 3 to Ionic 4 are going to be simple find and replace style changes – things like changing
<button> to
<ion-button> and
ionViewDidLoad() to
ngOnInit(). Although it will look slightly different, the code that you are using in Ionic 3 will more or less look the same in Ionic 4.
The most significant change, and one that may require a little more thought is the move to Angular routing. The recommended approach moving forward for Ionic/Angular applications will be to use the Angular routing system to define the navigation in the application. If you are not already familiar with using Angular routing in Ionic 4 I would recommend reading these articles first for a bit of background:
-. There are also other significant benefits to using Angular routing:
- You need to use Angular routing in Ionic 4 in order to enable lazy loading (
@IonicPageis no longer available)
- Having routes defined with the Angular router makes the application much more usable in a web/PWA environment
In this tutorial, we are going to look at a hypothetical scenario of creating routes for an existing Ionic 3 application. We will consider an application with a typical structure, and discuss how to create the routes for the application in Ionic 4, and how navigation would need to be modified.
What is Angular Routing?
I intend to keep this tutorial reasonably high level, so if you are not already familiar with Angular routing in a general sense, I would recommend reading the two articles I linked above. At its core, a route is responsible for indicating which component in your application needs to be activated. The active route is determined by whatever path is present in the URL.
In our application we will have some routes defined as follows (e.g. in a app-routing.module.ts file):
import { NgModule } from '@angular/core'; import { RouterModule, Routes } from '@angular/router'; const routes: Routes = [ { path: '', redirectTo: '/home', pathMatch: 'full' }, { path: 'home', loadChildren: './pages/home/home.module#HomeModule' }, { path: 'about', loadChildren: './pages/about/about.module#AboutModule' } ]; @NgModule({ imports: [RouterModule.forRoot(routes)], exports: [RouterModule] }) export class AppRoutingModule { }
This is just a simple application that has two pages. In the case that no path is supplied in the URL, e.g:
The first route will be matched, which simply redirects to the
home route. The
home route would also be activated by going to:
and the
about route could be activated by going to:
Each route will have an associated component (i.e. the page in your application) that will be displayed when the route is activated. Since we will generally be using lazy loading, we use the
loadChildren property instead of
component because we want to load the pages module file rather than the page component itself.
In an Ionic/Angular application, whatever route is activated will display the associated page/component inside of:
<ion-router-outlet></ion-router-outlet>
which you will find in the root component’s template (app.component.html). Although there is more you can do with routes, that’s pretty much the gist of it – you link URL paths to specific components. Navigation in the application is achieved by switching between these URL paths.
Setting up the Routes
Technically, it doesn’t matter how you structure your routes or what you call them. You could name all of your routes like this:
const routes: Routes = [ { path: '', redirectTo: '/a', pathMatch: 'full' }, { path: 'a', loadChildren: './pages/home/home.module#HomeModule' }, { path: 'b', loadChildren: './pages/about/about.module#AboutModule' }, { path: 'c', loadChildren: './pages/detail/detail.module#DetailModule' }, ];
and it would work just fine. As long as each page you want to switch to has a route you will be able to navigate to it. This isn’t a very nice or organised way to do it, though. We should take the time to give our application a sensical route structure so that the application is easier to develop and maintain, and so that it is more useful to users.
It helps to have a big picture understanding of the structure of your application. The example we are going to walk through will look like this:
We have a home page that will serve as a sort of “welcome” splash page. We then have three main pages which are products, about, and support. The products page will also be able to activate a detail page and a category page. The support page will be able to launch the ticket page as a modal.
As I mentioned, each page we want to display in the router needs its own route, so each of the pages in the diagram above should have its own route. Since we are going to be launching the ticket page as a modal from the support page, we will not be creating a route for it.
Given the structure of the diagram, at first glance you might visualise the paths for the routes like this:
/home /home/products /home/products/detail /home/products/categories /home/about /home/support
You certainly could do that if you wanted, but the
home in every URL is a bit redundant. We might then change it to this instead:
/home /products /products/detail /products/categories /about /support
This is pretty good, but since our detail page is going to display details for specific products, we are going to pass the
id of the product to the detail page. Instead of having a static route we would change it to accept a parameter:
/home /products /products/:id /products/categories /about /support
or if you prefer:
/home /products /products/detail/:id /products/categories /about /support
Again, if you are unfamiliar with the use of
:id here, I would recommend reading the tutorial I linked above. We now have our routes, and I think that they are well structured and make sense. It is important to remember that there is no specific structure you have to follow. If I decided I didn’t like the way those routes looked I could just as easily do this:
/home /detail/:id /products /categories /about /support
I’m just of the opinion that it makes sense for your routes to follow the structure of your application. Especially for larger applications, a sensical hierarchical route structure is going to be easier to reason about. For smaller applications with just a few different routes, it probably isn’t going to really matter.
Now, to complete the final step and implement those routes in our app-routing.module.ts file, they would look like this:
const routes: Routes = [ { path: '', redirectTo: '/home', pathMatch: 'full' }, { path: 'home', loadChildren: './pages/home/home.module#HomeModule' }, { path: 'products', loadChildren: './pages/products/products.module#ProductsModule' }, { path: 'products/:id', loadChildren: './pages/product-detail/product-detail.module#ProductDetailModule' }, { path: 'products/categories', loadChildren: './pages/product-categories/product-categories.module#ProductCategoriesModule' }, { path: 'about', loadChildren: './pages/about/about.module#AboutModule' }, { path: 'support', loadChildren: './pages/support/support.module#SupportModule' } ];
With the routes in place, we have everything we need to navigate about the application.
Navigating the Routes
Let’s think about the structure we created in terms of navigating an Ionic 3 application:
Since we have a home page that will serve as a welcome page, this would be our initial root page. Once the user is done with this page, we would use
setRoot to change the root page to the products page. We would switch between the products/about/support pages by calling
setRoot. If we wanted to view the detail or categories pages, we would use
push to push those pages onto the navigation stack. In the case of the ticket page, in this example, we would launch that as a modal from the support page.
In Ionic 4 with Angular routing, there is no root page to be defined. Our default route will be activated and the associated page/component will be displayed, which in this case would be the home page. Now let’s consider how we would navigate about the application using routes, compared to how we would do it in an Ionic 3 application.
setRoot
In Ionic 3, we would navigate to our products, support, or about pages by using
setRoot on the
NavController. In Ionic 4 with Angular routing we would simply link to the page from the template using
routerLink:
<ion-button
or if we wanted to trigger that programmatically, we would link to the appropriate URL using the
navigateRoot method of
NavController:
this.navCtrl.navigateRoot('/support');
Notice that in the
routerLink example, we also specify the
routerDirection. This isn’t strictly required, but it helps with the page transition animations. Since we are just linking to various routes it can be hard for the router outlet to determine the “direction” of the navigation is (i.e. are we navigating “backward” or “forward”?). This will make sure the correct page transition animation is applied. This is also the reason that we use Ionic’s own navigation method of
navigateRoot rather than just using the Angular router directly because it allows us to specify the intent of the navigation so that the appropriate animation can be applied.
NOTE: The methods we will be using like
navigateRoot are available on a new version of the
NavController. This isn’t to be confused with the existing
NavController in Ionic 3.
Push
Where we would “push” pages like the detail page or the categories page in Ionic 3, with Ionic 4 and Angular routing we still just link to the route as usual using
routerLink:
<ion-button
However, this time we supply a
routerDirection of
forward. When we are transitioning from the products page to the product-detail page, we are traversing forward through our applications structure.
If we wanted to perform the navigation programmatically, we would do this:
this.navCtrl.navigateForward('/products/12');
When pushing pages in Ionic 3, especially when pushing detail pages like this, we would also often supply an object to the pushed page through
NavParams (e.g. the product that we want to display). When using Angular routing, any information we want to pass from one page to the next needs to be supplied through the URL. Whilst it is possible to send large amounts of information through the URL, it isn’t entirely practical to send entire objects as a JSON string through the URL.
Instead, we would just pass something simple like an
id (
12 in this example) that we can retrieve on the next page, and then use a service to grab the rest of the details for that particular object. For more details on how to implement this master/detail pattern in Ionic 4, I recommend reading: Implementing a Master/Detail Pattern in Ionic 4.
Pop
Once again, navigating backward is a case of linking to the appropriate route and specifying the direction:
<ion-button
However, with backward navigation, you will commonly use Ionic’s back button component to do this. In that case, all you need to do is add the component to your template:
<ion-back-button</ion-back-button>
This will automatically navigate back to the previous page. However, in the case that the user navigates directly to a particular URL, there will be no history information and the back button won’t know where to navigate back to. This is why we supply the
defaultHref, if there is no history information then this is the route that will be navigated to.
Of course, you can also navigate backwards programatically:
this.navCtrl.navigateBack('/products');
Modal
We’ve covered all the basic navigation methods, but what about our ticket modal? In this case, we want to be able to launch the ticket page as a modal from the support page. In this case, we can just import the component we want to launch as a modal:
import { TicketPage } from '../ticket/ticket.page';
and then use Ionic’s
ModalController to launch that component as a modal:
this.modalCtrl.create({ component: TicketPage }).then((modal) => { modal.present(); });
Inside of the ticket page, to dismiss the modal once you are done you just need to call
dismiss on the
ModalController:
this.modalCtrl.dismiss();
Unlike transitioning between normal pages, it is possible to pass data objects through to a component launched as a modal by using
componentProps and
NavParams.
Summary
Once you have the routes figured out, it isn’t too much fuss to actually change the navigation in your application. For the most part, it’s just a matter of swapping our your push/pop/setRoot calls with the methods that I have outlined above. Likely the most time-consuming aspect people will run into is having to refactor the code to have services return information to detail pages, rather than passing the objects directly to the page using
NavParams. | https://www.joshmorony.com/converting-ionic-3-push-pop-navigation-to-angular-routing-in-ionic-4/ | CC-MAIN-2020-10 | refinedweb | 2,146 | 50.77 |
Silverlight 4 + RIA Services - Ready for Business: Authentication and Personalization
To continue our series, In real business applications our data is often very valuable and as such we need to know who is accessing what data and control certain data access to only users with privilege. Luckily this is very easy to do with RIA Services. For example, say we want to let only authenticated users access our data in this example. That is as easy to accomplish as adding an attribute, see line 2 below.
1: [EnableClientAccess]
2: [RequiresAuthentication]
3: public class DishViewDomainService : LinqToEntitiesDomainService<DishViewEntities>
4: {
5:
When we run the application, we now get an error. Clearly you can do a bit better from a user experience angle… but the message is clear enough.
Notice there is a login option, so we can log in…
and even create a new user.
and with a refresh we now get our data
And the application knows who i am on the client and gives me a way to log out.
Now you can also easily interact with the current user on the server. So for example, only return records that they have edited, or, in this case, log every access:
1: public IQueryable<Restaurant> GetRestaurants()
2: {
3: File.AppendAllLines(@"C:\Users\brada\Desktop\log.txt", new string[] {
4: String.Format("{0}:{1}", DateTime.Now,
5: this.ServiceContext.User.Identity.Name)});
6: return this.ObjectContext.Restaurants
7: .Where (r=>r.Region != "NC")
8: .OrderBy(r=>r.ID);
9: }
10:
Line 5 is the key one.. we are accessing the current users on the server. This gives us a nice simple log.
3/7/2010 9:42:57 PM:darb
3/7/2010 9:43:05 PM:darb
Now we can also personalize this a bit. Say we want our users to be able to give us a favorite color and we keep track of that on the server and the client, so it works seamlessly from any machine.
First we need to add BackgroundColor to our backing store. I this case I am using ASP.NET profile storage, so I add the right stuff to web.config
Then I need to access this from the Silverlight client, so I add a property to the User instance in the Models\User.cs
public partial class User : UserBase
{
public string FriendlyName { get; set; }
public string BackgroundColor { get; set; }
}
Finally, we need to access it on the client. In main.xaml add lines 2 and 3..
1: <Grid x:Name="LayoutRoot" Style="{StaticResource LayoutRootGridStyle}"
2: Background="{Binding Path=User.BackgroundColor}"
3:
4:
5:
Run it and we get our great default background color!
Now, that is nice, but it would be even better to give the user a chance to actually edit their settings. So in About.xaml, we use a very similar model as above.
<Grid x:
and
<sdk:Label
<TextBox Text="{Binding Path=User.BackgroundColor, Mode=TwoWay}" Height="23" />
Then wire up a save button
private void button1_Click(object sender, System.Windows.RoutedEventArgs e)
{
WebContext.Current.Authentication.SaveUser(false);
}
And it works!
And what’s better is if you run it from another browser, on another machine, once you log in you get the exact same preferences! | https://docs.microsoft.com/en-us/archive/blogs/brada/silverlight-4-ria-services-ready-for-business-authentication-and-personalization | CC-MAIN-2022-33 | refinedweb | 535 | 57.27 |
SYNOPSIS#include <sys/types.h>
#include "lfc_api.h"
int lfc_getcomment (const char *path, char *comment)
DESCRIPTIONlfc_getcomment gets the comment associated with a LFC file/directory in the name server.
- path
- specifies the logical pathname relative to the current LFC directory or the full LFC pathname.
- comment
- points at a buffer to receive the comment. The buffer must be at least CA_MAXCOMMENTLEN+1 characters long.
RETURN VALUEThis routine returns 0 if the operation was successful or -1 if the operation failed. In the latter case, serrno is set appropriately.
ERRORS
- ENOENT
- The named directory does not exist or is a null pathname or there is no comment associated with this path.
- EACCES
- Search permission is denied on a component of the path prefix or the caller effective user ID does not match the owner ID of the file or read permission on the file/directory itself is denied.
- EFAULT
- path or comment is a NULL pointer.
- | http://manpages.org/lfc_getcomment/3 | CC-MAIN-2021-21 | refinedweb | 155 | 56.45 |
OverView:-
Google Map Integration in iOS and SDK, Google Maps is a web mapping service developed by Google. It offer satellite imagery, street maps 360 degree panoramic views of streets, real- time traffic condition, and route planning for travelling by foot, car, cycle, or public transportation
Working with maps in consists of an entire programming as there are tons of things that a developer can do with them. From just presenting a location on a map to drawing a journey’s route with intermediate positions, or even exploiting a map’s possibilities in totally different approach, dealing with all these undoubtly may be a great experience that leads to superb results.
Getting Start
Step 1:- Get the latest version of Xcode
Step 2:- Install the SDK
Create a Podfile for the Google Maps SDK for IOS and use it to install the API and its dependencies:
- If you do not have an Xcode project yet, create one now and save it to your local machine. (If you are new to iOS development, create a Single View Application.)
- Create a file named Podfile in your project directory. This file defines your project’s dependencies.
- Edit the Podfile and add your dependencies. which includes the dependencies you need for the Google Maps SDK for iOS and Places API for iOS (optional):
- Add this pod in Podfile in your application
- do
pod ‘GoogleMaps’
pod ‘GooglePlaces’
end
6. Save the podfile
7.Open a terminal and go to the directory containing the Podfil
8. Cd <path-to- project>
9. Run pod install command
Pod install
10. Close Xcode, and then open (double-click) your project’s .xcworkspace file to launch Xcode. From this time onwards, you must use the .xcworkspace file to open the project.
Step 3:- Get an API key
1.Go to the Google API Console.
2.Create or select a project.
3.Click Continue to enable the Google Maps SDK for iOS.
4. On the Credentials page, get an API key.
Note: If you have a key with iOS restrictions, you can use that key. You can use the same key with any of your iOS apps within the same project.
5. From the dialog displaying the API key, select Restrict key to set an iOS restriction on the API key.
6.In the Restrictions section, select iOS applications, then enter your app’s bundle identifier. For example: example.hellomap.
you get the bundle identifier from your project go to project then click on the project name folder then select project-> target-> general-> identity->bundle identifier -> copy your that bundle identifier then paste that identifier those restriction section
-
Step 4:- Add API key in your application
Add your API key to your AppDelegate.swift
- Import GoogleMaps
- Add the following to your application(_:didFinishLaunchingWithOptions:) method,
replacing YOUR_API_KEY with your API key:
GMSServices.provideAPIKey(“YOUR_API_KEY ”)
- if you are also using the Places API
GMSPlacesClient.provideAPIKey(“YOUR_API_KEY ”)
Step 5:- ADD Map
Now, add or update a few methods inside your app’s default ViewController to create and initialize an instance of GMSMapView.
import UIKIT
import GoogleMaps
class GoogleMapViewController: UIViewController{
override fun loadView(){
//Mark:- Create a GMSCameraPosition there tells the map to displaying the Map
// coordinate set you want suppose Coordinate latitude -33.86, longitude – 151.20 at zoom level 10.
let camera = GMsCameraPosition.Camera(withLatitude: -33.86, longitude: 151.20, zoom: 10.0)
let mapView = GMSMapView.map(withFrame: CGRect.Zero, camera: camera)
// Mark:- Create a Marker in the Centre of the map you are designing
let marker = GMSMarker()
marker.position = CLLocationCoordinate2D(latitude: -33.86, longitude: 151.20)
marker.tittle = “Sydney”
marker.snippet = “Australia”
marker.map = mapView
}
}
Step 6:- Declare the URL scheme used by API
Application must be declare the URL schemes that they intend to open, by specifying the schemes in the app’s Info.plist file.
To declare the URL schemes used by the Google Maps SDK for iOS add the following lines to your Info.plist
<key>LSApplicationQueriesSchemes</key>
<array>
<string>googlechromes</string>
<string>comgooglemaps</string>
</array>
Step 7:- Run your project
Also Read-Swift Tips for Those Getting Started “Google Map Integration in Swift IOS” | https://nodejsdevelopmentcompany.wordpress.com/2017/02/27/google-map-integration-in-swift-ios/ | CC-MAIN-2018-09 | refinedweb | 686 | 65.93 |
Viget Intern Prank: How to Improve any Website
During my internship, I was given a summer checklist in the form of a so-called “Vigtories card” — the idea was to accomplish as many non-work-related activities (some quirkier than others) as possible. You can see the interns’ final Vigtories on the intern tumblr blog.
Throughout the summer, I made some decent progress (bike to work, go hiking with a fellow Viget, learn something new about Brian or Andy), but one particular task remained unchecked for the majority of the summer - “pull one harmless prank in coordination with your mentor”.
THE IDEA
While celebrating Intern Appreciation Day at a local brewpub with the Boulder developers, we decided to hijack Viget’s DNS and redirect the intern tumblr blog to something funny. Weeks went by however and as the internship neared its end, we’d yet to decide what would be something funny. With one day left however a formal plan was crafted: we would manipulate the intern blog itself.
INITIAL IMPLEMENTATION
Since the task at hand was fairly lightweight, I went with building up a simple Sinatra proxy app. The basic idea was to load the original source using
Nokogiri and fiddle with all the images to our liking. We targeted author images first, but what to do with them? Mustachify.
The following snippet was the first implementation. It flipped through the HTML and replaced every image of class ‘author-photo’ with the mustachifyed version of itself, then returned the new HTML for eyes of the world to see.
get '/' do doc = Nokogiri::HTML(open("")) doc.css('.author-photo img').each do |image| image['src'] = "" + image['src'] end halt 200, {'Content-Type' => 'text/html'}, doc.to_html end
STEPPING THINGS UP
Things then got real as Pat jumped in with some
MiniMagick and
Digest::MD5 craftiness. This time the manipulation was targeted towards images in the bodies of every post. Since the new site was already riddled with mustaches, we decided to flip these images upside down. Using
Nokogiri still to pluck all the posted images from the HTML,
MiniMagick was used to make flipped copies of everything, and
Digest::MD5 was used to save the new and improved copies to our own directory.
def flip_images document.css('.content img').each do |image_node| image_source = image_node['src'] extension = File.extname(image_source) output_name = Digest::MD5.hexdigest(image_source) + extension image = MiniMagick::Image.open(image_source) image.flip image.write "./public/#{output_name}" image_node['src'] = output_name end end
The last line in this method adjusts the HTML so pictures are loaded from our new directory and tada! - images were flipped! But not all of them…
ONE LAST ROADBLOCK
If you upload multiple images into one Tumblr post, an iframe is created in the post instead of a simple image. This difference was allowing these iframe photos to slip under the radar. To fix this, first we broadened the scope of our image flipping function from
'.content img' to
'.content img,.photoset_row img' allowing us to capture, flip, and save the iframe images with the rest. We then wrote a method to change the source of the iframe images away from the tumblr blog server and to our own directory.
def rewrite_iframe_source document.css('iframe.photoset').each do |iframe_node| iframe_node['src'] = iframe_node['src'].sub!("", "") end end
THE RESULT
A glorious manipulation of the intern tumblr blog. The last step in the prank was to hijack the Viget DNS and redirect anyone who visits vignettesfromviget.tumblr.com to the IP of the modified site. This requires some administrative access to the DNS and thus was a job for Pat. But, after all, this was a “prank in collaboration with my mentor”. This prank was purely local as we could only redirect people to the new IP address if they were on a Viget network. But, for your viewing pleasure, feel free to enjoy the fruits of our labor - before and after
BONUS - CAGEME
You’ll notice a few posts do not have a mustachifyed young adult picture as the author, but instead a mustachifyed picture of Nicolas Cage as someone else. This indicates a post by Jack: one of the other Rails interns here this last summer, and the creator of one of the greatest websites around - | https://www.viget.com/articles/viget-intern-prank-how-to-improve-any-website/ | CC-MAIN-2021-43 | refinedweb | 708 | 62.68 |
Asked by:
Failed to connect to policy namespace. Error 0x8004100e
>
Question
All replies
Hi,
Error 0x8004100e= "Unvalid Namespace - Source:WMI"
Please kindly first check that your boundary and boundary group are configured correctly.
Also, ensure the WMI is allowed on the client firewall. Here is link for your reference.
Best Regards,
Tina
Please remember to mark the replies as answers if they help. If you have feedback for TechNet Subscriber Support, contact tnmff@microsoft.com.
- I have the same error.
I've verified WMI connectivity (using WBEMTEST) between client and server and it's OK.
I've been pushing clients without any problem until now (but affected client is 1903).
SCCM server 1902
Client is Windows 10 1903
I checked client logs and found out error 0x87d00215.
Somehow client package was not updated on DP so I did it manually and now clients push OK again.
Shouldn't client package automatically update on DP after SCCM upgrade (from 1810 to 1902)? | https://social.technet.microsoft.com/Forums/sharepoint/en-US/735f1ce7-b054-452e-93eb-da74f03bc13a/failed-to-connect-to-policy-namespace-error-0x8004100e?forum=ConfigMgrCBGeneral | CC-MAIN-2020-34 | refinedweb | 161 | 66.94 |
.
I soldered 3.5mm TS Jacks to the Digital out pins of an arduino and replaced the USB-serial firmware with a USB-midi implementation - on each note-on signal a short trigger is sent to the corresponding out pin.
Finally I created a box for the arduino and the Jacks using OpenSCAD and the OpenSCAD arduino library and printed it on my 3D printer unfortunately the walls are a bit thin because I had to fit the screws on the Jacks and they only allow for 1mm walls. So the box is a bit brittle.
The volcas use one trigger signal for 2 beats so its not really possible to change the speed during one pattern or create different rytms by timing the trigger signals like on the DFAM. But it still allows me to create half or double time patterns on all my volcas.
On the DFAM a pattern always has to use 8 notes - there is no possibility to reset the pattern to the first slot - and you have to set it manually to the last slot before starting the sequencer to make sure everything is in sync and starts at beat one.
Since some Synths (like the DFAM) react to the up-phase of the trigger signal and some synths (like the volcas) react to the down phase of the singal, I made the trigger pulses as short as possible to prevent sync issues.
I used the arduino midi library to create the program that reacts to the midi events on the serial port and used the Midi-firmware from the HIDuino projekct to turn my ardunino serial port into a class compliant USB-Midi port - so I don't need to run any midi to serial translation programs on my computer and can use it out of the box with any DAW. Only downside is, that I need to use the ISP-Headers and a programmer to update the arduino code
If you want to build your own trigger box, you can use my sourcecode for the arduino
#include <MIDI.h> MIDI_CREATE_DEFAULT_INSTANCE(); int on[] = {0,0,0,0}; void HandleNoteOn(byte channel, byte pitch, byte velocity) { if ( pitch >= 48 && pitch < 52) { if ( velocity > 0 ) { on[pitch-48]=1; } } } void setup() { MIDI.begin(MIDI_CHANNEL_OMNI); MIDI.setHandleNoteOn(HandleNoteOn); pinMode( 3, OUTPUT); pinMode( 4, OUTPUT); pinMode( 5, OUTPUT); pinMode( 6, OUTPUT); } void loop() { MIDI.read(); for(int i=0;i<4;i++) { digitalWrite(3+i, on[i]==1?HIGH:LOW); on[i]=0; } }
And here is the OpenSCAD Code I used to generate the enclosure for my trigger-box
include <arduino.scad> difference() { enclosure(UNO, 1.5, 3, 30, 3, TAPHOLE ); for( i=[0:3]) { translate([0,12+i*15,30]) { rotate([0,-90,0]) { cylinder(h=20, r=3.2); } } } }
See also:
New Track: Seven meets Four
Synth Jam - Mar 27
New Track: Melody in D-Minor
How to modulate Midi-CC values in Bitwig Studio | https://www.local-guru.net/blog/2018/9/14/Arduino-based-Midi-Trigger-box-for-analog-synths | CC-MAIN-2018-51 | refinedweb | 489 | 54.05 |
How do I verify a
download?.
=== 0.60 - Tue 30 Apr 2013 ===
* In this release the required python version is changed from 2.5 to 2.6 !
* Added a Recent Changes dialog and a Recent Changes pathbar option
* Added search entry to toolbar
* Added function to attachment browser plugin to zoom icon size
* Added new template by Robert Welch
* by Stéphane Aulery
* Removed custom zim. class in favor of standard library version
* New translations for Korean and Norwegian Bokmal
This release fixes a critical bug in the editor widget that can lead to loss of content for specific combinations of formatting. In addition week numbers in Journal pages are fixed and Tasklist tag inheritance is improved..
=== 0.58 - Sat 15 Dec 2012 ===
* Added new plugin for distraction free fullscreen mode
* Added options to limit tasklist plugin to certain namespaces -
Pierre-Antoine Champin
* Added option to tasklist plugin to flag non-actionable tasks by a special tag
* Added prompt for filename for Insert New File action
* Added template option to list attachments in export
* Added class attributes to links in HTML output
* Added two more commandline options to quicknote plugin
* Made sidepanes more compact by embedding close buttons in widgets
* Critical fix for restarting zim after a crash (cleaning up socket)
* | https://launchpad.net/zim/+download | CC-MAIN-2017-09 | refinedweb | 210 | 50.4 |
In this chapter, we will create our own threads. As we shall see, Java threads are easy to use and well integrated with the Java environment.
In the last chapter, we considered threads as separate tasks that execute in parallel. These tasks are simply code executed by the thread, and this code is actually part of our program. The code may download an image from the server or may play an audio file on the speakers or any other task; because it is code, it can be executed by our original thread. To introduce the parallelism we desire, we must create a new thread and arrange for the new thread to execute the appropriate code.
Let’s start by looking at the execution of a single thread in the following example:
public class OurClass { public void run() { for (int I = 0; I < 100; I++) { System.out.println("Hello"); } } }
In this
example, we have a class called OurClass. The OurClass class has a
single public method called
run() that simply
writes a string 100 times to the Java console or to the standard
output. If we execute this code from an applet as shown here, it runs
in the applet’s thread:
import java.applet.Applet; public class OurApplet extends Applet { public void init() { OurClass oc = new OurClass(); oc.run(); } }
If we instantiate an OurClass object and call its
run() method, nothing unusual happens. An object
is created, its
run() method is called, and the “Hello” message prints 100 times. Just like other method ...
No credit card required | https://www.safaribooksonline.com/library/view/java-threads-second/1565924185/ch02.html | CC-MAIN-2018-30 | refinedweb | 257 | 71.55 |
A DIY Vacuum Fluorescent Display Driver
In my previous post, I showed a simple vacuum fluorescent display filament driver built using a 555 timer and a custom hand-wound, center-tapped toroidal pulse transformer. And as promised in my earlier comment, I am going to show you the remainder of the VFD driving circuit here.
As I mentioned before, there are many driver chips (e.g. MAX6921, MAX6931) readily available for interfacing with VFDs. The driver circuit I am showing here is a bit more complex as it uses only the generic 74HC’s and discrete transistors.
The basic technique for driving a multiplexed VFD is very similar to that for driving a 7-segment LED display. The major difference is the voltage requirement for the control gates and segment. For a VFD, the driving voltage is typically between 20V and 30V instead of the TTL or CMOS logic level used for driving 7-segment LED displays. Thus we must employee some kind of voltage shifting circuitry to translate the logic level signal into the voltage required by the VFD.
Here is the schematic of the VFD driver circuit I built:
In this schematic, IC5 and IC6 are two 74HC138 3-8 line decoders, they are chained together to generate the strobe signal needed for driving the gate. a couple of 74HC04 inverter chips are used to invert the outputs from 74HC138’s so that the input sequence to the 3-8 line decoders corresponds directly to the grid that is being driven high. Of course, the inverters are not strictly necessary as you could easily invert the control signals in software.
IC7 and IC8 are two 74HC595 shift registers, their outputs are used to drive the nine segments (i.e. a-g, comma and decimal point).
The TTL outputs from IC5-IC8 are translated to higher driving voltages required by the VFD via four ULN2003A darlington driver chips. The output signals from the ULN2003A’s are then inverted by the PNP transistors (all of which are 2N3906). Each PNP transistor controls a control grid or a display segment. Technically speaking, using ULN2003A is a bit of an overkill as the load current is negligible in this case. You could easily achieve the same result by replacing the darlington driver chips with individual NPN transistors such as 2N3904. But using ULN2003A does simplify the circuit a little bit as the base resistors are included on the chip.
Because the current requirement for driving the gates and segments is very low for vacuum fluorescent displays, we can use relatively large load and base resistors. The values of these resistors are not critical and can range from 100K to 1M. Typically, we want to choose larger resistors to limit current consumption. But on the flip side, large resistance value does have some negative impact on signal transition time (e.g. rise time) due to the effect of the large RC constants formed with respect to the BJT junction capacitance. When the refresh rate is relatively low (50Hz to 100Hz), this RC constant has little effect on the quality of the driving signal. But as the refresh rate increases, the RC constant’s effect on the signal slew rate will become more pronounced and causing noticeable ghosting effect. Of course, this issue can be easily addressed by using a complementary output stage, but the resulting circuit will become much more complex and you may as well just use a dedicated driver chip at that point.
Here are a couple of pictures of the driver circuit I built on a perfboard:
And here is a picture showing the wired-up VFD. The VFD I salvaged was connected to a small PCB. I decided keep part of the PCB and used it as a breakout board for the pins.
To drive the VFD, a strobe signal (generated via the two 74HC138’s) is needed to assert each of the control grid and while the grid is asserted the corresponding segments to be displayed are driven high. The following oscilloscope capture shows the gate waveforms of two adjacent digits. The gate waveforms are also illustrated in the schematic above. To prevent two adjacent VFD segments being addressed at the same time due to degraded slew rate mentioned earlier, there should be some delays between adjacent pulses. A few microseconds is usually sufficient.
This circuit can be used to interface with a microcontroller directly. To test it out, I used an Arduino to generate the following display sequences (code included at the end). The picture on the left shows what the display looks like without a filter, and the image on the right shows the same display with a blue filter. As you can see even without a filter, the numbers displayed on the VFD are pretty clear:
Here is the simple test program I used to generate the numbers displayed above. Note that only the first 12 digits are used and the last segment is purposely left blank.
#include <Arduino.h> const int NUM_OF_GRIDS = 13; const int PIN_B0 = 2; const int PIN_B1 = 3; const int PIN_B2 = 4; const int PIN_B3 = 5; const int PIN_DS = 6; const int PIN_OE = 7; const int PIN_LATCH = 8; const int PIN_CLK = 9; //'0'..'9' const int DIGITS[]={63, 6, 91, 79, 102, 109, 125, 7, 127, 111, 63}; const int DASH = 64; const int DOT = 128; const int COMMA = 256; int gridCounter = 0; int b = 1; void setup() { for (int i = 2; i<=9; i++) pinMode(i, OUTPUT); } void loop() { //generate pulse to drive each grid digitalWrite(PIN_B0, gridCounter & 1); digitalWrite(PIN_B1, (gridCounter & 2) >> 1); digitalWrite(PIN_B2, (gridCounter & 4) >> 2); digitalWrite(PIN_B3, (gridCounter & 8) >> 3); digitalWrite(PIN_OE, LOW); if (gridCounter % NUM_OF_GRIDS < 10) { b = DIGITS[gridCounter]; } else { if (gridCounter == NUM_OF_GRIDS - 1) { b = 0; //skip the last segment } else { b = DIGITS[gridCounter - 10]; } } //displays DIGITS[gridCounter] digitalWrite(PIN_LATCH, LOW); shiftOut(PIN_DS, PIN_CLK, MSBFIRST, b >> 8); shiftOut(PIN_DS, PIN_CLK, MSBFIRST, b); digitalWrite(PIN_LATCH, HIGH); delayMicroseconds(1000); //clears current digit to prevent ghosting digitalWrite(PIN_LATCH, LOW); shiftOut(PIN_DS, PIN_CLK, MSBFIRST, 0); shiftOut(PIN_DS, PIN_CLK, MSBFIRST, 0); digitalWrite(PIN_LATCH, HIGH); digitalWrite(PIN_OE, HIGH); digitalWrite(PIN_B0, LOW); digitalWrite(PIN_B1, LOW); digitalWrite(PIN_B2, LOW); digitalWrite(PIN_B3, LOW); gridCounter++; if (gridCounter >= NUM_OF_GRIDS) gridCounter = 0; }
Nice project, I was looking into something similar as I’m using VFD displays in my current project too.
Why not use the 74*238 which has active-high outputs and do away with the inverters? Why not use a proper BCD to 7-segment decoder instead of the 74*595 hack? You could’ve then focused on the output stage :-)
Thanks Radu,
Well, of course you could use a BCD to 7-segment chip. I didn’t have any. Also, I wanted to drive the two extra segments (the comma and dot) and typical decoders (e.g. CD4511) does not have the extra segments built-in.
[…] […]
i am curious, couldn’t you have gotten away with only the PNP transistors, directly driven by the shift registers(or hex inverters, if you are so keen on using them)?
The output voltage from the TTL inverter is not high enough to turn those PNP transistors off. Since the VFD grid voltage is relatively high (around 27V in my case), you can’t use a CMOS inverter (e.g. CD4069) either.
I am trying to drive some IV-11 VFD tubes for a clock. These are being driven statically (not Multiplexed). The anode voltages are 25 volts DC and the grids are tied high since the operation of them is on at all times. I want to use 74ls247 which are BCD to 7 segment decoder/drivers. What type of transistor or ULN2003a must I use to make these work? Can you help me out?
I am not an engineer but just a hobbyist who like to build clocks.
I have not used 1V-11, but the principle should be pretty similar. Even though they are typically driven statically, you can still multiplex them (you can take a look at this blog post)
Kerry,
Good morning. Thanks for the circuit to switch the anode voltages of 25 volts. Can I use 2N3904 and 2N3906 transistors for this circuit? Can I feed the inputs from an LS247 decoder driver (open collector outputs)? If so dont I really need a base resistor and a pullup resistor on the base to insure that they switch on and off properly?
Hi Ed,
Yes, you don’t necessarily need the Darlington driver and you certainly can use a 2n3904 in its place for each channel and use it to translate the output voltage from 74LS247 to drive the 2n3906 for the VFD grid.
Hi kwong! I have a Display similar to your (DISPLAY ITRON FG1013/ RS1) and I would like to put the photo here but I do not know.
My display has 10 digits but the name of the pins that is marked on the pcb is the following:
Pins 1-11: a, b, c, d, e, P, f, g, dot, down arrow, -VF.
Pins 12-22: 0,1,2,3,4,5,6,7,8,9, +VF.
It has nothing marked as: DS, OE, LATCH, CLK;
It’s the same as eBay’s: % 3DSOI.DEFAULT% 26a% 3D1% 26asc% 3D50546% 26meid% 3D958831e511d349638fb7ad0e41153449% 26pid% 3D100752% 26rk% 3D6% 26sd% 3D113004079683% 261% 3D112991529999 & _trksid = p2047675.c100752.m1982
and already comes soldier in a pcb; I’d like to use your code but it certainly will not be possible. If you can help me, I thank you very much; | http://www.kerrywong.com/2013/06/13/a-diy-vacuum-fluorescent-display-driver/?replytocom=769925 | CC-MAIN-2019-04 | refinedweb | 1,584 | 67.49 |
table of contents
NAME¶
futimens,
utimensat—
LIBRARY¶Standard C Library (libc, -lc)
SYNOPSIS¶
#include <sys/stat.h>
int
futimens(int
fd, const struct timespec
times[2]);
int
utimensat(int fd,
const char *path, const struct
timespec times[2], int flag);
DESCRIPTION¶The access and modification times of the file named by path or referenced by fd are changed as specified by the argument times. The inode-change-time of the file is set to the current time.
If path specifies a relative path, it is
relative to the current working directory if fd is
AT_FDCWD and otherwise relative to the directory
associated with the file descriptor fd.
The tv_nsec field of a
timespec structure can be set to the special value
UTIME_NOW to set the current time, or to
UTIME_OMIT to leave the time unchanged. permissions constructed by a
bitwise-inclusive OR of flags from the following list, defined in
<fcntl.h>:
AT_SYMLINK_NOFOLLOW
- If path names a symbolic link, the symbolic link's times are changed. By default,
utimensat() changes the times of the file referenced by the symbolic link.
RETURN VALUES¶Upon successful completion, the value 0 is returned; otherwise the value -1 is returned and the global variable errno is set to indicate the error.
COMPATIBILITY¶If the running kernel does not support this system call, a wrapper emulates it using fstatat(2), futimesat(2) and lutimes(2). As a result, timestamps will be rounded down to the nearest microsecond,
UTIME_OMITis not atomic and
AT_SYMLINK_NOFOLLOWis not available with a path relative to a file descriptor.
ERRORS¶These allocated address space.
- [
EINVAL]
- The tv_nsec component of at least one of the values specified by the times argument has a value less than 0 or greater than 999999999 and is not equal to
UTIME_NOWor
UTIME_OMIT.
- [
EIO]
- An I/O error occurred while reading or writing the affected inode.
- [
EPERM]
- The times argument is not
NULLnor are both tv_nsec values
UTIME_NOW, nor are both tv_nsec values
UTIME_OMITand the calling process's effective user ID does not match the owner of the file and is not the super-user.
- [
EPERM]
- The named file has its immutable or append-only flag set, see the chflags(2) manual page for more information.
- [
EROFS]
- The file system containing the file is mounted read-only.
The
futimens() system call will fail
if:
The
utimensat() system call will fail
if:
- [
EACCES]
- Search permission is denied for a component of the path prefix.
- [
EBADF]
- The path argument does not specify an absolute path and the fd argument is neither
AT_FDCWDnor a valid file descriptor.
- [
EFAULT]
- The path argument points outside the process's allocated address space.
- [
ELOOP]
- Too many symbolic links were encountered in translating the pathname.
- [
ENAMETOOLONG]
- A component of a pathname exceeded
NAME_MAXcharacters, or an entire path name exceeded
PATH_MAXcharacters.
- [
ENOENT]
- The named file does not exist.
- [
ENOTDIR]
- A component of the path prefix is not a directory.
- [
ENOTDIR]
- The path argument is not an absolute path and fd is neither
AT_FDCWDnor a file descriptor associated with a directory.
- [
ENOTSUP]
- The running kernel does not support this system call and
AT_SYMLINK_NOFOLLOWis used with a path relative to a file descriptor.
SEE ALSO¶chflags(2), stat(2), symlink(2), utimes(2), utime(3), symlink(7)
STANDARDS¶The
futimens() and
utimensat() system calls are expected to conform to IEEE Std 1003.1-2008 (“POSIX.1”).
HISTORY¶The
futimens() and
utimensat() system calls appeared in FreeBSD 10.3. | https://manpages.debian.org/testing/freebsd-manpages/utimensat.2freebsd.en.html | CC-MAIN-2021-39 | refinedweb | 569 | 52.29 |
Generally, Windows programs are very event based systems - a user presses a key, the key down event is fired with the information on what key was pressed, and the application reacts accordingly. There are sometimes cases, however, when the program needs to know the current state of the mouse/keyboard, but we don't have a handy event with the info about which keys are down or mouse buttons are pressed. Now you could always maintain this state yourself in some sort of data structure and updating it on key/mouse up and down events - but that is a lot of work, and, as it turns out, unnecessary. There are already functions built into .NET that let you access most (but not all) of this info - and in the case where there is not a built in function, the answer is just a simple interop away! And that is what we are going to take a look at today.
Below we have a screen shot of the little app we are going to build today. All it does is print out the current state of the mouse and keyboard when the button is clicked. Not a very useful app by any means, but what is useful is the code that populates those visible fields. By looking at the screen shot, you can probably tell what we are going to look at today: getting the current mouse position, the current mouse buttons held down, the current modifier keys pressed (ctrl, alt, shift), the current keys pressed, and the current keys toggled (useful for numlock, capslock, etc).
![ Screenshot Of Input State
App]()
Ok, so now lets walk through how we get this information. The first three are easy - there are ways built into .NET to get the info. First we have mouse position:
_MousePosLabel.Text = Control.MousePosition.X + ", " + Control.MousePosition.Y;
That's right - the current position of the mouse is always available in
the static property
MousePosition off of
Control. The only special
thing to note here is that the value is in screen coordinates, so you
will probably need to use a
PointToClient call to get it into useful
coordinates for your application. Next we have mouse buttons:
_MouseButtonsLabel.Text = Control.MouseButtons.ToString();
Again, available off
Control as a static property. This returns an
instance of the
MouseButtons enum containing the currently pressed
mouse buttons. Checking this enum to see if a specific mouse button is
pressed takes a little bit of boolean logic, but it isn't that bad. For
instance, to check and see if the left mouse button is pressed, you
would do something like this:
if((Control.MouseButtons & MouseButtons.Left) == MouseButtons.Left) { //Do stuff here }
Next we have another easy one - modifier keys:
_ModifierKeysLabel.Text = Control.ModifierKeys.ToString();
Yet again, it is a static property off of
Control. Don't worry, this
is the last easy one. Here, the property returns an instance of the
Keys enum. Even though it is an instance of the full blow keys enum
(i.e., it has entries for every key), only
Keys.Control,
Keys.Shift,
or
Keys.Alt will ever be set (since this only returns modifier keys).
Once again, like the mouse buttons, you need to do a bit of logic to see
if particular modifier keys are pressed (since more than one can be
pressed at once):
//Checks to see that Control is pressed, but doesn't care // about other modifier keys if((Control.ModifierKeys & Keys.Control) == Keys.Control) { //Do stuff here } //Checks to see Control is pressed and //Alt and Shift are not pressed if(Control.ModifierKeys == Keys.Control) { //Do stuff here } //Checks to see that Control and Shift are pressed, //but not Alt if(Control.ModifierKeys == (Keys.Control | Keys.Shift)) { //Do stuff here }
Ok, now we are on to the hard items - checking the down/toggle state of
any key. Sadly, .NET does not expose this, even though it is a standard
Win32 function. It is called
GetKeyState. Below you can see a small
class that wraps it up nicely:
using System; using System.Windows.Forms; using System.Runtime.InteropServices; namespace MouseKeyboardStateTest { public abstract class Keyboard { [Flags] private enum KeyStates { None = 0, Down = 1, Toggled = 2 } [DllImport("user32.dll", CharSet = CharSet.Auto, ExactSpelling = true)] private static extern short GetKeyState(int keyCode); private static KeyStates GetKeyState(Keys key) { KeyStates state = KeyStates.None; short retVal = GetKeyState((int)key); //If the high-order bit is 1, the key is down //otherwise, it is up. if ((retVal & 0x8000) == 0x8000) state |= KeyStates.Down; //If the low-order bit is 1, the key is toggled. if ((retVal & 1) == 1) state |= KeyStates.Toggled; return state; } public static bool IsKeyDown(Keys key) { return KeyStates.Down == (GetKeyState(key) & KeyStates.Down); } public static bool IsKeyToggled(Keys key) { return KeyStates.Toggled == (GetKeyState(key) & KeyStates.Toggled); } } }
The important part of this code is the
GetKeyState function (the one
with code, not the interoped one). Essentially, we take a
Keys enum
and cast it as an int, and pass it to the Win32
GetKeyState function.
Fortunately, the int cast of the
Keys enum does exactly what we want -
it gives us the value that the underlying API needs (otherwise we would
have to some sort of tedious mapping).
Now, the value that comes back is a little odd. It is a short (so 16
bits) - but all but two of those bytes don't matter. We care about the
high bit and the low bit. If the high bit is 1, then the key is
currently down. If the low bit is 1, then the key is currently toggled.
Granted, toggling doesn't really make sense for things other than caps
lock, num lock, or scroll lock, but Windows keeps track of a toggled
state for all keys. So with those two values, we compile a
KeyState
enum that we hand back.
That
KeyState enum is used by two functions here - the
IsKeyDown and
IsKeyToggled function. All they do is take a look at that enum and
return true or false as appropriate.
So how do we use this in actual code? Well, its pretty simple. Heres the code for the Down and Toggled fields in the little app shown above:();
Pretty much, we just check
IsKeyDown and
IsKeyToggled for every key
in the
Keys enum, and compile a string (using a string builder) with
the result. Granted, the loops are kind of annoying, but its rare that
you would have to know the state of every key at once, right? Anyway,
even if you did, there is a different Win32 function you could pull in
(which we aren't going to talk about here) called
GetKeyboardState
that does exactly that.
So here is all the code that executes when the "Check Now" button is pressed:
private void CheckBtn_Click(object sender, EventArgs e) { _MousePosLabel.Text = Control.MousePosition.X + ", " + Control.MousePosition.Y; _MouseButtonsLabel.Text = Control.MouseButtons.ToString(); _ModifierKeysLabel.Text = Control.ModifierKeys.ToString();(); }
And that is it for today! You can download the Visual Studio project with all this code here.
Source Files:
in my opinnion when i die you will answer my question
I believe it is because you can press multiple keys at the same time.
hi i 'dont understand this --->
foreach (Keys k in Enum.GetValues(typeof(Keys))) why is it Looping?
if((Control.ModifierKeys & Keys.Control) == Keys.Control)
can be written as this in .NET 4.0
if (Control.ModifierKeys.HasFlag(Keys.Control)) { }
Awesome Keyboard class; was perfect for my needs. Thank you!
excelent. thank you
Pretty usefull, thanks!
Very nice. A project with test code was a big help.
Thanks to whoever's page this is.
Chris
Thanks ;-)
excellent article
I have just stumbled upon this website...your tutorials are great!
Thanks for the article. It helped me a lot.
This is a great article. It is very interesting and informative! | http://tech.pro/tutorial/760/winforms-accessing-mouse-and-keyboard-state | CC-MAIN-2014-42 | refinedweb | 1,306 | 73.58 |
HTML5, Older Browsers and the Shiv
HTML5 introduced a few semantic elements that are not supported in older browsers. Some of these new elements are no different than generic block elements so they don’t pose any compatibility problems. All you need to ensure compatibility is to add a CSS rule to your website that causes the relevant elements to behave like block elements.
But Internet Explorer versions 8 and under pose a challenge. Any element not in the official roster of elements cannot be styled with CSS. That means we cannot make then behave like block elements or give them any formatting.
For example, the following code will not work.
<style> section {display: block} </style> <section>This is on its own line.</section> <section>This should appear on a separate line.</section>
But that’s not all. These new elements behave as if they don’t exist. For example, the following CSS won’t work, since the
section element won’t match the universal selector.
<style> body * span {color: red} </style> <body> <section> <span>This should be red, but won't be red in IE 8.</span> </section> </body>
Fortunately for us, a workaround exists that allows Internet Explorer (IE) to recognize these new elements allowing them to be styled, and thus giving us full use of these new semantic tags. It’s a tool called HTML5Shiv.
As noted on the linked Google page, “shiv” and “shim” are interchangeable terms in this context.
But how did we go from IE not even acknowledging the existence of this element, to now being able to use it?
The trick is that calling
document.createElement("section") will suddenly cause IE to recognize the
section element. No one knows why, but it works and you don’t even need to use the node returned by that function.
But you need to make sure to call it early on in your website before any of those elements are used, otherwise it won’t work.
You will need to call it for each and every new HTML5 elements like so:
"abbr article aside audio bdi canvas data datalist details figcaption figure "+ "footer header hgroup main mark meter nav output progress section " + "summary template time video" .replace(/w+/g, function(a){ document.createElement(a) });
Notice we’re using the
replace method of the
string object to succinctly iterate over each contiguous length of characters matched by the regular expression and executing the callback function for each character block which in turn calls
createElement.
Here on in, we’ll call this method, “shivving the document”, so that the document can render the new HTML5 elements.
Now our previous two HTML examples work. But that’s not all there is to it.
Pitfall 1: HTML5 and innerHTML
If HTML is being generated using
innerHTML and it is called on a node not currently attached to a document (AKA an orphaned node), then it’s deja vu all over again. The following two examples will not render the
section element, even though it’s run on a document already shivved.
var n1 = document.getElementById("n1"); n1.parentNode.removeChild(n1); n1.innerHTML = "<section>Sect 1</section>"; //won't work
var n2 = document.createElement("div"); n2.innerHTML = "<section>Sect 2</section>"; //won't work
In the second example above, if we append the node to the document first before calling
innerHTML, then it will work:
var n2 = document.createElement("div"); document.body.appendChild(n2); n2.innerHTML = "<section>Sect 2</section>"; //works
We can conclude that although we shivved the document earlier on, orphaned elements do not benefit from the shiv when calling
innerHTML.
What can we do? For starters, whenever we need to set
innerHTML we should append it to the document first. An alternative is to first shiv the
document property of the orphan before working with the orphan.
First let’s put our shiv in its own function.
function iehtml5shiv(doc) { "abbr article aside audio bdi canvas data datalist details " + "figcaption figure footer header hgroup main mark meter nav " + "output progress section summary template time video" .replace(/w+/g, function(a){ doc.createElement(a) }); }
The next time we have an orphan element, we can do this:
var n1 = document.createElement("div"); iehtml5shiv(n1.document); n1.innerHTML = "<section>Sect 1</section>"; //works
Notice how it’s just like shivving the document, but on the
document property of the element. And notice we’re accessing
document instead of
ownerDocument. Both are different things as shown here:
alert(n1.document == document); //false alert(n1.ownerDocument == document); //true
Now we have two methods to make sure our call to
innerHTML works when handling HTML5 elements.
Pitfall 2: cloneNode
It appears our cousin
cloneNode is also susceptible to losing its shiv. Any HTML5 elements which are cloned, or have had their parents cloned, will lose their identity.
Notice how the below element has colons in its
nodeName, meaning it’s being confused for an element from another namespace.
var n2 = n1.cloneNode(true); alert(n2.innerHTML); //outputs: <:section>Sect 1</:section>
This happens even if the node was already attached to the document.
There isn’t much we can do here except roll out your own implementation of
cloneNode, which is trivial enough.
Pitfall 3: Printing
Whenever you print a webpage, IE appears to generate a new document before printing which means all the shiv workarounds are not preserved.
There isn’t much you can do to mitigate this. The HTML5Shiv tool works around this by listening for the
onbeforeprint event and replacing all the HTML5 elements on the page with normal elements and then doing the reverse on the
onafterprint event.
Thankfully, the HTML5Shiv tool does that nicely for us.
References
- The HTML5Shiv tool:
- The story of the HTML5 Shiv:
- Ron
- Axis Multimedia
- liza
- Michael Morris
- Ubaidullah Butt
- Mauricio
- carlos
- simon
- Marri | http://www.sitepoint.com/html5-older-browsers-and-the-shiv/ | CC-MAIN-2014-41 | refinedweb | 969 | 55.34 |
The objective of this post is to explain how to send data to the ESP32 using the Bluetooth RFCOMM protocol. The tests of this tutorial were performed using a DFRobot’s ESP-WROOM-32 device integrated in a ESP32 FireBeetle board.
Introduction
The objective of this post is to explain how to send data to the ESP32 using the Bluetooth RFCOMM protocol.
RFCOMM allows the emulation of serial ports [1] over Bluetooth, and thus we can use it to exchange data, for example, with a computer program. In our case, we will be using Pybluez, a Python module that allows to use the Bluetooth functionalities of our machine. Please consult this tutorial which explains how to set Pybluez.
For the ESP32 part, we will be using the BTstack library and the ESP IDF. Please check this previous tutorial for a detailed explanation on how to set the BTstack library on the IDF.
The code shown here is based on the spp_counter example from the BTstack library. It is a more complete example that covers some more functionalities and I encourage you to try it.
The tests of this tutorial were performed using a DFRobot’s ESP-WROOM-32 device integrated in a ESP32 FireBeetle board.
The Python code
As usual, we will start by doing the import of the Pybluez module, so we can get the functions needed to access all the Bluetooth functionality.
from bluetooth import *
Next we will create an object of class BluetoothSocket, which will allow us to establish the communication with our ESP32. In the constructor, we need to specify the transport protocol. In our case, we will pass RFCOMM, since it is the protocol we want to use.
BTsocket=BluetoothSocket( RFCOMM )
Next we need to connect to the Bluetooth device by calling the connect method on our BluetoothSocket object. This method receives as input both the address of the Bluetooth device and the port/channel where it is available.
If you have followed this previous tutorial, you should already know the Bluetooth address of your ESP32. If not, we will have a way of discovering upon executing the ESP32 code from this tutorial, which we will specify in the next section. We will try to connect to channel 1, since it will be the one we will specify for the ESP32.
BTsocket.connect(("30:AE:A4:03:CD:6E", 1))
Now we will send a message to the ESP32 by calling the send method of the BluetoothSocket object. This function receives as input the string to be sent.
BTsocket.send("Hello world")
To finalize, we will close the connection by calling the close method, which receives no arguments. The full source code can be seen bellow and already includes this call. As can be seen, the code for this is very simple.
from bluetooth import * BTsocket=BluetoothSocket( RFCOMM ) BTsocket.connect(("30:AE:A4:03:CD:6E", 1)) BTsocket.send("Hello world") BTsocket.close()
The ESP32 code setup
If you have been following the previous tutorials about the ESP32 and the Bluetooth, you should already have an Hello_World BTstack project configured. You can keep using that project’s configurations. If you haven’t yet a project configured, please consult this previous post.
We start our code by including the btstack.h header file, which will be needed for our program.
#include "btstack.h"
Next we need to specify the header of a function that will be the handler executed upon receiving packets. As said in previous tutorials, this is a very low level API, so we need to implement these kind of handling functions. Although it makes the code more difficult to learn, it will also give us much more freedom and control, which will be beneficial in the long term.
This function needs to follow the prototype specified here, as the btstack_packet_handler_t type. So, this function will receive as input the packet type, the channel, a pointer to the actual packet and the size of the packet.
static void packet_handler (uint8_t packet_type, uint16_t channel, uint8_t *packet, uint16_t size);
Next we also need to declare a variable of type btstack_packet_callback_registration_t, which is declared here. We will later assign the previous packet handler function to a field of this structure called callback, which is precisely of type btstack_packet_handler_t, mentioned in the previous paragraph.
static btstack_packet_callback_registration_t hci_event_callback_registration;
We will handle the implementation of the packet handling function in other section of this tutorial.
The ESP32 main function
First we need to register our handler function for the Bluetooth HCI layer events. As we will see in the next sections, we need to handle the reception of the RFCOMM connection and accept it, so we can start receiving information from the Python program.
So, as we said before, we will assign the address of our packet handler function to the callback field of the btstack_packet_callback_registration_t variable we specified.
hci_event_callback_registration.callback = &packet_handler;
After that, we register the packet event handler function by calling the hci_add_event_hander function, which receives a pointer to a btstack_packet_callback_registration_t structure. Naturally, we will use the structure to which we assigned the packet handler function.
hci_add_event_handler(&hci_event_callback_registration);
Next we call the l2cap_init function, to initialize the L2CAP layer of the Bluetooth protocol. The same way, we call the rfcomm_init function to initialize the RFCOMM layer of the protocol, which we will use to establish the communication.
l2cap_init(); rfcomm_init();
Then we need to register our RFCOMM service. To do so, we call the rfcomm_register_service function, which receives as input a function of type btstack_packet_handler_t. Note that this is the same type of the function we declared in the beginning of our code, so we will reuse it and implement all the functionality for also handling the RFCOMM packets there.
The rfcomm_register_service function also receives as input the channel for the RFCOMM communication and the maximum frame size. Remember that in the Python code we used the channel 1, so it is the same we will specify here.
rfcomm_register_service(packet_handler, 1, 0xffff);
Next in our main function, we will initialize the SDP layer with the sdp_init function, and we will make the device discoverable with the gap_discoverable_control function.
Note that although we are initializing SDP, we are not registering our RFCOMM as discoverable for keeping this tutorial simple. This is the reason why we directly connected to the device using its address in the Python code, instead of first making a lookup for the available services.
We are also not setting the local name of the device so when we pair with it from the computer its address is displayed. That way, you will be able to retrieve this value and use it on the Python code.
sdp_init(); gap_discoverable_control(1);
To finalize, we will power on the Bluetooth hardware controller with a call to the hci_power_control function.
hci_power_control(HCI_POWER_ON);
The packet handler function
Now we will specify the packet handler function. First we will need to declare a variable to hold the channel id for when we receive the RFCOMM connection.
uint16_t rfcomm_channel_id;
Remember that in the prototype of our function one of the input parameters was the packet type. So we will use that value to look for a specific packet type we want. The first one we will search is the RFCOMM incoming connection event. We need to accept the connection upon receiving it.
This will correspond to a HCI packet event packet. Fortunately there are some defines on this library that help us finding the correct packet type. In this case we will use the HCI_EVENT_PACKET define.
Nonetheless, we still just know that it is a HCI event packet. To find the type of packet, we call the hci_event_packet_get_type function, passing as input the pointer for the packet. Remember that this pointer was also an input parameter for our packet handler function.
Once we get the event type, we can check if it is a RFCOMM incoming connection event. Again, we have the RFCOMM_EVENT_INCOMING_CONNECTION define to find this type of event.
if (packet_type == HCI_EVENT_PACKET && hci_event_packet_get_type(packet) == RFCOMM_EVENT_INCOMING_CONNECTION) { //Event handling code here }
The handling of this event will correspond to getting the RFCOMM channel id, by calling the rfcomm_event_incoming_connection_get_rfcomm_cid function and passing as input the pointer to the packet.
rfcomm_channel_id = rfcomm_event_incoming_connection_get_rfcomm_cid(packet);
Final, we use the channel id retrieved to accept the connection, by calling the rfcomm_accept_connection function.
rfcomm_accept_connection(rfcomm_channel_id);
The other type of packet that we need to look for is a RFCOMM data packet. We have the RFCOMM_DATA_PACKET define for the corresponding packet type value. If we get that type of packet, then we can iterate over the packet received and print it to the serial port.
Note that we receive a pointer to the packet, so we can access it as an array. Since we also have the size of the packet (another parameter of the packet handling function), we know when to stop to avoid reading outside the boundaries of the array.
if (packet_type == RFCOMM_DATA_PACKET){ printf("Received data: '"); for (int i=0;i<size;i++){ putchar(packet[i]); } printf("\n----------------\n"); }
The final code
You can check the final source code for the ESP32 bellow.
#include "btstack.h" static void packet_handler (uint8_t packet_type, uint16_t channel, uint8_t *packet, uint16_t size); static btstack_packet_callback_registration_t hci_event_callback_registration; static void packet_handler (uint8_t packet_type, uint16_t channel, uint8_t *packet, uint16_t size){ uint16_t rfcomm_channel_id; if (packet_type == HCI_EVENT_PACKET && hci_event_packet_get_type(packet) == RFCOMM_EVENT_INCOMING_CONNECTION){ rfcomm_channel_id = rfcomm_event_incoming_connection_get_rfcomm_cid(packet); rfcomm_accept_connection(rfcomm_channel_id); }else if (packet_type == RFCOMM_DATA_PACKET){ printf("Received data: '"); for (int i=0;i<size;i++){ putchar(packet[i]); } printf("\n----------------\n"); } } int btstack_main(int argc, const char * argv[]){ hci_event_callback_registration.callback = &packet_handler; hci_add_event_handler(&hci_event_callback_registration); l2cap_init(); rfcomm_init(); rfcomm_register_service(packet_handler, 1, 0xffff); sdp_init(); gap_discoverable_control(1); hci_power_control(HCI_POWER_ON); return 0; }
Testing the code
To test the whole system, we first need to compile and upload the code to the ESP32. Please consult this previous tutorial, which explains in detail how to compile and flash the FireBeetle board.
Note that after uploading the code, you may have to reset it or plug and re-plug the board so it leaves the program download mode. After successfully running the code, you should be able to detect it from your computer, as shown in figure 1.
Note that since we didn’t specify a display name, we see the name BTstack followed by the address of the device. This is the address you should use on your Python code.
Figure 1 – ESP32 being detected by Windows as Bluetooth device.
Upon pairing with the device on your computer, open a serial monitor tool to receive data from the ESP32. In my case, I’m using the Arduino IDE serial monitor since I’m not being able to receive the data on the IDF serial monitor.
After receiving the “BTstack up and running” message, run the Python Pybluez code, in order to communicate with the ESP32 using RFCOMM. You should get a result similar to figure 2. Note that I ran the Python code twice in that example, which is why two “Hello World” messages are shown. not be able to establish the BluetoothSocket connection. In order to solve this, you need to forget the device and repair it again. Please consult this GitHub issue to check the thread about it.
Related posts
- ESP32 Bluetooth: Using the BTstack library
- ESP32 Bluetooth: Finding the device with Python and BTStack
References
[1]
12 Replies to “ESP32 Bluetooth: Receiving data through RFCOMM”
Hi,
I’d like to use a ESP32 with a SerialTerm in my Android Phone with Classic Bluetooh, no BLE.
But the conexion is lost and it is impossible for using it.
Is there a solution for keeping the link between esp32 and phone or other device ?
Thanks a lot.
Hi! Sorry, I’ve never tested it with an Android Phone and that software.
But were you able to test with the Python code and a computer or do you also have problems on that environment?
Best regards,
Nuno Santos
No working in this conf. Any help ?
Debian 9 64bit , python 2.7 and ESP32-WROOM-32
ESP32 log >
Python log >
Hi! I did all the tests on Windows, so it may be a linux specific issue.
Can you test it on Windows to see if the problem persists, to confirm if the problem is related with the operating system?
Best regards,
Nuno Santos
Hi! Sorry, I’ve never tested it with an Android Phone and that software.
But were you able to test with the Python code and a computer or do you also have problems on that environment?
Best regards,
Nuno Santos
Hi! I did all the tests on Windows, so it may be a linux specific issue.
Can you test it on Windows to see if the problem persists, to confirm if the problem is related with the operating system?
Best regards,
Nuno Santos | https://techtutorialsx.com/2017/07/09/esp32-bluetooth-rfcomm/ | CC-MAIN-2020-40 | refinedweb | 2,119 | 53.51 |
Laravel ships with a fairly nice default password broker. It creates an auto-expiring reset token, associates that with the user, and sends an email to the user all in one nice fairly modular system. But, there's a problem...
How do you add dynamic data to your reset view.
By default, Laravel will pass through the requesting user and the new token that was created. But, there are times where you need more.
Looking around the usual haunts of Stack Exchange, and Laravel.io: most people recommend firing the underlying token logic yourself, then calling mail send.
It's a bit of smell to have to replicate half of the logic that is available from
PasswordBroker.sendResetLink.
Or they say to create a view composer, on the fly and I'm not a huge fan of that either (view composers should be available for all logic not just a single flow).
Instead, I wanted a way to be able to add data to my reset email without ripping out the guts of the rest of Laravel's existing logic.
At this point, you might be asking "why would I want to do this"? So consider the following system.
An app you are building has a closed registration process (only admins can create users). You don't want admins to set user passwords because they will most likely never be reset to something else.
Instead, you fill in the password with a random string and hash that (or even better a random string that isn't a valid hash so it never matches if someone tried to brute force guess passwords). Then you send the user a reset email to set the password on their new account.
This flow is great because you use existing tested logic to bring a new user into your system instead of having to maintain a password reminder AND a confirmation setup.
But, for most users, they'll be more than a little confused when they see an email telling them to reset a password on a site they've never been to or created an account for.
So, if we want to use the reset system and keep the nice syntax of
PasswordBroker.sendResetLink.
The magic comes from the optional callback that is accepted as the second argument to
sendResetLink.
Usually you would use this to change the
replyTo or subject for a Swift Mail Message.
But, there's more going on under the hood.
The callback has access to the
Swift_Message instance that eventually is used to send the email.
And in the
send function in Laravel's
Mailer, this
Swift_Message instance is set to the
message variable in our template.
Most often, this is used to use the email's subject in the template.
But, we can do a lot more than this.
By setting undefined public properties on
Swift_Message in this callback, we can pass almost ANYTHING into our email template.
So, for the example above with creating a User with a random password, and sending an email, my code can look like this:
public function createWithRandomPassword($attributes = []) { $attributes['password'] = Hash::make(str_random()); $user = $this->user->create($attributes); return $this->passwordBroker->sendResetLink($user->toArray(), function($message) { $message->newUser = true; }); }
And my template can grab that
newUser property:
@if ($message->newUser) <h1>Welcome New user!</h1> @endif <p>Click here to reset your password: {{ url('password/reset/'.$token) }}</p>
Do you have any hacks in Laravel that you like? Share them in the comments below!
Are you a Laravel or Ember developer wanting to branch out and learn how to build awesome apps in Node and Express? Sign up for the newsletter at embergrep.com to learn more about my upcoming course "Express.js: Zero to Sixty"! | http://ryantablada.com/post/add-variables-to-your-password-reset-emails | CC-MAIN-2020-05 | refinedweb | 625 | 63.39 |
#include <MCF2_lp.hpp>
Inheritance diagram for MCF2_lp:
Definition at line 12 of file MCF2_lp.hpp.
Definition at line 25 of file MCF2_lp.hpp.
Definition at line 26 of file MCF2_lp.hpp.
References branch_history, cg_lp, flows, gen_vars, and purge_ptr_vector(). from BCP_lp_user.
Pack an algorithmic variable.
Reimplemented from BCP_lp_user.
Definition at line 34 of file MCF2_lp.hpp.
References MCF2_pack_var().
Unpack an algorithmic variable.
Reimplemented from BCP_lp_user.
Definition at line 37 of file MCF2_lp.hpp.
References MCF2_unpack_var(). from BCP_lp_user. from BCP_lp_user. from BCP_lp_user. from BCP_lp_user. from BCP_lp_user. from BCP_lp_user. from BCP_lp_user.
Definition at line 14 of file MCF2_lp.hpp.
Referenced by ~MCF2_lp().
Definition at line 15 of file MCF2_lp.hpp.
Definition at line 16 of file MCF2_lp.hpp.
Definition at line 17 of file MCF2_lp.hpp.
Referenced by ~MCF2_lp().
Definition at line 19 of file MCF2_lp.hpp.
Referenced by ~MCF2_lp().
Definition at line 21 of file MCF2_lp.hpp.
Referenced by ~MCF2_lp().
Definition at line 22 of file MCF2_lp.hpp. | http://www.coin-or.org/Doxygen/CoinAll/class_m_c_f2__lp.html | crawl-003 | refinedweb | 156 | 56.11 |
I have to deal with a program on arrays doing different things with such as finding out if the row contains a number in the first n rows, put all the numbers listed in the arrays without repeating numbers that are listed twice, finding numbers that are in common and finally how long a array is without using .length. I have most of the setup correct but I do not understand how to input numbers into an array to test my method, its giving me an error the way im doing it and im not sure if im supposed to leave it blank or how im supposed to give it values. Any help is appreciated, thank you!
public class ArrayMethods { public static void main(String[] args) { contains(null, 5, 5); } public static boolean contains (int[] a, int v, int n ) { for (int i = 0; i >= n; i++) { if (a[i] == v) { return true; } } return false; } public static int[] union ( int[] a1, int[] a2) { int total[] = new int [a1.length + a2.length]; int current = 0; for (int i = 0; i > a1.length; i++) { if (a1.length == a2.length) { int a = 0; a = a1.length - 0; System.out.println(a); } System.out.println(total); } return a2; } public static int[] intersect (int[] a1, int[] a2) { int[] result; for (int i = 0; i > a1.length; i++) { } return a2; } public static int length (int[] a) { int count = 0; for (int i = 0; true; i++) { count = count + 1; return count; } } } | https://www.daniweb.com/programming/software-development/threads/385915/how-to-setup-arrays | CC-MAIN-2017-47 | refinedweb | 244 | 65.25 |
I2C SDA/SCL toggling when being programmed and connected to a hostuser_105642624 Mar 15, 2017 2:39 PM
On our custom board, BCM20737S is connected to PIC32 processor over I2C.
BCM20737S is the I2C host as it can only work as a host.
I can send and receive data using the cfa_bsc_OpExtended coomand.
So the I2C interface is working.
The only issue is that when the BCM20737S gets connected to an Android (BLE host), it seems to send out some packets (NULL characters?) over the I2C to the PIC.
I observed/verified the line toggling by monitoring the SDA/SCL lines on a scope.
Also I can use a debugger in the PIC32 and detect I2C packets. The content of the I2C packets looks like NULL characters, when falsely transmitted.
I triggerred the detection of I2C packets by setting up an IF statement with monitoring the STOP status.
However, due to the NULL character transmission, the I2C detection occurs undesirably, overwhelming the true packet detection.
I've added another condition to filter out the undesirable packets when the first byte is a NULL character, and this seems to work reliably.
However, I would like to really understand why getting connected to a host or getting the BCM20737S programmed would cause the I2C bus to be active, making the I2C slave to respond unnecessarily.
Thanks,
Andrew
1. Re: I2C SDA/SCL toggling when being programmed and connected to a hostBoonT_56 Mar 20, 2017 1:48 AM (in response to user_105642624)
I can't see the link...
Did you enable any debug trace? Disable it.
Is it possible to connect to a iphone host using an app like "LightBlue"?
If yes, does the issue still exist?
Do you see the issue if I2C is connected to something else eg a sensor (or nothing for that matter)?
2. Re: I2C SDA/SCL toggling when being programmed and connected to a hostuser_105642624 Mar 20, 2017 10:42 AM (in response to BoonT_56)
I disabled ble_trace's by adding the BLE_APP_DISABLE_TRACING() . The problem still existed.
#include "sparcommon.h"
...
APPLICATION_INIT() {
.......
BLE_APP_DISABLE_TRACING();
}
Then I commented all ble_trace just to be sure. The SCL/SDA lines are still toggling when getting connected.
I then commented out the I2C read/write commands. The problem is still there.
And yes, I used "LightBlue" in addition to our custom app.
I've also downloaded the Wiced Sense code. The same thing.
Attached is an photo of the scope traces (1 for SCL and 2 for SDA), when the BCM20737S gets connected, i.e.when you click on the BLE device name on LightBlue to connect.
3. Re: I2C SDA/SCL toggling when being programmed and connected to a hostBoonT_56 Mar 20, 2017 7:14 PM (in response to user_105642624)
Is it possible to provide a USBee trace? Btw, what is your external pull-ups on the SDA/SCL? 10K?
4. Re: I2C SDA/SCL toggling when being programmed and connected to a hostuser_105642624 Mar 20, 2017 10:20 PM (in response to BoonT_56)
I monitored the I2C data that was detected in the interfaced MCU. The data was 'NULL'. I believe the I2C address was correct.
Yes, the pull-ups are 10K.
What I can do is to monitor at the SDA/SCL lines on Wiced Sense kit, which I have.
I supposed I can do something similar with TAG03, which I also have. But I do not believe the I2C on TAG03 is connected to sensors.
Would it be possible for you to do something similar on your side? I am just going to solder two wires on SDA and SCL and look at them on the scope.
5. Re: I2C SDA/SCL toggling when being programmed and connected to a hostuser_105642624 Jun 6, 2017 6:08 PM (in response to user_105642624)
I just revisited this issue with TAG3 board.
The same thing occurred. When you press the RESET or reprogram, the I2C signals toggle, similar to what I observed on our custom board.
I think what may be happening is that somehow the BCM20737S is trying to talk to an EEPROM or some configuration device.
Is there anything in the code that may cause this, an automatic search for programming device?
Thanks,
Andrew
6. Re: I2C SDA/SCL toggling when being programmed and connected to a hostBoonT_56 Jun 6, 2017 11:37 PM (in response to user_105642624)
Yes. When 20737S boots up, it will start with the boot rom code, followed by discovery of any external storage. It will look out for an eeprom on the I2C, and if one is not found, it will then look for sflash on SPI. In most cases, it will detect the eeprom and carry on from there.
7. Re: I2C SDA/SCL toggling when being programmed and connected to a hostMichaelF_56
Jun 7, 2017 6:54 AM (in response to user_105642624)1 of 1 people found this helpful
The Boot Sequence is described here in the July 31st, 2015 response: BCM2073XS Boot Sequence: Can't download to board (device not found)
Step 2a. | https://community.cypress.com/message/32573 | CC-MAIN-2019-18 | refinedweb | 841 | 72.66 |
ZeroMQ defines a Protocol called ZMTP (ZeroMQ Message Transport Protocol) as a transport layer protocol for exchanging messages between two peers over a connected transport layer such as TCP (see RFCs: 23/ZMTP and 37/ZMTP).
A Protocol is a set of rules (from Greek protocollon, first leaf glued to a manuscript describing its consents) that governs data exchanging between two end points. Each protocol has its own set of rules of how data is formatted, when to send, how to manage it once received, etc.
In this article, we will explain the ZMTP protocol using a simple ZeroMQ Push-Pull example. In order to capture the data exchanged between the two sockets, we will use Wireshark.
Wireshark is a free and open source network protocol analyzer. It captures and decodes network packets. One of its intended purposes is to learn network protocol internals.
Wireshark's decoding process uses dissectors that can identify and display protocol's fields and values into human-readable format frames. It supports thousands of dissectors that parse and decode common protocols.
The following Wireshark screenshot shows captured packets exchanged between a ZeroMQ Push and Pull sockets.
The main window is composed of three panes:
The protocol hierarchy in the above Wireshark screenshot is Ethernet-Internet Protocol-Transmission Control Protocol. In each protocol level, the protocol dissector decode its part of the protocol and it passes the data on to the lowest-level data dissector.
One of the key strengths of Wireshark is the ability to add new custom dissectors to it, either as plugins or built into the source.
We notice that our socket message's data displayed in the TCP level is not decoded because the ZMTP dissector has not yet been installed. The ZMTP dissector (zmtp-wireshark) is a Wireshark plugin written in Lua and supports ZMTP 3.0 and later.
After installing this dissector, here is the same conversation between the Push and Pull sockets as shown above but with ZMTP dissector this time.
We notice that the packets in the “Packet List” pane has been decoded and displayed ZMTP informations in the columns. We notice also how this dissector decoded the ZMTP data and displayed its fields in a readable and elegant format in the “Packet Details” pane.
We will explain each ZMTP element in the following sections.
In this example, we have one push socket connected to one pull socket.
Here is the code:
#include "zhelpers.hpp"
int main() {
// Create context
zmq::context_t context(1);
// Create server socket
zmq::socket_t server (context, ZMQ_PUSH);
server.bind("tcp://*:5557");
// Create client socket
zmq::socket_t client (context, ZMQ_PULL);
client.connect("tcp://localhost:5557");
// Send message from server to client
s_send (server, "My Message");
std::string msg = s_recv (client);
std::cout << "Received: " << msg << std::endl;
server.close();
client.close();
context.close();
}
The Push socket will send a single message to Pull socket.
Push
Pull
Start Wireshark and add 'zmtp' filter to filter only the zmpt packets, then run the Push-Pull example. The captured packets are shown in the following screenshot:
zmtp
zmpt
The ZMTP dissector has recognized its packets, decoded and displayed them in a readable format in the 'Packet List' and 'Packet Details' panes.
ZMTP packets exchanged between two sockets are called 'connection'. A 'connection' is composed of three groups of packets: greeting, handshake and traffic as shown in the above screenshot.
The ABNF grammar that defines ZMTP is:
zmtp = *connection
connection = greeting handshake traffic
Now, let us examine each packet:
A greeting is composed of 64 octets containing data sent by peers in order to agree on version and security mechanism of the connection.
A greeting consists of signature, version, mechanism, as-server and filler. The ABNF grammar that defines the greeting is:
greeting = signature version mechanism as-server filler
Each greeting displayed by the dissector is composed (in this example) of three packets exchanged between the two sockets, to see these packets:
In the 'Follow TCP Stream' dialog box, the packets data are colored with blue and red. The blue color indicates packets from Push to Pull socket while packets from Pull to Push socket is marked in red.
The ZMTP signature is 10 bytes followed by the ZMTP version of two bytes. The ABNF grammar that defines the signature & version is:
signature = %xFF padding %x7F
padding = 8OCTET
version = version-major version-minor
version-major = %x03
version-minor = %x00
The ZMTP signature and version are partial parts of the greeting enabling a peer to detect and work with older versions of the protocol, that means a peer may downgrade its protocol to talk to a lower protocol peer. But, if a peer cannot downgrade its protocol to match its peer, it will close the connection.
In our example, the ZMTP version used by the two peers is 3.0 (major = 3, minor = 0).
Noting that padding field in the signature may be used for older protocol detection.
The security mechanism is an ASCII string null-padded as needed to fit 20 octets. The ABNF grammar that defines the security mechanism is:
string null
mechanism = 20mechanism-char
mechanism-char = "A"-"Z" | DIGIT | "-" | "_" | "." | "+" | %x0
The security mechanism ensures for a peer the identity of the other peer it talks to, so that messages cannot be tampered with, nor inspected, by third parties. The security mechanism defines also the handshake phase which is composed of some packets exchanged between peers after greetings. If a peer receives a security mechanism that does not exactly match the security mechanism it sent, it will close the connection.
In our example, the sockets define no security mechanism “NULL”, that means there is no authentication and no confidentiality.
NULL
The “NULL” security mechanism defines one command exchanged between the peers forming the handshaking phase which we will see later in this article.
The 'as-server' is composed of one byte. The ABNF grammar that defines the as-server is:
as-server
as-server = %x00 | %x01
The “as-server” indicates if the peer is acting as a server (value is 1) or as a client (value is 0). These values are defined by the security mechanism and they are not related to socket bind/connection direction (for example in the 'PLAIN' security mechanism the peer defined as a client authenticates itself to the peer defined as server by sending a HELLO command. The server accepts or rejects this authentication).
PLAIN
HELLO
The NULL security mechanism dose not specify a client and a server topology, so “as-server” field should be always zero for all peers.
The “filler” extends the greeting to 64 octets with zeros and its grammar is:
filler
filler = 31%x00
Framing Data
After greetings, all data is sent as frames. A frame consists of a flags field (1 octet), followed by a size field (one octet or eight octets) and a frame body of size octets. The size does not include the flags field, nor itself, so an empty frame has a size of zero.
The flags consists of a single octet containing various control flags:
Bit 0 (MORE)
0 : indicates that there are no frames to follow
1 : indicates that there is another frame to follow.
Bit 1 (LONG):
0 : indicates that the frame size is encoded as a 64-bit unsigned integer in network byte order
1 : indicates that the frame size is encoded as a single octet
Bit 2 (COMMAND)
0 : indicates that the frame is a message frame
1 : indicates that this frame is a command frame
Bits 3-7: reserved for future use and must be zero.
Examples of frames are discussed in following sections.
Handshake is composed of zero or more commands defined by the security mechanism in the greetings. A command is a single long or short frame. The ABNF grammar that defines any command is:
command = command-size command-body
command-size = %x04 short-size | %x06 long-size
short-size = OCTET ; Body is 0 to 255 octets
long-size = 8OCTET ; Body is 0 to 2^63-1 octets
command-body = command-name command-data
command-name = OCTET 1*255command-name-char
command-name-char = ALPHA
command-data = *OCTET
Handshake is an extension protocol allowing peers to create a secure connection. If the security handshake is successful, the peers continue the discussion, otherwise one or both peers closes the connection.
We can see that the rule “command-size” in the above grammar starts either by 0x04 or by 0x06 which represents the flags field of a frame. The flags 0x04 has only Bit2 set to 1, which means that this frame is a single short command frame. The flags 0x06 has Bit1 and Bit2 set to 1, which means that this frame is a single large frame.
command-size
The NULL security mechanism defines a READY command exchanged between peers. A READY command consists of a list of properties. Each property consists of a name-value pair.
READY
The ABNF grammar that defines the NULL security mechanism is:
null = ready *message | error
ready = command-size %d5 "READY" metadata
command-size = %x04 short-size | %x06 long-size
short-size = OCTET ; Body is 0 to 255 octets
long-size = 8OCTET ; Body is 0 to 2^63-1 octets
metadata = *property
property = name value
name = OCTET 1*255name-char
name-char = ALPHA | DIGIT | "-" | "_" | "." | "+"
value = 4OCTET *OCTET ; Size in network byte order
The READY command in our example contains a property named “Socket-Type” which defines the sender's socket type. The value of this property is “PUSH” when the push socket sends this command and “PULL” when the pull socket sends it.
Socket-Type
PUSH
PULL
A peer validates that the other peer is using a valid socket type (valid combination of sockets). In our example, Push peer validates that the other peer has a Pull socket type and vice versa. If the validation is not succeeded, then the connection will be closed.
A traffic consists of commands and messages intermixed.
The ABNF grammar that defines the traffic is:
traffic = *(command | message)
The grammar of “command” is already defined above, and here is the ABNF grammar of a message:
command
message = *message-more message-last
message-more = ( %x01 short-size | %x03 long-size ) message-body
message-last = ( %x00 short-size | %x02 long-size ) message-body
short-size = OCTET ; Body is 0 to 255 octets
long-size = 8OCTET ; Body is 0 to 2^63-1 octets
message-body= *OCTET
The flags byte of the message frame is defined in the rules “message-more” and “message-last”. This field can take four values:
flags
message-more
message-last
0x00: indicates that this frames is a short last-frame message
0x02: indicates that this frames is a long last-frame message
0x01: indicates that this frames is a short more-frame message
0x03: indicates that this frames is a long more-frame message
In our example, the traffic consists of one message:
Now, we will send a multi frame message. The first frame is a long message (its length > 255 bytes) and the second frame is a short message (the same message that we sent before).
s_sendmore (server, std::string(256, 'a'));
s_send (server, "My Message");
In the above screenshot, I didn't display all long message bytes in the 'Packet Bytes” pane. We notice that the first frame's flags indicate that this frame is followed by another one (More) and it's a long frame (bit 1 is set to one). The payload length is encoded as a 64-bit unsigned integer (8 bytes) because it's greater that 255 bytes.
Packet Bytes
The second frame is followed directly after the first one. Its flags indicate that it's the last frame (bit 0 is set to zero) and that it's a short frame (bit 1 is set to zero) since the payload length is less than 256 bytes.
ZMTP is a protocol that governs data exchanging between ZeroMQ sockets. It defines a certain number of rules: protocol version, security mechanism, defining discrete messages (frames), metadata (single/multi frames, short/long message), etc.
This article, along with any associated source code and files, is licensed under The Creative Commons Attribution-Share Alike 3.0 Unported License | https://www.codeproject.com/Articles/863889/ZeroMQ-Diving-into-the-Wire | CC-MAIN-2019-09 | refinedweb | 2,026 | 58.01 |
Rails comes with a router in
config/routes.rb. This file contains all the
the urls your application will respond to. This Rails guide
is a good reference to using Rails' router. Let's create some routes that our
Angular application will eventually interface with. Rails' router comes with a
useful
resources function which we can use to specify RESTful routes, which
we can also nest in blocks to create nested routes.
routes.rbfor
:postsand
resources. We only need some of the routes that
resourcesprovides us with by default, so we'll use the
only:option specify the actions we want. We'll need to create our own
putroutes for upvoting. Putting it in a
memberblocks makes it so our url parameters map to
:idinstead of
:post_id.
root to: 'application#angular' resources :posts, only: [:create, :index, :show] do resources :comments, only: [:show, :create] do member do put '/upvote' => 'comments#upvote' end end member do put '/upvote' => 'posts#upvote' end end
rake routesto see all the routes you created.
Once you've seen the routes we've configured, let's generate our controllers for
posts and comments. We'll need to use the
--skip-assets and
--skip-template-engine flags since we'll be creating our own javascript files
and templates.
rails generate controller Posts --skip-assets --skip-template-engine
rails generate controller Comments --skip-assets --skip-template-engine
jsonformat, we need to add the following
respond_tostatement in
ApplicationController:
protect_from_forgery with: :exception respond_to :json
Let's start adding actions to our controllers. We'll be using the
respond_with method in our actions to return json to our endpoints. Don't
forget to permit the data coming from the user with strong parameters.
We'll need an
index,
create,
show, and
upvote action to correspond with
the routes we just created. Since we're using Rails 4, we'll need to also specify
which parameters we want permitted in our controllers
:linkand
:titleattributes in
PostsController:
def post_params params.require(:post).permit(:link, :title) end end
index,
create,
show, and
upvoteaction
PostsController:
def index respond_with Post.all end def create respond_with Post.create(post_params) end def show respond_with Post.find(params[:id]) end def upvote post = Post.find(params[:id]) post.increment!(:upvotes) respond_with post end private def post_params params.require(:post).permit(:link, :title) end end
:bodyattribute for comments in
def comment_params params.require(:comment).permit(:body) end end
createand
upvoteactions in
def create post = Post.find(params[:post_id]) comment = post.comments.create(comment_params) respond_with post, comment end def upvote post = Post.find(params[:post_id]) comment = post.comments.find(params[:id]) comment.increment!(:upvotes) respond_with post, comment end private def comment_params params.require(:comment).permit(:body) end
We respond with both post and comments in CommentsController because we are using a nested resource, although only the last object is returned when responding to json.
Our Rails backend is now ready to be wired up to our angular app! | https://thinkster.io/tutorials/angular-rails/creating-api-routes-and-controllers | CC-MAIN-2019-13 | refinedweb | 486 | 57.47 |
Compiling ScummVM/MinGWCompiling ScummVM
Download MinGW:
Download MSYS and the MSYS Developer Tools:
Check the "Installing MinGW and MSYS" section below for instructions on how to create your ScummVM compilation environment
To ease the whole process, a package of all the needed precompiled libraries has been created.
All you need to do is:
Both MinGW and MSYS need to be installed and working to compile ScummVM.
Now, we need to compile the required libraries and tools..
If you do wish to recompile SDL from source code, please note the following: should be compiled before libvorbis and libFLAC
Unzip the libogg archive in a folder, open MSYS, go to the libogg folder and issue these commands to compile the library:
./configure --disable-shared --prefix=/mingw
make
To install the library, type:
make install prefix=/mingw
Unzip the libvorbis archive in a folder, open MSYS, go to the libvorbis folder and issue these commands to compile the library:
Unzip the libmad archive in a folder. Open MSYS, go to the libmad folder. If you are using gcc 4.4 or higher, run the following command:
sed -i '/-fforce-mem/d' configure
Then issue these commands to compile the library:
Unzip the libmpeg2 archive into a folder. Open MSYS, go to the libmpeg2 folder.
./configure --disable-sdl --disable-shared --prefix=/mingw
make
Note that if you are compiling x64 i.e. for a 64-bit target, then currently (v0.5.1), the following patch is needed:
--- libvo/video_out_dx.c.orig 2014-02-17 16:38:24.000000000 +0100
+++ libvo/video_out_dx.c 2014-02-17 16:39:34.000000000 +0100
@@ -92,7 +92,7 @@
switch (message) {
case WM_WINDOWPOSCHANGED:
- instance = (dx_instance_t *) GetWindowLong (hwnd, GWL_USERDATA);
+ instance = (dx_instance_t *) GetWindowLongPtr (hwnd, GWLP_USERDATA);
/* update the window position and size */
point_window.x = 0;
@@ -173,7 +173,7 @@
/* store a directx_instance pointer into the window local storage
* (for later use in event_handler).
* We need to use SetWindowLongPtr when it is available in mingw */
- SetWindowLong (instance->window, GWL_USERDATA, (LONG) instance);
+ SetWindowLongPtr (instance->window, GWLP_USERDATA, (LONG) instance);
ShowWindow (instance->window, SW_SHOW);
Unzip the flac archive in a folder, open MSYS, go to the flac folder and issue these commands to compile the library:
We use fluidsynth 1.0.9, since later versions requires GTK.
Unzip the fluidsynth archive in a folder, open MSYS, go to the fluidsynth folder and apply the following patch:
--- include/fluidsynth.h
+++ include/fluidsynth.h
@@ -28,13 +28,7 @@
#endif
#if defined(WIN32)
-#if defined(FLUIDSYNTH_DLL_EXPORTS)
-#define FLUIDSYNTH_API __declspec(dllexport)
-#elif defined(FLUIDSYNTH_NOT_A_DLL)
-#define FLUIDSYNTH_API
-#else
-#define FLUIDSYNTH_API __declspec(dllimport)
-#endif
+#define FLUIDSYNTH_API
#elif defined(MACOS9)
#define FLUIDSYNTH_API __declspec(export)
Unzip the libpng archive in a folder, open MSYS, go to the libpng folder and issue these commands to compile the library:
Unzip the libtheora archive in a folder, open MSYS, go to the libtheora folder and issue these commands to compile the library:
./configure --disable-shared --disable-examples --prefix=/mingw
make
Unzip the libfaad2 archive in a folder, open MSYS, go to the libfaad2 folder and apply the following patch:
--- frontend/main.c
+++ frontend/main.c
@@ -31,7 +31,9 @@
#ifdef _WIN32
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
+#ifndef __MINGW32__
#define off_t __int64
+#endif
#else
#include <time.h>
#endif
Unzip the freetype archive in a folder, open MSYS, go to the freetype folder and issue these commands to compile the library:
OK this should be all of it (thankfully), so you should be good to go. | http://wiki.scummvm.org/index.php/Compiling_ScummVM/MinGW | CC-MAIN-2014-10 | refinedweb | 567 | 50.97 |
02 May 2012 05:51 [Source: ICIS news]
SINGAPORE (ICIS)--Asia’s hydrous ethanol prices are expected to remain at their current levels with some firming up expected in the third quarter because of tightening supply from ?xml:namespace>
Sugarcane production in
However, market players said
A persistent drought has plagued the country since late 2011, according to media reports.
In addition, there has been a larger volume of ageing cane crop, making the crop turnover slower, the reports said.
“The [ethanol] supply will be tight because of these two factors, but not as bad as that in 2011,” a southeast Asia-based ethanol trader said.
According to the Brazilian Sugarcane Industry Association (UNICA), a total of around 509m tonnes are expected to be produced, which is 3.19% higher than the previous harvest but still lower than the government’s estimate.
In 2011,
As a result of the low harvest at the time, the prices of hydrous ethanol for export to countries such as
In 2012, the market for hydrous B-grade ethanol started on a firm note with offers at as high as $820/cbm CFR NE Asia, but prices gradually declined, according to data from ICIS.
During the later part of the first quarter, buying demand slowed down because of comfortable inventory levels among customers, and sellers lowered their prices as a result of the weak demand to spur buying interest.
Over the last two weeks, offers of Brazilian hydrous ethanol have hovered at $770-780/cbm but buyers’ response has been weak this week because major buyer
In addition, the depreciation of the Brazilian real (R) to R1.88 against the US dollar from R1.82 two weeks ago has made it unviable to sell ethanol at high | http://www.icis.com/Articles/2012/05/02/9555518/asias-hydrous-ethanol-prices-to-be-stable-to-firm-on-low.html | CC-MAIN-2014-42 | refinedweb | 292 | 54.15 |
E4/position papers/Frank Appel
From Eclipsepedia
Bio.
RAP Experience Report on E4/Work Area items
Here are some comments on some of the items of the E4/Work Areas / page. These comments are based on the experince we have made with the RAP project on these topics. Note that the solutions we provide for some of the problems in RAP are only ment as a starting point for discussion.
SWT and RWT
RAP was created to provide RCP developers a possibility to bring (at least parts of) their RCP applications to the web without the need to step deep into the low-level web-technologies. The reason for this is cost reduction by reuse of knowlege and code. Therefore we have chosen the technical approach that provides the highest possible reusability - a thin client with a rich widget set and a stateful serverside on top of OSGi, reusing the eclipse workbench paradigm.
With respect to this goal and the distributed nature of the environment it's clear that RWT can only provide a subset of functionality of SWT. So currently missing functionality leads with RWT to missing API. Transforming SWT code into RWT ends up with compile errors if functionality is used that isn't available in RWT. This helps to identify problems at compile time, which we preferred to the error-prone process of finding out about missing functionality at runtime.
Also there are certain additional APIs for web-specifics, not available in SWT of course (some of them are mentioned in the chapters below).
It is obvious that the more SWT functionality is provided by RWT the more reuse of code based on SWT can be done. So it would be great to achieve a solution where RWT is just the 'web-SWT' - fragment. But still there are some difficulties e.g. regarding missing GC support (how to draw on the webbrowser with javascript? The current available possibilities do not scale) and the different resource management (colors, fonts and images are no system resources on the server, they are shared between sessions and therefore don't provide constructors and dispose methods).
Session Support
As RAP is inherent server centric it has to deal with the three scopes an object can belong to on the server: request-, session- and application scope.
The lifetime of objects in request scope spans at most a certain request lifecycle. Objects in request scope are only visible for the current request. Several internal datastructures of RWT use this scope, e.g. the datastructure that helps to determine the status delta that has to be sent to the client. ServletRequest#setAttribute(String,Object) and ServletRequest#getAttribute(String) are used for storing objects in request scope in servlet environment.
The lifetime of objects in session scope spans at most all the requests that belong to a certain client. Objects that belong to a certain session can not be referenced by another session but they are visible for each request that belongs to the session. HttpSession#setAttribute(String,Object) and HttpSession#getAttribute(String) are used for storing objects in session scope in servlet environment.
Last but not least objects in application scope span at most the lifetime of the whole application and are accessible for each request and each session. Objects in application scope can be stored in class variables.
To handle this scopes RAP maps a context to each request thread using a ThreadLocal. With this context in place it's possible to use request, response and session references at any place in the code, without transfering them all the time in method parameters. At the start of the request's lifecycle the context gets mapped and at the end it get's disposed of:
try { ServiceContext context = new ServiceContext( request, response ); ContextProvider.setContext( context ); [...] // processing of lifecycle [...] } finally { ContextProvider.disposeContext(); }
Access to the http session for example can be done using:
RWT.getRequest().getSession()
As RCP applications generally are single user clients using singletons is a quite common practice. Trying to transfer this practice to RAP there arises a problem. Singletons use class variables for storing the singleton instance. But this puts the instance in application scope in RAP. To solve this problem, but providing a similar programming pattern RAP provides a class called SessionSingletonBase which allows to access unique instances of types per session.
SessionSingletonBase provides a convenience method
public static Object getInstance( final Class type )
that allows to create singletons that can be used like singletons in RCP:
static class MySingleton { private MySingleton() { //prevent instance creation } public static MySingleton getInstance() { return ( MySingleton )getInstance( MySingleton.class ); } }
This works fine as long as the access to the singleton is done in the request thread. Background threads don't have a context mapped, so they will fail accessing the singletons. In RWT there is API to deal with this too:
UICallBack#runNonUIThreadWithFakeContext(Display,Runnable)
Besides that the name isn't very good, this method executes the given runnable in the current thread but mapping a context that allows access to a certain session. The session is determined using the display instance, since in RWT display and session live in a one to one relationship.
Multi Locale Support
In consideration of the things mentioned in the section Session Support it's clear that the multi locale support of RCP doesn't work in server environments as the translated values of the messages classes are stored in class variables.
To keep the advantage of typesafety we moved the class variables of the messages classes in fields. For a certain locale a certain instance of that message class is provided. This instance can be retrieved using a static get method to access the locale aware translation:
public class WorkbenchMessages { private static final String BUNDLE_NAME = "org.eclipse.ui.internal.messages";//$NON-NLS-1$ [...] // --- File Menu --- public String NewWizardAction_text; [...] public static WorkbenchMessages get() { Class clazz = WorkbenchMessages.class; Object result = RWT.NLS.getISO8859_1Encoded( BUNDLE_NAME, clazz ); return ( WorkbenchMessages )result; } }
Usage may look like this:
page.setDescription(WorkbenchMessages.get().NewWizardNewPage_description);
RWT looks up the locale either in the session and if none is given there it evaluates the current request. If no request is available it uses the system locale as fallback.
This helps with common translation in code, but it doesn't help with translation of labels in extension-points. To achieve this we had to use a patch fragment for org.eclipse.equinox.registry. The patch fragment prevent the translation of labels in the registry code, since this would be a one-time-per-applicaton translation. The translation now takes place by the time an extension is materialized. | http://wiki.eclipse.org/index.php?title=E4/position_papers/Frank_Appel&oldid=117508 | CC-MAIN-2014-41 | refinedweb | 1,097 | 53 |
I have a program to add coins into a piggy bank and calculate the total but my program is not asking how many of each coin and will not give me the total.
import java.util.Scanner; public class PiggyBankTester { public static void main(String[] args) { int pennies; int nickels; int dimes; int quarters; Scanner in = new Scanner(System.in); String choice = ""; while (choice != "X") { System.out.println("What type of coin to add(P, N, D, Q, or X to exit?"); choice = in.next(); } if (choice == "P") { System.out.print("Enter the number of pennies: "); pennies = in.nextInt(); } else if (choice == "N") { System.out.print("Enter the number of nickels: "); nickels = in.nextInt(); } else if (choice == "D") { System.out.print("Enter the number of dimes: "); dimes = in.nextInt(); } else if (choice == "Q") { System.out.print("Enter the number of quarters: "); quarters = in.nextInt(); } PiggyBank bank = new PiggyBank(); System.out.println("Your total change in dollars and cents is "+ bank.bankTotal()); } }
public class PiggyBank { private int pennies; private int nickels; private int dimes; private int quarters; public PiggyBank() { } public void addPennies(int numPennies) { pennies = pennies + numPennies; } public void addNickels(int numNickels) { nickels = nickels + numNickels; } public void addDimes(int numDimes) { dimes = dimes + numDimes; } public double bankTotal() { double total = 0.0; total = pennies * 1 + nickels * 5 + dimes * 10.0 + quarters * 25; return total; } } | https://www.daniweb.com/programming/software-development/threads/387356/piggy-bank-not-adding-money | CC-MAIN-2018-30 | refinedweb | 221 | 51.95 |
Does anyone have a valid use case for
ActiveRecord::Base#toggle! - inquiring
minds want to know!
The implementation from the rails codebase…
def toggle!(attribute) toggle(attribute).update_attribute(attribute, self[attribute]) end
So you send an
ActiveRecord instance an attribute whose state you want to flip
around, it flips it around, then it saves the instance with
#update_attribute,
which means no validations will be run.
Here’s an example…
>> user = User.find :first => #<User id: 1, first_name: nil, last_name: nil, sysadmin: false, ...> >> user.sysadmin? => false >> user.toggle! :sysadmin => true >> user.reload.sysadmin? => true
…the problem is that while it’s easy to come up with ways that you CAN use this feature, we can’t come up with a use case where you SHOULD use this feature. Please enlighten us - but be warned that your reply will be evaluated within the context of various best practice rules that you may or may not be aware of. | https://thoughtbot.com/blog/riddle-me-this | CC-MAIN-2019-47 | refinedweb | 156 | 72.46 |
ASP.NET MVC - Actions
ASP.NET MVC Action Methods are responsible to execute requests and generate responses to it. By default, it generates a response in the form of ActionResult. Actions typically have a one-to-one mapping with user interactions.
For example, enter a URL into the browser, click on any particular link, and submit a form, etc. Each of these user interactions causes a request to be sent to the server. In each case, the URL of the request includes information that the MVC framework uses to invoke an action method. The one restriction on action method is that they have to be instance method, so they cannot be static methods. Also there is no return value restrictions. So you can return the string, integer, etc.
Request Processing
Actions are the ultimate request destination in an MVC application and it uses the controller base class. Let's take a look at the request processing.
When a URL arrives, like /Home/index, it is the UrlRoutingModule that inspects and understands that something configured within the routing table knows how to handle that URL.
The UrlRoutingModule puts together the information we've configured in the routing table and hands over control to the MVC route handler.
The MVC route handler passes the controller over to the MvcHandler which is an HTTP handler.
MvcHandler uses a controller factory to instantiate the controller and it knows what controller to instantiate because it looks in the RouteData for that controller value.
Once the MvcHandler has a controller, the only thing that MvcHandler knows about is IController Interface, so it simply tells the controller to execute.
When it tells the controller to execute, that's been derived from the MVC's controller base class. The Execute method creates an action invoker and tells that action invoker to go and find a method to invoke, find an action to invoke.
The action invoker, again, looks in the RouteData and finds that action parameter that's been passed along from the routing engine.
Types of Action
Actions basically return different types of action results. The ActionResult class is the base for all action results. Following is the list of different kind of action results and its behavior.
Let’s have a look at a simple example from the previous chapter in which we have created an EmployeeController.
using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.Mvc; namespace MVCControllerDemo.Controllers { public class EmployeeController : Controller{ // GET: Employee public ActionResult Search(string name){ var input = Server.HtmlEncode(name); return Content(input); } } }
When you request the following URL, then you will receive the following output as an action.
Add Controller
Let us add one another controller.
Step 1 − Right-click on Controllers folder and select Add → Controller.
It will display the Add Scaffold dialog.
Step 2 − Select the MVC 5 Controller – Empty option and click ‘Add’ button.
The Add Controller dialog will appear.
Step 3 − Set the name to CustomerController and click ‘Add’ button.
Now you will see a new C# file ‘CustomerController.cs’ in the Controllers folder, which is open for editing in Visual Studio as well.
Similarly, add one more controller with name HomeController. Following is the HomeController.cs class implementation.
using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.Mvc; namespace MVCControllerDemo.Controllers { public class HomeController : Controller{ // GET: Home public string Index(){ return "This is Home Controller"; } } }
Step 4 − Run this application and you will receive the following output.
Step 5 − Add the following code in Customer controller, which we have created above.
public string GetAllCustomers(){ return @"<ul> <li>Ali Raza</li> <li>Mark Upston</li> <li>Allan Bommer</li> <li>Greg Jerry</li> </ul>"; }
Step 6 − Run this application and request for. You will see the following output.
You can also redirect to actions for the same controller or even for a different controller.
Following is a simple example in which we will redirect from HomeController to Customer Controller by changing the code in HomeController using the following code.
using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.Mvc; namespace MVCControllerDemo.Controllers{ public class HomeController : Controller{ // GET: Home public ActionResult Index(){ return RedirectToAction("GetAllCustomers","Customer"); } } }
As you can see, we have used the RedirectToAction() method ActionResult, which takes two parameters, action name and controller name.
When you run this application, you will see the default route will redirect it to /Customer/GetAllCustomers
| https://www.tutorialspoint.com/asp.net_mvc/asp.net_mvc_actions.htm | CC-MAIN-2019-04 | refinedweb | 745 | 51.24 |
Visual
So having EBO by default instead of needing to opt-in via `declspec` is still literally years away..? :-/
It will be a while, yes. I’m personally with you (break the world in pursuit of conformance and performance, rebuild everything from scratch), but the vast majority of customers find binary breaking changes to be difficult to deal with.
Hi STL, can’t this be put behind a flag, like /fullconformance or /strict? So the default behavior would be to use declspec, and when the flag is specified, EBO would be on by default.
If people don’t want to update their code to use the new/modern compiler, then don’t update the compiler. Keep using VS2008 or whatever they use to compile their buggy code.
Modes which affect layout are extremely problematic.
Speaking of C++17… what are the chances of seeing Structured Bindings in VS2015? :)
Unlikely. We’ve never planned to add new features to VS 2015 through Updates indefinitely.
What about Structured Bindings in VC “15”? ☺
Not planned for VS “15” RTM. See GDR’s reply to a similar question here:
Does this mean VS 15 will not contain the next major release of the STL?
You referred to that in your previous article. I presume fixes that may break binary compatibility will wait for at least a year or more now.
It’ll help to explain our branch structure. For several years, the primary development branch for the compiler front-end and STL has been WCFB01. This is where the toolsets in VS 2015 RTM and its Updates came from. We’ve simply avoided making binary breaking changes after RTM. For VS “15”, we are continuing to work in WCFB01 and continuing to avoid binary breaking changes while adding features and fixing bugs. Most of our development work and almost all of our new features are going into this branch.
We have a separate branch, WCFB02, where we’re making binary-incompatible changes to the STL. It’s too early for us to announce exactly what this will contain or when it will be available, but I can mention a couple of specific examples. Major overhauls to the Clause 30 multithreading headers (future etc.) are being performed in WCFB02 because they inherently break bincompat. Similarly, overhauling the Filesystem TS implementation as part of moving it out of the experimental namespace will require changes to the separately compiled DLL – while that hasn’t happened yet, we expect to do this work in WCFB02.
So, the story is: VS “15” will contain a significantly updated compiler and STL, with new features and bugfixes. They’ll simply be binary compatible with VS 2015 (just like how its Updates were delivered). So it’s a major amount of work, but not a new major version of the toolset as far as bincompat is concerned.
WCFB02’s bincompat-breaking STL will appear at some point, but it won’t displace the WCFB01 bincompat-preserving toolset we’re shipping in VS “15” (this is what I meant when I said “will remain available throughout the lifecycle”).
Thank you for explaining. My concern would be that some improvements would be delayed for more VS releases than previously. The overhaul of Clause 30 etc. would be an example of this. I also recall you posting in the past that you made a fix that would allow you to eliminate a larger number of source files. Binary compat is important but so are fixes for performance and correctness.
Yeah, that was the iostreams perf fix, which I was able to ship in Update 2, but checked in a more aggressive form into WCFB02. I was also concerned that preserving bincompat would limit our ability to improve things, but surprisingly few fixes require bincompat breaks. The stuff we’re doing in WCFB02 is going to take a while, so instead of delaying the availability of various overhauls, you should think of this as bringing you an updated STL (from WCFB01) earlier, compared to our earlier practice which would have involved radio silence between 2015 RTM and whenever WCFB02 is ready.
Stephan,
I get the sense that you are being very careful in your wording in an attempt to be very clear without revealing things that you are not authorized to share. But I am ending up with more questions than answers after reading this post.
I have the task of providing some degree of guidance to my organization with respect to moving from VS 2013 to a newer compiler. Knowing that “15” is coming soon, we need to balance whatever urgency there is to upgrade with what we might miss out on if we go with VS 2015 rather than waiting.
This post seems to indicate that we will not get a non-experimental version of the Filesystem TS until VC “16”, and also suggests that any C++17 feature that is too involved will also wait until that version. These are not bad things, mind you. But they are important factors.
I understand that you do not know or can not share when VS “15” is expected to ship. I am sure the same holds true for giving definitive answers on the complete feature set. But information similar to what is in this post is nearly worse than having no information at all.
I can’t talk about release dates, and I generally can’t make promises about the future (with the exception that I can generally talk about things that have been checked into source control; e.g. I can tell you that std::optional has been implemented and will appear in the next build of VS “15”, it just barely missed the deadline for Preview 4). Other than that, I’m trying to explain as much as possible, which is why I mentioned WCFB02’s future tech.
Regarding your situation, I can say that VS 2013 is super old and super buggy. Not only have we implemented zillions of compiler and STL features between 2013 and 2015 Update 3, we’ve fixed tons of bugs for both correctness and performance (see my linked blog posts for more details). There is a massive amount of value in upgrading right now.
Without making concrete promises, I can say that (1) we don’t expect to be able to make major changes to the experimental Filesystem TS implementation in WCFB01 due to the need to preserve bincompat, (2) we’re planning to overhaul Filesystem in WCFB02 but it’s too early to talk about how that’ll ship, and (3) Filesystem is special because of the separately compiled DLL interaction – we are NOT planning to withhold other compiler/STL features from the compatible toolset in VS “15”. All new features are targeted for WCFB01’s bincompat-preserving toolset unless there are technical reasons why this is not possible.
Stephan,
I appreciate your response. You didn’t have to acknowledge my posts at all, let alone provide useful actionable information. Thank you.
In the past we have had to deal with an enormous amount of inertia with respect to compiler updates. We were using VS 2003 until 2011, and were grudgingly allowed to move to VS 2010 at that time. Our jaws hit the floor when they started handing out VS 2013 licenses, which is another recent development (just prior to 2015 release, if I recall correctly)… The fact that someone outside the development team is driving the talk of VS2015/”15″ is crazy… but true. Our concern is that the flow of upgrades will return to previous drought-like conditions… so a VS 2015 upgrade may preclude us seeing VS “15” and even “16” beyond it. Kind of like starving men let loose in a buffet.
So, again, thank you. This is something I can use.
It would be nice toolchains were side by side in VS. I work on a large product that we need to service for years after RTM (while working on newer versions). This is problematic with VS 2015. Consider the following scenario:
1. Product ships built using VS 2015 RTM.
2. Update 1,2,3 ships and I install.
3. I need to make a fix in the RTM branch of our product. –> Can’t do it because code base that we shipped no longer compiles with Update 3 (vs updates don’t maintain source compat). I need to use a separate machine/vm. I want to choose the toolchain in VS.
Just wanted to acknowledge this. we have some ideas in this space and the new installer tech is going to allow us to do things we couldn’t do before. We are trying to be VERY VERY judicious with our source compat changes as well so not many people hit this.
Do you have that many source breakages?
Olaf,
We had a few for each update. We have millions of lines of C++ code so that is expected. The work to fix them isn’t the problem. The fact that I can longer build our RTM branch on my main dev machine is the problem. Yes, I can overlay source fixes, use a VM or another machine but all these are inconvenient. Side by side toolchains would be so much better.
Steve,
Thank you very much. This is very encouraging.
albert
No binary compatibility, please… Does that mean we couldn’t get unordered_meow/tuple with new implementation?
As much as I enjoy breaking bincompat, most customers don’t, and our management has decided to preserve bincompat for this release. Therefore, we will be unable to overhaul unordered_meow’s representation in the WCFB01 toolset. Overhauling it in WCFB02 is on my list of things to do (not in the near future).
By the way, how are std::min(), std::max() improved by optimizer?
The checkin notes state that they’re recognizing these functions and emitting maxss (etc.) instructions, avoiding control flow. Apparently, it improves code size as well as runtime performance.
Is C++17 ‘if constexpr’ support coming anytime soon?
We can’t promise timeframes for specific features before they’ve been implemented. It’s on the compiler team’s list of things to do, and they’re working hard to achieve C++11/14/17 conformance while rewriting the compiler’s data structures.
Are you targeting C++17 conformance with VS15?
We can’t make specific promises. Even in the STL (where we were recently feature-complete until they voted in more stuff), all I’m going to promise is that we’ve got a list of all of the STL features and issue resolutions that have been voted into C++17, and we’re working on implementing them, prioritizing the important stuff first. I don’t know if we’ll be done by the time VS “15” locks down for release, whenever that will be. All I know is that the features we do ship will be implemented at the highest possible level of quality, and I’m not going to compromise that in the pursuit of quantity.
And what about the Clang front-end?
Our “ClangCrew” (which I’m a part of, on the libraries side) is still working on fixing bugs and upstreaming changes. They’re well aware of the upcoming Clang 4.0 release, but we can’t promise when Clang/C2 will be able to update to 4.0 (in particular, the upstreaming of our changes for debug info in PDBs is still ongoing, and until that’s complete, rebasing to a new major version involves lots of manual work. According to my understanding, the EH machinery for Clang/C2 and Clang/LLVM still differs considerably, also adding to the work needed.)
In the STL, we’ve added Clang/C2 configurations to our automated tests, giving us both x86 and x64 coverage with the highest level of strictness supported (-fno-ms-compatibility -fno-delayed-template-parsing).
I just want to clarify a point here. You say the STL itself is going to be binary compatible between VS2015 and VS “15”. Is this binary compatibility guarantee strictly limited to the STL itself, or are you saying that there is no binary incompatibility at all, even for any user-defined types? You couldn’t have any guarantees of compatibility at all unless there were very limited or no changes to the existing ABI. What I’m wondering is do we have a guarantee here that we have full C++ ABI compatibility in VS “15” for an existing pre-compiled dll that was built in VS2015? If not, can you list the edge-cases where compatibility will break? I’m mostly trying to gauge here if I can safely assume pre-compiled third party components built against VS2015 are safe to drop in as-is and use from VS “15”, and if there are some exceptions, if those exceptions apply to cases that would affect me. Thanks.
The compiler is also guaranteeing binary compatibility for user-defined types. (This is a lot less work for the compiler than for the STL.)
Thanks, I expected as much, but I didn’t want to make the assumption and end up getting stung.
Hmm, did I miss the previous 3 previews?
Yes, you did. They were announced on the Visual Studio blog. Until now, we haven’t been talking very much about VS “15” here on the Visual C++ blog, because the C++ toolset was dual-shipping in 2015 Updates and “15” Previews.
I hope so much for n::n::n::n… instead of n { n { n { n {…
That’s already implemented in 2015 Update 3.
Note that as a C++17 feature implemented in VS 2015 Update 3, Nested Namespace Definitions are guarded by /std:c++latest.
So this tables need an update?
Yes. Additionally, I will be publishing compiler/STL tables on VCBlog for the next build of VS “15” (I chose not to for Preview 4, as they aren’t different enough from 2015 Update 3 yet).
These switches are an interesting change of mind as you once explained to me why you don’t have them. I don’t remember anything.
Yes, it’s a policy change. We’ve managed to implement it without causing major disruption to the STL which is what I was concerned about. Avoiding any C++11 modes and giving special treatment to the C++17 features that shipped in Update 2 really helped.
Do you have plans for C11 (not C++11)?
We currently have no plans for C11, beyond what’s required for C++17 conformance (as the C++ Working Paper was recently updated to drag in a subset of the C11 Standard Library).
Is there a place to try this compiler out online? Specifically, I wanted to see if this issue has been resolved yet:
That ICE is still present in today’s daily build (via Nuget).
Oh well, thanks for checking!
That bug is still active in our internal database (as VSO#211976) and is assigned to one of our compiler devs. However, they’re busy with other things, so we can’t promise a timeframe for a fix.
BUG REPORT: The va_start(ap, v) in varargs.h incorrectly defined as va_start(ap), which lacks the latter parameter.
use stdarg.h instead – that’s standardised.
Thanks. Ploblem solved. The va_start in varargs.h requires va_alist instead of ‘…’.
The header file varargs.h might be used to build DOS and Win3.1 programs. New programs should use stdarg.h.
What about further new optimizer (introduced in VS2015 Update 3) improvements, that were supposed to be included in next VS version (they are mentioned in your lengthy blog post discussing the new optimizer)? Will they appear in VS15, or are they binary breaking?
I just realized my question is actually kinda stupid, if new compiler optimizer was considered binary breaking, it wouldn’t be introduced in VS Update, so I guess more improvements from your blog post are coming in VS15, sorry about that. :)
As I mentioned elsewhere, everything in VS 2015 Update 3 is present in VS “15” Preview 4, because their compiler/library toolsets were built from the same branch, and Preview 4 is simply newer. Optimizer improvements are basically inherently safe for binary compatibility (they don’t do things like change layout or calling convention in visible ways).
Can you not sort your numbering scheme out, so we don’t have VS 2016/17 is VS version 15 ?
Drop the year and just have Visual Studio 15. | https://blogs.msdn.microsoft.com/vcblog/2016/08/24/c1417-features-and-stl-fixes-in-vs-15-preview-4/ | CC-MAIN-2017-34 | refinedweb | 2,751 | 62.98 |
Writing a File System in Linux Kernel*nix
Who This Article is for
There are no complex or difficult concepts in this article, all required is a basic knowledge of the command line, the C language, Makefile and a general understanding about file systems.
In this article I am going to describe the components necessary for development inside the Linux kernel, then we’ll write the simplest loadable kernel module and, finally, write a framework for the future file system. It’s a module that will register quite a useful (for now) file system in the kernel. The ones familiar with development inside Linux kernel may not find anything interesting here.
Introduction
A file system is one of the central OS subsystems. File systems continue to develop as operating systems evolve. Currently we have an entire heterogenetic zoo of file systems, from the old “classic” UFS, to the new exotic NILFS (though this idea isn’t new at all, look at LFS) and BTRFS. We aren’t going to try to throw down monsters like ext3/4 and BTRFS. Our file system will be of educational nature, and we will familiarize ourselves with the Linux kernel using its help.
Environment Setup
Before we get into the kernel, let’s prepare all the necessary steps for building our OS module. I use Ubuntu, so I’m going to setup within this environment. Fortunately, it’s not difficult at all. To begin with, we’re going to need a compiler and building facilities:
sudo apt-get install gcc build-essential
Then we may need the kernel source code. We’ll go the easy route and won’t bother rebuilding the kernel from the source. We’ll just determine kernel headers; this should be enough to write a loadable module. The headers can be determined the following way:
sudo apt-get install linux-headers-uname -r``
And now I’m going to jump onto my soap box. Rummaging in the kernel on a working machine isn’t the smartest idea, so I strongly recommend you perform all these actions withiin a virtual machine. We won’t do anything dangerous so the stored data is safe. But if anything goes wrong, we’ll probably have to restart the system. Besides, it’s more comfortable to debug the kernel modules in a virtual machine (such as QEMU), though this question won’t be considered in the article.
Environment Check Up
In order to check the environment we’ll write and start the kernel module, which won’t do anything useful (Hello, World!). Let’s consider the module code. I named it super.c (super is derived from superblock):
#include <linux/init.h> #include <linux/module.h> static int __init aufs_init(void) { pr_debug("aufs module loaded\n"); return 0; } static void __exit aufs_fini(void) { pr_debug("aufs module unloaded\n"); } module_init(aufs_init); module_exit(aufs_fini); MODULE_LICENSE("GPL"); MODULE_AUTHOR("kmu");
At the very beginning you can see two headers. They are an important part of any loadable module. Then two functions: aufs_init and aufs_fini follow. They will be called before and after the module roll-out. Some of you may be confused by __init label. __init is a hint to the kernel that the function is used during module initialization only. It means that after the module initialization it can be unloaded from memory. There is an analogous marker for the data; the kernel can ignore these hints. The reference to __init functions and data from the main module code is a potential error. That’s why during the module setup it should be checked that there are no such references. If such message is found, the kernel setup system will give out an alert. The similar check is carried out for __exit functions and data. If you want to know details about __init and __exit, you can refer to the source code.
Please note that aufs_init returns int. Thus the kernel finds out that during the module initialization something went wrong. If the module hasn’t returned a zero value, it means that an error occurred during initialization. In order to find out which functions should be called at module loading and unloading, two macros module_init and module_exit are used. In order to learn details, refer to lxr, it’s really useful if you want to study the kernel. pr_debug – is a function (it’s a macro actually, but it doesn’t matter for now) of kernel output to the log, it’s very similar to the family of printf functions with some extensions (for example, for IP and MAC addresses printing. You will find a complete list of modifiers in the documentation of the kernel. Together with pr_debug, there is an entire family of macros: pr_info, pr_warn, pr_err and others. If you are familiar with Linux module development you know about printk function. pr_* open into printk calls, so you can use printk instead of them.
Then there are macros with information about descendants – a license and an author. There are also other macros that allow to store manifold information about the module. For example MODULE_VERSION, MODULE_INFO, MODULE_SUPPORTED_DEVICE and others. By the way, if you’re using different form GPL license, you won’t be able to use some functions available for GPL modules.
Now let’s build and start our module. We’ll write Makefile for this. It will build our module:
obj-m := aufs.o aufs-objs := super.o CFLAGS_super.o := -DDEBUG all: make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules clean: make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean
Makefile calls Makefile for building, it should be located in /lib/modules/$(shell uname -r)/build catalogue (uname –r is a command that returns the version of the started kernel). If the headers (or source codes) of the kernel are located in another catalogue, you should fix it.
obj-m allows to state the name of the future module. In our case it will be named aufs.ko (ko exactly – from kernel object). aufs-objs allows to indicate what source code aufs module should be built from. In our case super.c file will be used. You can also indicate different compiler flags, which will be used (in addition to those used by Makefile kernel) for object files building. In our case I pass –DDEBUG flag during super.c build. If we don’t pass the flag, we won’t see pr_debug output in the system log.
In order to build the module make command should be executed. If everything’s fine, aufs.ko file will appear in the catalogue. It’s quite easy to load the module:
sudo insmod ./aufs.ko
In order to make sure that the module is loaded you can look at lsmod command output.
lsmod | grep aufs
In order to see the system log, dmesg command should be called. We’ll see messages from our module in it. It’s not difficult to unload the module:
sudo rmmod aufs
Returning to the File System
Thus, the environment is adjusted and operates. We know how to build the simplest module, load and unload it. Now we should consider the file system. The file system design should begin “on paper”, from a thorough consideration of the data structures being usedr. But we’ll follow a simple path and defer the details of files and folders storage at the disk for the next time. Now we’ll write a framework of our future file system.
The life of a file system begins with check-in. You can check-in the file system by calling register_filesystem. We’ll register the file system in the function of module initialization. There’s unregister_filesystem for the file system unregistration and we’ll call it in aufs_fini function of our module.
Both functions accept a pointer to file_system_type structure as a parameter. It’ll “describe” the file system. Consider it as a class of the file system. There’re enough fields in the structure, but we’re interested in some of them only:
static struct file_system_type aufs_type = { .owner = THIS_MODULE, .name = "aufs", .mount = aufs_mount, .kill_sb = kill_block_super, .fs_flags = FS_REQUIRES_DEV, };
First of all, we are interested in name field. It stores the file system name. This name will be used during the assembling.
mount & kill_sb — are two fields containing pointers to functions. The first function will be called during the file system assembling, the second one during disassembly. It’s enough to implement one of them, and we’ll use kill_block_super instead of the second one, which is provided by the kernel.
fs_flags — stores different flags. In our case it stores FS_REQUIRES_DEV flag, which says that our file system needs a disk for operation (though it’s not the case for the moment). You don’t have to indicate this flag if you don’t want to, everything will operate without it. Finally, owner field is necessary in order to setup a counter of links to the module. The link counter is necessary so that the module wouldn’t be unloaded too soon. For example, if the file system has been assembled, the module loading can lead to the crash. The link counter won’t allow to unload the module till its being used, i.e. until we disassembly the file system.
Now let’s consider aufs_mount function. It should assemble the device and return the structure describing a root catalogue of the file system. It sounds quite complicated, but, fortunately, the kernel will do it for us as well:
static struct dentry *aufs_mount(struct file_system_type *type, int flags, char const *dev, void *data) { struct dentry *const entry = mount_bdev(type, flags, dev, data, aufs_fill_sb); if (IS_ERR(entry)) pr_err("aufs mounting failed\n"); else pr_debug("aufs mounted\n"); return entry; }
The biggest part of work happens inside moun_bdev function. We are interested in its aufs_fill_sb parameter. It’s a pointer to the function (again), which will be called from mount_bdev in order to initialize the superblock. But before we move on to it, an important file subsystem of the kernel — dentry structure – should be considered. This structure represents a section of the path to the file name. For example, if we refer to /usr/bin/vim file, we’ll have different structure exemplars representing sections of / (root catalogue), bin/ and vim path. The kernel supports these structures cache. It allows to quickly find inode (another center structure) by the file (path) name. aufs_mount function should return dentry, which represents the root catalogue of our file system. aufs_fill_sb function will create it.
Thus, for the moment aufs_fill_sb is the most important function in our module and it looks like the following:
static int aufs_fill_sb(struct super_block *sb, void *data, int silent) { struct inode *root = NULL; sb->s_magic = AUFS_MAGIC_NUMBER; sb->s_op = &aufs_super_ops; root = new_inode(sb); if (!root) { pr_err("inode allocation failed\n"); return -ENOMEM; } root->i_ino = 0; root->i_sb = sb; root->i_atime = root->i_mtime = root->i_ctime = CURRENT_TIME; inode_init_owner(root, NULL, S_IFDIR); sb->s_root = d_make_root(root); if (!sb->s_root) { pr_err("root creation failed\n"); return -ENOMEM; } return 0; }
First of all, we fill super_block structure. What kind of structure is it? Usually file systems store in a special place of a disk partition (this place is chosen by file system) the set of file system parameters, such as the block size, the number of occupied/free blocks, file system version, “pointer” to the root catalogue, magic number, by which a driver can check that the necessary file system is stored on the disk. This structure is named superblock (look at the picture below). super_block structure in Linux kernel is mainly intended for similar goals, we save a magic number in it and dentry for the root catalogue (the one returned by mount_bdev).
Besides, in s_op field of super_block structure we store a pointer to super_operations structure — these are super_block “class methods”, it’s another structure storing a lot of pointers to the functions. I’ll make another note here. Linux kernel is written on C, without support of different OOP features from the language side. But we can structure a program following OOP ideas without support from the language. So the structures storing a lot of pointers to the functions can be met quite often in the kernel. It’s a method of virtual functions implementation by current facilities.
Let’s get back to super_block structure and its “methods”. We’re interested in its put_super field. We’ll save a “destructor” of our superblock in it”
static void aufs_put_super(struct super_block *sb) { pr_debug("aufs super block destroyed\n"); } static struct super_operations const aufs_super_ops = { .put_super = aufs_put_super, };
While aufs_put_super function does nothing useful, we use it just in order to type in the system log one more line. aufs_put_super function will be called within kill_block_super (see above) before super_block structure deleting. i.e. when disassembling the file system.
Now let’s return to the most important aufs_fill_sb function. Before we create dentry for the root catalogue we should create an index node (inode) of the root catalogue. Inode structure is probably the most important one in the file system. Each file system object (a file, a folder, a special file, a magazine, etc.) identifies with inode. As well as super_block, inode structure reflects the way file systems are stored on a disk. Inode name comes from index node, meaning that it Indexes files and folders on a disk. Usually inside inode on a disk a directive to the place file data are stored on a disk (in which blocks he file content is stored), different access flags (available for read/write/execute), information about the file owner, time of creation/modification/writing/execution and other similar things are stored.
We can’t read from the disk yet, so we’ll fill inode with fictive data. As time of creation/modification/writing/execution we use the current time and the kernel will assign the owner and access permissions (call inode_init_owner function). Finally, create dentry bound with the root inode.
Skeleton Check Up
The skeleton of our file system is ready. It’s time to check it. Building and loading of the file system driver doesn’t differ from building and loading of a general module. We’ll use loop device instead of a real disk for experiments. It’s a “disk” driver, which writes data not on the physical device, but to the file (disk image). Let’s create a disk image. It doesn’t store any data yet, so it’s simple:
touch image
We should also create a catalogue, which will be an assembling point (root) of our file system:
mkdir dir
Now using this image we’ll assemble our file system:
sudo mount -o loop -t aufs ./image ./dir
If the operation ended successfully, we’ll see messages from our module in the system log. In order to disassemble the file system we should:
sudo umount ./dir
Check the system log again.
Summary
We familiarized ourselves with creation of loadable kernel modules and main structures of the file subsystem. We also wrote a real file system, which can assemble and disassemble only (it’s quite silly for the time being, but we’re going to fix it in future).
Then we’re going to consider data reading from the disk. To begin with we’ll define the way data will be stored on disks. We’ll also learn how to read superblock and inodes from the disk.
Links
- The code to the article is available at github
- An Indian has quite recently written a simple file system, he has performed a big job
- I understand that it’s not very correct educationally to send the newcomers to the kernel source code (though it’s useful to read them); I still recommend all the interested ones to look at the source code of a very simple ramfs file system. Besides, unlike our file system, ramfs doesn’t use a disk. It stores everything in the memory.
Ropes — Fast Strings
| https://kukuruku.co/post/writing-a-file-system-in-linux-kernel/ | CC-MAIN-2019-13 | refinedweb | 2,659 | 64.41 |
The following example shows how to create a form control that will allow users to choose a color from a drop-down list. The example is relatively simple, but the same basic approach can be used to create any type of custom form control or modify an existing one.
1. Open the web project in Visual Studio (or Visual Web Developer) using the WebProject.sln file or via File -> Open -> Web site in Visual Studio.
2. Right-click the CMSFormControls folder and choose Add New Item. Choose to create a new Web User Control and call it ColorSelector.ascx.
This folder (or a sub‑folder under it) should always be used to store the source files of custom form controls, since it ensures that registered form controls are exported correctly along with the site.
3. Edit the new control on the Design tab. Drag and drop a DropDownList control onto the form:
4. Edit the properties of the DropDownList and change its ID to drpColor.
5. Switch to the code behind and add a reference to the following namespace:
[C#]
[VB.NET]
6. Next, modify the class definition according to the following:
[C#]
[VB.NET]
This ensures that our form control inherits from the CMS.FormControls.FormEngineUserControl class and can use its standard properties.
7. Now add the following members into the class:
[C#]
[VB.NET]
The above code overrides three members inherited from the FormEngineUserControl class that are most commonly used when developing form controls:
•Value - it is necessary to override this property for every form control. It is used to get and set the value of the field provided by the control.
•GetOtherValues() - this method is used to set values for other fields of the object in which the form control is used. It must return a two dimensional array containing the names of the fields (columns) and their assigned values. Typically used for multi‑field form controls that need to store data in multiple database columns, but only occupy a single field in the form.
•IsValid() - this method is used to implement validation for the values entered into the field. It must return true or false depending on the result of the validation.
Also notice that a SelectorWidth property was defined for the form control. It serves as a way to access the value of a parameter that will be defined for the form control later in the example. This property is used in the EnsureItems() method to set the width of the internal drop-down list .
Remember to save the changes to both code files.
8. Go to Site Manager -> Development -> Form controls where you can register the new form control in the system. Click the
New form control link. Enter the following values:
•Display name: Custom color selector
•Code name: custom_colorselector
•Type: Selector
•File name: ~/CMSFormControls/ColorSelector.ascx (you can use the Select button to choose the file)
Click OK and the form control object will be created.
9. You will be redirected to the control's General tab. Here, check the Use control for - Text and Show control in - Document types boxes and click OK.
10. Switch to the Properties tab where parameters can be defined for the form control. Use the New attribute (
) action and set the following properties:
•Column name: SelectorWidth
•Attribute type: Integer number
•Allow empty value: true
•Display attribute in editing form: true
•Field caption: Drop-down list width
•Form control type: Input
•Form control: Text box
When finished, click
Save field.
This parameter will allow users to specify the width of the color selector directly from the administration interface whenever they add this control to a form. The code of the form control already ensures that the value is properly applied.
11. Now we will test this control by placing it onto a document editing form. Go to Site Manager -> Development -> Document types and edit (
) the Product document type. Select the Fields tab to access the field editor for this document type. Add two new fields using the New attribute (
) action. Set the following properties for the fields:
•Column name: ProductColor
•Attribute type: Text
•Attribute size: 100
•Display attribute in editing form: false
This field will store the name of the color selected for the product. It will not be available in the editing form, its value will be set automatically by the GetOtherValues() method of the ColorSelector.ascx control (notice that the Column name matches the name used in the code of the method).
Click
Save field and create the next field:
•Column name: ProductHexaColor
•Attribute type: Text
•Attribute size: 100
•Allow empty value: true
•Display attribute in editing form: true
•Field caption: Color
•Form control type: Selector
•Form control: Custom color selector
This field will store the hexadecimal code of the selected color. In the code of the form control, this value is handled through the Value property. The field will be displayed in the document's editing form according to the design of the custom form control.
Notice that the Editing control settings section of this field contains the Drop-down list width field. This is the SelectorWidth parameter defined for the form control in the previous step. Try to specify the width of the selector by entering some number, for example 200.
Click
Save field again.
12. Go to CMS Desk -> Content and create a new document of the Product document type under the Products section. The document's editing form will contain the new form control as shown below:
As you can see, the document field can be managed using the custom form control. The width of the displayed drop-down list will match the value that you entered into the form control's parameter. If you do not choose any color, the validation error message defined in the code of the form control will be displayed.
By implementing custom form controls as shown in this example, it is possible to create editing forms with almost unlimited flexibility, both in the administration interface and on the live site. | http://devnet.kentico.com/docs/6_0/devguide/developing_form_controls.htm | CC-MAIN-2016-36 | refinedweb | 1,004 | 61.77 |
Hi there,
I followed the instructions on this post
but I got stuck in the first step (of the video) where Yoav tests the first section of the code. It doesn't work at all (probably for some basic mistake).
Need help to overcome this...
Thank you in advance!
Bruno
import wixData from 'wix-data'; export function iName_keyPress(event, $w) { filter($w('iName').value); } function filter(Nome) { $w('#dataset1').setFilter(wixData.filter().contains('Nome Texto', Nome)); }
Make sure you use the Field Key and not the Field Name as shown in this example screenshot:
You want this (check your database for the exact field key):
$w('#dataset1').setFilter(wixData.filter().contains('nomeTexto', Nome));
And not this:
$w('#dataset1').setFilter(wixData.filter().contains('Nome Texto', Nome));
Note that the Field Key starts with a small letter and not a capital letter.
Hi Yisrael,
Thank you for your help. Actually the code was has you said, but I copy it while doing some tests. I tried again and unfortunately it didn't work. Could you help me detect the problem with this? I have a big team to display and such tool would be so useful.
Best,
Bruno
Please post the editor URL of your site. Only authorized Wix personnel can get access to your site in the editor.
Is it this one?
@bruno.vasconcelos On what page are you doing the database search?
@Yisrael (Wix) its on "People" | https://www.wix.com/corvid/forum/community-discussion/search-database-again | CC-MAIN-2019-47 | refinedweb | 237 | 68.67 |
Steve Langasek <vorlon@debian.org> writes: > On Sun, Nov 05, 2006 at 01:56:28PM +0100, maximilian attems wrote: >> {standard input}:372: Error: macro requires $at register while noat in effect >> make[5]: *** [arch/alpha/kernel/core_cia.o] Error 1 >> make[4]: *** [arch/alpha/kernel] Error 2 > > Taking a look at the assembler output for core_cia, this is due to use of > the ldbu, ldwu, stb, and stw instructions in asm-alpha/compiler.h, which are > instructions specific to ev56 and above. They are also guarded in the > source by an #if !defined(__alpha_bwx__). It looks like the difference is > in the assembler between gcc-4.0 and gcc-4.1; specifically, gcc-4.1 emits a > '.arch ev5' directive, where gcc-4.0 does not. I made a patch against gcc to suppress gcc outputting .arch directives that don't do anything useful except triggering this error, and it went in in 4.1.1ds2-17. Unfortunately, I thought an .ev4 directive would be the problem, while it seems to be .ev5. This updated patch instead of alpha-no-ev4-directive.patch should help: --- gcc/config/alpha/alpha.c.orig 2006-11-06 09:59:12.000000000 +0100 +++ gcc/config/alpha/alpha.c 2006-11-06 09:59:06.000000000 +0100 @@ -9353,7 +9353,7 @@ fputs ("\t.set nomacro\n", asm_out_file); if (TARGET_SUPPORT_ARCH | TARGET_BWX | TARGET_MAX | TARGET_FIX | TARGET_CIX) { - const char *arch; + const char *arch = NULL; if (alpha_cpu == PROCESSOR_EV6 || TARGET_FIX || TARGET_CIX) arch = "ev6"; @@ -9361,12 +9361,9 @@ arch = "pca56"; else if (TARGET_BWX) arch = "ev56"; - else if (alpha_cpu == PROCESSOR_EV5) - arch = "ev5"; - else - arch = "ev4"; - fprintf (asm_out_file, "\t.arch %s\n", arch); + if (arch) + fprintf (asm_out_file, "\t.arch %s\n", arch); } } #endif > Since the errors from the assembler really indicate that these instructions > are not supported by the ev5 (gcc-4.0 has the same problem assembling the > gcc-4.1 output as gcc-4.1 itself does, due to the .arch ev5 declaration), > and this kernel code hasn't changed recently that I see, it seems to be the > case that ev5 processors are already unsupported by the current kernel in > etch. Given that no one has complained about this to date (at least that > I'm aware of), is it time to explicitly bump the baseline on alpha to ev56 > for etch? I'm not opposed to this, in fact I was planning to suggest this for etch+1. However, this particular problem should be reasonably easy to fix, so if anybody speaks up for ev5, we should give it a try... -- Falk | https://lists.debian.org/debian-alpha/2006/11/msg00007.html | CC-MAIN-2016-36 | refinedweb | 422 | 68.47 |
Important: Please read the Qt Code of Conduct -
Cannot open this QML document because of an error in the QML file:
- Julia Johnson last edited by
I have a .qml file containing multiple Qtquick controls 2.0 that I have been editing using both the Edit Tab and the Design Tab. It was originally refactored from a ui.qml file. Recently, whenever I click the design tab the following message appears:
Cannot open this QML document because of an error in the QML file:
Internal error (file: C:\work\build\qt-creator\src\plugins\qmldesigner\designercore\variantproperty.cpp, function: QMLDesigner::VariantProperty::setValue, line 62)
I haven't opened the variantproperty.cpp file manually and don't understand why I am getting this error. I have reverted my QML file several times and after a number of uses this error reappears.
I'm new to Qt so any help would be great. Does anyone know how to fix this?
- dheerendra Qt Champions 2017 last edited by
Hi Are you using the new Qt version to open the qml file. I suspect that you have created the .qml in earlier version & now you are opening the file with new qt installation.
- Julia Johnson last edited by
Hi dheerendra,
I created this project as a "Qt Quick Controls 2 Application" for Qt 5.7. I have these import statements in the QML files
import QtQuick 2.7
import QtQuick.Controls 2.0
import QtQuick.Layouts 1.3
My project contains a mix of qml and ui.qml files, but the file loading the C++ object is just qml.
Was the syntax for importing objects changed in Qt 5.7?
The QML Designer does not allow imperative code. It is not possible to design files that execute arbitrary JS code. Did you not see any warnings that you should not edit .ui.qml files by hand? Use the pattern used in the application template: refer to the building blocks from the outside, where an instance if the UI form is created.
I got same issue.
Solved removing Window flags "SplashScreen".
- Sytse Reitsma last edited by
What caused the issue for me was an empty onClicked handler like so:
Item {
id: testDialog;
Rectangle { id: dlgBackground MouseArea { anchors.fill: parent onClicked: ; //Deleting this line fixed the issue for me } }
After removing the onClicked line the error did not occur anymore (you may need to close and reopen the qml file for the designer to pick up the changes).
Sytse | https://forum.qt.io/topic/69223/cannot-open-this-qml-document-because-of-an-error-in-the-qml-file/6 | CC-MAIN-2020-34 | refinedweb | 413 | 66.44 |
Design Time Problems
The article lists commonly met issues on designing Telerik Reports.
I want to show my report in a ReportViewer control, but when I click on the arrow in the ReportSource property from the property grid, it does not show available reports - what is wrong?
Follow our best practices and have the report in a separate class library that is referenced in the application or website. Check if the class library containing the report is referenced in your application/website and that you have rebuilt the application/website.
The most reliable way to specify a report for the ReportViewer is to do this programmatically. For example, if you're using the ASP.NET report viewer, on the Page_Load event of the page: only one column wide.
'The type or namespace name 'Telerik' could not be found (are you missing a using directive or an assembly reference?)' Error on build.
Double-check if the project has references to Telerik Reporting assemblies, and if the references CopyLocal is set to true in the Visual Studio Property grid. In case you recently updated your Telerik Reporting installation, run the Upgrade Wizard in all related projects in Visual Studio.
If Telerik Reporting assemblies are referenced and updated, verify that the project targets .NET4+ framework Full Profile version. | https://docs.telerik.com/reporting/troubleshooting-design-time | CC-MAIN-2019-04 | refinedweb | 215 | 55.03 |
the video on the SharePoint PnP YouTube Channel. choose Enter.
- Select Use the current folder for where to place the files.
The next set of prompts will ask for specific information about your web part:
- Accept the default No javascript web framework as the framework you would like to use and choose Enter.
- Accept the default HelloWorld as your web part name and choose Enter.
- Accept the default HelloWorld description as your web part description and choose Enter.
At this point, Yeoman will install the required dependencies and scaffold which. However, since a default certificate is not configured for the local dev environment, your browser will report a certificate error. The SPFx toolchain comes with a developer certificate that you can install for building web parts.
To install the developer certificate for use with SPFx development, switch to your console, make sure you are still in the helloworld-webpart directory and enter the following command:
Notice that if you hare using Chrome v58 or newer as your browser, below command does not generate valid certificate for your environment and you will see a certificate exception in local workbench
gulp trust-dev-cert
Now that we have installed the developer certificate, enter the following command in the console to build and preview your web part:
gulp serve
This command will execute a series of gulp tasks to create a local, Node-based HTTPS server on 'localhost:4321' and launch your default browser to preview web parts from your local dev environment.
SharePoint client-side development tools use gulp as the task runner to handle build process tasks such as:
- Bundle and minify JavaScript and CSS files.
- Run tools to call the bundling and minification tasks before each build.
- Compile SASS files to CSS.
- Compile TypeScript files to JavaScript.
If you are new to gulp, you can read Using Gulp which describes using gulp with Visual Studio in conjunction with building ASP.NET 5 projects.
Visual Studio Code provides built-in support for gulp and other task runners. Choose Ctrl+Shift+B on Windows or Cmd+Shift+B on Mac to debug and preview your web part.
SharePoint Workbench add the HelloWorld web part, choose the add button. The add button opens the toolbox where you can see a list of web parts available for you to add. The list will include the HelloWorld web part as well other web parts available locally in your development environment. as when the behavior is reactive.
Web part project structure
You can use Visual Studio Code to explore the web part project structure.
- In the console, go to the src\webparts\helloWorld directory.
- defines the main entry point for the web part. The web part class HelloWorldWeb in a separate file IHelloWorldWebPartProps.ts.:>`; }
This model is flexible enough so that web parts can be built in any JavaScript framework and loaded into the DOM element.
Configure the Web part property pane
The property pane is defined in the HelloWorldWebPart class. The propertyPaneSettings property is where you need to define the property pane.
When the properties are defined, you can access them in your web part using
this.properties.<property-value>, as shown in the render method:
<p class="${styles.description}">${escape(this.properties.description)}</p>
Notice that we are performing a HTML escape on the property's value to ensure a valid string.
Read the Integrating property pane with a web part article to learn more about how to work with the property pane and property pane field types.
Lets now add few more properties - a checkbox, dropdown and a toggle - to the property pane. We first start by importing the respective property pane fields from the framework.
Scroll to the top of the file and add the following to the import section from
@microsoft/sp-webpart-base:
PropertyPaneCheckbox, PropertyPaneDropdown, PropertyPaneToggle
The complete import section will look like the following:
import { BaseClientSideWebPart, IPropertyPaneConfiguration, PropertyPaneTextField, PropertyPaneCheckbox, PropertyPaneDropdown, PropertyPaneToggle } from '@microsoft/sp-webpart-base';
Save the file.
Next, update the web part properties to include the new properties. This maps the fields to typed objects.
Open IHelloWorldWebPartProps.ts and replace the existing code with the following code.
export interface IHelloWorldWebPartProps { description: string; test: string; test1: boolean; test2: string; test3: boolean; }
Save the file.
Switch back to the HelloWorldWebPart.ts file.
Replace the getPropertyPaneConfiguration method with the code below="ms-font-l ms-fontColor-white">${escape(this.properties.test)}</p>
To set the default value for the properties, you will need to update the web part manifest's properties property bag:
Open
HelloWorldWebPart.manifest.json and modify the
properties to:
"properties": { "description": "HelloWorld", "test": "Multi-line text field", "test1": true, "test2": "2", "test3": true }
The web part property pane will now have these default values for those properties.
Web part manifest
The HelloWorldWebPart.manifest.json file defines the web part metadata such as version, id, display name, icon, and description. Every web part should contain this manifest.
{ "$schema": "../../../node_modules/@microsoft/sp-module-interfaces/lib/manifestSchemas/jsonSchemas/clientSideComponentManifestSchema.json", "id": "922a9623-f92f-4971-8574-185b31554e44", "alias": "HelloWorldWebPart", "componentType": "WebPart", "version": "0.0.1", "manifestVersion": 2, "preconfiguredEntries": [{ "groupId": "922a9623-f92f-4971-8574-185b31554e44", "group": { "default": "Under Development" }, "title": { "default": "HelloWorld" }, "description": { "default": "HelloWorld description" }, "officeFabricIconFontName": "Page", "properties": { "description": "HelloWorld", "test": "Multi-line text field", "test1": true, "test2": "2", "test3": true } }] }
Now that we have introduced new properties, make sure that you are again hosting the web part from the local development environment by executing following command. This will also ensure that the above changes were correctly applied.
gulp serve
Preview the web part in SharePoint
SharePoint Workbench is also hosted in SharePoint to preview and test your local web parts in development. The key advantage is that now you are running in SharePoint context and that you will be able to interact with SharePoint data.
Go to the following URL: ''
Note: If you do not have the SPFx developer certificate installed, then Workbench will notify you that it is configured not to load scripts from localhost. Stop currently running process in the console window, execute
gulp trust-dev-certcommand in your project directory console to install the developer certificate before running
gulp servecommand again.
Notice that the SharePoint workbench now has the Office 365 Suite navigation bar.
Choose add icon in the canvas to reveal the toolbox. The toolbox now shows the web parts available on the site where the SharePoint workbench is hosted along with your HelloWorldWebPart.
Add HelloWorldWebPart from the toolbox. Now you're running your web part in a page hosted in SharePoint! to SharePoint. You will use the same Hello World web part project and add the ability to interact with SharePoint List REST APIs. Notice that the
gulp serve command is still running in your console window (or in Visual Studio Code if you using the editor). You can continue to let it run while you go to the next article. | https://dev.office.com/sharepoint/docs/spfx/web-parts/get-started/build-a-hello-world-web-part | CC-MAIN-2017-34 | refinedweb | 1,146 | 53.61 |
This was adapted from a post which originally appeared on the Eager blog. Eager has now become the new Cloudflare Apps..”
But CSS wouldn’t be introduced for five years, and wouldn’t be fully implemented for ten. This was a period of intense work and innovation which resulted in more than a few competing styling methods that just as easily could have become the standard.
While these languages are obviously not in common use today, we find it fascinating to think about the world that might have been. Even more surprisingly, it happens that many of these other options include features which developers would love to see appear in CSS even today.
The First Proposal
In early 1993 the Mosaic browser had not yet reached 1.0. Those browsers that did exist dealt solely with HTML. There was no method of specifying the style of HTML whatsoever, meaning whatever the browser decided an
<h1> should look like, that’s what you got.
In June of that year, Robert Raisch made a proposal to the www-talk mailing list to create a “an easily parsable format to deliver stylistic information along with Web documents” which would be called RRP.
@BODY fo(fa=he,si=18)
If you have no idea what this code is doing you are forgiven. This particular rule is setting the font family (
fa) to helvetica (
he), and the font size (
si) to 18 points. It made sense to make the content of this new format as terse as was possible as it was born in the era before gzipping and when connection speeds hovered around 14.4k.
Some interesting things missing from this proposal were any mention of units, all numbers being interpreted based on their context (font sizes were always in points for example). This could be attributed to RRP being designed more as a “set of HINTS or SUGGESTIONS to the renderer” rather than a specification. This was considered necessary because the same stylesheet needed to function for both the common line-mode browsers (like Lynx, and the graphical browsers which were becoming increasingly popular.
Interestingly, RRP did include a method of specifying a columnar layout, a feature which wouldn’t make it to CSS until 2011. For example, three columns, each of width ‘80 units’ would look like this:
@P co(nu=3,wi=80)
It’s a little hard to parse, but not much worse than
white-space: nowrap perhaps.
It’s worth noting that RRP did not support any of the ‘cascading’ we associate with stylesheets today. A given document could only have one active stylesheet at a time, which is a logical way to think about styling a document, even if it’s foreign to us today.
Marc Andreessen (the creator of Mosaic, which would become the most popular browser) was aware of the RRP proposal, but it was never implemented by Mosaic. Instead, Mosaic quickly moved (somewhat tragically) down the path of using HTML tags to define style, introducing tags like
<FONT> and
<CENTER>.
Viola and the Proto-Browser Wars
Then why don't you just implement one of the many style sheet
proposals that are on the table. This would pretty much solve the
problem if done correctly.
So then I get to tell people, "Well, you get to learn this language
to write your document, and then you get to learn that language for
actually making your document look like you want it to." Oh, they'll
love that.
Contrary to popular perception, Mosaic was not the first graphical browser. It was predated by ViolaWWW, a graphical browser originally written by Pei-Yuan Wei in just four days.
Pei-Yuan created a stylesheet language which supports a form of the nested structure we are used to in CSS today:
(BODY fontSize=normal BGColor=white FGColor=black (H1 fontSize=largest BGColor=red FGColor=white) )
In this case we are applying color selections to the body and specifically styling
H1s which appear within the body. Rather than using repeated selectors to handle the nesting, PWP used a parenthesis system which is evocative of the indentation systems used by languages like Stylus and SASS which are preferred by some developers to CSS today. This makes PWP’s syntax potentially better in at least one way than the CSS language which would eventually become the lingua franca of the web.
PWP is also notable for introducing the method of referring to external stylesheets we still use today:
<LINK REL="STYLE" HREF="URL_to_a_stylesheet">
ViolaWWW was unfortunately written to work chiefly with the X Windowing
System which was only popular on Unix systems. When Mosaic was ported to Windows it quickly left Viola in the dust.
Stylesheets Before the Web
HTML is the kind of thing that can
only be loved by a computer scientist. Yes, it expresses the underlying
structure of a document, but documents are more than just structured text
databases; they have visual impact. HTML totally eliminates any visual
creativity that a document’s designer might have.
The need for a language to express the style of documents long predates the Internet.
As you may know, HTML as we know it was originally based on a pre-Internet language called SGML. In 1987 the US Department of Defense decided to study if SGML could be used to make it easier to store and transmit the huge volume of documentation they deal with. Like any good government project, they wasted no time coming up with a name. The team was originally called the Computer-Aided Logistics Support team, then the Computer-aided Acquisition and Logistics Support team, then finally the Continuous Acquisition and Life-cycle Support initiative. In any case, the initials were CALS.
The CALS team created a language for styling SGML documents called FOSI which is an initialism which undoubtedly stands for some combination of four words. They published a specification for the language which is as comprehensive as it is incomprehensible. It also includes one of the best nonsensical infographics to ever exist on the web.
One inviolate rule of the Internet is: more will always get done if you can prove someone wrong in the process. In 1993, just four days after Pei-Yuan’s proposal, Steven Heaney proposed that rather than “re-inventing the wheel,” it was best to use a variant of FOSI to style the web.
A FOSI document is itself written in SGML, which is actually a somewhat logical move given web developers existing familiarity with the SGML variant HTML. An example document looks like this:
> </outspec>
If you’re a bit confused what a
docdesc or
charlist are, so were the
members of
www-talk. The only contextual information given was that
e-i-c means ‘element in context’. FOSI is notable however for introducing the
em unit which has now become the preferred method for people who know more about CSS than you to style things.
The language conflict which was playing out was actually as old as programming itself. It was the battle of functional ‘lisp-style’ syntax vs the syntax of more declarative languages. Pei-Yuan himself described his syntax as “LISP’ish,” but it was only a matter of time until a true LISP variant entered the stage.
The Turing-Complete Stylesheet
For all its complexity, FOSI was actually perceived to be an interim solution to the problem of formatting documents. The long-term plan was to create a language based on the functional programming language Scheme which could enable the most powerful document transformations you could imagine. This language was called DSSSL.
In the words of contributor Jon Bosak:.
At its simplest, DSSSL is actually a pretty reasonable styling language:
(element H1 (make paragraph font-size: 14pt font-weight: 'bold))
As it was a programming language, you could even define functions:
(define (create-heading heading-font-size) (make paragraph font-size: heading-font-size font-weight: 'bold)) (element h1 (create-heading 24pt)) (element h2 (create-heading 18pt))
And use mathematical constructs in your styling, for example to ‘stripe’ the rows of a table:
(element TR (if (= (modulo (child-number) 2) 0) ... ;even-row ...)) ;odd-row
As a final way of kindling your jealousy, DSSSL could treat inherited values as
variables, and do math on them:
(element H1 (make paragraph font-size: (+ 4pt (inherited-font-size))))
DSSSL did, unfortunately, have the fatal flaw which would plague all
Scheme-like languages: too many parenthesis. Additionally, it was arguably too complete of a spec when it was finally published which made it intimidating to browser developers. The DSSSL spec included over 210 separate styleable properties.
The team did go on to create XSL, a language for document transformation which is no less confusing, but which would be decidedly more popular.
Why Did The Stylesheet Cross The Wire
CSS does not include parent selectors (a method of styling a parent based on what children it contains). This fact has been long bemoaned by Stack Overflow posters, but it turns out there is a very good reason for its absence. Particularly in the early days of the Internet, it was considered critically important that the page be renderable before the document has been fully loaded. In other words, we want to be able to render the beginning of the HTML to the page before the HTML which will form the bottom of the page has been fully downloaded.
A parent selector would mean that styles would have to be updated as the HTML document loads. Languages like DSSSL were completely out, as they could perform operations on the document itself, which would not be entirely available when the rendering is to begin.
The first contributor to bring up this issue and propose a workable language was Bert Bos in March of 1995. His proposal also contains an early edition of the ‘smiley’ emoticon :-).
The language itself was somewhat ‘object-oriented’ in syntax:
*LI.prebreak: 0.5 *LI.postbreak: 0.5 *OL.LI.label: 1 *OL*OL.LI.label: A
Using
. to signify direct children, and
* to specify ancestors.
His language also has the cool property of defining how features like links work in the stylesheet itself:
*A.anchor: !HREF
In this case we specified that the destination of the link element is the value of its
HREF attribute. The idea that the behavior of elements like links should be controllable was popular in several proposals. In the era pre-JavaScript, there was not an existing way of controlling such things, so it seemed logical to include it in these new proposals.
One functional proposal, introduced in 1994 by a gentleman with the name ‘C.M. Sperberg-McQueen’, includes the same behavior functionally:
(style a (block #f) ; format as inline phrase (color blue) ; in blue if you’ve got it (click (follow (attval 'href))) ; and on click, follow url
His language also introduced the
content keyword as a way of controlling the content of an HTML element from the stylesheet, a concept which was later introduced into CSS 2.1.
What Might Have Been
Before I talk about the language which actually became CSS, it’s worth mentioning one other language proposal if only because it is in some ways the thing of an early web developer’s dreams.
PSL96 was, in the naming convention of the time, the 1996 edition of the “Presentation Specification Language.” At its core, PSL looks like CSS:
H1 { fontSize: 20; }
It quickly gets more interesting. You could express element position based on not just the sizes specified for them (
Width), but the actual (
Actual Width) sizes the browser rendered them as:
LI { VertPos: Top = LeftSib . Actual Bottom; }
You’ll also notice you can use the element’s left sibling as a constraint.
You can also add logical expressions to your styles. For example to style only anchor elements which have
hrefs:
A { if (getAttribute(self, "href") != "") then fgColor = "blue"; underlineNumber = 1; endif }
That styling could be extended to do all manner of things we resort to classes today to accomplish:
LI { if (ChildNum(Self) == round(NumChildren(Parent) / 2 + 1)) then VertPos: Top = Parent.Top; HorizPos: Left = LeftSib.Left + Self.Width; else VertPos: Top = LeftSib.Actual Bottom; HorizPos: Left = LeftSib.Left; endif }
Support for functionality like this could have perhaps truly enabled the dream of separating content from style. Unfortunately this language was plagued by being a bit too extensible, meaning it would have been very possible for its implementation to vary considerably from browser to browser. Additionally, it was published in a series of papers in the academic world, rather than on the www-talk mailing list where most of the functional work was being done. It was never integrated into a mainstream browser.
The Ghost of CSS Past
The language which, at least in name, would directly lead to CSS was called CHSS (Cascading HTML Style Sheets), proposed in 1994 by Håkon W Lie.
Like most good ideas, the original proposal was pretty nutty.
h1.font.size = 24pt 100% h2.font.size = 20pt 40%
Note the percentages at the end of rules. This percentage referred to how much ‘ownership’ the current stylesheet was taking over this value. If a previous stylesheet had defined the
h2 font size as
30pt, with
60% ownership, and this stylesheet styled
h2s as
20px 40%, the two values would be combined based on their ownership percentage to get some value around
26pt.
It is pretty clear how this proposal was made in the era of document-based HTML pages, as there is no way compromise-based design would work in our app-oriented world. Nevertheless, it did include the fundamental idea that stylesheets should cascade. In other words, it should be possible for multiple stylesheets to be applied to the same page.
This idea, in its original formulation, was generally considered important because it gave the end user control over what they saw. The original page would have one stylesheet, and the web user would have his or her own stylesheet, and the two would be combined to render the page. Supporting multiple stylesheets was viewed as a method of maintaining the personal-freedom of the web, not as a way of supporting developers (who were still coding individual HTML pages by hand).
The user would even be able to control how much control they gave to the suggestions of the page’s author, as expressed in an ASCII diagram in the proposal:
User Author Font o-----x--------------o 64% Color o-x------------------o 90% Margin o-------------x------o 37% Volume o---------x----------o 50%
Like many of these proposals, it included features which would not make it into CSS for decades, if ever. For example, it was possible to write logical expressions based on the user’s environment:
AGE > 3d ? background.color = pale_yellow : background.color = white DISPLAY_HEIGHT > 30cm ? :
In a somewhat optimistic sci-fi vision of the future, it was believed your browser would know how relevant a given piece of content was to you, allowing it to show it to you at a larger size:
RELEVANCE > 80 ? h1.font.size *= 1.5
You Know What Happened Next
Microsoft is absolutely committed to open standards, especially on the Internet.
Håkon Lie went on to simplify his proposal and, working with Bert Bos, published the first version of the CSS spec in December of 1996. Ultimately he would go on to write his doctoral thesis on the creation of CSS, a document which was heroically helpful to me in writing this.
Compared to many of the other proposals, one notable fact of CSS is its simplicity. It can be easily parsed, easily written, and easily read. As with many other examples over the history of the Internet, it was the technology which was easiest for a beginner to pick up which won, rather than those which were most powerful for an expert.
It is itself a reminder of how incidental much of this innovation can be. For example, support for contextual selectors (
body ol li) was only added because Netscape already had a method for removing borders from images that were hyperlinks, and it seemed necessary to implement everything the popular browser could do. The functionality itself added a significant delay to the implementation of CSS, as at the time most browsers didn’t keep a ‘stack’ of tags as they parsed HTML. This meant the parsers had to be redesigned to support CSS fully.
Challenges like this (and the widespread use of non-standard HTML tags to define style) meant CSS was not usable until 1997, and was not fully supported by any single browser until March of 2000. As any developer can tell you, browser support wasn’t anywhere close to standards compliant until just a few years ago, more than fifteen years after CSS’ release.
The Final Boss.
— Jeffrey Zeldman
Internet Explorer 3 famously launched with (somewhat terrible) CSS support. To compete, it was decided that Netscape 4 should also have support for the language. Rather than doubling down on this third (considering HTML and JavaScript) language though, it was decided it should be implemented by converting the CSS into JavaScript, and executing it. Even better, it was decided this ‘JavaScript Style Sheet’ intermediary language should be accessible to web developers.
The syntax is straight JavaScript, with the addition of some styling-specific APIs:
tags.H1.color = "blue"; tags.p.fontSize = "14pt"; with (tags.H3) { color = "green"; } classes.punk.all.color = "#00FF00" ids.z098y.letterSpacing = "0.3em"
You could even define functions which would be evaluated every time the tag was encountered:
evaluate_style() { if (color == "red"){ fontStyle = "italic"; } else { fontWeight = "bold"; } } tag.UL.apply = evaluate_style();
The idea that we should simplify the dividing line between styles and scripts is certainly reasonable, and is now even experiencing a resurgence of sorts in the React community.
JavaScript was itself a very new language at this time, but via some reverse engineering Internet Explorer had already added support for it into IE3 (as “JScript”). The bigger issue was the community had already rallied around CSS, and Netscape was, at this time, viewed as bullies by much of the standards community. When Netscape did submit JSSS to the standards committee, it fell on deaf ears. Three years later, Netscape 6 dropped support for JSSS and it died a (mostly) quiet death.
What Might Have Been
Thanks to some public shaming by the W3C, Internet Explorer 5.5 launched with nearly complete CSS1 support in the year 2000. Of course, as we now know, browser CSS implementations were heroically buggy and difficult to work with for at least another decade.
Today the situation has fortunately improved dramatically, allowing developers to finally realize the dream of writing code once and trusting it will function (almost) the same from browser to browser.
Our conclusion from all of this is the realization of just how arbitrary and contextual many of the decisions which govern our current tools were. If CSS was designed the way it is just to satisfy the constraints of 1996, then maybe that gives us permission 20 years later to do things a little differently.
Use Cloudflare Apps to build tools which can be installed by millions of sites.
If you're in San Francisco, London or Austin: work with us.
- | https://blog.cloudflare.com/the-languages-which-almost-became-css/ | CC-MAIN-2018-05 | refinedweb | 3,208 | 59.03 |
UAX14: line-break should be allowed after hyphens (unless followed by number)
RESOLVED WORKSFORME
Status
()
People
(Reporter: alanmwood, Unassigned)
Tracking
(Blocks 2 bugs, {intl, testcase})
101519, 106179, 108073, 111104, 112545, 142446, 149137, 154541, 159340, 160852, 173534, 174302, 175799, 192757, 193360, 195491, 204233, 207549, 214618, 215166, 217520, 228243, 230100, 230716, 234071, 239157, 242615, 248452, 249083, 252327, 257673, 263803, 271878, 289462, 290045, 291405, 294059, 369437, 379826
Bug Flags:
Firefox Tracking Flags
(Not tracked)
Attachments
(9 attachments, 11 obsolete attachments)
From Bugzilla Helper: User-Agent: Mozilla/4.0 (compatible; MSIE 5.5; Windows 95) BuildID: 2001080110 Very long words, such as systematic chemical names, do not wrap to stay within the browser window, on-screen and when printed. Reproducible: Always Steps to Reproduce: 1.Display page 2. 3. Actual Results: Very long words are cut off at the right-hand edge of the window. Expected Results: Very long words words should be wrapped to stay within the browser window. It is conventional to break such words after hyphen, right parenthesis or right bracket.
OS: Windows 95 → All
Hardware: PC → All
Status: UNCONFIRMED → NEW
Ever confirmed: true
If I read correctly, I don't think it's required to break a line at a hard hyphen, only at a soft hyphen (). But it would be nice, I agree. But splitting at a hyphen wouldn't be enough, you would also have to implement real word-wrapping. And the rules for that are very complicated, and are different for every language. I know a guy who helped to implement an advanced word-wrapping algorithm in Dutch for a newspaper, and it took 2 man-years to do it ! See also bug 47483 and bug 9101 for soft hyphen support. I think this should be supported at the least. But not in this case - hyphens are mandatory in chemical formulas.
Re Johan Hermans' comments, I don't consider this bug to be anything to do with HTML 4 recommendations. HTML pages do not have a defined page width (unlike word processor documents), so I think they should wrap to stay within the browser window when it is re-sized. Internet Explorer has done this with very long words since at least version 4. The word-wrapping algorithm does not need to be perfect, and I would have thought that there was already something available in the public domain. Alan Wood (alan.wood@context.co.uk)
*** Bug 101519 has been marked as a duplicate of this bug. ***
Old summary: "Very long words in table cells do not wrap" New summary: "Very long words in table cells do not wrap (such as hyphens)" (to make dupe-finding easier)
Summary: Very long words in table cells do not wrap → Very long words in table cells do not wrap (such as hyphens)
*** Bug 106179 has been marked as a duplicate of this bug. ***
-->attinasi
Assignee: karnaze → attinasi
*** Bug 108073 has been marked as a duplicate of this bug. ***
Note: IE 6.0 breaks at the hyphens. today's trunk cvs build on WINNT does not.
Target Milestone: --- → mozilla1.1
*** Bug 111104 has been marked as a duplicate of this bug. ***
Testcase from bug 111104:
*** Bug 112545 has been marked as a duplicate of this bug. ***
This patch can wrap long word by key-characters. Key-charcters are parted tree groups. suffixed-key / : ; & suffixed-key ) ] } > ! ? prefixed-key ( [ { < prefixed-key $ \ sepalated-key % - I tested mozilla-0.9.8.
My patch program "patch to wrap long word by key-characters" includes a patch to fix a bug. Probably a bug has not just yet be reported to Bugzilla.org. A word included Japanese Kanji can not be wrapped after Kanji, i.e. its word goes through the TABLE tag width. I list testcase below. A word in the 1st TABLE tag is wrapped corectly, but a word in the 2nd TABLE tag is not wrapped. <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN"> <HTML><HEAD><TITLE>Testcase for a not wrapped word</TITLE> <META http-equiv=Content-Type </HEAD><BODY> <TABLE cellSpacing=0 cellPadding=0 width=80 align=left border=1> <TBODY><TR><TD> <!-- this word is wrapped correctly --> 012あabxy </TD></TR></TBODY> <TABLE cellSpacing=0 cellPadding=0 width=80 align=left border=1> <TBODY><TR><TD> <!-- this word is not wrapped --> 012あab<A href="">xy</A> </TD></TR></TBODY> </TABLE></BODY></HTML>
Now I have a mistake. A attachment "testcase of a word not wrapped after Kanji" is not patch.
P.S. This testcase depends on the font size. Please select an appropriate font size or change the width of TABLE tag. (change width="80" to width="60" for example)
*** Bug 142446 has been marked as a duplicate of this bug. ***
just some comments for the patch author, and hopefully some more visibility for the patch... is_alpha is '_' really an alpha? is_number reimplements what i think is a standard function call... IS_BREAK_CHAR by case looks like a macro but is not afaik return x; is preferred over return (x); and else is shunned after return the function also appears to be asking for a switch statement. #define isalnum_(c) (isalnum(c) || c=='_') /*yes i just broke the case naming law for macros, whoops*/ inline PRBool is_break_char(PRUnichar c, PRUnichar cc) { if (isalnum_(c)) { switch (cc) { case '/': case ':': case ';': case '&': case '!': case '?': case ')': case ']': case '}': case '>': return PR_TRUE; } } else switch (c) case '%': case '-': return PR_TRUE; case '(': case '[': case '{': case '<': if (is_alpha(cc) || is_number(cc)) return PR_TRUE; case '$': case'\\': if (is_number(cc)) return PR_TRUE; } return PR_FALSE; } comments anyone?
Mr. timeless, thank you for your advice. The patch was updated.
Although it is also in a former comment, if there is an element of <xx>...</xx> which follows the multiple Kanji characters which does not contain a white space, also contains the patch coping with the fault which the portion of "..." is not turned up correctly. The following is an portion of the patch. layout/html/base/src/nsTextFrame.cpp: } else { firstChar = *bp2; + if (IS_CJK_CHAR(firstChar)) + aTextData.mIsBreakable = PR_TRUE; } Refer to the patch of Bug 135323 for the macro IS_CJK_CHAR used in a patch.
:) fwiw i made one trivial omission: return PR_TRUE; break; //<-- this case '$': case'\\': it doesn't actually matter, ideally the compiler will realize that the next case can't be satisfied any better than the previous case but for correctness we should have the break. good luck.. the rest of this is out of my area
Thank you for your indication. It was careless. The patch was updated.
The macro of IS_CJK_CHAR was defined in nsTextFrame.cpp.
Saito san, This is again a very difficult problem to handle. We have problem in breaking certain symbol sequence, like :-). Those commonly used stuff should be kept together, and that's why we only break ascii word by space. From this point of view, your current approach will be unacceptable. My proposal is (logically) to do this in 2 steps. First, we try all those old logic to wrap words, as what we are doing now. Second, if and only if a single word is too long to fit in current cell, should we try to break the word. In Linebreaker, we need to implement an new API to break a word into word segment. That should be rather easy to do, basically you can move your new code to this function. In layout code, we want to call this api and break a word only when such situation arises. That is difficult because layout code looks too complicated, but it is doable. Let me know if you agree with this approach and if you have time to do it.
I understood that many problems are in my patch. I will investigate whether anything may be made according to your advice. My patch points out other problem related to a line break. If an element as shown in <a>...</a> is included in a line which does not contain any space only with the multiple CJK characters, a line break will go wrong. A CJK character is possible to break a line, therefore, variable aTextData.mIsBreakable should be set to TRUE. layout/html/base/src/nsTextFrame.cpp: } else { firstChar = *bp2; + if (IS_CJK_CHAR(firstChar)) + aTextData.mIsBreakable = PR_TRUE; } If it seems that there is especially no comment, I will report newly a bug report as this problem.
It is a test patch. It changed so that the string longer than the width of a table might be broken.
The bug of Bug 153504 with nowrap property of CSS might be fixed. Since correction has been added if it sometimes tests, though regrettable, it is a very complicated patch. Although it may be hard to accept my patch, I think that processing to break a string in case the breakable string and unbreakable string is connected turns into similarly complicated processing.
Explanation of a patch: The string loaded first turns up and asks for the longest WORD length. An actual string is laid so that a wrap may not take place if possible. It is because window width cannot be made narrower than the WORD length which asked first when window width is shortened.
I am sorry. It corrects. When table width is variable length, it is because table width cannot be made narrower than the WORD length which asked first when window width is shortened.
It tested about the <td nowrap> and <nobr></nobr> elements, the patch was corrected.
As you work on hyphenation, could you please test that it breaks at soft hyphen. As a result I expect text: ab- bc c d And not: abbc c d as does 2002072221 nightly
*** Bug 159340 has been marked as a duplicate of this bug. ***
*** Bug 154541 has been marked as a duplicate of this bug. ***
I tried to fix the problem of soft-hypen, the changes is only a display part. This patch also includes the fix of word-wrapping, a problem of nowrap property of CSS and a problem of <nobr></nobr> tag.
*** Bug 160852 has been marked as a duplicate of this bug. ***
*** Bug 149137 has been marked as a duplicate of this bug. ***
*** Bug 175799 has been marked as a duplicate of this bug. ***
Does anyone know what is happening with this bug? After a burst of activity in July, it seems to have gone quiet, and the target (1.1alpha) has not been updated. It is assigned to attinasi@netscape.com (Marc Attinasi), but email to that address is rejected (User unknown). Alan Wood alan.wood@context.co.uk
I've tried to interpret this bug and patch. Can someone clarify what will happen with the strings: "2002-12-31" (ISO date) "-2147483648-2147483647" (the interval for a 32 bit 2-complement integer) "x=(-3)*y-5" (some maths) "x=-3*y-5" (some maths)
If page width is narrowed, it will be displayed as follows. "2002-12-31" result: 2002- 12-31 "-2147483648-2147483647" result: - 2147483648- 2147483647 "x=(-3)*y-5" result: x=(- 3) *y-5 "x=-3*y-5" result: x=- 3*y- 5
Not all those were especially good. I could accept them if they were the last resort, but I know from MSIE that MSIE unnecessary compresses table cells and forces breaking were none is wanted. I would be unhappy to see the same in Mozilla.
Compared with MSIE, patch has an advantage. By the case "x=(-3)*y-5 x=-3*y-5" is displayed, when a margin is in the width of a table cell, an effect can be seen. MSIE result: <-------------> "x=(-3)*y-5 x=- 3*y-5" mozilla with patch result: <-------------> x=(-3)*y-5 x=-3*y-5
Attinasi is gone. Reassigning to patch author.
Assignee: attinasi → saito
*** Bug 192757 has been marked as a duplicate of this bug. ***
Comment on attachment 108709 [details] [diff] [review] patch v9 for mozilla-1.2.1 Saito-san, please post your new patch.
Attachment #108709 - Attachment is obsolete: true
I don't think we should break on hyphens. So why should we add this much additional code complexity to fix something that isn't even a bug? (Adding support for soft hyphen, etc., is definitely a good thing, but I imagine the changes to do that would be much simpler.)
OK, I'll partially retract that statement (in response to email sent to me that should have been a comment on this bug). I don't think we should break on hyphens due to the complexity of the current linebreaking code. (I recall a discussion of this issue in much more detail in another bug, though, that led me to think we shouldn't break on hyphens at all.)
I was going to disagree with Comment #51, even after looking at the patch, but I didn't as I've worked on line breaking code before. Comment #52 gets my agreement, but maybe for differing reasons. Line breaking code is horribly complex as it has to deal with all possible scenarios. Now, as Mozilla is a "mutli-lingual" browser, line breaking becomes exponentially worse. I'd hate to try and determine linebreaks for Kanji(which I don't think has hypens at all) or Arabic(which I recall is read right to left). I beleive the W3C standard is to break on n which is a deliberate choice made by the web page designer. Maybe that is all that is needed, and other cases can go to Evang.
Please don't give up on this bug. Check the URL from my original bug report in Mozilla, and then compare with I.E. 4+ and Opera 6+, both of which wrap reasonably well. Printed material has always broken lines after hyphens. Word processors break lines after hyphens. Why should Web browsers be any different? There are some hyphens where breaking is not appropriate, and the rules need to take this into account, by for example not allowing a break if it would produce a string of 3 or fewer characters at the start of the next line. Algorithms like this have existed for many years for computerised typesetting, and there is no good reason why Web browsers cannot have this facility. I appreciate that multi-script line breaking introduces even more complexity, but isn't Mozilla intended to be the best Web browser? Alan Wood
We shouldn't drop this. If necessary, push it into the future. For many languages, such as English, good style requires line breaking on hyphens. We should do this, eventually. I'm sure languages have different hyphenation rules, but maybe that could be addressed by relying on statements like xml:lang="en" in the page source.
The point is, it would be nice to make the current line-breaking and text-measurement code something approaching readable. nsTextFrame::Reflow is already one of the most regression-prone (if ever touched) and undecipherable pieces of code in Mozilla. I think breaking it up into multiple functions (ones that are not 300 lines long), renaming the variables to have names that have something to do with what they're doing (some fail this test last I checked), writing some frigging comments should take opriority over adding new stuff to it.... (my 2cents). At that point, changes would become much more reasonable both to code and to review...
dbaron, do you oppose the simple code which fix this bug?
Please see a screen shot, some strings overflow a table frame. It is the bug of mozilla. Since the patch contained the code for correcting its bug, it became complicated. It is likely to be necessary to divide a patch.
I guess I really shouldn't talk, since I don't work on horizontal aspects of inline layout...
I think we all understand the reasons for this patch.... I agree that they are good reasons. All I'm asking is that people consider the maintainability of the code in addition to its functionality. The current code is unmaintainable, so it would be nice if someone who understands the code (as Saito-san clearly does) could make it more readable and maintainable... (either before or after landing this patch, as long as it _happens_).
*** Bug 193360 has been marked as a duplicate of this bug. ***
> Bug 193360 Bugzilla recognizes that some strings of <textare> is URL. If a string is wrapped at '/', '-' or etc., it will become impossible to recognize a URL since new-line is included. This patch does not wrap any long words in <textare> that has -moz-pre-wrap property.
*** Bug 195491 has been marked as a duplicate of this bug. ***
Why is this bug applied only to table cells? Mozilla doesn't break long strings in any context, to my knowledge. This may be late in the game, but line-breaking on punctuation probably should strive to follow the rules laid out in: To wit, see this comment from the bug:
I checked only ascii code. Please refer to a function of nsTextTransformer::GetNextDividedWord. This patch includes a bug fix shown below, because style of text should be updated for connecting some fragmentary text. nsTextFrame::ComputeWordFragmentDimensions + nsIStyleContext* aStyleContext; + aTextFrame->GetStyleContext(&aStyleContext); + const nsStyleText* textStyle = (const nsStyleText*) + aStyleContext->GetStyleData(eStyleStruct_Text); + aCanBreakBefore = (NS_STYLE_WHITESPACE_NORMAL == textStyle->mWhiteSpace) || + (NS_STYLE_WHITESPACE_MOZ_PRE_WRAP == textStyle->mWhiteSpace);
Attachment #114308 - Attachment is obsolete: true
Care to file a separate bug for that? Do well to also include a testcase to show the problem. It is not clear why |aCanBreakBefore| is out-of-sync when it is passed to ComputeWordFragmentDimensions().
*** Bug 204233 has been marked as a duplicate of this bug. ***
This patch includes some changes for hyphenation of soft hyphen. The frame's white-space mode should be updated whenever the text of the next text frame is read , but I was not able to show the effect of the following patch. I removed the following patch. nsTextFrame::ComputeWordFragmentDimensions + aCanBreakBefore = (NS_STYLE_WHITESPACE_NORMAL == textStyle->mWhiteSpace) || + (NS_STYLE_WHITESPACE_MOZ_PRE_WRAP == textStyle->mWhiteSpace);
Attachment #120999 - Attachment is obsolete: true
*** Bug 214618 has been marked as a duplicate of this bug. ***
*** Bug 215166 has been marked as a duplicate of this bug. ***
Target Milestone: mozilla1.1alpha → ---
*** Bug 217520 has been marked as a duplicate of this bug. ***
*** Bug 217705 has been marked as a duplicate of this bug. ***
Retitling bug from "Very long words in table cells do not wrap (such as hyphens)" to "line-break should be allowed after hyphens (unless followed by number)", which better reflects what this bug is about. We're not going to change the way table cells compute their size. Unbreakable things inside table cells will increase the size of the table. The web depends on this, and it's described in (admittedly, an informative part of) the CSS2 table model. What we might change is what's considered breakable and what isn't. UAX #14 suggests that line breaks should be allowed after hyphens unless they're followed by a numeric character. I agree that this makes sense and we should do this. It should be possible to do without increasing the complexity of the code. One serious problem with our current line breaking code is that we follow three separate codepaths -- one for text that's entirely ASCII, one for text that's not ASCII but has no CJK characters, and one for text with CJK characters. In some cases, these codepaths have different behavior. Since HYPHEN-MINUS is in ASCII, all three codepaths need to be modified to fix this bug -- or we need to combine the codepaths. I think combination of the codepaths is probably a good approach, and I may be able to look into it in the near future. It might also allow us to implement additional improvements to line breaking based on UAX #14 (also see discussion on bug 56652 and bug 206152), such as support for soft hyphens. Any duplicates of this bug that aren't about hyphens should (I think) be reopened (and perhaps marked as duplicates of other bugs or marked invalid). If someone disagrees, please say so.
Summary: Very long words in table cells do not wrap (such as hyphens) → line-break should be allowed after hyphens (unless followed by number)
Component: Layout: Tables → Layout: Fonts and Text
Comment on attachment 123515 [details] [diff] [review] patch I think one reason this patch introduces so much additional complexity is that it's trying to modify line breaking from a level of the code other than the one at which line breaking happens. I would expect the fix to this bug to be closer to our current line breaking code, i.e., nsJISX4501LineBreaker (sic) and nsTextTransformer::Scan*. I don't see why a fix for this bug would need to modify other code.
My problem (originally 217520, but transferred to 95067) concerned very_long_words (long string of text without white space) that exceeded cell width, and often window width. This has little to do with splitting a word on hyphens, etc. What is needed at a minimum is too ensure that a very_long_words will NEVER cause the cell width to exceed the window width. This is in itself a relatively simple problem, unlike splitting the very_long_word in an esthetically pleasing manner. It also solves a serious problem. When a very_long_word causes a text cell to exceed the window width, in a long text, the text is rendered unreadable, due to requiring scrolling right/left for each line, without visual cues to maintain one's place in the text. As this is often important information (e.g. re security patches) this poses a SERIOUS PROBLEM. Agreed, it would be NICE to have a more esthetically pleasing presentation, at A MINIMUM, THIS PARTICULAR PROBLEM should be solved as A PRIORITY.
No. As I said in comment 74, we're not changing the basic table algorithm. The web depends on it, and your proposal really won't help for all but the simplest case. (Why does breaking at the *window width* help for a table that has multiple columns?) But I reopened your bug and marked it a duplicate of a different bug. Please do not discuss the issue further on *this* bug. It's off-topic.
I am not completely happy with the end of the new title "unless followed by number". I can see the validity of this for dates, as in 2002-12-31 (comment 43), but I feel that some breaking of dates can be avoided by a widow/orphan setting of 3 characters, i.e. don't break if it would result in 1, 2 or 3 characters at the start or end of a line. Perhaps a widow/orphan setting could be included in Preferences? Not breaking after a hyphen that is followed by a number does not work so well for chemical names, which started this bug. For example: 2-bromo-4,4-dichlorophenol could happily be broken as 2-bromo- 4,4-dichlorophenol but breaking as 2-bromo-4,4- dichlorophenol is much less satisfactory. However, breaking after some hyphens would be MUCH better than never breaking after hyphens. We cannot have manual checking of each break, and so we will have to accept some imperfections. Alan Wood
Regarding the venerated and abused ASCII hyphen (Unicode HYPHEN-MINUS): "unless followed by number" is not explicitly called for by UAX14: Instead, that spec rather vaguely states: "Some additional context analysis is required to distinguish usage of this character as a hyphen from the use as minus sign (or indicator of numerical range). If used as hyphen, it acts like HYPHEN." In this instance, HYPHEN is a subsection of: The context analysis mentioned is primarily to determine whether the hyphen is being used as a MINUS: (I'm a little surprised about the "indicator of numerical range" clause, as that implies functional equivalence to the EN DASH, which breaks like HYPHEN.) I'm not sure that context analysis could be relied on to distinguish between breaking and non-breaking requirements for the chemical names. Perhaps the best solution to that problem is to use explicit Unicode hyphen (2010) and non-breaking hyphen (2011) characters in the text.
*** Bug 217520 has been marked as a duplicate of this bug. ***
It is explicitly stated in , rule LB18: HY × NU
With reference to the last paragraph of comment 79, the idea of using U+2010 (hyphen) and U+2011 (non-breaking hyphen) in chemical names is a non-starter. These characters are not on any keyboard, and so would need to be entered as numeric character references. These characters are not included in the core fonts for Windows (Arial, Courier New, Times New Roman), and therefore are not likely to be displayed in most people's Web browsers. I can live with sub-optimal breaking at hyphens in chemical names. Any sort of breaking at hyphens would make it possible to view and print long chemical names in Mozilla, something which is sadly not possible at the moment. Alan Wood
I suspect the most common place where the sequence hyphen,number occurs is in negative numbers. I think that's why UAX #14 recommends that a break should not be allowed between a hyphen and a numeric character. We don't want to format the sentence: The sum of 7000 and -5000 is 2000. as: The sum of 7000 and - 5000 is 2000. Distinguishing the negative number case from others requires more than pair-based analysis, and the algorithm presented in UAX #14 is based on pair-based analysis, which is simpler to implement than more complex analysis for line breaking.
With reference to comment 83, I had not thought about negative numbers, but there must be lots of them on the Web, mostly using the hyphen-minus character from the keyboard instead of the proper Unicode minus sign (U+2212) which is present in WGL4 (and is therefore present in many TrueType fonts for Windows). I agree that the case for not breaking after the 'minus' of a negative number is stronger than my case for allowing it because of chemical names (comment 78). Alan Wood
Regarding the issue of rendering for Hyphen and Non-Breaking Hyphen: as it turns out, Mozilla realizes that if these characters are not part of the font, it can render them using the Hyphen-Minus glyph, and does so. Of course, it doesn't break after either. IE6 also substitutes, but only for the Non-Breaking Hyphen; Opera, whose Unicode handling is currently not so great anyway, doesn't substitute for either. However, both of these browsers correctly handle the breaking properties of both characters. I hadn't realized the precise nature of the rendering problem because my default web font is Palatino Linotype, which does define the two hyphens characters, so my test page looked fine in all three browsers.
*** Bug 207549 has been marked as a duplicate of this bug. ***
*** Bug 228243 has been marked as a duplicate of this bug. ***
*** Bug 230100 has been marked as a duplicate of this bug. ***
*** Bug 230716 has been marked as a duplicate of this bug. ***
This bug is the apparent cause of a large block of white-space in Mozilla won't line-break the text: April-Fool's-joke-29-days-after-April-Fool's-Day in the article. Does this qualify the bug for the TOP100 keyword?.
*** Bug 234071 has been marked as a duplicate of this bug. ***
*** Bug 239157 has been marked as a duplicate of this bug. ***
*** Bug 242615 has been marked as a duplicate of this bug. ***
About the complexity of the program code you may be right. But I think you mostly see websites written in english. The use of the hyphen as wordbreak is often used in german language with sometimes very long words. I would prefer a simple breaking rule by matching [A-Za-z]{3}-[A-Za-z]{3} or a -moz- CSS-rule wich I can add to the body.
*** Bug 248452 has been marked as a duplicate of this bug. ***
*** Bug 249083 has been marked as a duplicate of this bug. ***
(In reply to comment #97) > About the complexity of the program code you may be right. But I think you > mostly see websites written in english. The use of the hyphen as wordbreak is > often used in german language with sometimes very long words. An "in English" example is how Firefox does not wrap the rather long trackback_urls when permalinks is on (on WordPress sites). French also uses hyphens as wordbreaks, but you are talking then about soft hyphens wich is not the subject of this page. Refer to Bug 9101 for soft hyphens. Anyway, compound words containing normal hyphens should break after the hyphen if needed (at the end of a line).
(In reply to comment #93) >. Look nicer is not the only argument. Typography is an art that existed for centuries for the ease of reading. We're used to it. Reading a good typographed taxt is far from reading each words. We read only part of the word and mentally complete the words depending on the general purpose of the text. In justified text, large space between words do create white columns that attract the eye and break reading. You whould say that it's not the case in left-aligned text. In fact it is also a problem there since the more your text width is short, the more you will have differences of length between lines. So that reading become more and more difficult. >. It is better to use nowrap SPANs according to your own country rules of typography, than not allowing others to do anything against this problem. Opera and IE do respect these conventions, so why not Moz ? Column lenght does not cost trees but how many sites use a 2 or more columns layout ? Nearly all. So that short columns are common. And even if a one column layout is used, imagine reading the text on a small PDA screen if compound words are not broken: the problem remains still.
*** Bug 252327 has been marked as a duplicate of this bug. ***
*** Bug 257673 has been marked as a duplicate of this bug. ***
*** Bug 173534 has been marked as a duplicate of this bug. ***
*** Bug 263803 has been marked as a duplicate of this bug. ***
*** Bug 174302 has been marked as a duplicate of this bug. ***
*** Bug 271878 has been marked as a duplicate of this bug. ***
Please don't forget that this fix can change the height of a block-element. If you use overflow: hidden ore read the height width the javascript-function scrollHeight the size won't be correct after rendering the whole page (same effect by using images).
re : comment #74, I totally agree. bug 255990 deals with it although I haven't made any patch in that direction. The only patch uploaded there is just a kludge/stop-gap measure.
(In reply to R.K.Aa., comment #61) > *** Bug 193360 has been marked as a duplicate of this bug. *** If this bug is indeed related to textareas, please change the summary to make it easier to find. Thanks, Prog.
The iFrame on the right side of the page loads incorrectly, the text inside of the iFrame should wrap to the iFrame's width.
Consideration should be given to treatment of strings w/o hyphens, soft or otherwise -- e.g. URLs, which may be quite long. Ref page: 1. URLs are certainly necessary content to be able to handle, e.g. to make printed copy more useful (<A> links don't print underlying URL.) 2. Author must not be required to do UA job of rendering, e.g. by inserting breaks. 3. By way of example, Opera and MSIE both will break URLs at certain special characters, e.g. dash, slash, question mark. -R. dav4is@yahoo.com
re : comment #112 That's what's being dealt with in bug 255990.
*** Bug 290045 has been marked as a duplicate of this bug. ***
*** Bug 291405 has been marked as a duplicate of this bug. ***
I would be nice if words would break on - (-) rather sooner than later. Both Opera and IE do it and that is what everyone expects. Because of the ever increasing popularity of FF I think this should be fixed in 1.1 ->?1.1
Flags: blocking-aviary1.1?
Flags: blocking1.8b3?
*** Bug 294059 has been marked as a duplicate of this bug. ***
*** Bug 289462 has been marked as a duplicate of this bug. ***
Flags: blocking1.8b3? → blocking1.8b3-
Flags: blocking-aviary1.1? → blocking-aviary1.1-
Assignee: saito → nobody
QA Contact: amar → layout.fonts-and-text
Proposing keyword "helpwanted".
RE: Comment #93 "If you want to break at every hyphen except those that match pre-programmed rules, the complexity to get it right is going to reach ridiculous levels." That's probably quite true. However, instead of simply allowing line breaks after any hyphen, at least some kind of basic limitations would be desirable. Here's one interesting proposal (quoting an ISO/IEC expert contribution at): "HYPHEN-MINUS (002D): HYPHEN-MINUS allows an automatic line break to be established just after it only if it is both immediately preceded by a letter and immediately followed by a letter. HYPHEN-MINUS should be imaged by a graphic symbol identical with that representing HYPHEN when immediately preceded or immediately followed by a letter. HYPHEN-MINUS should be imaged by a graphic symbol identical with that representing MINUS otherwise." So according to this approach, HYPHEN-MINUS (i.e., the regular ASCII hyphen character) would allow a line break only within a (compound) word. Line breaks would not be allowed in connection with numerals nor other punctuation or special characters, so e.g. most smileys would remain intact. Also elliptical hyphens, which occur regularly in the beginnings of words in certain contexts in some languages, would stick with the string of letters they are attached to (for example, in Finnish you could write "videokasetti ja -levy", meaning video cassette and video disk, where the hyphen in the beginning of the last word makes it unnecessary to repeat the word "video"). For decent typography, it might also be good to set an additional rule that line break is not allowed unless there is at least two or three letters on both sides of the hyphen. Thus, words such as "T-shirt" would not be broken. On the other hand, the idea that HYPHEN-MINUS should be imaged identically with MINUS if not connected to a letter, seems a little problematic with regard to, e.g., smileys or data formats such as 2006-06-09. It's probably better to dismiss that part of the proposal and encourage Web authors to use the HTML character entity reference or Unicode MINUS character instead, when necessary. As HYPHEN-MINUS is the default hyphen character available on keyboards today and most people probably don't know (or even care of) how to produce other kinds of hyphens, it is important that it is treated in a way that is supposed to be adequate on most situations. The quoted proposal also speaks of the Unicode characters HYPHEN and NON-BREAKING HYPHEN (as well as two kinds of SOFT HYPHENS, but that's a separate issue): "HYPHEN (2010): HYPHEN allows an automatic line break to be established just after it. HYPHEN is imaged by a graphic symbol." "NON-BREAKING HYPHEN (2011): NON-BREAKING HYPHEN is a graphic character, the visual representation of which is identical to that of HYPHEN. NON-BREAKING HYPHEN is for use as hyphen when an automatic line break just before or just after it is to be prevented in the text as presented." Thus, if necessary, Web authors might use HYPHEN to get around the special restrictions appended to HYPHEN-MINUS. Respectively, NON-BREAKING HYPHEN (as well as CSS rule "white-space: nowrap") could be used to restrict the hyphenation further. Hyphenation is a tricky question, and this approach certainly wouldn't resolve all or even the principal problems. However, (in combination with support for soft hyphens) it might be a reasonable, not too complicated compromise at least until more sophisticated, language specific hyphenation solutions may emerge.
RE: Comment #120 "For decent typography, it might also be good to set an additional rule that line break is not allowed unless there is at least two or three letters on both sides of the hyphen. Thus, words such as 'T-shirt' would not be broken." On reflection, one could simply treat numerals the same way, as this "additional" rule in itself also guaranteed that HYPHEN-MINUS marking a negative number would not be separated from the numeral string. Basically, line break within a compound word will be allowed only if there is a minimum amount of characters both before and after HYPHEN-MINUS, and no additional rules will be necessary (at this point). This would allow line break after HYPHEN-MINUS within long numeral strings as well as letter strings. However, this approach may not suffice to resolve the problem with long chemical names, such as "2-bromo-4,4-dichlorophenol" (comment #78). The same (or at least a similar) rule should apply to line breaks after slashes. It is not desirable to allow line break within, e.g., an abbreviaton such as "c/o", but especially within long URLs it would indeed often result in better typography. Obviously, punctuation characters should not be counted in the character strings preceding and following a hyphen or slash. For example, in string "...(Latin-1)." there is in effect only one character after the hyphen, and the ending parenthesis, full stop and quotation mark shouldn't make line break possible in that kind of situation.
A workaround would be to add (invisible space) after each "-" that should break. GreaseMonkey or a bookmarklet can do this. Pasted work in firefox but may turn into "​" in non unicode views.
I present to y'all, the Great Wrapinator! javascript:var bob = document.body.innerHTML; bob = bob.replace(/([^<> ]{80})[^&]/g, "$1"); document.body.innerHTML = bob;
Sorry, i should've tested on clearer data. javascript:var bob = document.body.innerHTML; bob = bob.replace(/([^<> ]{80})([^&])/g, "$1$2"); document.body.innerHTML = bob;
The Css Style Class "WORD-BREAK:BREAK-ALL" Is working with IE and Not working with Firework. Anything can be done to Fix it??..
(In reply to comment #125) > The Css Style Class "WORD-BREAK:BREAK-ALL" Is working with IE and Not working > with Firework. Anything can be done to Fix it??.. That's bug 249159.
Alias: uax14
Summary: line-break should be allowed after hyphens (unless followed by number) → UAX14: line-break should be allowed after hyphens (unless followed by Any news?
See my point? That is what I need a solution for!
You can fix this using javascript, ASP, PHP or other things like that. I don't belive this should be a concern of the browser. Anyway, portuguese have a lot of words with - that in most cases shouldn't be break, like quinta-feira (thursday).
Iuri: 1. Correct word-wrapping IS a concern of the browser, because it depends highly on how the browser is configured (windows size, font size etc.) -- and can change dynamically (if you resize the window, for instance). It does not make sense for the webserver to have to anticipate this and serve pre-wrapped pages. 2. Yes, Portuguese has a lot of compound, hyphenated words. And you know what? It IS acceptable to break the line after the hyphen. It is even PREFERRED, as a matter of style -- so you don't end up with an extra break inside the same word, which looks ugly. I mean, what do you like better: Vou ao cinema na próxima quinta-fei- ra à noite. or Vou ao cinema na próxima quinta- feira à noite.
I fixed bug 255990, now URLs are breaking by some characters. See the spec table: So, the issues for Portuguese of comment 132 should be fixed now. I tested some specs in bug 255990. By the experience, I don't believe that UAX#14 is a best solution for us. Because we need to handle non-natural languages. e.g., many date formats, many time formats, fragments of code of programing languages, ASCII arts, URLs and file paths(UNIX, Win32/DOS)... So, I believe that there is no best spec which can fit to all context. Therefore, I used WinIE7 based (customized) spec for us. This makes better compatibility with WinIE (and also the web pages which is designed for WinIE). Especially when the table has many text but the table width is too narrow, the line breaking compatibility is very important (e.g., tinderbox and checked-in list of bonsai). I can agree to use UAX#14 for characters of each languages, but I think that we should keep the compatibility with WinIE in ASCII range for layout of table cells.
We can mark this fixed now, right? Because we do break after a hyphen thats not part of a number.
Does Mozilla break at U+2010 Hyphen now or just U+002D Hypen‐Minus?
ok, I mark this to FIXED, we don't use UAX#14, but we fix the actual bug. -> FIXED (In reply to comment #135) > Does Mozilla break at U+2010 Hyphen now or just U+002D Hypen‐Minus? Now, U+2010 is not breaking the line. But we can fix it easy. Please file a new bug and CC me.
Status: NEW → RESOLVED
Last Resolved: 12 years ago
Resolution: --- → FIXED
(In reply to comment #133) > I fixed bug 255990, now URLs are breaking by some characters. > See the spec table: > Thanks for the informative table. However, when defining the context (surroundings) of the possible line-breaks, the term "character" feels rather ambiguous. Basically, all letters, numbers and punctuation marks are characters. Perhaps instead of "characters", here it would be better to speak of "letters"? How about SPACE? I hope SPACE does not count as a letter-character because if it did, some undesirable line-breaks could occur: after a hyphen: "suffix -ed" after a slash: "in /home directory" after a degree sign: "the temperature is 20 °C" Even if SPACE does not count as a letter, some problems seem to remain: after a hyphen: "T-shirt" after a slash: "c/o" before and after parentheses: "colo(u)ring" In "T-shirt", a line-break after the hyphen would leave "T" orphaned in the end of a line, which woud look ugly -- although it would be unlikely to cause any real confusion. In "c/o", a line-break after the slash would be a little more distracting. The worst case, however, would be if the string "colo(u)ring" could break both before and after the parentheses. Even if it is a rather exeptional example, I think this kind of behavior would be a much more severe bug than not breaking after hyphens. I wonder whether it is at all reasonable to allow line-break in connection with parentheses, square-brackets etc.
We should treat punctuation next to a space as non-punctuation so we don't break before or after it. That would be fairly easy. The other problems are kind of hard. It's impossible to please everyone. Authors may have to learn to use ZWJ or white-space:nowrap in rare cases.
(In reply to comment #137) > after a hyphen: "suffix -ed" This case can be fixed easy. > after a slash: "in /home directory" You should test it. This case will be rendered same as your hope. Other cases should be marked as INVA or WONTFIX. Please file bugs for each issues if you find them.
Authors should certainly *not* be using ZWJ to suppress breaking within a word. ZWJ has its own semantics, it's not there to suppress breaking between punctuation. Also, doing weird things to text content to avoid breaking has undesired consequences for other uses of text, like copy/paste and text comparison. I strongly agree that punctuation adjacent to spaces should not be providing any breaking opportunities. That will solve the worst problems with the recent changes.. If the critical market for these new breaks is East Asia, couldn't we avoid such breaks unless the block has encountered a CJK character before the first break? At least until we have a more intelligent line-breaking algorithm in place?
I appreciate the efforts to solve the layout problems caused by URLs and long words. However, I disagree with the general idea that compatibility with IE or handling of non-natural languages should be more important than respecting the writing conventions of natural languages. Basically, technology is a tool, and tools should adapt to people's needs, not vice versa. I just realized that this bug seems to be a duplicate of bug 56652, which considers linebreaking algorithms from a little more general perspective. Perhaps the discussion should be moved there?
> If the critical market for these new breaks is East Asia, couldn't we avoid such > breaks unless the block has encountered a CJK character before the first break? That's similar what we were doing before, and it's decidedly odd. Inserting a CJK character arbitrarily far away in a word shouldn't change breaking behaviour. I'm sorry this comes as such a surprise. This was worked on over a long period of time. I'd sort of assumed you were CCed on the bug(s).
Backing out bug 255990 (again) is an option, but we need to be sure we're not making the best the enemy of the good. I'm quite pleased with the changes in my browsing. I don't want to back out a patch that's an overall improvement just because a hypothetical better approach may exist which no-one has managed to specify precisely yet. I'd rather see an argument that 255990 actually made things worse overall.
Especially because I suspect that there is no algorithm --- certainly no reasonably simple algorithm --- that can interpret the intent of Web text accurately enough to always choose good breaks. So whatever our algorithm is, people will be able to produce examples where it falls down badly.
(In reply to comment #140) > I strongly agree that punctuation adjacent to spaces should not be providing > any breaking opportunities. This is a interesting idea, would you file a new bug? >. I think that we need long testing time for this. Because there are very very many patterns. I think that we should fix each problems on current trunk. > If the critical market for these new breaks is East Asia, couldn't we > avoid such breaks unless the block has encountered a CJK character before > the first break? No, the URL doesn't have CJK characters in most cases.
Well, how about the approach suggested by Jukka Korpela in his criticism on UAX #14 (I already posted this link for bug 56652, but here it is again): The basic idea is that the generic (language-independent) linebreaking rules should be very minimalistic. A break would be allowed only after spaces, hyphens and dashes (and even then not always), and any further exceptions should be language-dependent. However, for extremely long strings, such as URLs, a special "emergency break" rule could be applied, allowing breaks even after slashes and ampersands.
No comments? Well, let's elaborate the suggested approach a bit: At the language-independent level, a line-break is allowed after a space, hyphen or dash, unless - the space or hyphen is of the no-break type (this should be obvious) - the space is followed by another space (so no break can occur between two space characters) - there is a space or any punctuation character either immediately before or immediately after the hyphen or dash (break is allowed after the space, of course, unless it is a no-break space; note that this rule should be sufficient to prevent breaks even inside smileys) - there is only one alphabetical character on either side of the hyphen or dash (this is merely a wishlist feature that would improve the typographical appearence a little; one might even consider strightening the rule into "there is only _two_or_less_ alphabetical characters on either side of the hyphen or dash") - there is a numerical character on either side of the hyphen or dash (but it would be nice if the previous rules, probably after some tweaking, were sufficient to cover even the numerical contexts; for example, it might be better if long chemical names, such as "2-bromo-4,4-dichlorophenol" as described in comment 78, were allowed to break after a hyphen, even if the break point wasn't always optimal). Otherwise, line-breaks may occur only if allowed at the language-specific level, or by the "emergency break" rules for exceptionally long strings. These minimal linebreaking rules should cover the most important cases at least for Latin scripts (although I'm sure I have overlooked something, please feel free to append the list). The language-specific additional rules may be specified as needed, for example: - in English, a line-break is allowed both before and after an em-dash, and irrespective of how many alphabetical or numerical characters there are on either side - in French, a line-break is not allowed after a space if it is followed by an exclamation mark, a question mark, a colon, a semicolon or a closing guillemet, nor if it is preceded by an opening guillemet. Of course one can come by many more language-specific rules, but they can be added little by little, as native speakers start to point out the deficiencies. And perhaps one day, the rules may be appended to include even language-specific hyphenation algorithms. But for now, I suppose that's something we can only dream of.
One problem is that we often don't know what the language is. Defining some break opportunities as "emergency breaks" that are only used if there are no regular break opportunities on the line would require some significant changes, but could definitely be done.
The generic rules should allow even non-Latin characters to behave in the way that was most likely expected of them in their natural context. Thus, a line-break would be allowed after any CJK character. However, if put into a Latin context, a non-Latin character should rather be treated as a symbol character inherent in Latin scripts (and thus, linebreaking would not be allowed); this can be specified at the language-dependent level. I suppose the same principle for embedding exotic characters should be valid even for other alphabetic scripts (e.g., Cyrillic and Greek), while a reversed approach may sometimes be suitable for alphabetic characters put into a CJK context. For better typography, a couple of clarifications might be added to the generic linebreaking rules: - symbol characters (such as @, $ and %) should generally be treated similarly to alphabetic characters - two adjacent hyphens could be considered equivalent to a single em-dash.
I have updated the URL, because the original one no longer exists. The page is exactly the same.
Really? I can access to the original one.
Oops. The original one is redirected to the new one.
(In reply to comment #147) > The language-specific additional rules may be specified as needed, for example: > - in French, a line-break is not allowed after a space if it is followed by an > exclamation mark, a question mark, a colon, a semicolon or a closing guillemet, > nor if it is preceded by an opening guillemet. > I this case there should be nonbreakable thin space (U+202F) instead of standard space (U+0020). Thin space because it looks better and nonbreakable because it suppress line break on white space character. I.e. you don't need any special rule. If web page authors wrote typographically clear text, web browser wouldn't need crystal ball. > Of course one can come by many more language-specific rules, but they can be > added little by little, as native speakers start to point out the deficiencies. > For example in Czech, hyphen (U+002D) can be used between two words which are tightly connected (e.g. black-white). If you want to break this compound word on the hyphen, you will need to repeat the hyphen on the next line: … the black- -white pattern … So, this is example how your proposed rules break typography in non-English language. If I could, I would allow word wrapping only before breakable white characters (and on extraordinary long strings). The other rules (like breaking at hyphen/dash) are language specific and therefore should be implemented independently as a language add-on.
Is there anywhere that the "FIXED" version can be downloaded and tested? The bug is definitely not fixed in Firefox 3.0b4 under Windows Vista Business, when viewing the example page:
We're breaking after some hyphens on that page, but not others where we should. I'm not sure why, we should be able to break after the hyphen in "arabino-hexopyranoside" for example. Need a reduced testcase.
And it should probably go in a new bug.
(In reply to comment #156) The originally-reported problem has not been fixed, so there is no justification for opening a new bug. There are 162 hyphens in the IUPAC cell of the example page, but the "FIXED" version only seems able to break after 5 of them.
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
I only just realized that none of the patches here ever got checked in, so reusing this bug does make sense, sorry about that.
Status: REOPENED → RESOLVED
Last Resolved: 12 years ago → 11 years ago
Resolution: --- → FIXED
I'm afraid that I don't have the cycles to work on this for Gecko 1.9. A reduced testcase would help us get it fixed in the next release.
Flags: wanted-next+
oops
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
(In reply to comment #157) > There are 162 hyphens in the IUPAC cell of the example page, but the "FIXED" > version only seems able to break after 5 of them. Please accept my apologies for the incorrect information in #157. Robert O'Callahan is absolutely correct, we do need a simpler test case. Try this file: and the long systematic names DO wrap. My thanks to everyone who has worked on this bug. However, the data sheets in my pesticide website still don't display properly in Firefox 3.04b. This is now because of problems with wrapping InChIs, which did not exist when I first filed the bug. Here is a test file for InChIs: Lines are not being broken after hyphens (ASCII decimal 45 or hexa 0x2D) that separate 2 numbers: 52-37-25-16-26-38-52,47-59(9,10)53-39-27-17-28-40-53 resulting in some very long lines. Interner Explorer, Safari and Opera do break after these hyphens. Would allowing breaks after these hyphens in Firefox cause problems for any other data? If not, would it be simple to amend the new wrapping code? Firefox is also not breaking after hyphens like these: 33+,34-,35-,36-,37-, Internet Explorer and Safari do break after these hyphens, but although this allows wrapping, it puts a comma as the first character in a line, which does not look good. Would it cause any problems to allow breaks after these commas?
> Firefox is also not breaking after hyphens like these: > 33+,34-,35-,36-,37-, I have no idea what that string might be about, but I recognize that this can be a genuine issue for chemists. Nevertheless, I don't like the idea of allowing breaks after commas. If I see a break after a comma, I generally assume that there is a whitespace after the comma -- but I suppose it would be a (big?) mistake in this case. Seeing a line beginning with a comma could be distracting, but at least it would give me a hint that there is something exceptional going on. Unfortunately, allowing breaks between a hyphen and a comma may cause other problems. For example, in Finnish it is possible for a (compound) word to end with an elliptical hyphen (indicating that the last part of the compound has been omitted), and sometimes the hyphen may be followed by a comma. Now, as the comma would normally be followed by a whitespace, I grant that usually the odd comma is likely to fit in the same line as the preceding word with hyphen. But occasionally there would not be enough space and the comma would have to be moved to the next line. I think this would be rather unfortunate in a natural language context, where the reader expects the text to flow according to the general orthographic conventions. There may be other problematic cases too. In many languages, comma is used as the decimal marker (instead of the decimal point, as in English). If, in addition, a leading zero is replaced with a hyphen, you may see strings such as "-,50" (e.g., in a price tag). Here it would clearly be undesirable to break after the hyphen (and even more so after the comma). Perhaps some kind of an emergency break rule could be composed, though. The rule could allow exceptional breaks between a hyphen and a comma, but only in very long strings and if there was no better break opportunity within 10 (or even more?) characters.
(In reply to comment #162) > I have no idea what that string might be about, but I recognize that this can > be a genuine issue for chemists. I don’t really think that this is a “genuine issue”, as you put it.. This should give authors what they want 100% of the time, provided that the browser supports the behavior. The described behavior for U+2010 Hyphen was supposed to have been addressed in Bug 388096 (which I created to address Comment #136); unfortunately, I don’t have a Moz. Firefox 3 beta build installed with which to verify that it was indeed fixed. Opera 9.26, Safari 3.1 (525.7) (beta), and Win. Internet Explorer 7.0.5730.13 all seem to support automatic line‐breaks after U+2010 Hyphen, or, at least, they pass the test case that I wrote for that bug, so cross‐browser compatibility shouldn’t be an issue here.
(In reply to comment #163) >. Sorry, but this is not a solution for InChIs. They are specified by IUPAC as containing only ASCII characters, and so the horizontal line has to be the hyphen-minus.
(In reply to comment #164) > (In reply to comment #163) > > If chemists or others really desire automatic line‐breaking behavior, > > they should be using the more specific character, U+2010 Hyphen, > > Sorry, but this is not a solution for InChIs. They are specified by IUPAC as > containing only ASCII characters, and so the horizontal line has to be the > hyphen-minus. > And we are back to soft hypen which Firefox does not still support.
(In reply to comment #165) > And we are back to soft hypen which Firefox does not still support. > Reverting. Bug #9101 seems implementing soft hyphen. Accually, you have to choices now: (1) Use proper Unicode characters which signals where the word break is acceptible (e.g. by adding soft hyphen) or (2) Use ASCII only and be disappointed with suboptimal automatic emeregency break algorithm. IMHO, (2) will never be good enough unless you provide some mark-up that the string is IUPAC compliant chemical name and firefox implements some IUPAC-specific algorithms. This doesn't apply only to chemical names, it's applyable to natural languages too. You can start introducing new langague code <span xml:1-methyl-propan</span> and then we can talk about (3) addind langauge specific hyphenation algorithms. (I believe this is the right way besides (1).)
> (In reply to comment #157) > Interner Explorer, Safari and Opera do break after these hyphens. Would > allowing breaks after these hyphens in Firefox cause problems for any other > data? Allowing line‐breaks after a Hyphen‐Minus character separating two double‐digit numbers could result in line‐breaks within ranges/comparisons (e.g., 20-30), dates (e.g., 03-17-08), and numeric IDs (e.g., Figure 14-20). If you allow them between numbers of arbitrary size, then you may also get breaks within things like phone/fax numbers (e.g., 123-456-7890), zip codes (e.g., 12345-6789), S.S. numbers (e.g., 123-45-0345), serial numbers, etc. If allowing after hyphens immediately followed by a number, you might get breaks between terms such as Final Fantasy X-2, Carbon-14, ISO-8859-1, etc. Of course, most of the above could be addressed with use of more specific characters too… (In reply to comment #164) > Sorry, but this is not a solution for InChIs. They are specified by IUPAC as > containing only ASCII characters, and so the horizontal line has to be the > hyphen-minus. Perhaps this issue should be addressed by IUPAC instead? Could you get around this issue by use of the shorter InChIKey format? Maybe implementation of the CSS3 Text text-wrap: unrestricted or word-wrap: break-all declarations would address the issue? (In reply to comment #166) > (1) Use proper Unicode characters which signals where the word break is > acceptible (e.g. by adding soft hyphen) or I don’t think that use of U+00AD Soft Hyphen is what Alan is looking for. He wants line‐breaks after hyphens that are intended to always be invisible. Soft hyphens are visible only when they occur just before a line‐break.
Firefox 3 seems to break lines on hard hyphens, while Firefox 2 does not. Firefox 2 has the correct behavior. The HTML 4.01 specification at states: "In HTML, there are two types of hyphens: the plain hyphen and the soft hyphen. The plain hyphen should be interpreted by a user agent as just another character. The soft hyphen tells the user agent where a line break can occur." Firefox should not be break lines on hard hyphens -- is it possible to fix this bug in Firefox 3?
HTML 4 does not specify how line breaking should be performed. We're quite within our rights to break lines after hard hyphens.
I'm sorry, I thought that "The plain hyphen should be interpreted by a user agent as just another character" was crystal-clear.
Yes, it must be interpreted as just another character -- meaning that is should always be displayed. The soft hyphen is placed inside a word to mark a place where a line break CAN occur. If the break DOES occur, then it is displayed. Otherwise, it is hidden. So, the soft hyphen is NOT a normal character, because it sometimes is displayed, sometimes isn't. Also, the plain hyphen (or hyphen-minus) is no different from other common characters in regard to line breaks: the user-agent may decide whether to break a line inside a word, using whatever algorithm is appropriate. It just happens that an algorithm to break words on the hyphen is a rather simple one, and works in most Western languages -- while more sophisticated algorithms have to be language-specific, and even then they will make weird mistakes. The HTML spec does not forbid breaking lines inside words, so it does not forbid breaking lines after hyphens.
You said that "the user-agent may decide whether to break a line inside a word, using whatever algorithm is appropriate.". 2. If what you say is true, then how can a site designer tell a remote user-agent to not split on a hyphen under any circumstances? Using , we can address this problem for "inter-word space." Is there a corresponding solution for "intra-word space"? (I think that the answer is no because intra-word space is not to be inserted arbitrarily.) I suspect that the PRE environment is not really the right answer, either, since it carries other semantic baggage as well.
Many languages, including Thai and Chinese, do not use spaces to indicate word breaks. They have paragraphs of text containing no spaces at all. You can use white-space:nowrap to inhibit line breaking. It can be applied to inline elements. I'm not sure what this bug is about anymore. We should probably close it and have people file new bugs about issues which are clearly actual bugs.
I think the reason the bug remains open is that the rules from UAX14 aren’t implemented yet? What is implemented now is based off what IE does (as far as I understand), but I don’t think it is complete/applies to all languages... Anyway, a clear, fresh bug about that is probably better than this one with its 173 comments. The basic behaviour that the bug reporter describes has been implemented.
(In reply to comment #172) >. Yes there is. Right on the same page you pointed, in , you can see: "When formatting text, user agents should identify these words and lay them out according to the conventions of the particular written language (script) and target medium." So: as long as the conventions of a language allow it, and the HTML spec does not forbid it, then the user-agent may do it. In fact, if the programmers bother to do it, the user-agent may even break words that DON'T have hyphens -- as long as they follow the rules of the language (meaning, identifying syllables correctly and such). But it's hard doing it right, in particular for the Web, where you can't trust the language to be identified correctly, so AFAIK nobody went to the trouble. Breaking after hyphens, however, it's easy in comparison, and for most people is an acceptable compromise -- hyphenated words tend to be long, so allowing them to be broken lessens the problem of ugly lines with too much white space between words.
(In reply to comment #172) > 1. Is there a specification which indicates that intra-word space is permitted? > I must be reading the wrong document. You might want to note that the title of this bug references Unicode Standard Annex #14 (UAX #14). You can find a link to it in comment #64 or, more specifically, you can find the information that you’re looking for at <>; search for the string “HY: Hyphen”. It specifically says that, in situations where the U+002D HYPHEN-MINUS character is used as a hyphen, there’s a line break opportunity after the character. > 2. If what you say is true, then how can a site designer tell a remote > user-agent to not split on a hyphen under any circumstances? You should be able to use either a U+2011 NON-BREAKING HYPHEN character in place of the HYPHEN-MINUS character or the U+2060 WORD JOINER character immediately subsequent to the HYPHEN-MINUS character to get this behavior. (Firefox 3 doesn’t seem to support for the latter though; I don’t see a bug report for it either.)
(In reply to comment #164) > Sorry, but this is not a solution for InChIs. They are specified by IUPAC as > containing only ASCII characters, and so the horizontal line has to be the > hyphen-minus. Firefox 3.1a2 has introduced support for the CSS3 property word-wrap: break-word. This now makes it possible to break InChIs nicely in Firefox, with an appropriate style applied to them. See my updated test file: As far as I am concerned, this bug can now be closed. My thanks to the Firefox developers.
OK, thanks Alan!
Status: REOPENED → RESOLVED
Last Resolved: 11 years ago → 11 years ago
Resolution: --- → WORKSFORME
Whiteboard: [webcompat] | https://bugzilla.mozilla.org/show_bug.cgi?id=95067 | CC-MAIN-2019-18 | refinedweb | 11,290 | 62.98 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.