25 April 2013 18:48 [Source: ICIS news] HOUSTON (ICIS)--Chemical shipments on Canadian railroads rose by 9.2% year on year to 13,388 railcar loadings in the week ended 20 April, marking their 16th straight weekly increase this year, according to data released by a rail industry association on Thursday. In the previous week, ended 13 April, US chemical car loadings fell by 5.7%. Overall Canadian loadings rose by 1.2% to 83,540, and overall Mexican loadings rose by 2.5% to 15,107.
http://www.icis.com/Articles/2013/04/25/9662725/canada-chem-railcar-traffic-rises-for-16th-straight.html
package org.eclipse.swt.dnd;

/**
 * The <code>TransferData</code> class is a platform specific data structure for
 * describing the type and the contents of data being converted by a transfer agent.
 *
 * <p>As an application writer, you do not need to know the specifics of
 * TransferData. TransferData instances are passed to a subclass of Transfer
 * and the Transfer object manages the platform specific issues.
 * You can ask a Transfer subclass if it can handle this data by calling
 * Transfer.isSupportedType(transferData).</p>
 *
 * <p>You should only need to become familiar with the fields in this class if you
 * are implementing a Transfer subclass and you are unable to subclass the
 * ByteArrayTransfer class.</p>
 */
public class TransferData {
    /**
     * The type is a unique identifier of a system format or user defined format.
     * (Warning: This field is platform dependent)
     * <p>
     * <b>IMPORTANT:</b> This field is <em>not</em> part of the SWT
     * public API. It is marked public only so that it can be shared
     * within the packages provided by SWT. It is not available on all
     * platforms and should never be accessed from application code.
     * </p>
     */
    public int type;

    // attributes specific to set/get
    int length;
    int format;
    int pValue;

    /**
     * The result field contains the result of converting a
     * java data type into a platform specific value.
     * (Warning: This field is platform dependent)
     * <p>
     * <b>IMPORTANT:</b> This field is <em>not</em> part of the SWT
     * public API. It is marked public only so that it can be shared
     * within the packages provided by SWT. It is not available on all
     * platforms and should never be accessed from application code.
     * </p>
     * <p>The value of result is 1 if the conversion was successful.
     * The value of result is 0 if the conversion failed.</p>
     */
    int result;
}
http://kickjava.com/src/org/eclipse/swt/dnd/TransferData.java.htm
Hi All, I am currently setting up my new project to drive some WS2815 LED strips using incoming DMX data. I'm using the TeensyDMX library to read the incoming DMX data and driving the LEDs with FastLED from the incoming values. I know the setup is working, as I have used the BasicReceive example and it works perfectly - the serial monitor correctly shows any change in channel 1 data and channels 10, 11 & 12. I have also created a super basic "moving dot" sketch for driving the LEDs with FastLED, and this also works perfectly. However, when I come to merge the "moving dot" sketch into the BasicReceive sketch to use the received values to change the dot's colour, it doesn't work - the lights flicker in the right direction as if trying to do the moving dot (and they do change colour according to the DMX data), and the serial monitor does show the right values, but only after a significant lag, and erratically. It seems that the Teensy is unable to drive the WS2815 LEDs and ALSO receive the serial data at the same time. I have included the code below:

Code:
#include <FastLED.h>

#define DATA_PIN 8
#define NUM_LEDS_PER_STRIP 72
#define NUM_STRIPS 1
#define NUM_LEDS NUM_LEDS_PER_STRIP

CRGB leds[NUM_LEDS_PER_STRIP * NUM_STRIPS];

volatile byte r1 = 255;
volatile byte g1 = 255;
volatile byte b1 = 0;

#include <cstring>
#include <TeensyDMX.h>

namespace teensydmx = ::qindesign::teensydmx;

// Create the DMX receiver on Serial1.
teensydmx::Receiver dmxRx{Serial1};

// The last value on the channel, for knowing when to print a change
// (Example 1).
uint8_t lastValue = 0;

// Buffer in which to store packet data (Example 2).
uint8_t packetBuf[3]{0};

// The last values received on channels 10-12, initialized to zero.
uint8_t rgb[3]{0};

void setup() {
  // Serial initialization, for printing things
  Serial.begin(115200);
  while (!Serial && millis() < 4000) {
    // Wait for initialization to complete or a time limit
  }
  Serial.println("Starting BasicReceive.");

  // Turn on the LED, for indicating activity
  pinMode(LED_BUILTIN, OUTPUT);
  digitalWriteFast(LED_BUILTIN, HIGH);

  // Start the receiver
  dmxRx.begin();

  // Print the first values
  lastValue = dmxRx.get(1);
  Serial.printf("Channel 1: %d\n", lastValue);

  // Note: If this reads < 3 bytes then the other values will stay at zero
  // (because 'rgb' was initialized to zero, above)
  dmxRx.readPacket(rgb, 10, 3);
  Serial.printf("RGB: %d %d %d\n", rgb[0], rgb[1], rgb[2]);

  FastLED.addLeds<NUM_STRIPS, WS2812B, DATA_PIN, GRB>(leds, NUM_LEDS_PER_STRIP);
}

void loop() {
  // The following two examples print values when they change

  // Example 1. Get the current value of channel 1.
  // This will return zero for no data (and also for data that's zero)
  uint8_t v = dmxRx.get(1);
  if (v != lastValue) {
    lastValue = v;
    Serial.printf("Channel 1: %d\n", v);
  }

  // Example 2. Read channels 10-12.
  // A return of -1 means no data, and a value < 3 means that there was data,
  // but the received packet wasn't large enough to contain channels 10-12
  int read = dmxRx.readPacket(packetBuf, 10, 3);
  if (read == 3) {
    if (memcmp(packetBuf, rgb, 3) != 0) {
      memcpy(rgb, packetBuf, 3);
      Serial.printf("RGB: %d %d %d\n", rgb[0], rgb[1], rgb[2]);
    }
  }

  for (int dot = 0; dot < NUM_LEDS; dot++) {
    leds[dot].r = rgb[0];
    leds[dot].g = rgb[1];
    leds[dot].b = rgb[2];
    FastLED.show();

    // clear this led for the next time around the loop
    leds[dot].r = 0;
    leds[dot].g = 0;
    leds[dot].b = 0;
    delay(50);
  }
}
https://forum.pjrc.com/threads/61274-TeensyDMX-and-FastLED-cant-get-them-to-work-simultaneously?s=aa92487155a571bae3edbd2b3cf18475&p=242557&viewfull=1
The large public cloud vendors, including Amazon, Microsoft and Google, have invested heavily to provide Infrastructure as a Service (IaaS) to their customers. The result of this intense competition has been a race to the bottom for pricing of basic compute and storage services. Great news for the customers. We are also beginning to see a similar convergence around Platform as a Service as more and more basic tooling for building cloud apps becomes standard. Clearly each vendor has some unique capabilities, but the real differentiation between vendors is taking place at the level of cloud software-as-a-service (SaaS) offerings. The most interesting areas for science-relevant services are machine learning, stream analytics and big data analysis tools. Each of the three big cloud vendors has offerings in this space, and so do others like IBM, Salesforce and Cloudera. The next few posts will be about some experience using these tools. I will start with AzureML because I have access to it. I decided to redo some of my streaming scientific text analysis projects (described earlier: part 1 and part 2) using Microsoft's new AzureML. I'll return to the full streaming topic using Azure stream analytics in the next post. If you are not familiar with AzureML, the machine learning toolkit for Microsoft Azure, you should give it a try. In fact, you can try it for free. AzureML is based on a "drag-and-drop" component composition model where you can build a solution to a machine learning problem by dragging parts of the solution from a palette of tools. This post is not intended as a tutorial for AzureML. There are tons of good tutorials online. I would start with the ones on the studio.azureml.net home page. This post is a description of what I was able to do with AzureML for the basic task of classifying scientific documents.
More specifically, we have a collection of RSS feed documents that describe new scientific results and research papers from various sources, but the best stuff for our purposes comes from the Cornell University Library ArXiv RSS feed. Each item in the collection is a tuple consisting of the article title, the abstract and a classification into one of the basic science disciplines, including Physics, Mathematics, Computer Science, Biology and Finance. Here is a sample (abstract elided):

['A Fast Direct Sampling Algorithm for Equilateral Closed Polygons. (arXiv:1510.02466v1 [cond-mat.stat-mech])', '…', 'Physics']

The challenge is to use only the abstract to predict the classification. (As you can see from this example, a reasonable guess might be Physics, Math or Computer Science, so it is not that easy.) A typical AzureML solution looks like the one below, which has been configured to train a neural network to classify the items in our list of science abstracts. It is easy to mistake this diagram for a data flow graph, but it is really a workflow dependency diagram represented as a directed acyclic graph. Each arrow represents a dependency of the output of one task as part of the input of the next. Each box represents one of the analysis subtasks. When you run the training experiment, the subtasks that complete get a green checkmark. It is possible to inspect the result of any subtask by clicking on the tail of the result arrow. Doing this presents you with several possible choices that include saving the result, visualizing it in tabular form or, in some cases, viewing it in an IPython (Jupyter) notebook.

Figure 1. AzureML Studio workflow diagram for the multiclass neural network and the Python module for creating the studio version of the arxivdata data set.

To understand this workflow, it is best to start at the top, which is where the data source comes into the picture. In this case we are going to take the data from an Azure blob storage public archive.
The dataset is sciml_data_arxiv.p, which is a Python pickle file. A recent addition to AzureML that I was very pleased to see was the introduction of a way to build a new component from R or Python. Hence it was easy to write a small Python preprocessing module that could read the data, clean it up a bit and present it to the rest of AzureML studio. The data interface between Python and AzureML is based on Pandas data frames. The output of the Python module can be accessed on the output labeled 1. We could have fed this directly into the next stage of the workflow, but we can also save it to AzureML studio. We have done that in this case, and we used a copy of that dataset as the box "arxivdata". The data set has three columns; each row represents one document and is a triple (classification, the abstract of the document, the title of the document). As we move through the workflow we will add columns and, for various tasks, restrict attention to only a few of them.

The second box down is "Feature Hashing". This box builds a vectorizer based on the vocabulary in the document collection. This version comes from the Vowpal Wabbit library, and its role is to convert each document into a numerical vector corresponding to the key words and phrases in the document collection. This numeric representation is essential for the actual machine learning phase. To create the vector, we tell the feature hasher to look only at the document text. On the output, the vector of numeric values for the abstract text is appended to the tuple for each document. Our table now has a very large number of columns: class, the document text, the title, vector[0], ..., vector[n], where n is the number of "features".

In the next box, "Split Data", we split the resulting table into a training set and a test set. In this case we have configured the Split Data box to put 75% into the training set and the remainder into the test set.
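As an aside, the "hashing trick" that Feature Hashing relies on is easy to sketch in plain Python. This is illustrative only: AzureML's module (from Vowpal Wabbit) uses its own hash function and also handles n-grams and weighting; md5 here is just a stand-in for a stable hash.

```python
import hashlib

def hash_features(text, n_features=16):
    """Map free text to a fixed-length count vector via token hashing."""
    vec = [0] * n_features
    for token in text.lower().split():
        # Hash each token into one of n_features buckets and count it.
        digest = hashlib.md5(token.encode("utf-8")).hexdigest()
        vec[int(digest, 16) % n_features] += 1
    return vec

# Every document, whatever its vocabulary, becomes a length-16 numeric row.
vec = hash_features("a fast direct sampling algorithm for closed polygons")
```

The point is that no global vocabulary needs to be built: the same hash maps the same word to the same column in every document, so the training and test sets line up automatically.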
For the machine learning we need to select a machine learning algorithm and some columns of the data to use for the training. To select the columns for the training we use a "Project Columns" task and select the "class" column and the feature vector components. (We don't need the document text or title.) AzureML has a reasonably large number of the standard machine learning modules. We found three that were good, but by a small margin "Multiclass Neural Network" was the best performer. Each machine learning module has various parameters that can be selected to tune the method. For all the experiments described here, we just used the default parameter settings[i]. The "Train Model" component accepts as one input a binding to one of the ML methods (recall this is not a dataflow graph), and the other input is the projected training data. The output of the Train Model task is not data per se but a trained model that may also be saved for later use. This trained model can now be used to classify our test data, and that is what we do with the "Score Model" component. The Score Model component appends another new column to our table called Scored Labels, which is the classification predicted by the trained model for that row. Finally, we can see how we did by using the "Evaluate Model" component, which computes a confusion matrix. Each row of the matrix tells us how the documents in that class were classified. In this experiment the confusion matrix is shown in Figure 2 below.

Figure 2. Confusion matrix for the ArXiv data set, which includes some duplicates of bio and math documents.

There are several points worth noting here. First, bio and finance documents were recognized with high accuracy. This is somewhat artificial, because documents in those categories were each repeated twice in the original data.
Hence after the splitting (¾ training, ¼ test), a large fraction (about 75%) of those duplicated documents in the test set also appear in the training set, so they are easily recognized. We have a more recent collection of ArXiv documents which does not include any of the training set items. Figure 3 below shows the confusion matrix for this case. It is clear that the classifier had a hard time distinguishing Physics, Math and Computer Science. We have no doubt that we could achieve better results if we fine-tuned the neural net parameters.

Figure 3. Confusion matrix for the multiclass neural network classifier using the more recent ArXiv data.

We will show a better classifier in the second half of this article.

[i] By not tuning the parameters of each ML algorithm we are doing them an injustice. But it takes expert knowledge of what the algorithm does and lots of time to experiment to find the right settings. I was surprised at how well the default parameters worked.

Creating a web service from our trained classifier

One of the most impressive features of AzureML is how easily it can convert a trained model like the one above into a functioning web service. In fact, it is very cool. One click on the palette button for creating a web service transforms the diagram in Figure 1 into the diagram in Figure 4.

Figure 4. A web service that was automatically created from the experiment in Figure 1.

We can test this web service from the studio, or we can go ahead and deploy it to the web. Once it has been deployed, AzureML will even generate the C#, Python or R code you can use to invoke it.
In this case the generated Python code is:

import urllib2
import json

data = {
    "Inputs": {
        "input1": {
            "ColumnNames": ["class", "document", "title"],
            "Values": [["value", "value", "value"],
                       ["value", "value", "value"]]
        },
    },
    "GlobalParameters": {}
}

body = str.encode(json.dumps(data))

url = ' workspaces/5adbae48fb21484b84238008b25d95cb/services/9888e0b134724d0c834f009574275f65/execute?api-version=2.0&details=true'
api_key = 'abc123'  # Replace this with the API key for the web service
headers = {'Content-Type': 'application/json',
           'Authorization': ('Bearer ' + api_key)}

req = urllib2.Request(url, body, headers)

try:
    response = urllib2.urlopen(req)
    result = response.read()
    print(result)
except urllib2.HTTPError, error:
    print("The request failed with status code: " + str(error.code))
    print(error.info())
    print(json.loads(error.read()))

The input is defined by the data template, where you can supply one or more of the arxiv tuples. A copy of an IPython notebook that invokes the web service and computes the confusion matrix is linked here.

Creating a more interesting classifier

The example so far does not fully illustrate the power of the AzureML studio. I decided to try to build a classifier that uses three different machine learning algorithms, all trained on the same data. Then I would use a majority vote to select the winning choice. I have argued in previous posts that picking a single choice for this science data is not a good idea, because science is very multidisciplinary. The example cited above illustrates this point. So I trained two additional ML algorithms: a boosted decision tree and a two-class support vector machine (converted to a multiclass algorithm using a "One-vs-All Multiclass" module I found in the studio palette). I saved the trained models for each. Then I started with the diagram in Figure 4 and began adding things. The result is in Figure 5.

Figure 5. A best-of-three classifier service created by modifying the service in Figure 4.
You will notice that this is almost like three copies of the service in Figure 4. The difference is that I needed to reduce the output to a smaller set of tuples to give to a simple Python module to do the majority voting. This was done with column projections. The leftmost project selects the "class" and "Scored Labels" columns (discarding the title, the document text and the doc vector), the second selects "class" and "Scored Labels", and the third selects only "Scored Labels". Then, using an "Add Column" component, we append the last column to the output of the second project. By doing this we now have two inputs to a Python Script module (which is limited to two dependencies). The Python code inside the script module is shown below. In this version we assume that if all three scored labels disagree we should pick the first (from the multiclass NN classifier) as the main one and, arbitrarily, pick the second scored label as the "second choice". Otherwise, if any two agree, that becomes the first choice.

import pandas as pd

# Param: a pandas.DataFrame
# Param: a pandas.DataFrame
def azureml_main(dataframe1 = None, dataframe2 = None):
    tclass = dataframe1["class"]
    scored1 = dataframe1["Scored Labels"]
    scored2 = dataframe2["Scored Labels"]
    scored3 = dataframe2["Scored Labels (2)"]
    scored = []
    second = []
    lclass = []
    for i in range(0, len(tclass)):
        lclass.extend([tclass[i]])
        if scored2[i] == scored3[i]:
            scored.extend([scored2[i]])
            second.extend([scored1[i]])
        else:
            scored.extend([scored1[i]])
            second.extend([scored2[i]])
    data = {'class': lclass, 'Scored Labels': scored, 'second': second}
    df = pd.DataFrame(data, columns=['class', 'Scored Labels', 'second'])
    # Return value must be a sequence of pandas.DataFrame
    return df,

The web service now returns three values: the original class designation from ArXiv, our best-of-three choice, and a second choice (which may be the same as the best-of-three).
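Stripped of the DataFrame plumbing, the voting rule in the script module is tiny. This standalone sketch mirrors its logic (it is a distillation, not AzureML's API):

```python
def best_of_three(nn_label, tree_label, svm_label):
    """Return (first choice, second choice) from three classifier outputs.

    Mirrors the script module's rule: if the boosted tree and the SVM agree,
    take their label and keep the neural net's as the second choice;
    otherwise fall back to the neural net. A two-vs-one majority involving
    the neural net is still honoured, since its label is the one chosen.
    """
    if tree_label == svm_label:
        return tree_label, nn_label
    return nn_label, tree_label

best_of_three("Physics", "Math", "Math")  # the tree and SVM outvote the NN
best_of_three("CS", "Physics", "Math")    # no agreement: trust the NN
```

Applying this row by row over the projected (class, Scored Labels) columns reproduces the script module's output table.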
If we now look at the confusion matrix for the original arxivdata data (including the training examples), we get Figure 6.

Figure 6. Best-of-three with the original data.

Figure 7 shows the result when we use the dataset arxiv-11-1-15, which contains no documents from the training set.

Figure 7. Best-of-three with the arxiv-11-1-15 data.

The improvement over the original method shown in Figure 3 is about 15%. Of course, we are now giving the classifier two chances to get the answer right. As mentioned above, we could probably do much better by tuning the ML parameters. But the point of this post was to show you what is possible with a very modest effort with AzureML. In the next post we will look at performance issues, scalability and streaming examples.
https://esciencegroup.com/2015/11/
So my goal is to use inheritance, and as you can see I inherited PointExample, which has x, y as variables. Well, for some odd reason my Length seems to be incorrect and I'm not sure where I went wrong. Any guidance will be very much appreciated. Just updated the code and found some more errors that I just fixed... now there are warnings that won't allow me to run the program?...

class PointExample {
    private double x;
    private double y;

    public PointExample(double a, double b) {
        x = a;
        y = b;
    }
}

class Length extends PointExample {
    private double middleLength;
    private double leftLength;
    private double rightLength;

    public Length(double a, double b, double c) {
        super(a, b);
        leftLength = a;
        middleLength = b;
        rightLength = c;
    }

    public class InheritanceTest {
        public void main(String args[]) {
            Length a = new Length(14, 9, 15);
            double overallMiddle = rightLength - middleLength;
            System.out.println("The right length - the middle length = " + overallMiddle);
        }
    }
}

Error received
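For comparison, here is one way the apparent intent could compile. This is a guess at the goal, not the thread's accepted answer: InheritanceTest becomes a top-level class with a static main, and a hypothetical accessor (overallMiddle) exposes the private fields that main cannot reach directly.

```java
class PointExample {
    private double x;
    private double y;

    public PointExample(double a, double b) {
        x = a;
        y = b;
    }
}

class Length extends PointExample {
    private double middleLength;
    private double leftLength;
    private double rightLength;

    public Length(double a, double b, double c) {
        super(a, b);
        leftLength = a;
        middleLength = b;
        rightLength = c;
    }

    // Hypothetical accessor: other classes cannot read the private
    // fields, so the subtraction is done here instead of in main.
    public double overallMiddle() {
        return rightLength - middleLength;
    }
}

public class InheritanceTest {
    public static void main(String[] args) {
        Length a = new Length(14, 9, 15);
        System.out.println("The right length - the middle length = " + a.overallMiddle());
    }
}
```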
https://www.daniweb.com/programming/software-development/threads/394183/finding-the-middle-length
Generate MIDI files from time series data. You can control what octaves and octave ranges you want.

Project description

Do you have time series data you want to play as music? Of course you do!

MIDITime converts any kind of time series data into pitch, velocity and duration values based on musical options that you set up, then outputs a .mid file. MIDI files aren't technically audio – they're instructions on how software instruments should be played. You can either play .mid files directly in some music applications, or import them into a wide variety of music editors (like ProTools, Ableton, MaxMSP) and add a ton of bells and whistles to get broadcast-ready audio.

We used MIDITime to produce the data sonification in this episode of Reveal. The musical track – without the talking – is here.

Installing

pip install miditime

Usage

Very basic:

from miditime.miditime import MIDITime

# Instantiate the class with a tempo (120bpm is the default) and an output file destination.
mymidi = MIDITime(120, 'myfile.mid')

# Create a list of notes. Each note is a list: [time, pitch, velocity, duration]
midinotes = [
    [0, 60, 127, 3],   # At 0 beats (the start), Middle C with velocity 127, for 3 beats
    [10, 61, 127, 4]   # At 10 beats (12 seconds from start), C#5 with velocity 127, for 4 beats
]

# Add a track with those notes
mymidi.add_track(midinotes)

# Output the .mid file
mymidi.save_midi()

A little more fun, a lot more control:

Instantiate the class with a tempo (120bpm is the default), an output file destination, the number of seconds you want to represent a year in the final song (default is 5 sec/year), the base octave (C5 is middle C, so the default is 5), and how many octaves you want your output to range over (default is 1).

from miditime.miditime import MIDITime
mymidi = MIDITime(120, 'myfile.mid', 5, 5, 1)

Bring in some data (this is some earthquakes). I'm assuming your data is already in date order, from oldest to newest.

Convert your date/time data into an integer, like days since the epoch (Jan. 1, 1970). You can use the days_since_epoch() helper method, or not:

my_data_epoched = [{'days_since_epoch': mymidi.days_since_epoch(d['event_date']), 'magnitude': d['magnitude']} for d in my_data]

Convert your integer date/time to something reasonable for a song. For example, at 120 beats per minute, you'll need to scale the data down a lot to avoid a very long song if your data spans years. This uses the seconds_per_year attribute you set at the top, so if your date is converted to something other than days you may need to do your own conversion. But if your dataset spans years and your dates are in days (with fractions is fine), use the beat() helper method:

my_data_timed = [{'beat': mymidi.beat(d['days_since_epoch']), 'magnitude': d['magnitude']} for d in my_data_epoched]

Get the earliest date in your series so you can set that to 0 in the MIDI:

start_time = my_data_timed[0]['beat']

Set up some functions to scale your other variable (magnitude in our case) to match your desired mode/key and octave range. There are helper methods to assist this scaling, very similar to a charting library like D3. You can choose a linear or logarithmic scale:

def mag_to_pitch_tuned(magnitude):
    # Where does this data point sit in the domain of your data?
    # (I.e. the min magnitude is 3, the max is 5.7.)
    scale_pct = mymidi.linear_scale_pct(3, 5.7, magnitude)

    # Another option: Linear scale, reverse order. The optional 'True' means
    # the scale is reversed, so the highest value will return the lowest
    # percentage.
    # scale_pct = mymidi.linear_scale_pct(3, 5.7, magnitude, True)

    # Another option: Logarithmic scale, reverse order
    # scale_pct = mymidi.log_scale_pct(3, 5.7, magnitude, True)

    # Pick a range of notes. This allows you to play in a key.
    c_major = ['C', 'D', 'E', 'F', 'G', 'A', 'B']

    # Find the note that matches your data point
    note = mymidi.scale_to_note(scale_pct, c_major)

    # Translate that note to a MIDI pitch
    midi_pitch = mymidi.note_to_midi_pitch(note)

    return midi_pitch

Now build your note list:

note_list = []
for d in my_data_timed:
    note_list.append([
        d['beat'] - start_time,
        mag_to_pitch_tuned(d['magnitude']),
        100,  # velocity
        1     # duration, in beats
    ])

And finish:

# Add a track with those notes
mymidi.add_track(note_list)

# Output the .mid file
mymidi.save_midi()

License

This software is released under an MIT license. It would be awful nice if you credited Reveal and Michael Corey somehow if you use this to make something awesome.

Credits

Many thanks to Julia Smith for helping me to understand musical keys/modes better.

MIDITime is a wrapper around the actual midi-making hotness of midiutil, produced by Mark Conway Wirt. I have included midiutil in this package per his recommendation.
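For the curious, the scaling helpers used above behave roughly like this plain-Python sketch. The behavior is assumed from the descriptions in this README, not taken from MIDITime's actual source:

```python
import math

def linear_scale_pct(lo, hi, value, reverse=False):
    """Where does value sit in [lo, hi], as a 0-1 fraction?"""
    pct = (value - lo) / float(hi - lo)
    return 1 - pct if reverse else pct

def log_scale_pct(lo, hi, value, reverse=False):
    """Same idea on a logarithmic scale (lo, hi, value must be > 0)."""
    pct = (math.log(value) - math.log(lo)) / (math.log(hi) - math.log(lo))
    return 1 - pct if reverse else pct

def scale_to_note(pct, notes):
    """Map a 0-1 fraction onto a list of note names, e.g. a C major scale."""
    index = min(int(pct * len(notes)), len(notes) - 1)
    return notes[index]

c_major = ['C', 'D', 'E', 'F', 'G', 'A', 'B']
scale_to_note(linear_scale_pct(3, 5.7, 5.7), c_major)  # strongest quake -> 'B'
```

The reverse flag simply flips the fraction, which is why a reversed scale sends the largest magnitude to the lowest note.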
https://pypi.org/project/miditime/
Originally published by Dammnn at Itnext.

This article returns to the subject of Java's data types and operators. It discusses arrays, the String type, the bitwise operators, and the ? ternary operator. It also covers Java's for-each style for loop. Along the way, command line arguments are described!

An array is a collection of variables of the same type, referred to by a common name. In Java, arrays can have one or more dimensions, although the one-dimensional array is the most common. Arrays are used for a variety of purposes because they offer a convenient means of grouping together related variables. For example, you might use an array to hold a record of the daily high temperature for a month, a list of stock price averages, or a list of your collection of programming books.

The principal advantage of an array is that it organizes data in such a way that it can be easily manipulated. For example, if you have an array containing the incomes for a selected group of households, it is easy to compute the average income by cycling through the array. Also, arrays organize data in such a way that it can be easily sorted.

Although arrays in Java can be used just like arrays in other programming languages, they have one special attribute: they are implemented as objects. This fact is one reason that a discussion of arrays was deferred until objects had been introduced. By implementing arrays as objects, several important advantages are gained, not the least of which is that unused arrays can be garbage collected.

A one-dimensional array is a list of related variables. Such lists are common in programming. For example, you might use a one-dimensional array to store the account numbers of the active users on a network. Another array might be used to store the current averages for a baseball team.
To declare a one-dimensional array, you can use this general form:

type array-name[] = new type[size];

Here, type declares the element type of the array. The element type determines the data type of each element contained in the array. The number of elements that the array will hold is determined by size. Since arrays are implemented as objects, the creation of an array is a two-step process. First, you declare an array reference variable. Second, you allocate memory for the array, assigning a reference to that memory to the array variable. Thus, arrays in Java are dynamically allocated using the new operator. Here is an example. The following creates an int array of 10 elements and links it to an array reference variable named sample:

int sample[];
sample = new int[10];

In this case, when sample is first created, it refers to no physical object. It is only after the second statement executes that sample is linked with an array. An individual element within an array is accessed by use of an index. An index describes the position of an element within an array. In Java, all arrays have zero as the index of their first element. Because sample has 10 elements, it has index values of 0 through 9. To index an array, specify the number of the element you want, surrounded by square brackets. Thus, the first element in sample is sample[0], and the last element is sample[9].
For example, the following program loads sample with the numbers 0 through 9:

public class arraydemo {
    public static void main(String[] args) {
        int sample[] = new int[10];
        int i;

        for(i = 0; i < 10; i = i+1)
            sample[i] = i;

        for(i = 0; i < 10; i = i+1)
            System.out.println("This is sample[" + i + "]: " + sample[i]);
    }
}

The output from the program is shown here:

This is sample[0]: 0
This is sample[1]: 1
This is sample[2]: 2
This is sample[3]: 3
This is sample[4]: 4
This is sample[5]: 5
This is sample[6]: 6
This is sample[7]: 7
This is sample[8]: 8
This is sample[9]: 9

Arrays are common in programming because they let you deal easily with large numbers of related variables. For example, the following program finds the minimum and maximum values stored in the nums array by cycling through the array using a for loop:

public class minmax {
    public static void main(String[] args) {
        int nums[] = new int[10];
        int min, max;

        nums[0] = 99;
        nums[1] = -10;
        nums[2] = 100123;
        nums[3] = 18;
        nums[4] = -978;
        nums[5] = 5623;
        nums[6] = 463;
        nums[7] = -9;
        nums[8] = 287;
        nums[9] = 49;

        min = max = nums[0];
        for(int i = 1; i < 10; i++) {
            if(nums[i] < min) min = nums[i];
            if(nums[i] > max) max = nums[i];
        }
        System.out.println("min and max: " + min + " " + max);
    }
}

The output of the program is shown here:

min and max: -978 100123

In the preceding program, the nums array was given values by hand, using 10 separate assignment statements. Although perfectly correct, there is an easier way to accomplish this. Arrays can be initialized when they are created. The general form for initializing a one-dimensional array is shown here:

type array-name[] = {val1, val2, val3, ..., valN};

Here, the initial values are specified by val1 through valN. They are assigned in sequence, left to right, in index order. Java automatically allocates an array large enough to hold the initializers that you specify. There is no need to explicitly use the new operator.
For example, here is a better way to write the MinMax program:

public class Minmax2 {
    public static void main(String[] args) {
        int nums[] = {99, -10, 100123, 213, -9873, 5623, -9, 287, 49};
        int min, max;

        min = max = nums[0];
        for(int i = 1; i < 9; i++) {
            if(nums[i] < min) min = nums[i];
            if(nums[i] > max) max = nums[i];
        }
        System.out.println("Min and max: " + min + " " + max);
    }
}

Array boundaries are strictly enforced in Java; it is a runtime error to overrun or underrun the end of an array. If you want to confirm this for yourself, try the following program that purposely overruns an array:

public class arrayerr {
    public static void main(String[] args) {
        int sample[] = new int[10];
        int i;

        // generate an array overrun
        for(i = 0; i < 100; i = i+1)
            sample[i] = i;
    }
}

As soon as i reaches 10, an ArrayIndexOutOfBoundsException is generated and the program is terminated.

Although the one-dimensional array is the most commonly used array in programming, multidimensional arrays are certainly not rare. In Java, a multidimensional array is an array of arrays. The simplest form of the multidimensional array is the two-dimensional array. A two-dimensional array is, in essence, a list of one-dimensional arrays. To declare a two-dimensional integer array table of size 10, 20 you would write:

int table[][] = new int[10][20];

Pay careful attention to the declaration. Unlike some other computer languages, which use commas to separate the array dimensions, Java places each dimension in its own set of brackets. Similarly, to access element 3, 5 of array table, you would use table[3][5].

public class twod {
    public static void main(String[] args) {
        int t, i;
        int table[][] = new int[3][4];

        for(t = 0; t < 3; ++t) {
            for(i = 0; i < 4; ++i) {
                table[t][i] = (t*4)+i+1;
                System.out.print(table[t][i] + " ");
            }
            System.out.println();
        }
    }
}

In this example, table[0][0] will have the value 1, table[0][1] the value 2, table[0][2] the value 3, and so on.
The value of table[2][3] will be 12. Conceptually, the array will look like this:

```
              0   1   2   3
table[0]      1   2   3   4
table[1]      5   6   7   8
table[2]      9  10  11  12
```

When you allocate memory for a multidimensional array, you need to specify only the memory for the first dimension. You can allocate the remaining dimensions separately. For example, the following code allocates memory for the first dimension of table when it is declared. It allocates the second dimensions manually:

```java
int table[][] = new int[3][];
table[0] = new int[4];
table[1] = new int[4];
table[2] = new int[4];
```

Although there is no advantage to individually allocating the second-dimension arrays in this situation, there may be in others. For example, when you allocate dimensions separately, you do not need to allocate the same number of elements for each index. Since multidimensional arrays are implemented as arrays of arrays, the length of each array is under your control. For example, assume you are writing a program that stores the number of passengers that ride an airport shuttle. If the shuttle runs 10 times a day during the week and twice a day on Saturday and Sunday, you could use the riders array shown in the following program to store the information. Notice that the length of the second dimension for the first five indices is 10 and the length of the second dimension for the last two indices is 2.
```java
public class Ragged {
    public static void main(String[] args) {
        int riders[][] = new int[7][];
        riders[0] = new int[10];
        riders[1] = new int[10];
        riders[2] = new int[10];
        riders[3] = new int[10];
        riders[4] = new int[10];
        riders[5] = new int[2];
        riders[6] = new int[2];

        int i, j;

        // fabricate some fake data
        for(i = 0; i < 5; i++)
            for(j = 0; j < 10; j++)
                riders[i][j] = i + j + 10;

        for(i = 5; i < 7; i++)
            for(j = 0; j < 2; j++)
                riders[i][j] = i + j + 10;

        System.out.println("Riders per trip during the week:");
        for(i = 0; i < 5; i++) {
            for(j = 0; j < 10; j++)
                System.out.print(riders[i][j] + " ");
            System.out.println();
        }
        System.out.println();

        System.out.println("Riders per trip on the weekend:");
        for(i = 5; i < 7; i++) {
            for(j = 0; j < 2; j++)
                System.out.print(riders[i][j] + " ");
            System.out.println();
        }
    }
}
```

The use of irregular multidimensional arrays is not recommended for most applications, because it runs contrary to what people expect to find when a multidimensional array is encountered. However, irregular arrays can be used effectively in some situations. For example, if you need a very large two-dimensional array that is sparsely populated, an irregular array might be a perfect solution.

Java allows arrays with more than two dimensions. Here is the general form of a multidimensional array declaration:

```
type name[]...[] = new type[size1][size2]...[sizeN];
```

For example, the following declaration creates a 4 x 10 x 3 three-dimensional integer array:

```java
int multidim[][][] = new int[4][10][3];
```

A multidimensional array can be initialized by enclosing each dimension's initializer list within its own set of curly braces. For example, the general form of array initialization for a two-dimensional array is shown here:

```
type-specifier array_name[][] = {
    { val, val, val, ..., val },
    { val, val, val, ..., val },
    .
    .
    .
    { val, val, val, ..., val }
};
```

Here, val indicates an initialization value. Each inner block designates a row.
Within each row, the first value will be stored in the first position of the subarray, the second value in the second position, and so on. Notice that commas separate the initializer blocks and that a semicolon follows the closing }. For example, the following program initializes an array called sqrs with the numbers 1 through 10 and their squares:

```java
public class Squares {
    public static void main(String[] args) {
        int sqrs[][] = {
            { 1, 1 },
            { 2, 4 },
            { 3, 9 },
            { 4, 16 },
            { 5, 25 },
            { 6, 36 },
            { 7, 49 },
            { 8, 64 },
            { 9, 81 },
            { 10, 100 }
        };
        int i, j;

        for(i = 0; i < 10; i++) {
            for(j = 0; j < 2; j++)
                System.out.print(sqrs[i][j] + " ");
            System.out.println();
        }
    }
}
```

Here is the output of the program:

```
1 1
2 4
3 9
4 16
5 25
6 36
7 49
8 64
9 81
10 100
```

There is a second form that can be used to declare an array:

```
type[] var-name;
```

Here, the square brackets follow the type specifier, not the name of the array variable. For example, the following two declarations are equivalent:

```java
int counter[] = new int[3];
int[] counter = new int[3];
```

The following declarations are also equivalent:

```java
char table[][] = new char[3][4];
char[][] table = new char[3][4];
```

This alternative declaration form offers convenience when declaring several arrays at the same time. For example,

```java
int[] nums, nums2, nums3; // create three arrays
```

This creates three array variables of type int. It is the same as writing

```java
int nums[], nums2[], nums3[]; // also creates three arrays
```

The alternative declaration form is also useful when specifying an array as a return type for a method. For example,

```java
int[] someMeth() { ...
```

This declares that someMeth() returns an array of type int. Because both forms of array declaration are in widespread use, you should be familiar with both.

As with other objects, when you assign one array reference variable to another, you are simply changing what object that variable refers to. You are not causing a copy of the array to be made, nor are you causing the contents of one array to be copied to the other.
For example, consider this program:

```java
public class AssignRef {
    public static void main(String[] args) {
        int i;
        int nums1[] = new int[10];
        int nums2[] = new int[10];

        for(i = 0; i < 10; i++)
            nums1[i] = i;

        for(i = 0; i < 10; i++)
            nums2[i] = -i;

        System.out.println("Here is nums1:");
        for(i = 0; i < 10; i++)
            System.out.print(nums1[i] + " ");
        System.out.println();

        System.out.println("Here is nums2:");
        for(i = 0; i < 10; i++)
            System.out.print(nums2[i] + " ");
        System.out.println();

        nums2 = nums1; // now nums2 refers to nums1

        System.out.println("Here is nums2 after assignment:");
        for(i = 0; i < 10; i++)
            System.out.print(nums2[i] + " ");
        System.out.println();

        // now operate on nums1 array through nums2
        nums2[3] = 99;

        System.out.println("Here is nums1 after assignment:");
        for(i = 0; i < 10; i++)
            System.out.print(nums1[i] + " ");
        System.out.println();
    }
}
```

The output from this program is:

```
Here is nums1:
0 1 2 3 4 5 6 7 8 9
Here is nums2:
0 -1 -2 -3 -4 -5 -6 -7 -8 -9
Here is nums2 after assignment:
0 1 2 3 4 5 6 7 8 9
Here is nums1 after assignment:
0 1 2 99 4 5 6 7 8 9
```

As the output shows, after the assignment of nums1 to nums2, both array reference variables refer to the same object.

Because arrays are implemented as objects, each array has associated with it a length instance variable that contains the number of elements that the array can hold.
Here is a program that demonstrates this property:

```java
public class LengthDemo {
    public static void main(String[] args) {
        int list[] = new int[10];
        int nums[] = { 1, 2, 3 };
        int table[][] = { // a variable-length table
            { 1, 2, 3 },
            { 4, 5 },
            { 6, 7, 8, 9 }
        };

        System.out.println("length of list is " + list.length);
        System.out.println("length of nums is " + nums.length);
        System.out.println("length of table is " + table.length);
        System.out.println("length of table[0] is " + table[0].length);
        System.out.println("length of table[1] is " + table[1].length);
        System.out.println("length of table[2] is " + table[2].length);
        System.out.println();

        // use length to initialize list
        for(int i = 0; i < list.length; i++)
            list[i] = i * i;

        System.out.println("here is a list: ");
        // now use length to display list
        for(int i = 0; i < list.length; i++)
            System.out.print(list[i] + " ");
        System.out.println();
    }
}
```

Here is the output of the program:

```
length of list is 10
length of nums is 3
length of table is 3
length of table[0] is 3
length of table[1] is 2
length of table[2] is 4

here is a list:
0 1 4 9 16 25 36 49 64 81
```

Pay special attention to the way length is used with the two-dimensional array table. As explained, a two-dimensional array is an array of arrays. Thus, when the expression table.length is used, it obtains the number of arrays stored in table, which is 3 in this case. To obtain the length of any individual array in table, you will use an expression such as this:

```java
table[0].length
```

which, in this case, obtains the length of the first array. One other thing to notice in LengthDemo is the way that list.length is used by the for loops to govern the number of iterations that take place. Since each array carries with it its own length, you can use this information rather than manually keeping track of an array's size. Keep in mind that the value of length has nothing to do with the number of elements that are actually in use. It contains the number of elements that the array is capable of holding.

The inclusion of the length member simplifies many algorithms by making certain types of array operations easier, and safer, to perform. For example, the following program uses length to copy one array to another while preventing an array overrun and its attendant runtime exception.
```java
public class ACopy {
    public static void main(String[] args) {
        int i;
        int nums1[] = new int[10];
        int nums2[] = new int[10];

        for(i = 0; i < nums1.length; i++)
            nums1[i] = i;

        // copy nums1 to nums2
        if(nums2.length >= nums1.length)
            for(i = 0; i < nums1.length; i++)
                nums2[i] = nums1[i];

        for(i = 0; i < nums2.length; i++)
            System.out.println(nums2[i] + " ");
    }
}
```

Here, length helps perform two important functions. First, it is used to confirm that the target array is large enough to hold the contents of the source array. Second, it provides the termination condition of the loop that performs the copy. Of course, in this simple example, the sizes of the arrays are easily known, but this same approach can be applied to a wide range of more challenging situations.

When working with arrays, it is common to encounter situations in which each element in an array must be examined, from start to finish. For example, to compute the sum of the values held in an array, each element in the array must be examined. The same situation occurs when computing an average, searching for a value, copying an array, and so on. Because such "start to finish" operations are so common, Java defines a second form of the for loop that streamlines this operation. The second form of the for implements a "for-each" style loop. A for-each loop cycles through a collection of objects, such as an array, in strictly sequential fashion, from start to finish. In recent years, for-each style loops have gained popularity among both computer language designers and programmers. Originally, Java did not offer a for-each style loop. However, with the release of JDK 5, the for loop was enhanced to provide this option. The for-each style of for is also referred to as the enhanced for loop.
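To see what the enhanced for streamlines, it helps to compare the two styles side by side. The sketch below is mine (the class and method names are illustrative, not from the original text); it sums an array once with a hand-indexed loop and once with a for-each loop:

```java
public class ForEachPreview {
    // traditional form: counter, bounds, and indexing are all managed by hand
    static int sumTraditional(int[] nums) {
        int sum = 0;
        for(int i = 0; i < nums.length; i++)
            sum += nums[i];
        return sum;
    }

    // for-each form: each element of nums is assigned to x in turn
    static int sumForEach(int[] nums) {
        int sum = 0;
        for(int x : nums)
            sum += x;
        return sum;
    }

    public static void main(String[] args) {
        int nums[] = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
        System.out.println("Sum (traditional for): " + sumTraditional(nums));
        System.out.println("Sum (for-each): " + sumForEach(nums));
    }
}
```

Both methods produce the same result; the for-each version simply has no counter to get wrong.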
The general form of the for-each style for is shown here:

```
for(type itr-var : collection) statement-block
```

Here, type specifies the type, and itr-var specifies the name of, an iteration variable that will receive the elements from a collection, one at a time, from beginning to end. The collection being cycled through is specified by collection. There are various types of collections that can be used with the for, but the only type used in this book is the array. With each iteration of the loop, the next element in the collection is retrieved and stored in itr-var. The loop repeats until all elements in the collection have been obtained. Thus, when iterating over an array of size N, the enhanced for obtains the elements in the array in index order, from 0 to N-1. Because the iteration variable receives values from the collection, type must be the same as (or compatible with) the elements stored in the collection.

With a traditional for loop, cycling through an array is accomplished by manually indexing the array by i, the loop control variable; furthermore, the starting and ending values for the loop control variable and its increment must be explicitly specified. The for-each style for automates all of this: it always moves through the array in strictly sequential order, and because it cannot run past the end of the array, it also prevents boundary errors.

Here is an example of the for-each version of the for just described. Assuming that nums is an int array holding the values 1 through 10 and that sum is an int initialized to zero, this loop sums only the first five elements of nums:

```java
// sum only the first 5 elements
for(int x : nums) {
    System.out.println("Value is: " + x);
    sum += x;
    if(x == 5) break; // stop the loop when 5 is obtained
}
```

As the break statement shows, it is possible to terminate a for-each loop early, just as with the traditional for.

There is one important point to understand about the for-each style for: its iteration variable is "read-only" as it relates to the underlying array. An assignment to the iteration variable has no effect on the underlying array itself.

The enhanced for also works on multidimensional arrays. Remember, however, that in Java, multidimensional arrays consist of arrays of arrays. To see the implication of this, consider the following program. It uses nested for loops to obtain the elements of a two-dimensional array in row order, from first to last.
```java
public class ForEach2 {
    public static void main(String[] args) {
        int sum = 0;
        int nums[][] = new int[3][5];

        // give nums some values
        for(int i = 0; i < 3; i++)
            for(int j = 0; j < 5; j++)
                nums[i][j] = (i + 1) * (j + 1);

        // use for-each for loop to display and sum the values
        for(int x[] : nums) {
            for(int y : x) {
                System.out.println("Value is: " + y);
                sum += y;
            }
        }
        System.out.println("Summation: " + sum);
    }
}
```

In this program, the outer for loop obtains, one at a time, each of the one-dimensional arrays in nums, beginning with the array specified by nums[0]. The inner for loop then cycles through each of these arrays, displaying the values of each element.

Since the for-each style for can only cycle through an array sequentially, from start to finish, you might think that its use is limited. However, this is not true. A large number of algorithms require exactly this mechanism. One of the most common is searching. For example, the following program uses a for loop to search an unsorted array for a value. It stops if the value is found.

```java
public class Search {
    public static void main(String[] args) {
        int nums[] = { 6, 8, 3, 7, 5, 6, 1, 4 };
        int val = 5;
        boolean found = false;

        // use for-each style for to search nums for val
        for(int x : nums) {
            if(x == val) {
                found = true;
                break;
            }
        }

        if(found)
            System.out.println("Value found!");
    }
}
```

The for-each style for is an excellent choice in this application because searching an unsorted array involves examining each element in sequence. Other types of applications that benefit from for-each style loops include computing an average, finding the minimum or maximum of a set, looking for duplicates, and so on.

From a day-to-day programming standpoint, one of the most important of Java's data types is String. String defines and supports character strings. In many other programming languages, a string is an array of characters. This is not the case with Java. In Java, strings are objects. Actually, you have been using the String class since day one, but you might not know it.
When you create a string literal, you are actually creating a String object. For example, in the statement

```java
System.out.println("In Java, strings are objects.");
```

the string "In Java, strings are objects." is automatically made into a String object by Java. Thus, the use of the String class has been "below the surface" in the preceding programs. In the following sections, you will learn to handle it explicitly. Be aware, however, that the String class is quite large, and I will only scratch its surface here. It is a class that you will want to explore on your own.

You can construct a String just like you construct any other type of object: by using new and calling the String constructor. For example:

```java
String str = new String("Hello");
```

This creates a String object called str that contains the character string "Hello". You can also construct a String from another String. For example:

```java
String str = new String("Hello");
String str2 = new String(str);
```

After this sequence executes, str2 will also contain the character string "Hello". Another easy way to create a String is shown here:

```java
String str = "Java strings are powerful.";
```

In this case, str is initialized to the character sequence "Java strings are powerful."

Once you have created a String object, you can use it anywhere that a quoted string is allowed. For example, you can use a String object as an argument to println(), as shown in this example:

```java
public class StringDemo {
    public static void main(String[] args) {
        // declare strings in various ways
        String str1 = new String("Java strings are objects");
        String str2 = "they are constructed various ways.";
        String str3 = new String(str2);

        System.out.println(str1);
        System.out.println(str2);
        System.out.println(str3);
    }
}
```

The output from the program is shown here:

```
Java strings are objects
they are constructed various ways.
they are constructed various ways.
```

Like any other data type, strings can be assembled into arrays.
For example:

```java
public class StringArrays {
    public static void main(String[] args) {
        String strs[] = { "this", "is", "a", "test." };

        System.out.println("Original array: ");
        for(String s : strs)
            System.out.print(s + " ");
        System.out.println("\n");

        // change a string
        strs[1] = "was";
        strs[3] = "test, too!";

        System.out.println("Modified array: ");
        for(String s : strs)
            System.out.print(s + " ");
    }
}
```

The output of this program is:

```
Original array:
this is a test.

Modified array:
this was a test, too!
```

The contents of a String object are immutable. That is, once created, the character sequence that makes up the string cannot be altered. This restriction allows Java to implement strings more efficiently. Even though this probably sounds like a serious drawback, it isn't. When you need a string that is a variation on one that already exists, simply create a new string that contains the desired changes. Since unused String objects are automatically garbage collected, you don't even need to worry about what happens to the discarded strings. It must be made clear, however, that String reference variables may, of course, change the object to which they refer. It is just that the contents of a specific String object cannot be changed after it is created.

To fully understand why immutable strings are not a hindrance, we will use another of String's methods: substring(). The substring() method returns a new string that contains a specified portion of the invoking string. Because a new String object is manufactured that contains the substring, the original string is unaltered, and the rule of immutability remains intact. The form of substring() that we will be using is shown here:

```
String substring(int startIndex, int endIndex)
```

Here, startIndex specifies the beginning index, and endIndex specifies the stopping point.
Here is a program that demonstrates substring() and the principle of immutable strings:

```java
public class SubStr {
    public static void main(String[] args) {
        String str = "Java makes the web move.";

        // construct a substring
        String substr = str.substring(5, 18);

        System.out.println("str: " + str);
        System.out.println("substr: " + substr);
    }
}
```

Here is the output of the program:

```
str: Java makes the web move.
substr: makes the web
```

As you can see, the original string str is unchanged, and substr contains the substring.

Prior to JDK 7, a switch had to be controlled by an integer type, such as int or char. This precluded the use of a switch in situations in which one of several actions is selected based on the contents of a string. Instead, an if-else-if ladder was the typical solution. Although an if-else-if ladder is semantically correct, a switch statement would be a more natural idiom for such a selection. Fortunately, this situation has been remedied. Today, you can use a String to control a switch. This results in more readable, streamlined code in many situations. For example:

```java
public class StringSwitch {
    public static void main(String[] args) {
        String command = "cancel";

        switch(command) {
            case "connect":
                System.out.println("Connecting");
                break;
            case "cancel":
                System.out.println("Canceling");
                break;
            case "disconnect":
                System.out.println("Disconnecting");
                break;
            default:
                System.out.println("Command Error!");
                break;
        }
    }
}
```

As you would expect, the output from the program is:

```
Canceling
```

The string contained in command is tested against the case statements. When a match is found, the code sequence associated with that case is executed.

Being able to use strings in a switch statement can be very convenient and can improve the readability of some code. For example, using a string-based switch is an improvement over using the equivalent sequence of if/else statements. However, switching on strings can be less efficient than switching on integers.
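For comparison, here is the same selection written as the if-else-if ladder it replaces. The dispatch method and its return values are my illustrative framing, not from the original text:

```java
public class CommandLadder {
    // the same selection as the string switch, as an if-else-if ladder
    static String dispatch(String command) {
        if(command.equals("connect"))
            return "Connecting";
        else if(command.equals("cancel"))
            return "Canceling";
        else if(command.equals("disconnect"))
            return "Disconnecting";
        else
            return "Command Error!";
    }

    public static void main(String[] args) {
        System.out.println(dispatch("cancel"));
    }
}
```

Notice that every branch needs an explicit equals() call; the switch version expresses the same intent more directly.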
Therefore, it is best to switch on strings only in cases in which the controlling data is already in string form. In other words, don't use strings in a switch unnecessarily.

--------------------------------------------

Thanks for reading :heart: If you liked this post, share it with all of your programming buddies!

Follow me on Facebook | Twitter
https://morioh.com/p/6a9522ea9ce6
On 07/12/2013 14:33, Christopher Schultz wrote:
> In this case, it's pretty clear that there is a quite desirable
> feature missing from the spec and I think it might be reasonable
> to violate it in this instance. I'd prefer to get Mark or
> Konstantin to weigh-in on such a step, because it might set a bad
> precedent for Tomcat.

The spec doesn't say the container can't put its own objects into the session user properties collection :)

We already have a custom property to enable users to control the blocking read/write timeout (another spec oversight). I'd have no objection to an org.apache.tomcat.websocket.SERVLET_CONTEXT property being added. If the WebSocket spec adds a property it will be in the javax.websocket namespace and we can always support both. They may opt to simply add a property on the session.

The patch to do this looks to be pretty minimal.

> I'm certainly not going to commit that myself. :)

Trunk and 7.0.x are CTR so you would be well within your rights to commit first. It is often useful to discuss more complex / invasive patches before the commit but I don't think there is much more to discuss here.

Mark

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
For additional commands, e-mail: users-help@tomcat.apache.org
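[Editor's note] To make the mechanism under discussion concrete: a WebSocket Session exposes a Map<String, Object> of user properties, and the proposal above is for Tomcat to place the ServletContext in that map under a container-specific key. The following dependency-free sketch uses a plain Map to stand in for the session's user properties; the key name follows the proposal in the email, while the helper method and the stand-in value are mine:

```java
import java.util.HashMap;
import java.util.Map;

public class UserPropertiesSketch {
    // Key name as proposed in the discussion; hypothetical until released.
    static final String SERVLET_CONTEXT_KEY =
            "org.apache.tomcat.websocket.SERVLET_CONTEXT";

    // An endpoint would call session.getUserProperties() and look up the key.
    static Object getServletContext(Map<String, Object> userProperties) {
        return userProperties.get(SERVLET_CONTEXT_KEY);
    }

    public static void main(String[] args) {
        Map<String, Object> props = new HashMap<>();
        // The container would store the real ServletContext here.
        props.put(SERVLET_CONTEXT_KEY, "stand-in-servlet-context");
        System.out.println(getServletContext(props));
    }
}
```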
http://mail-archives.apache.org/mod_mbox/tomcat-users/201312.mbox/%3C52A36EE9.3010707@apache.org%3E
09-03-2014 02:55 PM

The Occurrence class has a method GetMatrix() which gives a matrix for converting from part coordinates to assembly coordinates and back. But this yields a 16-element array, and I see no methods for using this matrix. Given that there are different ways it could be encoded, I hope somewhere there are methods for multiplying these matrices together and transforming points. Anyone?

09-08-2014 05:23 AM

Alan, I am by no means a matrix expert but I do know there are plenty of 3rd party libraries for handling matrices. Solid Edge leaves it up to you to choose the library of your choice or roll your own. I've worked with math gurus in the past who totally got Solid Edge matrices and had no issue working with them. They knew what they were doing though and I don't...

I threw together a small demo of 2 free libraries that I'm familiar with: OpenTK and SharpGL. Again, I don't for a second pretend that I know how to use them. Just sharing what I do know.

```csharp
using SolidEdgeCommunity;
using SolidEdgeCommunity.Extensions;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace MatrixDemo
{
    class Program
    {
        [STAThread]
        static void Main(string[] args)
        {
            OleMessageFilter.Register();

            var application = SolidEdgeUtils.Connect();
            var assembly = application.GetActiveDocument<SolidEdgeAssembly.AssemblyDocument>();
            var occurrences = assembly.Occurrences;

            foreach (var occurrence in occurrences.OfType<SolidEdgeAssembly.Occurrence>())
            {
                // Allocate array.
                Array matrix = Array.CreateInstance(typeof(double), 16);

                // Get the occurrence matrix.
                occurrence.GetMatrix(ref matrix);

                // Convert Array to double[] for ease of use.
                double[] m = matrix.OfType<double>().ToArray();

                // NuGet package id: OpenTK
                DemoOpenTK(m);

                // NuGet package id: SharpGLCore
                DemoSharpGL(m);
            }

            OleMessageFilter.Unregister();
        }

        static void DemoOpenTK(double[] m)
        {
            var matrix4d = new OpenTK.Matrix4d(
                m[0], m[1], m[2], m[3],
                m[4], m[5], m[6], m[7],
                m[8], m[9], m[10], m[11],
                m[12], m[13], m[14], m[15]);

            matrix4d.ClearProjection();
            matrix4d.ClearRotation();
            matrix4d.ClearScale();
            matrix4d.ClearTranslation();

            // Demo of multiplying matrices. *Static method*
            // OpenTK.Matrix4d.Mult(left, right);
        }

        static void DemoSharpGL(double[] m)
        {
            // Convert array to multidimensional array.
            double[,] ma =
            {
                { m[0], m[1], m[2], m[3] },
                { m[4], m[5], m[6], m[7] },
                { m[8], m[9], m[10], m[11] },
                { m[12], m[13], m[14], m[15] }
            };

            var matrix = new SharpGL.SceneGraph.Matrix(ma);

            // Demo of multiplying matrices. *Static method*
            // SharpGL.SceneGraph.Matrix.Multiply(left, right);
        }
    }
}
```

If it were me, once I settled on a solution, I would simply write extension methods over the Solid Edge API to make working with the matrices easier. There are plenty of examples of how to write Solid Edge API extension methods on GitHub (SolidEdge.Community).

09-08-2014 07:07 AM

Pinging @AdityaG and @jay_carlton to see if their group can offer any guidance or sample code.

09-08-2014 11:13 AM

Since asking the question, I elected to download and wrap a public C# maths library. I have no difficulty working with matrices, except that there are a lot of methods and I preferred to start with something rather than start with nothing. About these extension methods, my impression is there's nothing magic about them and they are simply useful methods that use the SolidEdge API.

09-08-2014 07:44 PM

Solid Edge uses the computer graphics row-major matrix ordering, as opposed to the math textbook's column-major. That means the 16 values are

```
xx xy xz 0
yx yy yz 0
zx zy zz 0
tx ty tz 1
```

consisting of a 3x3 rotation matrix and a translation vector. Given a 3D vector (vx, vy, vz) in the 'local' coordinate system, or space as I like to call it, the premultiplication

```
               [xx xy xz 0]
(vx vy vz 1) X [yx yy yz 0]
               [zx zy zz 0]
               [tx ty tz 1]
```

produces the transformed vector. This applies the rotation followed by the translation. The inverse transformation is formed by multiplying the reverse translation, a negation, followed by the reverse rotation, a transpose:

```
[  1   0   0  0]   [xx yx zx 0]
[  0   1   0  0] X [xy yy zy 0]
[  0   0   1  0]   [xz yz zz 0]
[-tx -ty -tz  1]   [ 0  0  0 1]
```

The appropriate methods should be in the maths library. There is a very brief summary on homogeneous math at the beginning of my article on CodeProject.

09-09-2014 10:06 AM

"The appropriate methods should be in the maths library." I grabbed a library and it has appropriate methods. Did you have something in mind when you said "the maths library"?

09-10-2014 08:50 AM

No, I had nothing in mind.
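[Editor's note] The row-vector convention described in the thread above can be written out directly. The sketch below (in Java purely for illustration; the class and method names are mine and are not part of the Solid Edge API) applies a flattened 16-value row-major matrix to a 3D point, rotation first, then translation, exactly as the premultiplication (vx vy vz 1) X M specifies:

```java
public class RowMajorTransform {
    // Applies a row-major 4x4 matrix, flattened to 16 doubles, to a 3D point.
    // Computes (vx vy vz 1) X M and returns the first three components.
    static double[] transformPoint(double[] m, double[] v) {
        double x = v[0] * m[0] + v[1] * m[4] + v[2] * m[8]  + m[12];
        double y = v[0] * m[1] + v[1] * m[5] + v[2] * m[9]  + m[13];
        double z = v[0] * m[2] + v[1] * m[6] + v[2] * m[10] + m[14];
        return new double[] { x, y, z };
    }

    public static void main(String[] args) {
        // Identity rotation plus a translation of (10, 20, 30)
        // in the last row, per the layout described above.
        double[] m = {
             1,  0,  0, 0,
             0,  1,  0, 0,
             0,  0,  1, 0,
            10, 20, 30, 1
        };
        double[] p = transformPoint(m, new double[] { 1, 2, 3 });
        System.out.println(p[0] + ", " + p[1] + ", " + p[2]);
    }
}
```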
https://community.plm.automation.siemens.com/t5/Solid-Edge-Developer-Forum/Matrix-methods/td-p/269133
Kivy has now added official support for the iOS platform. You can now package your Kivy application for the iPad (and other iOS devices; testers needed). The current instructions are available here.

For the full story on the multiple Apple limitations we are working with, I want to share the hardest part of the Python integration: Apple's prohibition on using dlopen() to load dynamic libraries. In a typical case, a compiled Python library's extension is a ".so", and the ".so" is dlopen()ed at import. That being said, as we did for the python-for-android project published in January, we are redirecting the compilation output to create static libraries. These libraries are included in the final application binary. But it's not enough: we must also tell Python to look for the library entry point in the application binary, instead of using dlopen(). So in the Python dynload loader:

```c
return (dl_funcptr) dlsym(RTLD_MAIN_ONLY, funcname);
```

This way, Python will always look at the application binary, and never needs to use dlopen().

If you are worried that Apple would reject a Python-based application, or one using Kivy altogether, we have tested it for you: the game that won the Kivy contest has been packaged for iOS, submitted to Apple... and accepted. You can find Deflectouch on iTunes (source code).

Anyway, Kivy now officially supports 5 platforms: Windows, Linux, Mac OS X, Android, and iOS! Enjoy
http://kivy.org/planet/2012/03/ios-support-for-kivy/
Details

- Type: Bug
- Status: Reported
- Priority: P3: Somewhat important
- Resolution: Unresolved
- Affects Version/s: 5.14.0
- Fix Version/s: None
- Component/s: Widgets: Layout
- Labels: None
- Platform/s:

Description

Hi, I use some QFrames in my iPhone widgets app, and I notice that sometimes the geometry for them is distorted. Steps to reproduce: make a new, empty vanilla Widget app. Then change your mainwindow.cpp to look like this:

#include "mainwindow.h"
#include "ui_mainwindow.h"
#include "qtimer.h"
#include "qframe.h"

MainWindow::MainWindow(QWidget *parent)
    : QMainWindow(parent)
    , ui(new Ui::MainWindow)
{
    ui->setupUi(this);
    QTimer::singleShot(0, [this] {
        auto f1 = new QFrame(this);
        f1->setGeometry(100, 100, 100, 100);
        f1->setFrameShape(QFrame::Box);
        f1->show();
        auto f2 = new QFrame(this);
        f2->setFrameShape(QFrame::Box);
        f2->setGeometry(110, 110, 100, 100);
        f2->show();
    });
}

MainWindow::~MainWindow()
{
    delete ui;
}

Then run the app on an iPhone. See the screenshot: if you run it on an iPhone without a home button, you'll get the result on the left. But if you run it on an iPhone with a home button, you'll get the result on the right.

If I change the code so that f2->setGeometry() occurs before f2->setFrameShape(), i.e. the same order as for the f1 QFrame, then all iPhones, even those with a home button, look ok (both QFrames are quadratic).

If I comment out the QTimer so that the QFrames are constructed directly in MainWindow's ctor (and not in the lambda), then all iPhones, even those with a home button, look ok (both QFrames are quadratic).

If this is a feature: I cannot find any documentation about QFrames having to be initialized in the order: first set the geometry, then the shape. Please fix the documentation.
https://bugreports.qt.io/browse/QTBUG-81478
MIDI works with all Teensy models. You can use Serial.print() to observe what your program is doing while using MIDI, or USB MIDI.

MIDI connects using the UART's transmit and receive pins. Teensy LC and 3.x support alternate pins for some of their serial ports. Use setTX() and setRX() before MIDI.begin(). Details can be found on the serial page.

MIDI input requires an optically isolated input. Output requires only 2 resistors. This schematic is recommended for Teensy 3.x & LC. For Teensy 2.0, use the alternate schematic. When connecting the signals on the 5 pin DIN connectors, be sure to connect to the correct pins. 6N138 and some 6N139 optocouplers may be used instead of PC900. Many work with pin 7 unconnected, but some may require a 10K resistor from pin 7 to ground.

Define which serial port will be used. Multiple ports may be used if each has a unique name, rather than the default MIDI.

Initialize the MIDI library. The receive channel may be specified, or MIDI_CHANNEL_OMNI to receive all 16 channels. This only affects reception. You can send on any MIDI channel regardless of this setting.

Transmit basic MIDI messages. These allow you to easily send each MIDI message.

Receive a MIDI message. This returns true if a message has been received, or false if no new message has arrived. After returning true, the get functions can be used to obtain the MIDI message information.

Returns the type of message received. The types are: midi::NoteOff, midi::NoteOn, midi::AfterTouchPoly, midi::ControlChange, midi::ProgramChange, midi::AfterTouchChannel, midi::PitchBend, midi::SystemExclusive.

These return the 2 data bytes of the received MIDI message.

This simple example sends a rapid sequence of notes. Not very exciting, but a simple and easy test.
#include <MIDI.h>

MIDI_CREATE_INSTANCE(HardwareSerial, Serial1, MIDI);

const int channel = 1;

void setup() {
  MIDI.begin();
}

void loop() {
  int note;
  for (note=10; note <= 127; note++) {
    MIDI.sendNoteOn(note, 100, channel);
    delay(200);
    MIDI.sendNoteOff(note, 100, channel);
  }
  delay(2000);
}

Single Side Version Standard Version

The board in the photos was fabricated by Futurlec. Their single-side service is probably the least expensive for only a single piece. When PJRC made this prototype, Futurlec's service was very slow... so slow that we have not used them since. For the highest quality, OSH Park is usually best. Using the normal files, you should see this preview when using their website.

The extra diode between the 2 MIDI connectors is a 5.6V zener. Other than this extra diode, the circuit board followed the schematic shown above. The zener diode is not necessary for normal operation, but does provide extra protection if external voltage is applied to the MIDI OUT port.
https://www.pjrc.com/teensy/td_libs_MIDI.html
The C language allows nested if statements, in which the if block and/or else block of an if statement contains if or if-else statements. The inner if statement(s) may in turn contain other if statements, and so on. Consider the general form of the if-else statement given below.

if (expr) statement1 ; else statement2 ;

The general form of a two-level nested if statement, obtained by replacing both statement1 and statement2 in the above form with the if-else statement itself, is given below. Observe how the inner if-else statements are indented (shifted to the right) to improve readability. Also note that since the if block and else block of the outer if-else statement comprise a single if-else statement, they need not be included within braces. However, we can surround inner if-else statements with braces to improve readability, as illustrated in the right-hand side form. As explained earlier, the use of such braces also prevents errors when we add one or more statements to these if and else blocks.

The flowchart for this general two-level nested if statement is given in Fig a. Observe that only one of the four alternative statements (statement1, statement2, statement3, statement4) will be executed depending on the values of the test expressions expr1, expr2 and expr3. Also note that these alternative statements can be any valid C statement such as a simple statement (assignment, function call, etc.), control statement or compound statement.

The execution of this statement is as follows: initially, expr1 is evaluated. If it is true (non-zero), expr2 is evaluated and, depending on its value, either statement1 or statement2 is executed, i.e., if expr2 is true, statement1 is executed; otherwise, statement2 is executed. After execution of either statement1 or statement2, control leaves the nested if statement. Note that expr3 is not evaluated in this case.
On the other hand, if expr1 evaluates as false, expr3 is evaluated and, depending on its value, either statement3 or statement4 is executed. After execution of either statement, control leaves the nested if statement. Note that expr2 is not evaluated in this case.

Consider that either the if block or the else block in the general form of the if-else statement is replaced with an if-else statement, but not both. Now we have two other forms of two-level nested if statement, as shown below. The flowcharts for these statements are given in Fig b and Fig c, respectively. The second form, which contains an if-else statement only in the else part, is called the if-else-if statement and is explained shortly in detail.

As the else block in an if-else statement is optional, we can omit the else blocks in the inner if statements to obtain several additional forms of the two-level nested if statement. However, while writing such statements we need to remember the following rule of nested if statements: an else clause is associated with the nearest if statement that is not already associated with an else clause.

If we omit the else clause in the inner if statements of a general two-level nested if statement, we get two other forms, given below and illustrated in Fig. Note that braces are required around the inner if statement in the first format for the correct interpretation that the else clause of the inner if statement is omitted, as illustrated in Fig. In the absence of these braces, the else clause of the outer if statement becomes incorrectly associated with the inner if statement. Also note that we do not require such braces in the second case.

Two other forms of two-level nested if statements are given below. In the first form, the else clause of the outer if statement is omitted. Whereas, in the second form, the else clauses of the outer and inner if statements are omitted.
Illustrates nested if (expressions)

#include <stdio.h>
#include <conio.h>   /* for clrscr() (Turbo C) */

void main()
{
    float A, B, C;
    clrscr();
    printf("Enter two integers.");
    scanf("%f %f", &A, &B);
    printf("You have entered the following two numbers.\n");
    printf("A = %.3f\t B = %.3f \n", A, B);
    if (A != B)
    {
        if (A > B)
            printf("A is greater than B.\n");
        if (A <= B)
            printf("A is less than or equal to B.\n");
    }
    C = (A + B) / 2;
    printf("%.3f is mean of the two numbers %.3f and %.3f\n", C, A, B);
}
http://ecomputernotes.com/what-is-c/control-structures/nested-if-statements
Having fixed my ELF relocation issue, I've now got my basic kernel running quite well on the RPi - it seems that the "architecture independent" heap manager I wrote on x86 works quite nicely on ARM. I'm working on my UART debug interface now, and have got quite a lot of useful information, but there's an annoyance.

I'm at 115200 baud and can see my console output. Data is being passed through an Arduino DUE which is transparent - passing data direct from one UART to another (and has been tested with other serial devices to test this - including my CubieTruck). The problem is that the last 16-17 characters are not appearing on my terminal, when sent with my putc() function. If I send additional dummy characters to the UART transmit buffer at the end of transmission, I can see all the "real" output. Also, if I spin on a getc() function, this seems to cause the transmit to complete correctly.

My code:

Code: Select all
inline char getc( void )
{
    while ( (volatile uint32_t)readRegister(UartRegister::FR) & (1 << 4) ); // bit 4 == Rx Fifo Empty Flag
    return (char)(readRegister(UartRegister::DR) & 0xFF); // note: bitwise &, not the logical && as originally posted
}

inline void putc(uint8_t c)
{
    while ( ((volatile uint32_t)readRegister(UartRegister::FR)) & (1 << 5) ) ; // bit 5 == Tx Fifo Full flag
    writeRegister(UartRegister::DR, c);
}

I call the above code from a C++ function (arch::Arm.initialise();):

Code: Select all
... // UART initialisation and write barrier
for(int i = 0; i < 4; ++i)
    uart0.puts("UART TEST");
uart0.puts("UART0 has been initialised.");
... // write barrier and GPIO initialisation

My console output for the above is as follows:

Code: Select all
UART TESTUART TESTUART TESTUART TESTUART0 has

And if I spin on a call to getc():

Code: Select all
UART TESTUART TESTUART TESTUART TESTUART0 has been initialised.

Is there some sort of buffer flush that I should be regularly calling in order to flush the Tx buffer? I'm using the memory barriers shown here: ... 6&p=582004 on a RPi Model B (version 1).
Many thanks for any help,
Adam

EDIT: In fact, calling uart0.getc() *twice* seems to be enough to cause the tx to happen correctly. There must be some timing issue here that I've missed...
https://www.raspberrypi.org/forums/viewtopic.php?t=105121
I observed that when executing my R# test project the first time after I open the respective solution, the tests fail. Actually, what fails is the setup of the test environment:

Failed to start the tests host. 216 exceptions were thrown.
#001: The JetBrains.Platform.ReSharper.IDE assembly could not be located in the C:\%path to solution%\KaVE.VsFeedbackGenerator.Tests\bin\Debug directory.
#002: <same problem for other R# dependencies>

I see two strange things here:

- the test project's name (and folder) is actually called "KaVE.VsFeedbackGenerator.RS8Tests\". I have another, completely unrelated unit-test project named "KaVE.VsFeedbackGenerator.Tests". But there's not even a reference between them...
- when reexecuting the tests, everything works fine. I never see this error again, until I close and reopen the solution.

It's not that this is really a big issue for now, but it's at least annoying. Anyone with any ideas about why this happens? Thanks!

Best regards,
Sven

It's possibly the test runner getting its knickers in a twist due to two SetUpFixtures with the same name and namespace, but in different projects. If you check the AssemblyInfo.cs file, you'll see a TestEnvironmentAssembly class. This sets up the ReSharper test environment, and returns a list of assemblies to load - by default, the assembly that owns the test fixture. Because both test projects will have the same type name and namespace, the test runner might be trying to run it as well. I'd suggest renaming one of the fixture classes. Are both test projects building to the same location?

There is only one SetUpFixture. The second test assembly contains only normal unit tests and, thus, no R# fixture setup. Both projects build to different locations. I renamed the class anyway, to make sure it's not some leftover from before the renaming that's causing the problem. I'll give it a try and report back, should the problem still appear. Thanks!
https://resharper-support.jetbrains.com/hc/en-us/community/posts/205991339-Missing-Assembly-on-First-Time-Execution-of-R-Tests
Ticket #1604 (closed enhancement: fixed)

[PATCH] Improved import list for SQLAlchemy model template

Description

The model template currently imports from SQLAlchemy like that:

from sqlalchemy import (Table, Column, String, DateTime, Date, Integer, DECIMAL, Unicode, ForeignKey, and_, or_)

That list is pretty arbitrary. Why is DECIMAL imported, but not the other type aliases? Why are and_ and or_ imported, but not not_? The following three options would be more reasonable:

- use from sqlalchemy import * (this provides a quite reasonable set of names)
- import only the names that are really used in the standard model.py
- import a more complete list explicitly

I have added a patch providing the third option. Also, the patch changes the multi-line import to use Python 2.3 syntax.

Attachments

Changed 9 years ago by chrisz
- attachment model_for_sa.patch added

Change History

comment:1 Changed 9 years ago by chrisz
- Status changed from new to closed
- Resolution set to fixed

comment:2 Changed 9 years ago by chrisz
- Status changed from closed to reopened
- Resolution fixed deleted

Sorry, I meant solved with option 1. Reopened this ticket since the issue of "import *" is just discussed on the mailing list.

Note: See TracTickets for help on using tickets.

Better SQLAlchemy imports in model template (this is for 1.1)
http://trac.turbogears.org/ticket/1604
As the Visual C++ Runtimes version 8.0 is now a side-by-side component, you may have seen what looks like an unreasonably complexly named path from which parts of the CRT are loaded. "Golly, what can they possibly be thinking - creating a directory whose name is full of underscores, numbers, and dots?" The good news is, we definitely were thinking something.

In the component world, each component has what's called an identity. This is the unique name of the component, generated by the component author and referred to in manifests and user interfaces. No two components have exactly the same set of properties; if they did, even if the file contents were different, they would be considered the same component. (Note: the CLR abuses this rule and often ships new bits under old identities - that's outside the scope of today's missive.) There's a whole set of rules around how identities work, which I may get to at some point.

The key thing is that identities are basically property bags of string triplets - namespace, name, and value - for each attribute. Those attributes in the bag without a namespace are called well known attributes, and there are only a few of those (only Microsoft is currently allowed to define new ones...). Further, certain well-known attributes have rules around their values - the version attribute has to be a dotted-quad version, the public key token attribute has to be a string of hex digits of nonzero but even length. Other well-known attributes like name can be whatever you like - "Foo:Bar:Bas" is ok, as is just "Q".

So, Mike Grier came up with the idea for a key form of identities. This key form would be a reasonably-unique, one-way, noncryptographic, readable representation of the major defining attributes of an identity.
What he ended up with was the following:

proc-arch_name_public-key-token_version_culture_hash

The italicized strings (except for the hash) are replaced by the values from the identity for their respective properties. If a property was unset, then "none" was put in its place. In the identity model, name, processor architecture and culture are allowed to have very laxly-validated contents, so they may contain "unfriendly" characters that have to be filtered. Characters not in the group "A-Za-z0-9.\-_" are removed from the attribute value before being written into the string. Certain attributes have upper limits placed on their values (name is limited to 64 characters, processor architecture to 8, culture to 12), achieved by dropping characters from the middle of the filtered string and replacing them with "..". Finally, the whole string is lower-cased using a clone of the unicode casing table that shipped in Windows XP RTM. Voila! A string representing the identity that's filesystem friendly!

But wait... what about all those characters that got dropped? Couldn't I construct an identity whose keyform matched the keyform of another? Yes, if it weren't for the _hash value on the end of the keyform. This hash (not in the cryptographic sense) is of all the namespaces, names and values of properties in the identity. Anything that didn't appear in the keyform text would have been represented in this hash. The two identities whose names are "Foo!" and "Foo?" will generate different keyforms - while the ! and ? were dropped from the keyform, they still appear in the hash. A coworker did some experiments and determined that while it was possible to reverse the hash generation function, it would involve a ridiculous amount of work.

The algorithm for generating the keyform overall (especially the hash) is undocumented at this time.
Not because we're trying for security through obscurity, but because the keyform is merely an implementation artifact at this point. Maybe someday we'll lose our heads entirely and store the component payloads in a database of some sort, in a compound storage document, CAB file, whatever. Also, the algorithm has changed for Vista, and no "normal" use cases for knowing the algorithm exist. If you're trying to find files in the WinSxS directory, you should be using the CreateActCtx/ActivateActCtx/SearchPath set of APIs. If you're trying to write files into the WinSxS directory, you should be using MSI which knows about installing components into the right places. If you're writing your own binder, don't - it's really hard to get right. It should be sufficient to say "the generation of this string is opaque and must be assumed to change." But wait... what stops Evil Bob from creating a component with the exact same name and overwriting what Nice Jill had already shipped? Suffice it to say, that public key token has something to do with it - I'll explain that next time, when I talk about signing catalogs for side-by-side components.
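As a toy illustration of the filtering, truncation, and hashing steps described above, here is a Python sketch. To be clear, this is not the real Windows algorithm (the post itself says it is undocumented and has changed since); the character filter, truncation limits, and "none" placeholders follow the description, while the hash function and attribute ordering are invented stand-ins.

```python
import re
import hashlib

def keyform(identity):
    """Illustrative reconstruction of a WinSxS-style key form.
    'identity' is a dict of attribute name -> value (or None if unset)."""
    def clean(value, limit):
        if value is None:
            return "none"
        # Drop characters outside the allowed group A-Za-z0-9.\-_
        filtered = re.sub(r"[^A-Za-z0-9.\-_]", "", value)
        # Enforce the per-attribute length limit by replacing the middle with ".."
        if len(filtered) > limit:
            keep = limit - 2
            filtered = filtered[:keep // 2] + ".." + filtered[-(keep - keep // 2):]
        return filtered.lower()

    parts = [
        clean(identity.get("processorArchitecture"), 8),
        clean(identity.get("name"), 64),
        clean(identity.get("publicKeyToken"), 16),
        clean(identity.get("version"), 16),
        clean(identity.get("culture"), 12),
    ]
    # Hash over ALL attributes, so characters dropped above still distinguish
    # identities (the real hash algorithm is undocumented; SHA-1 is a stand-in).
    digest = hashlib.sha1(repr(sorted(identity.items())).encode()).hexdigest()[:16]
    return "_".join(parts + [digest])

print(keyform({"name": "Microsoft.VC80.CRT", "processorArchitecture": "x86",
               "publicKeyToken": "1fc8b3b9a1e18e3b", "version": "8.0.50727.42",
               "culture": None}))
```

With these inputs the sketch produces a string of the familiar shape x86_microsoft.vc80.crt_1fc8b3b9a1e18e3b_8.0.50727.42_none_ followed by a hash suffix, and two identities that differ only in filtered-out characters (such as "Foo!" versus "Foo?") still get distinct suffixes.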
http://blogs.msdn.com/b/jonwis/archive/2005/12/28/507863.aspx
Shawn Jiang commented on GERONIMO-4587:
---------------------------------------

I can't recreate this problem. I used an EJB with three methods in the remote interface:

public String getName();
public String getName1(String name);
public String getName2(String name, int pos);

@DeclareRoles( { "MANAGERS_ROLE", "USERS_ROLE" }) in the EJB class.
@RolesAllowed( { "MANAGERS_ROLE" }) on each of the methods.

Then I used a user in USERS_ROLE to execute all three of the methods. All the access attempts failed with "javax.ejb.EJBAccessException: Unauthorized Access by Principal Denied".

Can you share more info on how to reproduce this JIRA?

BTW, what do you mean when you talked about "We have also confirmed that the security system fails if a "proper array" is used instead of the "vararg array"."?

Thanks.

> Array security issue
> --------------------
>
> Key: GERONIMO-4587
> URL:
> Project: Geronimo
> Issue Type: Bug
> Security Level: public (Regular issues)
> Components: security
> Affects Versions: 2.2
> Environment: Java 6 on OS X 10.5.
> Reporter: Trygve Hardersen
>
> We have a stateless session bean called SSB, with a method called getX:
> SSB#getX(java.lang.String)
> Our security model has 5 roles; admin, anonymous, customer, partner and system. Users can only be in one role. SSB is accessible for all roles, but the getX method does not allow anonymous access. So we have these annotations:
> @DeclareRoles({
> Constants.ROLE_ADMIN,
> Constants.ROLE_ANONYMOUS,
> Constants.ROLE_CUSTOMER,
> Constants.ROLE_PARTNER,
> Constants.ROLE_SYSTEM})
> public class SSB ....
> @RolesAllowed({
> Constants.ROLE_ADMIN,
> Constants.ROLE_CUSTOMER,
> Constants.ROLE_PARTNER,
> Constants.ROLE_SYSTEM})
> public X getX(String y)
> In our test suite I have a simple test case to verify that access by users in the anonymous role (unauthenticated web users) is not permitted for the getX method:
> SSB anonymous_service = LOG_IN_AS_ANONYMOUS_USER....
> X obj = null; > EJBAccessException eae = null; > try{ > obj = anonymous_service.getX("test") > ; > }catch (EJBAccessException e) { > eae = e; > } > Assert.assertNull(obj); > Assert.assertNotNull(eae); > Assert.assertEquals(eae.getMessage(), "Unauthorized Access by Principal Denied"); > We've not had issues with this test case for months. However yesterday we decided to change the method signature of getX to support an optional list of int flags than control the object initialization (which related records to get from the DB): > public X getX(String y, int... flags) > After this the test shown above fails. An object is returned back and no exception is raised. The security system still works; we can check the user manually using the SessionContext resource. But the container authorization does not trigger. > We have also confirmed that the security system fails if a "proper array" is used instead of the "vararg array". We have not had a chance to test whether using a XML-based configuration solves the issue. > Since the security system is accessible through the SessionContext we work around this issue by manually checking the user role from our code. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
http://mail-archives.apache.org/mod_mbox/geronimo-dev/200906.mbox/%3C934043460.1243935307396.JavaMail.jira@brutus%3E
In a previous blog post of mine, I discussed how you can replace the body of a Cloud Integration message with the contents of an attachment. Tom van Rooijen commented on that post, asking how you can add a binary attachment to a Cloud Integration message, and have that attachment delivered by a receiver mail channel. Since the answer is probably of interest to others, I'm going to describe an approach to solving Tom's problem in this blog post.

The addAttachmentObject method

Unsurprisingly, the key to adding an attachment to a message is the Message interface. I've previously blogged about the methods of this interface, and among them we find the addAttachmentObject method:

public void addAttachmentObject(java.lang.String id, org.apache.camel.Attachment content)

The method expects a string, which is the name of the attachment, and an object of a class implementing the Apache Camel Attachment interface. The Attachment interface is implemented by the DefaultAttachment class located in the org.apache.camel.impl package.

Creating an attachment object

In order to construct a DefaultAttachment object, we need an object wrapping the actual contents of the attachment. In the following, I will assume that we have the contents in a byte array. In Java we have a convenient way of wrapping a byte array: the ByteArrayDataSource class, which implements the DataSource interface. Luckily, this class is available in Cloud Integration. To construct a ByteArrayDataSource object, we need to provide the byte array and a string. The string is the MIME type of the attachment.

Configuring the receiver mail channel

In order for the receiver mail channel to pass our attachment on to the email's recipient, we need to instruct it to add attachments from the message. To do so, go to the channel's Connection tab, and make sure that the Add Message Attachments checkbox is selected.

Assembling the pieces

We now know everything we need to know in order to add an attachment in code.
In the Groovy code below, the attachment is a PNG image, which I decode from Base64 in order to get a byte array. The binary content can be anything else, of course (but remember to adjust the MIME type accordingly). import com.sap.gateway.ip.core.customdev.util.Message import org.apache.camel.impl.DefaultAttachment import javax.mail.util.ByteArrayDataSource def Message processData(Message message) { // 1: Get the binary attachment content in the form of a byte array def imageBytes = 'iVBORw0KGgoAAAANSUhEUgAAAFsAAAALCAIAAACf2mY5AAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAADsMAAA7DAcdvqGQAAAEySURBVEhL7ZTBGcMgCIUzlwM5j9NkmQyTPggq+IGml6SH/idFwOcL7Xb+sZAje96YvOtlgJuxLnucuSQ+VUdHSVsqBy1lRkpqx3tORZYxKr/jBt9lIskewSLZTh2pPm/Vv8YtR3r518ODTwauW+n78YYOvJ58nkpxpmKQJK04aemIZBLX3VRda7C2g7N2RJfTfaOnC1CumtFXQnnck41aSeKKtrT5CMg2nhEuUpgnrR2RRwj2gXdgGVV4ySIp6mlPOvraQcOgqG1jR9DAzoVmaHdhgvb+SPGEPQNUoTRJq7jnHUeQ1GaE1t5MAXKkTgNlyM+HkyGANxdOSAbHDfa2jBUWqdFQV36A9sHpOdzuBqskFYbP9f8BKE0yIw9Cl6/9eI/HHYEh8Y/xF3h+Rn6b8/wA8YzyCnrn53gAAAAASUVORK5CYII='.decodeBase64() // 2: Construct a ByteArrayDataSource object with the byte array and the content's MIME type def dataSource = new ByteArrayDataSource(imageBytes, 'image/png') // 3: Construct a DefaultAttachment object def attachment = new DefaultAttachment(dataSource) // 4: Add the attachment to the message message.addAttachmentObject('hello-world.png', attachment) return message } Execute the code in a script step in your integration flow, and the following image will be attached to the message and delivered by the receiver mail channel: Hello Morten, Thank you for this write up, I have tested it and it works of course. I understand the attachment checkbox in the mail adapter better now. The remaining item is to get the pdf that I have created (I think) in groovy into to a base64 format so that the decodeBase64 can do its work. 
The pdf is generated from an example like this: Now how can I get the binary content of this "pdfatt" object so I can generate the message attachment?
Thanks & regards Tom

Hi Tom
Base64 is not needed; that was just a convenient way for me to get some binary content into the script. What you need is to get at the binary contents of the PDF generated by the iText library. The PDFWriter.getInstance method requires an OutputStream object. One way to solve this is to pass it a ByteArrayOutputStream, which collects bytes into a byte array. So something like the following: At this point, the pdfBinary byte array contains the PDF document, which you can go on to create the attachment with. Remember to import the java.io.ByteArrayOutputStream class.
Regards, Morten

By the way, remember to change the MIME type to application/pdf 🙂
Regards, Morten

Hi, I am trying to code the scripts in Eclipse Neon and having a problem with the import of com.sap.gateway.ip.core.customdev.util.Message; It doesn't seem to resolve org.apache.camel.Attachment. I have tried different versions of the Apache jars but nothing seems to find this package. Please help...
Thanks Athar

Hi Athar
Have you added the Script API JAR file to your Eclipse project? It contains the Message interface. The JAR file can be downloaded from.
Regards, Morten

Hi Morten,
Yes, that was the first thing I tried, but as soon as I try to use the Message object, it throws the compile-time error that org.apache.camel.Attachment cannot be resolved. I tried using org.apache.camel.impl.DefaultMessage but it doesn't have the same implementation as the Message class above, which is abstract.
Athar

Hello again
Ah, sorry - I replied too quickly 🙂 To have the Apache Camel classes available in Eclipse, you need to add a Camel JAR. Download the latest version, and add the lib/camel-core-<version>.jar file to your build path.
Regards, Morten

Hi Morten,
Thank you so much. It worked.
I downloaded the Camel Core JAR, which has the org.apache.camel.impl package. But I ran into another issue now. What I am trying to do is to develop the scripts and test them in the Eclipse environment prior to deploying them in SCPI. It seems that I cannot use the SAP Message object to create a new message and pass it to the function that has the below signature:

def Message processData(Message message)

It seems that Message is an abstract class and cannot be instantiated.
Thanks
Athar

Hi Athar
Take a look at this blog post by Eng Swee Yeoh.
Regards, Morten

This is exactly what I was looking for. Thank you so much Morten.
Regards, Athar

Is it now allowed to send those attachments in an HTTPS call?

Hi Morten,
Thanks for the great blog, but when I followed the code example I was getting the following error:

com.sun.mail.smtp.SMTPSendFailedException: 552-5.7.0 This message was blocked because its content presents a potential 552-5.7.0 security issue. Please visit 552-5.7.0 to review our 552 5.7.0 message content and attachment content guidelines. f192sm28347935wmg.30 - gsmtp

So, I tried to send the attachment in the following way and it worked.

Step 1: I set the "binary attachment content in the form of a byte array" to message.setBody(bytearray)
Step 2:

Hi Kannan
Nice approach. The error is probably specific to Google and your particular attachment.
Regards, Morten

Hi Morten,
I need to convert a Base64 image to an image URL in SAP CPI. I have tried JavaScript, but Blob and Uint8Array are not supported in the JavaScript of SAP CPI. Can you please tell me if we can do that using Groovy? I have already used Groovy to create the binary image from Base64 image data, but I don't know how to create an image URL from that. Kindly help on this.
Regards, Aman Raj

Hi Morten,
That's a great blog! I have a requirement to handle a SOAP response which is MIME as attachment. Can we achieve..?
Regards, NTR
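The ByteArrayOutputStream snippet Morten refers to in the comments ("So something like the following:") did not survive in this capture. A minimal stand-in for the idea, using only the standard library, is below; the iText calls (PdfWriter.getInstance and friends) are shown as comments because they are assumptions about the elided original, not verified API.

```java
import java.io.ByteArrayOutputStream;

public class PdfBufferSketch {
    public static void main(String[] args) throws Exception {
        // Collects everything written to it into an in-memory byte array.
        ByteArrayOutputStream baos = new ByteArrayOutputStream();

        // In the real flow this is where iText would write the PDF, roughly:
        //   Document document = new Document();
        //   PdfWriter.getInstance(document, baos);
        //   document.open();
        //   ... add content ...
        //   document.close();
        // Here we just write some stand-in bytes for the generated PDF.
        baos.write("%PDF-1.4 stand-in".getBytes("UTF-8"));

        byte[] pdfBinary = baos.toByteArray();
        System.out.println(pdfBinary.length);
    }
}
```

The resulting pdfBinary byte array is what would then be wrapped in a ByteArrayDataSource with MIME type application/pdf and passed to addAttachmentObject, exactly as in the PNG example in the article.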
https://blogs.sap.com/2017/10/03/adding-cloud-integration-attachments-in-code/
To set the Expires header in my web application I used to use the Zope DateTime function rfc822() which doesn't return the date in GMT format. Here's how I used it:

>>> from DateTime import DateTime
>>> hours = 5
>>> then = DateTime() + hours/24.0
>>> then.rfc822()
'Thu, 16 Aug 2007 20:43:59 +0100'

Then I found out (from using YSlow) that it's better to use the GMT format (RFC 1123), and here's how to do that in Zope:

>>> from App.Common import rfc1123_date
>>> from time import time
>>> rfc1123_date(time() + 3600*hours)
'Thu, 16 Aug 2007 19:45:12 GMT'

(notice that even though my locale is here in London, because of the summer time an hour is added)

Well, I thought I'd do a quick benchmark to compare the two approaches because I suspected that rfc1123_date() was faster, since you don't have to create a DateTime() object. Here's what the two methods looked like to benchmark it:

def foo1(self, hours):
    t0 = time()
    now = DateTime()
    then = now + float(hours/24.0)
    x = then.rfc822()
    return time() - t0

def foo2(self, hours):
    t0 = time()
    x = rfc1123_date(time() + 3600*hours)
    return time() - t0

The result was as I expected: rfc1123_date() was much faster. Here are the results for 10,000 iterations:

Times1: 1.25969386101
Times2: 0.16867017746
round(1.25969386101/0.16867017746) = 7.0

But a benchmark on something like this is a bit nonsense. Why? Because even if there's a 7 times difference, you'll only ever need one of these iterations per page. Not 10,000. The first function foo1() takes 0.00013 seconds. Conclusion: worry more about getting the right output rather than speed at this kind of level.
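Outside Zope, the same RFC 1123 string can be produced with the Python standard library; a small sketch:

```python
from email.utils import formatdate
from time import time

hours = 5
# usegmt=True gives the RFC 1123 form with a literal "GMT" zone,
# which is the format the Expires header expects.
expires = formatdate(time() + 3600 * hours, usegmt=True)
print(expires)
```

This avoids both the Zope dependency and the DateTime object creation, and like rfc1123_date() it always renders the timestamp in GMT regardless of the server's locale.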
http://www.peterbe.com/plog/rfc822-vs-rfc1123_date
CC-MAIN-2014-15
refinedweb
288
76.15
This chapter covers
■ Getting started with wxPython
■ Creating a minimum wxPython program
■ Importing wxPython
■ Learning the Python programming language
■ Putting it all together

Here's a simple wxPython program. It creates a window with one text box that displays the position of the mouse pointer. Counting white space, it's about 20 lines long.

Listing 1.1 A working wxPython program in a mere 20 lines

#!/bin/env python
import wx

class MyFrame(wx.Frame):
    def __init__(self):
        wx.Frame.__init__(self, None, -1, "My Frame", size=(300, 300))
        panel = wx.Panel(self, -1)
        panel.Bind(wx.EVT_MOTION, self.OnMove)
        wx.StaticText(panel, -1, "Pos:", pos=(10, 12))
        self.posCtrl = wx.TextCtrl(panel, -1, "", pos=(40, 10))

    def OnMove(self, event):
        pos = event.GetPosition()
        self.posCtrl.SetValue("%s, %s" % (pos.x, pos.y))

if __name__ == '__main__':
    app = wx.PySimpleApp()
    frame = MyFrame()
    frame.Show(True)
    app.MainLoop()

What can we say about the program in listing 1.1? It's very short, for one thing. Admittedly, it doesn't do a whole lot, but still, creating a window, populating it, getting it to respond to mouse events—that's not bad for 20 lines. It's not an exaggeration to say this example could easily be three or four times longer in some, more caffeinated, programming languages. Figure 1.1 shows the running program.

The code sample is quite readable. Even if you don't know the details of Python or wxPython, if you have any experience with interface programming you likely have a sense of what words like Frame, __init__, EVT_MOTION, TextCtrl, and MainLoop mean. The indentation might seem a bit weird if you aren't used to Python (where are all those closing braces, anyway?), and you probably don't know what all the arguments mean (what's with those -1s?), but you could quite easily come to some rough understanding of the code without much help. In this book, we'll show you why wxPython is one of the easiest, most powerful ways of building a real graphical user interface (GUI) program that there is.
Most toolkits that make the building of the interface itself easier (such as a Visual Basic style tool) don't have an implementation language with the clarity, flexibility, and power of Python. Most of the toolkits that have the functionality of wxPython force you to use a language that is ill-suited to rapid development. You'll find wxPython right in the sweet spot, where you get the maximum bang for your development buck.

Even better, wxPython is an open-source project, with both the source code and the binary installations distributed under a license that allows it to be freely used in both commercial and open source development. By the time you've reached the end of this book, you'll know how to build a state-of-the-art GUI using the wxPython toolkit. You'll be able to create and manipulate common interface elements such as buttons and menus, as well as less common ones such as trees and HTML editors. So there's quite a bit of ground for us to cover. In this chapter, we'll get you started with wxPython, and discuss what wxPython does and why you might choose it for your programming needs.

A good interface allows the user to access the functionality of the application as simply and cleanly as possible, with a stylish look that is attractive to the users. A bad interface can keep users from finding the functionality in the program, and can even cause people to assume that a perfectly working program is malfunctioning. In wxPython, you can create the interface you want with less effort than you'd expect.
https://www.pythonstudio.us/wxpython/welcome-to-wxpython-1.html
CC-MAIN-2019-51
refinedweb
621
63.59
Tutorial: Automate resizing uploaded images using Event Grid

Azure Event Grid is an eventing service for the cloud. Event Grid enables you to create subscriptions to events raised by Azure services or third-party resources. This tutorial is part two of a series of Storage tutorials. It extends the previous Storage tutorial to add serverless automatic thumbnail generation using Azure Event Grid and Azure Functions. Event Grid enables Azure Functions to respond to Azure Blob storage events and generate thumbnails of uploaded images. An event subscription is created against the Blob storage create event. When a blob is added to a specific Blob storage container, a function endpoint is called. Data passed to the function binding from Event Grid is used to access the blob and generate the thumbnail image. You use the Azure CLI and the Azure portal to add the resizing functionality to an existing image upload app.

In this tutorial, you learn how to:
- Create a general Azure Storage account
- Deploy serverless code using Azure Functions
- Create a Blob storage event subscription in Event Grid

Prerequisites

To complete this tutorial, you must have completed the previous Blob storage tutorial: Upload image data in the cloud with Azure Storage. If you don't have an Azure subscription, create a free account before you begin.

If you've not previously registered the Event Grid resource provider in your subscription, make sure it's registered:

Register-AzureRmResourceProvider -ProviderNamespace Microsoft.EventGrid

az provider register --namespace Microsoft.EventGrid

This tutorial requires the Azure CLI version 2.0.14 or later. Run az --version to find the version. If you need to install or upgrade, see Install Azure CLI. If you are not using Cloud Shell, you must first sign in using az login.

Create an Azure Storage account

Azure Functions requires a general storage account. Create a separate general storage account in the resource group by using the az storage account create command.
Storage account names must be between 3 and 24 characters in length and may contain numbers and lowercase letters only. In the following command, substitute your own globally unique name for the general storage account where you see the <general_storage_account> placeholder.

az storage account create --name <general_storage_account> \
  --location westcentralus --resource-group myResourceGroup \
  --sku Standard_LRS --kind storage

Create a function app

You must have a function app to host the execution of your function. The function app provides an environment for serverless execution of your function code. Create a function app by using the az functionapp create command. In the following command, substitute your own unique function app name where you see the <function_app> placeholder. The function app name is used as the default DNS domain for the function app, and so the name needs to be unique across all apps in Azure. For <general_storage_account>, substitute the name of the general storage account you created.

az functionapp create --name <function_app> --storage-account <general_storage_account> \
  --resource-group myResourceGroup --consumption-plan-location westcentralus

Now you must configure the function app to connect to the Blob storage account you created in the previous tutorial.

Configure the function app

The function needs the connection string to connect to the Blob storage account. The function code that you deploy to Azure in the following step looks for the connection string in the app setting myblobstorage_STORAGE, and it looks for the thumbnail image container name in the app setting myContainerName. Get the connection string with the az storage account show-connection-string command. Set application settings with the az functionapp config appsettings set command. In the following CLI commands, <blob_storage_account> is the name of the Blob storage account you created in the previous tutorial.
storageConnectionString=$(az storage account show-connection-string \
  --resource-group myResourceGroup --name <blob_storage_account> \
  --query connectionString --output tsv)

az functionapp config appsettings set --name <function_app> \
  --resource-group myResourceGroup \
  --settings myblobstorage_STORAGE=$storageConnectionString \
  myContainerName=thumbnails FUNCTIONS_EXTENSION_VERSION=~2

The FUNCTIONS_EXTENSION_VERSION=~2 setting makes the function app run on version 2.x of the Azure Functions runtime. You can now deploy a function code project to this function app.

Deploy the function code

The sample C# resize script (.csx) is available on GitHub. Deploy this Functions code project to the function app by using the az functionapp deployment source config command. In the following command, <function_app> is the name of the function app you created earlier.

az functionapp deployment source config --name <function_app> \
  --resource-group myResourceGroup --branch master --manual-integration \
  --repo-url

The image resize function is triggered by HTTP requests sent to it from the Event Grid service. You tell Event Grid that you want to get these notifications at your function's URL by creating an event subscription. For this tutorial you subscribe to blob-created events. The data passed to the function from the Event Grid notification includes the URL of the blob. That URL is in turn passed to the input binding to obtain the uploaded image from Blob storage. The function generates a thumbnail image and writes the resulting stream to a separate container in Blob storage.

This project uses EventGridTrigger for the trigger type. Using the Event Grid trigger is recommended over generic HTTP triggers. Event Grid automatically validates Event Grid Function triggers. With generic HTTP triggers, you must implement the validation response. To learn more about this function, see the function.json and run.csx files.
The function project code is deployed directly from the public sample repository. To learn more about deployment options for Azure Functions, see Continuous deployment for Azure Functions.

Create an event subscription

An event subscription indicates which provider-generated events you want sent to a specific endpoint. In this case, the endpoint is exposed by your function. Use the following steps to create an event subscription that sends notifications to your function in the Azure portal:

- In the Azure portal, click the arrow at the bottom left to expand all services, type functions in the Filter field, and then choose Function Apps.
- Expand your function app, choose the imageresizerfunc function, and then select Add Event Grid subscription.
- Use the event subscription settings as specified in the table.
- Optional: In case you need to create additional containers in the same Blob storage for other purposes in the future, you can use the Subject filtering features in the Filters tab for more granular targeting of blob events, to ensure your function app is called only when blobs are added to the images container specifically.
- Click Create to add the event subscription. This creates an event subscription that triggers imageresizerfunc when a blob is added to the images container. The function resizes the images and adds them to the thumbnails container.

Now that the backend services are configured, you test the image resize functionality in the sample web app.

Test the sample app

To test image resizing in the web app, browse to the URL of your published app. The default URL of the web app is https://<web_app>.azurewebsites.net. Click the Upload photos region to select and upload a file. You can also drag a photo to this region. Notice that after the uploaded image disappears, a copy of the uploaded image is displayed in the Generated thumbnails carousel. This image was resized by the function, added to the thumbnails container, and downloaded by the web client.
Next steps

In this tutorial, you learned how to:
- Create a general Azure Storage account
- Deploy serverless code using Azure Functions
- Create a Blob storage event subscription in Event Grid

Advance to part three of the Storage tutorial series to learn how to secure access to the storage account.
- To learn more about Event Grid, see An introduction to Azure Event Grid.
- To try another tutorial that features Azure Functions, see Create a function that integrates with Azure Logic Apps.
https://docs.microsoft.com/en-us/azure/event-grid/resize-images-on-storage-blob-upload-event?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json
CC-MAIN-2019-04
refinedweb
1,281
54.63
Most regular users of XML are quite familiar with XML namespaces and accept them as a basic part of XML. They shake their heads at the occasional oddities of namespaces, but in general don't give them all that much thought. Among XML experts, however, XML namespaces have been very controversial from day one. This controversy is for good reason: namespaces solve a difficult problem, and they are one of many approaches to solving it, each of which has its pros and cons. The W3C XML namespaces specification is a compromise, and as with all compromises, it often falls short of addressing each user's needs. Even after all this time, namespaces have proven very difficult to incorporate smoothly into XML information architecture, and lack of care with namespaces can cause a lot of complications for XML processing tools. In this article, I go over design considerations that can help you avoid such problems when using XML namespaces. In general, my suggested guidelines will be in boldface.

This article covers XML namespaces 1.0 (including all errata). XML namespaces 1.1 is mercifully modest in its changes, but it is a brand new specification and isn't yet well supported by the tools. I expect that XML namespaces 1.1 will soon become the norm (unlike, say, XML 1.1, which I'm not sure will ever really catch on).

The mechanism of XML namespaces has several moving parts: local names, namespace URIs, prefixes, and declarations. The most important step in using namespaces effectively is to learn how to keep these straight. The point of namespaces is that you can use the best concise name for each element or attribute within each context and then put these names in a namespace that distinguishes the context. The concise part of the name, which only needs to be unique within its own context, is the local name. Be sure to take advantage of the distinguishing context, and don't repeat in local names information that's already inherent in the namespace itself.
A namespace is a string with the syntax of a URI (often redundantly called the "namespace URI"). The namespace is an integral part of the element's or attribute's name. The combination of a local name and a namespace is called a universal name. In order to highlight the namespace's importance, XML pioneer James Clark developed a notation for universal names that emphasizes how fundamentally namespace and local name are bound (see Resources). For example, a universal name with local part customer and a given namespace URI is written in Clark's notation as {namespace-URI}customer.

Choosing the namespace URI is important. Whether it's better to use URLs or URNs is the source of some debate. The former have the advantage of familiarity, but people often create namespace URLs that do not have any corresponding resource — that is, if you browse to the URL, there is nothing there. I recommend placing an RDDL 1.0 document (see Resources) at URLs that correspond to namespaces, unless more specialized conventions apply. For example, in RDF/XML documents, namespaces often lead to RDF schema documents when resolved as URLs. URNs have many classes, including ones based on the public identifier entities defined in SGML and XML.

When specifying a universal name in an XML document, you use an abbreviation based on an optional prefix that's attached to the local name. This abbreviation is called the qualified name, or qname. The prefix is optional because a special syntactical form allows you to specify a default namespace, which is associated with qnames that have no prefix. The prefix is strictly a syntactic convenience; in general, it is not really a matter of XML language design but rather a matter of author or tool preference. I call such issues instance details, and I only cover them in these articles on design when in my experience the designer has no choice but to consider them. I recommend that you publish well-known prefixes for namespaces but never make any prefix mandatory.
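As one concrete illustration (a tool detail, not part of the namespaces specification itself): Python's standard xml.etree.ElementTree API happens to report element names directly in Clark's notation, which makes the universal-name idea tangible. The namespace URI below is only an illustrative placeholder.

```python
import xml.etree.ElementTree as ET

# The namespace URI here is an illustrative placeholder.
doc = """<c:customer xmlns:c="http://example.com/customers">
           <c:name>Acme</c:name>
         </c:customer>"""

root = ET.fromstring(doc)
# After parsing, the prefix "c" is gone; only the universal name
# {namespace-URI}local-name remains.
print(root.tag)     # {http://example.com/customers}customer
print(root[0].tag)  # {http://example.com/customers}name
```

This is a good habit for tool builders: treat the universal name as the real name, and the prefix as disposable surface syntax.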
Choose well-known prefixes for a namespace when creating documents, but accept any chosen prefix for a namespace when reading documents. The namespace declaration is the syntactic device through which prefixes are assigned to namespaces in an XML document. This is technically an instance detail, but important enough that I devote a section (see below) to guidelines for namespace declarations.

Use and evolution of namespaces

Some designers start out not using namespaces and later on adopt namespaces as they feel the need to mix vocabularies. Such a cautious approach can seem sensible considering how tricky namespaces can be. The problem is that since namespaces are a fundamental part of XML names, this change is more significant than you might realize. It requires extensive changes in tools and other related materials.

You can deal with name clashes in other ways. Other than namespaces, the leading approaches are ideas based on SGML architectural forms, in which names are directly declared and modified by tools in case of clashes. Try to think as hard as possible about future developments for your XML design, and be decisive about whether to deal with name clashes, and how to do so. I have come to agree with many of the criticisms of XML namespaces and dearly wish for a cleaner mechanism that was well established in tools. For practical reasons based on my experience, these days I use namespaces in almost all of my XML designs.

It is also difficult to decide when to evolve or differentiate a namespace. A namespace can be used for versioning, or to differentiate concepts within a domain. The key to deciding when to do so is to remember that the namespace is a basic part of the name. Change or differentiate the namespace only when you want to make a real, fundamental distinction that defines each element and attribute. If a version change significantly alters the meaning of names in an XML vocabulary, then a namespace change is probably in order.
Otherwise, use other versioning mechanisms, such as adding a version attribute to top-level elements. The pitfalls of using namespaces to make distinctions within a domain are best illustrated by example. In 1999, XHTML 1.0 became a finalized proposal. It was really just an XML variation on HTML 4.01, which has three separate DTDs: strict, transitional, and frameset. The XHTML working group decided to use three separate namespaces for the corresponding XHTML DTDs. This decision was met with an uproar in the XML community. The main problem was that even though three separate DTDs existed, the meaning of each element didn't change significantly from one to another; a code element in the XHTML transitional DTD essentially means the same thing as a code element in the XHTML strict DTD. By changing the names in each case, the XHTML design was working against this fact. In the end, the XHTML working group corrected things by issuing new specifications that used a single namespace across the XHTML 1.0 domain.

You should heed this lesson well. Make distinctions in XML namespaces only when there are truly distinctions between the things being named. Unfortunately, things are rarely black and white. A common situation is when a new version of a vocabulary adds new elements. The meaning of the carried-over elements may not have changed, and so a namespace change may seem improper. But if you use the same old namespace, it may also seem improper to place the elements added in the new vocabulary in the original namespace. Using a different namespace for only the new elements is rarely a sensible option. In the end, you have to use your judgment to decide whether or not to evolve the namespace with the vocabulary. Some tricks with namespaces may give you other options (see Resources for a tip on using namespaces for versioning), but you should use even these with care.
The Joe English metaphors of namespace sanity

XML namespace declarations are scoped, meaning that the declared prefix (or default namespace) is in force for the element on which the declaration occurs (as well as its descendant elements), except where another namespace declaration overrides it. However, this flexibility can cause some problems with processing. Joe English, an XML expert working for Advanced Rotorcraft Technology, Inc., famously explained these problems using mental health metaphors, which I copy below (see Resources for the original article). The following are usage patterns for namespaces that I suggest avoiding.

In a borderline document (presumably a reference to Borderline Personality Disorder), more than one prefix maps to one namespace.

In a neurotic document, the same prefix is used for more than one namespace. In my experience, this pattern is most common where the author goes to some lengths to avoid prefixes: for example, a document in which the default namespace is different depending on where you are in the document is neurotic.

In a psychotic document, two different prefixes are declared for the same namespace in the same scope.

A document is in namespace normal form if all namespace declarations are on the root element and no two prefixes are declared for the same namespace.

Avoid borderline, neurotic, and psychotic documents. Try to stick to documents in namespace normal form wherever possible, because they are simplest to read and to process.

XML namespaces seem simple on their face, but buried in their nuances is the danger of real complexity and clumsiness if you don't take care while using them. Understand thoroughly the meaning, rules, and implications of the various concepts that make up the XML namespaces mechanism, and stick consistently to simple conventions while designing vocabularies using namespaces and creating actual instance documents.
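One way to convince yourself that the patterns above are about readability for humans rather than meaning for parsers is to parse the same element spelled three different ways. A namespace-aware parser (ElementTree here, with an illustrative URI) reduces all of them to one universal name, which is exactly why readers must accept any prefix even while writers should standardize on one:

```python
import xml.etree.ElementTree as ET

# Three surface spellings of the same element; the URI is illustrative.
a = ET.fromstring('<inv:item xmlns:inv="http://example.com/inv"/>')
b = ET.fromstring('<i:item xmlns:i="http://example.com/inv"/>')
c = ET.fromstring('<item xmlns="http://example.com/inv"/>')

# All three parse to the identical universal name.
assert a.tag == b.tag == c.tag == '{http://example.com/inv}item'
print(a.tag)
```

So a borderline document is not wrong, just needlessly confusing; namespace normal form costs a conforming processor nothing but saves human readers a great deal.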
- Don't miss any of the articles in this series on XML design:
  - "When to use elements versus attributes" (developerWorks, March 2004).
  - "Element structures for names and addresses" (developerWorks, August 2004).
- Get the authoritative word on XML namespaces in the W3C's XML Namespaces 1.0 and XML Namespaces 1.1 recommendations.
- Find XML namespaces, among many other technologies, in the author's "Survey of XML standards (part 1)" (developerWorks, January 2004).
- Read James Clark's essay "XML Namespaces," which examines namespaces and introduces a popular notation for describing them.
- If you prefer an introduction through a series of examples, check out ZVON's XML namespaces tutorial.
- Bookmark the XML Namespaces FAQ, maintained by Ronald Bourret.
- Read the author's "Tip: Namespaces and versioning" (developerWorks, June 2002), which introduces a mechanism for using XML namespaces to mark the version of XML formats.
- For another look at the subject, read David Marston's articles "Plan to use XML namespaces, Part 1" and "Plan to use XML namespaces, Part 2" (developerWorks, November 2002).
- Read Joe English's important post "A plea for Sanity" on the XML developer's mailing list.
- Learn more about RDDL in Elliotte Rusty Harold's introduction "RDDL Me This: What Does a Namespace URL Locate?," or just go to the RDDL 1.0 specification, which is simple and readable.
- Find details on URIs in RFC 2396: Uniform Resource Identifiers. See the URN charter for more on URNs and URN namespaces.
http://www.ibm.com/developerworks/xml/library/x-namcar/index.html
crawl-003
refinedweb
1,861
53.31
Affiliate feeds jobs

I need an Android app. I would like it designed and built.

Create primary feeds from WIX website for Google Merchant Center. I wish to have my WIX products online in Google Shopping ads. Both design & development required from scratch. Images will be provided.

The website will require a search function with 4 category options which will pull feeds to gather the information submitted into the search options. SEARCH FUNCTIONS: Date > Type of announcement > Industry > how it will sort the category. PULLED FROM 3RD PARTY FEED SITE. We have a new project starting.

Looking to hire a Google Merchant and promotions expert. Experience in creating data feeds which are compliant with Google Merchant guidelines.

Looking to display latest news headlines from various publishers on site. We have an existing RSS ingester which can be modified, or YQL can be used ([login to view URL]) or another method. Must know JavaScript, some Python.

Our company needs to have someone create pivot tables for some of our inventory data. That way we can get an accurate inventory feed of KIT items. Only bid 20 or under; anything more I will not accept. p... pages 6. fix alignment in some pages (remove some inf Analytics. I need to optimize

This is a one-page deal memo I'd like to be converted to ultradocs from Webmergy wordpress website. It needs to dynamically u... [login to view URL], [login to view URL], noveltysocksus....au, [login to view URL], [login to view URL] & [login to view URL] that are all linked. I currently have a shopping campaign for the Australian site but I would like to add feeds for the other countries and create additional shopping campaigns based on the correct currencies.

import xml feeds to my site ....
My team would like to update our website to include content feeds from a specific niche in the way ... advertising options and features to make dynamic, optimized, feature-rich, interactive websites for the purpose of producing income through advertising options, sales through affiliate links, brand development, and link building / search engine placement from optimization. The sites will house multiple sections: articles (with article creator for paying]

hi, I'm...site - [login to view URL]
https://www.dk.freelancer.com/job-search/affiliate-feeds/
CC-MAIN-2018-39
refinedweb
377
65.42
How to print all the instances and instance states using Python boto3

You can simply loop through all the instances using a for loop. Here is a simple program that you can use after configuring your IAM credentials with the AWS CLI:

import boto3

ec2 = boto3.resource('ec2')
for instance in ec2.instances.all():
    print(instance.id, instance.state)

Hope this helps.

Hey! You can try something like this:

for region in `aws ec2 describe-regions --output text | cut -f3`
do
    echo -e "\nListing Instances in region: '$region'..."
    aws ec2 describe-instances --region $region
done

Here, it loops through all the regions, printing the EC2 instances that exist in them. The following will print the instance status in all the regions:

for region in `aws ec2 describe-regions --output text | cut -f3`
do
    echo -e "\nInstances status in region: '$region'"
    aws ec2 describe-instance-status --region $region --include-all-instances
done
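If you need the lower-level client API instead of the resource API shown above, describe_instances returns reservations that each wrap a list of instances. The sketch below walks that nested shape; the sample dict only mimics the response layout (so it runs without AWS credentials), and the IDs in it are invented. In real code you would use response = boto3.client('ec2').describe_instances().

```python
# Sample data mimicking the shape of boto3's
# ec2_client.describe_instances() response.
response = {
    "Reservations": [
        {"Instances": [
            {"InstanceId": "i-0abc", "State": {"Name": "running"}},
            {"InstanceId": "i-0def", "State": {"Name": "stopped"}},
        ]},
        {"Instances": [
            {"InstanceId": "i-0123", "State": {"Name": "running"}},
        ]},
    ]
}

def instance_states(resp):
    # Flatten Reservations -> Instances into (id, state) pairs.
    return [
        (inst["InstanceId"], inst["State"]["Name"])
        for reservation in resp["Reservations"]
        for inst in reservation["Instances"]
    ]

for instance_id, state in instance_states(response):
    print(instance_id, state)
```

The same flattening loop works unchanged on a real describe_instances response, including paginated ones if you concatenate the Reservations lists from each page.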
https://www.edureka.co/community/31996/how-print-all-the-instance-instance-state-using-python-boto3
CC-MAIN-2019-30
refinedweb
204
69.38
Opened 8 years ago. Closed 7 years ago.

#5845 closed (worksforme): _get_next_or_previous_by_FIELD returns the same row for the first row

Description:

I'm using sqlite3. When I use:

    next = post.get_next_by_date_added()

if post is the first date, next becomes equal to post. Otherwise this works fine. I didn't want to change the generated SQL or anything like that because I'm really new to the framework, so I decided to do a little fixing before returning a value:

    def _get_next_or_previous_by_FIELD(self, field, is_next, **kwargs):
        op = is_next and '>' or '<'
        where = '(%s %s %%s OR (%s = %%s AND %s.%s %s %%s))' % \
            (backend.quote_name(field.column), op,
             backend.quote_name(field.column),
             backend.quote_name(self._meta.db_table),
             backend.quote_name(self._meta.pk.column), op)
        param = str(getattr(self, field.attname))
        q = self.__class__._default_manager.filter(**kwargs).order_by(
            (not is_next and '-' or '') + field.name,
            (not is_next and '-' or '') + self._meta.pk.name)
        q._where.append(where)
        q._params.extend([param, param, getattr(self, self._meta.pk.attname)])
        try:
            i = 0
            while q[i] == self:
                i = i + 1
            return q[i]
        except IndexError:
            raise self.DoesNotExist, "%s matching query does not exist." % self.__class__._meta.object_name

I changed

    return q[0]

into

    i = 0
    while q[i] == self:
        i = i + 1
    return q[i]

Change History (1)

comment:1 Changed 7 years ago by PJCrosier

- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
- Resolution set to worksforme
- Status changed from new to closed

I'm going to be bold and close this because I've been trying hard to break it and I can't; behaviour is as expected on SQLite with the latest trunk. Please re-open if you're still experiencing this and you can be a bit more explicit about how we can replicate it.
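The intent of _get_next_or_previous_by_FIELD can be modeled in plain Python to see why the reporter's while-loop should be unnecessary: rows are ordered by the (field, pk) pair, and "next" means strictly greater than the current row's pair. The sketch below is illustrative only (plain dicts standing in for model instances, not Django's actual SQL); with a strict tuple comparison, the current row can never come back as its own successor.

```python
def get_next_by_field(rows, current, field):
    """Return the row after `current`, ordering by (field, pk).

    Illustrative model of Django's _get_next_or_previous_by_FIELD;
    the strict tuple comparison excludes the current row itself.
    """
    key = lambda row: (row[field], row["pk"])
    later = sorted((r for r in rows if key(r) > key(current)), key=key)
    if not later:
        raise LookupError("matching row does not exist")
    return later[0]

posts = [
    {"pk": 1, "date_added": "2007-11-01"},
    {"pk": 2, "date_added": "2007-11-02"},
    {"pk": 3, "date_added": "2007-11-03"},
]

# For the first post, "next" is the second post, never the first itself.
print(get_next_by_field(posts, posts[0], "date_added"))
```

If a backend returned the current row first, the symptom described in the ticket would appear; the pk tie-break in the WHERE clause is what guards against it when field values collide.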
https://code.djangoproject.com/ticket/5845
CC-MAIN-2015-32
refinedweb
302
56.45
You can read the content of a URL object after connecting to it. You have to call the URL's openStream() method to get a stream from which you can read the contents of the URL. The openStream() method returns a java.io.InputStream object, so reading from a URL is as easy as reading from an input stream. The example below shows sample code:

package com.myjava.url;

import java.io.IOException;
import java.io.InputStream;
import java.net.MalformedURLException;
import java.net.URL;

public class MyUrlContentRead {
    public static void main(String a[]) {
        URL url = null;
        InputStream is = null;
        try {
            url = new URL("");
            is = url.openStream();
            byte[] b = new byte[8];
            int len;
            // Use the byte count returned by read() so stale bytes from a
            // previous, fuller read are not printed on the final chunk.
            while ((len = is.read(b)) != -1) {
                System.out.print(new String(b, 0, len));
            }
        } catch (MalformedURLException ex) {
            ex.printStackTrace();
        } catch (IOException ex) {
            ex.printStackTrace();
        } finally {
            if (is != null) {
                try {
                    is.close();
                } catch (IOException ignored) {
                }
            }
        }
    }
}
http://java2novice.com/java_networking/url_content_read/
CC-MAIN-2020-05
refinedweb
128
63.36
Description:

Changes made in libxml2 between versions 2.6.23 and 2.6.24 (specifically rev. 1.366 of tree.c, which added a return(NULL) in xmlSetProp()) cause attributes with unknown namespaces to be silently dropped instead of added to the document.

The PHP SOAP extension, when making a client request in "encoded" mode, adds xsi:type attributes to nodes (soap.c line 3948). The xmlns:xsi namespace is added later in that function (soap.c line 4073). Paired with the behaviour of the new libxml2 release, this leads to the xsi:type attribute not being added. A possible solution is to move the if (use == ENCODED) block above the for (i = 0; i < arg_count; i++) block in serialize_function_call() in soap.c.

Reproduce code:

$client = new SoapClient("", array("trace" => true));
$client->getProfil(4);

Expected result:

... <ns1:getprofil><profil xsi:type="xsd:int">4</profil></ns1:getprofil> ...

Actual result:

... <ns1:getprofil><profil>4</profil></ns1:getprofil> ...

Fixed in CVS HEAD and PHP_5_2, but not in PHP_5_1.
https://bugs.php.net/bug.php?id=37523
CC-MAIN-2014-10
refinedweb
177
59.9
If you're writing a function that can be implemented as either a member or as a non-friend non-member, you should prefer to implement it as a non-member function. That decision increases class encapsulation. When you think encapsulation, you should think non-member functions.

While discussing the degrees of encapsulation, he explains that the amount of encapsulation in a class is inversely proportional to the number of functions that may break if the implementation of the class has to change. And that being the case, it becomes clear that a class with n member functions is more encapsulated than a class with n+1 member functions.

I was then an avid C++ programmer, and this article hit me like a bolt. This was possibly the first instance of an exposition (that I found) on OO that makes you think away from the kingdom of nouns. Keep the number of verbs hanging off the nouns to a minimum, and use the C++ namespace as the module of encapsulation, so as to keep the class interfaces complete and minimal.

Shortly afterwards, Herb Sutter, in one of his Guru-of-the-Week columns, Monoliths "Unstrung", dissected std::basic_string to identify the set of functions which should have been implemented as member functions, instead of the current count of 103. Very recently, Reg Braithwaite and Buko Obele discussed the inappropriateness of applying the noun/verb metaphor to object-oriented programming. It is not correct to model all verbs hanging off the nouns. Obele goes on to say:

The question of who verbs 'belong to' is always the wrong question; instead it is always a matter of how a new concept can be meaningfully introduced into our existing system.

In fact, long back, the design of the C++ Standard Library was based on the concept of making the algorithms (the verbs) the first-class citizens, implemented generically on the containers that they operate on. It's not the objects carrying the functions that make the OO way of modeling things - the language has to support powerful abstractions that help programmers write extensible code.
It's not the objects carrying the functions that make the OO way of modeling things; the language has to support powerful abstractions that help programmers write extensible code.

The Expression Problem

This has been one of the classical examples used to demonstrate how mainstream OO languages lack abstractions for modeling an extensible solution. The problem is described simply in this paper by Zenger and Odersky:

Suppose we have a datatype which is defined by a set of cases and we have processors which operate on this datatype. There are primarily two directions along which we can extend such a system:
• The extension of the datatype with new data variants,
• The addition of new processors.

The classical OO approach to this problem involves designing a polymorphic hierarchy of datatypes, with each concrete subclass modeling a data variant and implementing the set of processors. With this approach, extending datatypes is easy: just add a variant as another subclass. But extending processors is invasive, violates the Open/Closed Principle and forces you to dig into every existing data variant. A typical fallout of embracing the noun/verb paradigm of modeling.

The alternative, not-strictly-OO approach adopted by programmers to solve these problems is to implement some sort of double dispatch using the Visitor design pattern. This is an example where the solution gets overtly complex and forces you to write code that simulates the run-time type dispatch mechanism of the underlying programming language. The Visitor design pattern is yet another example of a workaround for the noun/verb metaphor in mainstream OO languages. The Visitor abstractions allow a physical separation of the processors from the datatype, but the individual processors still match one-to-one with each individual datatype. In the paper Matching Objects with Patterns, discussing the Scala language capabilities, Burak Emir et al. state this problem with Visitors more succinctly:

"The visitor design pattern causes a relatively high notational overhead for framework construction, because a visitor class has to be defined and matchWith methods have to be provided in all data variants. The pattern matching itself is disciplined but very verbose, especially for deep patterns. Visitors in their standard setting do not maintain representation independence, because case methods correspond one-to-one to data alternatives."

How can we get around this accidental complexity and make our OO code more succinct? The answer is simple: more powerful abstractions.

Scala Pattern Matching

Scala offers pattern matching as a solution to the expression problem. Pattern matching has been one of the key features of many functional languages like Erlang; Scala has brought this feature to the OO paradigm. Martin Odersky has been quite a strong advocate of pattern matching, a feature which many purists consider orthogonal to encapsulation of abstractions. But pattern matching in Scala definitely provides a way out of the typical noun/verb syndrome of today's OO modeling paradigms. Have a look here for a complete example of pattern matching in Scala solving a problem which would have been much more verbose and complex using visitors.

And Java?

Java has so long been the kingpin of the noun kingdom, and there are tons of Java code that model verbs hanging off the nouns. However, with more and more functional paradigms crying out on the sidelines for their entry into the noun kingdom, things look to be changing. I have been dabbling a bit with the prototype of closures released by Neal Gafter, and things have already started looking interesting. Closures in Java are a very, very welcome feature, and the community has already started battling for their introduction in Java 7. Ricky Clarkson has implemented pattern matching in Java using Neal's prototype.
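To make the "case methods correspond one-to-one to data alternatives" criticism concrete, here is a minimal Java sketch of the visitor-based approach to the expression problem. The Expr/Num/Add names are mine, invented for illustration; this is not code from the paper.

```java
// A tiny expression datatype processed through the classic visitor
// (double-dispatch) pattern, to show the boilerplate it entails.
interface Visitor<R> {
    R visitNum(Num n);   // one case method per data alternative
    R visitAdd(Add a);
}

interface Expr {
    <R> R accept(Visitor<R> v);   // the double-dispatch hook
}

class Num implements Expr {
    final int value;
    Num(int value) { this.value = value; }
    public <R> R accept(Visitor<R> v) { return v.visitNum(this); }
}

class Add implements Expr {
    final Expr left, right;
    Add(Expr left, Expr right) { this.left = left; this.right = right; }
    public <R> R accept(Visitor<R> v) { return v.visitAdd(this); }
}

// One processor == one whole visitor class: here, evaluation.
class Eval implements Visitor<Integer> {
    public Integer visitNum(Num n) { return n.value; }
    public Integer visitAdd(Add a) {
        return a.left.accept(this) + a.right.accept(this);
    }
}

public class ExprDemo {
    public static int eval(Expr e) { return e.accept(new Eval()); }

    public static void main(String[] args) {
        Expr e = new Add(new Num(1), new Add(new Num(2), new Num(4)));
        System.out.println(eval(e));   // prints 7
    }
}
```

Adding a new processor (say, pretty-printing) is just another Visitor implementation, which is fine; but adding a new data variant (say, Mul) forces a change to the Visitor interface and to every existing visitor, which is exactly the extensibility problem described above.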
Though at a very early stage, it is indeed very heartening to see more powerful forms of abstraction making their way into the Java programming language. Java, as a platform, has already been enriched with languages like Scala and JRuby contributing many powerful abstractions of functional programming. Reg has demonstrated the Equivalence relationship using the visitor pattern, in a manner conforming to the tradition of verbosity in Java. With pattern matching and closures, things will improve towards more succinctness. Here is a version based on the pattern matching engine of Ricky (it uses the closures-prototype syntax, so it will not compile on a stock javac):

```java
class Main {
    public static void main(String[] args) {
        Collection coll1 = new ArrayList<Integer>() {{
            add(12); add(15); add(10);
        }};
        Collection coll2 = new TreeSet<Integer>() {{
            add(12); add(15); add(10);
        }};

        // equivalence of List-List and List-Set
        System.out.println(equivalent(coll1, coll2)
            .add(List.class, List.class,
                 {List l1, List l2 => CollectionUtils.isEqualCollection(l1, l2)})
            .add(List.class, Set.class,
                 {List l1, Set s1 => CollectionUtils.isEqualCollection(l1, s1)})
            .done());

        Map<Integer, Integer> coll3 = new HashMap<Integer, Integer>() {{
            put(1, 12); put(2, 15); put(3, 10);
        }};

        // equivalence of List-List, List-Set, List-Map
        System.out.println(equivalent(coll2, coll3)
            .add(List.class, List.class,
                 {List l1, List l2 => CollectionUtils.isEqualCollection(l1, l2)})
            .add(List.class, Set.class,
                 {List l1, Set s1 => CollectionUtils.isEqualCollection(l1, s1)})
            .add(List.class, Map.class,
                 {List l1, Map m1 => CollectionUtils.isEqualCollection(l1, m1.values())})
            .done());
    }

    public static <T,R> Matcher<T,R> equivalent(T t1, T t2) {
        return new Matcher<T,R>(t1, t2);
    }

    public static <T,R> Matcher<T,R> match(T t1, T t2) {
        return new Matcher<T,R>(t1, t2);
    }

    public static class Matcher<T,R> {
        public final T t1;
        public final T t2;
        public R r;

        public Matcher(T t1, T t2) {
            this.t1 = t1;
            this.t2 = t2;
        }

        public <U1 extends T, U2 extends T> Matcher<T,R>
                add(Class<U1> aCase, Class<U2> bCase, {U1,U2=>R} f) {
            if (aCase.isInstance(t1) && bCase.isInstance(t2))
                r = f.invoke(aCase.cast(t1), bCase.cast(t2));
            return this;
        }

        public R done() {
            return r;
        }
    }
}
```

Definitely a more succinct way to model equivalence than what we are used to in Javaland. And this is only a start. There are quite a few rough edges to smoothen out, particularly with respect to handling generics and type-erasure issues. But I am sure we will see more and more power of abstraction coming into Java, with people like Neal Gafter and Doug Lea pitching in. A tonne of thanks to these thought leaders for keeping the Java language an interesting platform to play around with.
http://debasishg.blogspot.com/2007/11/towards-more-powerful-abstractions-in.html
I am trying to save objects made available by request.headers in my Flask app. I want to render my index.html on page load, but I also want to grab the visiting user's email so I can use it for other functions / processes.

```python
# routes
@app.route('/')
def index():
    return render_template('index.html')

def find_aad():
    aad_email = request.headers.get('X-MS-CLIENT-PRINCIPAL-NAME')  # AAD email
    return aad_email
```

If I try to run find_aad() on its own:

```python
user_email = find_aad()  # can't run
```

I will get the typical error: Working outside of request context. How can I, on the initial load of the website, capture these headers and save them to an object without having these errors?

Answer

You could get at it this way, perhaps: on that first call to index, you can create a UUID for the "session" and use that as an identifier for the user, then pass that code back inside the rendered UI elements for stashing on the client side. On every subsequent call to the backend, the client sends that UUID with the rest of the request. On those subsequent requests, you can access the email value via that UUID as the key into the data structure you're using to store client information on the backend.

This concept is the idea of a "session" with a "session id" that is common in client/server communications. Using sockets, or possibly even built-in or supplemental libraries for Flask, would probably be a better idea than rolling your own. Sorry if I'm being unhelpful or stupid; it's late where I'm at.

EDIT: By request, here's some simple pseudocode for this:

```python
from flask import Flask, render_template, request
import uuid

...

uuid_to_email = {}

...

@app.route('/')
def index():
    user_id = str(uuid.uuid4())
    uuid_to_email[user_id] = request.headers.get('X-MS-CLIENT-PRINCIPAL-NAME')
    # it is implied that you would then use the uuid in the client-side code
    # to store it and pass it back to the endpoints you want to use it with
    return render_template('index.html', uuid=user_id)
```
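The server-side bookkeeping in the pseudocode above can be exercised with the standard library alone. In this sketch the Flask parts are stubbed out as plain functions; the names are mine, not from the answer.

```python
import uuid

# Server-side store mapping a generated session id to the header value
# captured on the first request (stands in for uuid_to_email above).
uuid_to_email = {}

def handle_index(headers):
    """Simulates the '/' route: mint an id, remember the email header."""
    user_id = str(uuid.uuid4())
    uuid_to_email[user_id] = headers.get('X-MS-CLIENT-PRINCIPAL-NAME')
    return user_id  # in Flask this would be rendered into index.html

def handle_later_call(user_id):
    """Simulates any later endpoint that receives the id back."""
    return uuid_to_email[user_id]

uid = handle_index({'X-MS-CLIENT-PRINCIPAL-NAME': 'alice@example.com'})
print(handle_later_call(uid))  # prints alice@example.com
```

The same round trip is what happens over HTTP: the id travels to the client inside the rendered page and comes back on later requests.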
https://www.tutorialguruji.com/python/flask-how-to-render-a-template-and-save-request-headers-info/
This is David Beckett's personal position paper for the W3C RDF Next Steps Workshop, primarily dealing with how RDF syntaxes (serializations of the RDF graph) could be worked on in a future RDF working group. This is my personal position and not that of my employer.

My background and interest is in the lower levels of the semantic web stack: RDF triple data, URIs, formats and storage, rather than higher levels such as inference, reasoning and trust. I worked on the revision of RDF in 2004 as editor of the RDF/XML (Revised) W3C Recommendation [4] and co-editor of the RDF Test Cases W3C Recommendation [5], and in 2008 I co-authored a W3C Team Note on the Turtle [6] RDF syntax. I have also implemented the RDF model, multiple syntaxes and SPARQL querying in the Redland libraries [7] over the period 2000 to the present.

With the focus on the practical, the approach to updating RDF should be to improve on items that were discovered during implementation. It should NOT add features that have not been tested in multiple systems in practice over some time. This work should not be a research project.

There are a few parts of the RDF model that should be deprecated (or removed, if that seems possible), in particular reification, which turned out not to be widely used, understood or implemented even after the RDF 2004 update.

In terms of additions, there is one major addition that should be made, since toolkits and SPARQL have implicitly or explicitly supported it for some time: named graphs. There should be an RDF data-model concept of a graph with a name, and a set of graphs forming a dataset. The use cases can be taken from the Linked Data community and SPARQL query execution, especially the forthcoming SPARQL 1.1 changes.
This might mean turning an RDF statement into a 4-part "quad", which is already a quite common implementation technique: RDF triples stored in "quadstores" rather than "triplestores". Other model choices are possible, such as triples belonging to (contained by) a graph with a name. Graph names have mostly been seen as URIs (IRIs), sometimes blank nodes, but hardly ever as RDF literals.

The RDF statement (subject, predicate, object) has always been asymmetric, in that there are RDF terms allowed in the object that cannot be put into the subject or predicate. It would be worth considering making an RDF statement a 3-tuple of any RDF term. That would allow blank-node predicates, literal subjects and literal predicates. However nice this would be for the semantics, the major problem is that most existing serializations would not support it; RDF/XML especially would be hard to update to fix this (see below). My position is that if this change is made, the consequences for serialization should be seriously considered.

There are some minor additions and corrections that can be made to the model, such as replacing "RDF URI References" with "IRI References".

Syntaxes

I was involved with specifying the N-Triples, RDF/XML and Turtle formats as well as implementing other RDF syntaxes. In this section I outline my position on future RDF syntaxes.

N-Triples

In general: leave it alone, it works well for the job it was designed for. If the RDF model changes to a quad (4-ary RDF term) model, the specification should be updated for that. If the model changes to a triple plus scoped-graph model, then something more like Turtle would be appropriate to use. If the specification does have to change for model needs, then the major thing that people have added to N-Triples is prefixes (a subset of Turtle and N3).
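For illustration (the example IRIs are invented), here is one statement as a plain N-Triples line, the same statement with a Turtle-style @prefix abbreviation, and an N-Quads-style quad carrying a fourth graph-name term:

```
# N-Triples: one statement per line, full IRIs, terminated by " ."
<http://example.org/book/1> <http://purl.org/dc/elements/1.1/title> "RDF Primer" .

# The same statement with a prefix directive (a Turtle/N3 subset)
@prefix dc: <http://purl.org/dc/elements/1.1/> .
<http://example.org/book/1> dc:title "RDF Primer" .

# A quad: subject, predicate, object plus a graph name
<http://example.org/book/1> <http://purl.org/dc/elements/1.1/title> "RDF Primer" <http://example.org/graph/g1> .
```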
This can be seen in several existing specifications such as the RDF Primer, and it is also widely used when teaching RDF (URIs are too long for slides and for new users). If this is done, the best approach would be to add the Turtle/N3-like @prefix and XML-like QNames (CURIEs?), although heading down that route will need a careful explanation of namespaces, prefixes and allowed names.

In retrospect, if there is one thing I would change if I were re-specifying N-Triples now, it would be to move it from 7-bit ASCII to Unicode UTF-8. This will introduce the need for new test cases to deal with escaping and UTF-8 issues (bad UTF-8 encoding, encoding of delimiters, etc.). I would not change the line-based specification, since that has proved very useful for UNIX command-line filtering and streaming approaches to RDF processing.

In terms of presentation, the current specification should be moved to a separate REC-track document, any errata folded in, and the existing test cases made more prominent.

RDF/XML

RDF/XML [4] has many flaws that I discussed in detail in 2003 in [1]. To the list in that paper, add the lack of support for named graphs. My position is to leave this format alone and not try to alter it. Not that it is perfect, but it works for (part of) the job it was designed for, machine-to-machine transmission of a single RDF graph, and has been widely implemented and tested. The current document could be updated for clarity, fixing some errata and ambiguity. It might need a better way to explain the syntax-to-RDF-triples mapping, although the current one plus the test cases has seemed fine over the years.

If the specification has to be changed, make the syntax simpler by deprecating or removing these parts of the syntax:
• rdf:bagID: deprecate this feature, which was needed only for reification.
Other XML syntaxes

There have been several attempts over the years to make new XML syntaxes for RDF graphs, such as RXR in [2] by the author, TriX [8] (an XML version of Turtle) and, more recently, GRIT [9]. None of these gained any substantial traction in implementations. The problem of writing down a graph in a sequential document representing a tree, such as the XML DOM, has proved just too hard to do clearly and easily. I recommend that any future WG does NOT attempt to make a new XML syntax, even if the RDF model changes.

The current state of the art in data-model syntaxes is in the area of textual syntaxes such as JSON or Turtle that are both easy for humans to create and read, and possible for machines to interpret. The focus should be on making it easy for people, which means "not XML" in 2010.

It should be clear from this that this is why I created Turtle as a conservative subset of N3, and I contend that this syntax style and approach was successful: Turtle has become widely used and implemented, even without a W3C REC-track document available to define it. The next section discusses Turtle specification needs.

One issue for new syntaxes is the following: should new syntaxes allow encoding a single graph, a set of graphs or both? There are pros and cons to all of these approaches, since sometimes you want to know you have just one graph at hand. If a graph name is always a document URI, a syntax that is a single graph per document could work, with the graph name embedded inside; for example, in a similar fashion to @base in Turtle, there could be an @graph directive to name the graph in the current document.

Turtle

Turtle [6] has been very successful when measured by number of uses in explaining RDF, in examples in specifications, and in implementations. It has been recognized as an easy-to-author and easy-to-read format.
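As a flavour of the readability being claimed, here is a small Turtle example (the data is invented):

```turtle
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix ex:   <http://example.org/people/> .

ex:dave a foaf:Person ;
    foaf:name  "Dave" ;
    foaf:knows ex:alice , ex:bob .
```

The `;` repeats the subject and the `,` repeats both subject and predicate, which is what keeps hand-written Turtle terse.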
The current Team Note document has issues that need resolving, especially in the area of providing a much clearer mapping to RDF triples, although the test cases have been sufficient for implementers to figure that out. Turtle needs better alignment with the SPARQL triple-pattern language, since there are differences in the QName/CURIE formats as well as some other minor differences. Turtle's design has been conservative (standardise what people use) rather than adding new syntax items and hoping people use them.

JSON

A standard JSON encoding of RDF is possible to create, and it is a good idea to better align with the current web-development language space focused on JavaScript. JSON does, however, make encoding RDF rather verbose and harder to read, since the format includes neither native URI datatype support nor ways to abbreviate URIs. There have been several approaches to improve on this via QName (CURIE) or other abbreviations, with moderate success. These tend to generate patterns that look like Turtle written in JSON, with prefixes, blocks and sequences of predicate/objects or objects. This work should be done with a strong focus on usability in JavaScript, and it may even be worth creating a standard JS API for the RDF graph. It should take note of, and potentially be based on, existing RDF-in-JSON work such as RDF/JSON [10], Freebase/Acre [11], irON [12], SIMILE Exhibit [14] and RDFj [15], and should be aligned with the SPARQL JSON result format [13] that may be developed by the current SPARQL WG. The proliferation of formats here indicates that a standardization effort may be of great value.

Binary

Do not do this. The case for binary XML has been slowly made and had little take-up. There are other approaches, such as portable application object serializations (Protocol Buffers, Thrift), that can be used in cooperation with streaming compression libraries, all of which are widely available.
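Purely as an illustration of the "Turtle written in JSON" pattern mentioned above (this shape is invented for discussion and is not any of the cited proposals):

```json
{
  "prefixes": { "foaf": "http://xmlns.com/foaf/0.1/" },
  "http://example.org/people/dave": {
    "foaf:name":  [ { "type": "literal", "value": "Dave" } ],
    "foaf:knows": [ { "type": "uri", "value": "http://example.org/people/alice" } ]
  }
}
```

The prefix map and the nested subject/predicate/object blocks are what make such encodings readable at the cost of a non-trivial mapping back to triples.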
Conclusions

My position is that the RDF next steps should be cautious and primarily (90%) based on existing implementation experience, not on research. If it does not have two or three major, complete and independent implementations today, it should not be done.

References

[1] A retrospective on the development of the RDF/XML Revised Syntax, David Beckett, ILRT Tech Report, University of Bristol, 2003.
[2] Modernising Semantic Web Markup, David Beckett, paper presented at XML Europe 2004.
[3] RDF Syntaxes 2.0, David Beckett, January 2010.
[4] RDF/XML Syntax Specification (Revised), David Beckett (ed.), W3C Recommendation, 10 February 2004.
[5] RDF Test Cases, Jan Grant and David Beckett (eds.), W3C Recommendation, 10 February 2004.
[6] Turtle - Terse RDF Triple Language, David Beckett and Tim Berners-Lee, W3C Team Submission, 14 January 2008.
[7] Redland Libraries, David Beckett.
[8] RDF Triples in XML, J. J. Carroll and P. Stickler, HP Labs Technical Report HPL-2003-268, 11 February 2004.
[9] GRIT - Grokkable RDF Is Transformable, Niklas Lindström, January 2010.
[10] RDF/JSON, Talis Inc.
[11] Freebase Data, Metaweb.
[12] irON, Frédérick Giasson and Michael Bergman.
[13] Serializing SPARQL Query Results in JSON, Kendall Grant Clark, Lee Feigenbaum and Elias Torres (eds.), W3C Working Group Note, 18 June 2007.
[14] SIMILE Exhibit JSON Format, MIT, April 2008.
[15] RDFj, Mark Birbeck, December 2009.
http://www.w3.org/2009/12/rdf-ws/papers/ws11
I want to create a standalone app from a UI built with the UI Editor; the UI work is done in the main script file. I am able to build and export my target .ipa and install it on a jailbroken device. I have my Python script named Script.py and my UI file named Script.pyui. No matter how I define a path for the .pyui file, the call to ui.load_view() consistently fails and I see the following error message on my device:

"Error (line nn): UI file could not be loaded"

The nn will be the actual line number of the ui.load_view() call in my script. I presume this error message is coming from the ui module itself. Any suggestions on how to get around this problem and load the UI from a file in a standalone app? The script and UI file load and run fine within Pythonista itself. I have tried the following, without luck, for a file actually named Script.pyui called from the script Script.py:

```python
ui.load_view()
ui.load_view("Script")
ui.load_view(".\Script")
ui.load_view("Script.pyui")
ui.load_view(".\Script.pyui")
```

Sorry... I fixed the script above. Please try again with this new formulation (double backslashes).

Here is a verbose version of ui.load_view() that might help. It takes the real, absolute path of this script and adds 'ui' to it. It prints out what that new filepath is and whether that file exists or not. It tries to ui.load_view() it, and if that fails it repeats the process with all / converted to \.

```python
def ui_load_view_verbose(pyui_filepath=None):
    pyui_filepath = pyui_filepath or os.path.realpath(os.path.abspath(__file__)) + 'ui'
    fmt = 'Attempting to load_view({}) -- File {}found.'
    print(fmt.format(pyui_filepath, '' if os.path.isfile(pyui_filepath) else 'not '))
    try:
        return ui.load_view(pyui_filepath)
    except ValueError as e:
        print(e)
        return None if '\\' in pyui_filepath else ui_load_view_verbose(pyui_filepath.replace('/', '\\'))

# replace your current call to ui.load_view() with the following:
v = ui_load_view_verbose()  # with no parameters!!
if not v:
    print('Bummer! ui_load_view_verbose() did not work.')
    sys.exit()
```

If this fails, please send us the printed output.

One problem with using __file__ is if os.chdir has been called. __file__ only has the relative path from the time you import it. If you chdir after import, you can't find it again. I struggled with this problem when making a plugin for my JavaScript editor script... The calling script had imported the module, then changed directory. In retrospect, I probably could have done something in __init__, but here's what I ended up with. So possibly using inspect to locate things might be another route:

```python
origPath = os.path.dirname(inspect.stack()[-1][1])
fullScriptPath = os.path.join(origPath, os.path.splitext(__file__)[0])
```

Does sys.argv[0] suffer from the same problem as __file__?

Try this, which will display all files ending with pyui:

```python
def findpyui():
    import os
    startingpath = os.path.expanduser('~')
    # other things to try....
    # startingpath = '.'
    # startingpath = '..'  # i.e. in Pythonista, start above Documents, to see into Library, etc.
    print startingpath
    for root, dirs, files in os.walk(startingpath):
        for name in files:
            if name.endswith('pyui'):
                print os.path.join(os.path.abspath(root), name)

findpyui()
```

The solution above worked. Thanks.

What was the path of the .pyui file??
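As an aside on the os.chdir pitfall mentioned above, the effect can be demonstrated with the standard library alone (temporary directories stand in for the real script folder and the host app's working directory; no Pythonista ui module needed):

```python
import os
import tempfile

script_dir = tempfile.mkdtemp()   # pretend this is where Script.py lives
other_dir = tempfile.mkdtemp()    # pretend the host app chdir'd here

# Create a dummy .pyui file next to the "script".
pyui = os.path.join(script_dir, 'Script.pyui')
with open(pyui, 'w') as fh:
    fh.write('[]')

# Capture an absolute path while the location is still known.
abs_pyui = os.path.abspath(pyui)

os.chdir(other_dir)
# A bare relative name no longer resolves after the chdir...
assert not os.path.isfile('Script.pyui')
# ...but the absolute path captured earlier still does.
assert os.path.isfile(abs_pyui)
print('absolute path survives chdir:', abs_pyui)
```

This is why capturing os.path.abspath(...) early (or using the inspect trick above) is more robust than relying on relative paths.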
https://forum.omz-software.com/topic/1000/untitled/21
Create a blog using Nikola static website generator

If you don't know what Nikola is: it is a static website/blog generator (like Gatsby and other tools). It's written in Python and works out of the box for rendering Markdown, reStructuredText, LaTeX math formulas and Jupyter notebook files.

I like to understand what I am using, and push it to its limits to really get what I want from it, and making a blog with Nikola was no exception. Here I have tried to summarize all the information I found and all the experimentation I did. I hope you'll enjoy it! 🙏🏼

1) Installation

The first step is to have Python 3 installed on your computer; I recommend using virtual environment management. Once you have created and activated your virtual environment, install Nikola with `pip install Nikola`.

2) Create the blog

After installing Nikola, creating your site will be very easy: just use the command `nikola init <directory_name>`. You can add the `--demo` argument if you want a website built with demo content.

All the configuration is done in a single conf.py file, at the root of your blog folder.

You can now build your site and see how it looks. Use the command `nikola auto` to run a server with automatic rebuilds when a change is detected in your files. Visit http://127.0.0.1:8000 to see your site.

3) Add a Post

Now if you want to add a post to your blog, you should use the command `nikola new_post` (the default uses reStructuredText format; add `-f markdown` if, like me, you prefer to write in Markdown). The CLI will ask for the title of your blog post and then create the file in the posts folder.
4) Enable Jupyter Notebook file format

Just add *.ipynb to the recognizable formats in conf.py:

```python
POSTS = (
    ("posts/*.rst", "blog", "post.tmpl"),
    ("posts/*.md", "blog", "post.tmpl"),
    ("posts/*.txt", "blog", "post.tmpl"),
    ("posts/*.html", "blog", "post.tmpl"),
    ("posts/*.ipynb", "blog", "post.tmpl"),  # new line
)
PAGES = (
    ("pages/*.rst", "", "page.tmpl"),
    ("pages/*.md", "", "page.tmpl"),
    ("pages/*.txt", "", "page.tmpl"),
    ("pages/*.html", "", "page.tmpl"),
    ("pages/*.ipynb", "", "page.tmpl"),  # new line
)
```

You can create a blog post with `nikola new_post -f ipynb`, or add your Jupyter notebook to your posts folder. If you don't let Nikola create the file for you, don't forget to add and configure these lines in the metadata of your notebook file:

```json
"nikola": {
    "category": "",
    "date": "2020-03-28 16:27:51 UTC+01:00",
    "description": "",
    "link": "",
    "slug": "jupyter-notebook-test",
    "tags": "",
    "title": "Jupyter Notebook Test",
    "type": "text"
}
```

5) Using Markdown for your posts

Nikola handles Markdown files by default. The meta lines are auto-generated when you use `nikola new_post`, but I prefer to do it differently. Add markdown.extensions.meta to your conf.py file:

```python
MARKDOWN_EXTENSIONS = ['markdown.extensions.fenced_code',
                       'markdown.extensions.codehilite',
                       'markdown.extensions.extra',
                       'markdown.extensions.meta']
```

Now you can simply add these lines on top of your Markdown files, Pelican-style, to give Nikola all the information it needs to build your post:

```text
Title: Test post in markdown
Date: 2020-04-01
Slug: test-post
Tags: markdown, test
Categories: Tutorial
```

In my situation I decided to use pandoc instead of the default Markdown compiler. I did this because I often have code blocks nested in numbered or bullet lists, and the default Markdown compiler does not render those properly. It also sometimes loses numbered lists, whereas pandoc does an absolutely great job. Thanks to this great blog post that explained how to do it!
To use pandoc instead of the default Markdown compiler, you first need to install it. You can use `brew install pandoc` if you have a Mac, or look here for more instructions. Then you can change these lines in conf.py:

```python
COMPILERS = {
    "rest": ('.rst', '.txt'),
    # "markdown": ('.md', '.mdown', '.markdown'),
    "textile": ('.textile',),
    "txt2tags": ('.t2t',),
    "bbcode": ('.bb',),
    "wiki": ('.wiki',),
    "ipynb": ('.ipynb',),
    "html": ('.html', '.htm'),
    # PHP files are rendered the usual way (i.e. with the full templates).
    # The resulting files have .php extensions, making it possible to run
    # them without reconfiguring your server to recognize them.
    "php": ('.php',),
    # Pandoc detects the input from the source filename
    # but is disabled by default as it would conflict
    # with many of the others.
    "pandoc": ('.md', 'txt'),
}

...

PANDOC_OPTIONS = ['-f', 'gfm', '--toc', '-s']
```

See how I commented out the markdown compiler and uncommented pandoc for .md files. The last two PANDOC_OPTIONS (--toc and -s) are used to automatically generate a table of contents in the generated HTML output.

Once you use pandoc for compiling your Markdown files, you need to use the command `nikola new_post -f pandoc` (not markdown any more) to create a new blog post in Markdown.

Adding pandoc as the Markdown compiler is unfortunately not enough, because we lose the ability to use the CODE_COLOR_SCHEME = monokai option in conf.py. One solution is to use pandoc-generated CSS for one of its code highlight themes (kate in my case).
Create a custom.css file in files/assets/css/ as explained here and add this CSS code:

```css
code { white-space: pre-wrap; }
span.smallcaps { font-variant: small-caps; }
span.underline { text-decoration: underline; }
div.column { display: inline-block; vertical-align: top; width: 50%; }
a.sourceLine { display: inline-block; line-height: 1.25; }
a.sourceLine { pointer-events: none; color: inherit; text-decoration: inherit; }
a.sourceLine:empty { height: 1.2em; position: absolute; }
.sourceCode { overflow: visible; }
code.sourceCode { white-space: pre; position: relative; }
div.sourceCode { margin: 1em 0; }
pre.sourceCode { margin: 0; }
@media screen {
  div.sourceCode { overflow: auto; }
}
@media print {
  code.sourceCode { white-space: pre-wrap; }
  a.sourceLine { text-indent: -1em; padding-left: 1em; }
}
pre.numberSource a.sourceLine { position: relative; }
pre.numberSource a.sourceLine:empty { position: absolute; }
pre.numberSource a.sourceLine::before {
  content: attr(data-line-number);
  position: absolute;
  left: -5em;
  text-align: right;
  vertical-align: baseline;
  border: none;
  pointer-events: all;
  -webkit-touch-callout: none;
  -webkit-user-select: none;
  -khtml-user-select: none;
  -moz-user-select: none;
  -ms-user-select: none;
  user-select: none;
  padding: 0 4px;
  width: 4em;
  background-color: #ffffff;
  color: #a0a0a0;
}
pre.numberSource { margin-left: 3em; border-left: 1px solid #a0a0a0; padding-left: 4px; }
div.sourceCode { color: #1f1c1b; background-color: #ffffff; }
@media screen {
  a.sourceLine::before { text-decoration: underline; }
}
code span. { color: #1f1c1b; } /* Normal */
code span.al { color: #bf0303; background-color: #f7e6e6; font-weight: bold; } /* Alert */
code span.an { color: #ca60ca; } /* Annotation */
code span.at { color: #0057ae; } /* Attribute */
code span.bn { color: #b08000; } /* BaseN */
code span.bu { color: #644a9b; font-weight: bold; } /* BuiltIn */
code span.cf { color: #1f1c1b; font-weight: bold; } /* ControlFlow */
code span.ch { color: #924c9d; } /* Char */
code span.cn { color: #aa5500; } /* Constant */
code span.co { color: #898887; } /* Comment */
code span.cv { color: #0095ff; } /* CommentVar */
code span.do { color: #607880; } /* Documentation */
code span.dt { color: #0057ae; } /* DataType */
code span.dv { color: #b08000; } /* DecVal */
code span.er { color: #bf0303; text-decoration: underline; } /* Error */
code span.ex { color: #0095ff; font-weight: bold; } /* Extension */
code span.fl { color: #b08000; } /* Float */
code span.fu { color: #644a9b; } /* Function */
code span.im { color: #ff5500; } /* Import */
code span.in { color: #b08000; } /* Information */
code span.kw { color: #1f1c1b; font-weight: bold; } /* Keyword */
code span.op { color: #1f1c1b; } /* Operator */
code span.ot { color: #006e28; } /* Other */
code span.pp { color: #006e28; } /* Preprocessor */
code span.re { color: #0057ae; background-color: #e0e9f8; } /* RegionMarker */
code span.sc { color: #3daee9; } /* SpecialChar */
code span.ss { color: #ff5500; } /* SpecialString */
code span.st { color: #bf0303; } /* String */
code span.va { color: #0057ae; } /* Variable */
code span.vs { color: #bf0303; } /* VerbatimString */
code span.wa { color: #bf0303; } /* Warning */
```

Now your code blocks should be highlighted. It's up to you to customize it further, for example to change the background color.
I also wanted to style the table of contents:

```css
.p-summary.entry-summary #TOC {
    display: None;  /* disable showing the TOC in my blog home page teasers */
}

#TOC {
    background-color: #e9f8f8;
    border-radius: 3px;
    padding: 18px 0px 1px 6px;
    margin-bottom: 20px;
}
```

6) Pages vs Posts

Nikola has two types of entries for your website, POSTS and PAGES.

POSTS are your blog posts. They are added to feeds, indexes, tag lists and archives.

PAGES are generally static pages that you build when you design your website. Once your design is done, you should not be making many new pages. For example, in PAGES I have the following pages:

- Resume (html)
- Cheatsheet (html)

7) Customizing the navigation bar

Customization of the top navigation bar is done, again, in conf.py:

```python
NAVIGATION_LINKS = {
    DEFAULT_LANG: (
        ("/resume/", "Resume"),
        ("/cheatsheet/", "Cheatsheet"),
        ("/archive/", "Archive"),
    )
}
```

This is an example of how I've done mine.

8) Indexes as a list of links or list of posts

Nikola allows you to categorize posts in a number of ways, such as category, tags, archives, and authors. For each means of categorizing, an associated index page is generated so that viewers can see all available posts (*_PAGES_ARE_INDEXES = True) or links (*_PAGES_ARE_INDEXES = False) associated with that category. You can choose for these indexes to produce a list of the full posts (or teasers instead of the full posts) or a list of links to each post. Depending on your needs, you can change any of the following index settings in conf.py to True:

```python
CATEGORY_PAGES_ARE_INDEXES = False
TAG_PAGES_ARE_INDEXES = False
ARCHIVES_ARE_INDEXES = False
AUTHOR_PAGES_ARE_INDEXES = False
```

This is what makes Nikola so customizable. For example, since there are fewer categories, each with more posts under it, you might want them as a list of links. Alternatively, there are usually more tags, with fewer posts under each, so there you may want a list of posts.
9) Enable a comment system

Because static sites do not have databases, you need to use a third-party comment system, as documented on the official doc.

- On Disqus, select "Create a new site" (or visit).
- During configuration, take note of the "Shortname" you use. The other settings are not very important.
- At "Select a plan", the basic free plan is enough.
- At "Select Platform", just skip the instructions. There is no need to insert the "Universal Code" manually, as it is built into Nikola. Keep all defaults and finish the configuration.

In conf.py, add your Disqus shortname.

Deploy to GitHub and the comment system should now be enabled.

10) Deploying your website

My workflow is separated in two parts:
- GitHub Pages
- Netlify

GitHub Pages

I decided to host my blog files on GitHub and use their free service, GitHub Pages, for deploying my blog on this address. For that you will need a GitHub account with GitHub Pages enabled. Once you have created your repository as explained for GitHub Pages, initialize git in your source directory.

conf.py should have the following settings:

GITHUB_SOURCE_BRANCH = 'src'
GITHUB_DEPLOY_BRANCH = 'master'
GITHUB_REMOTE_NAME = 'origin'
GITHUB_COMMIT_SOURCE = True

Create a .gitignore file with the following entries as a minimum. You may use gitignore.io to generate a suitable set of .gitignore entries for your platform by typing in the relevant tags (e.g., mac, nikola, jupyternotebooks).

The nikola github_deploy command will create a src branch that contains your sources (i.e., *.ipynb, *.md) and a master branch that only contains the HTML output pages that are viewed by the browser.

Netlify extra steps

For all these reasons I wanted to use Netlify for deploying my blog with a custom domain. I simply configured a trigger on Netlify to start building my blog when it detects any new push on my GitHub blog repository.
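The "add your Disqus shortname" step above maps to two settings in Nikola's sample conf.py. A minimal sketch, where the shortname value is a placeholder (use the one you noted during your Disqus configuration):

```python
# conf.py -- third-party comment system (Disqus).
# The ID is the "Shortname" from your Disqus site configuration.
COMMENT_SYSTEM = "disqus"
COMMENT_SYSTEM_ID = "my-disqus-shortname"  # placeholder
```

After the next nikola github_deploy, the Disqus thread widget is rendered below each post.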
It is as simple as that: everything pretty much works out of the box, and the service provided by Netlify has been very stable, giving me very good SEO statistics.

11) Archives

Nikola has many options for how to display your archive of posts. I've kept it pretty simple on my end.

# Create per-month archives instead of per-year
CREATE_MONTHLY_ARCHIVE = False
# Create one large archive instead of per-year
CREATE_SINGLE_ARCHIVE = False
# Create year, month, and day archives each with a (long) list of posts
# (overrides both CREATE_MONTHLY_ARCHIVE and CREATE_SINGLE_ARCHIVE)
CREATE_FULL_ARCHIVES = False
# If monthly archives or full archives are created, also add one archive per day
CREATE_DAILY_ARCHIVE = False
# Create previous, up, next navigation links for archives
CREATE_ARCHIVE_NAVIGATION = False
ARCHIVE_PATH = "archive"
ARCHIVE_FILENAME = "archive.html"

12) Content Footer

I use the recommended license:

<img alt="Creative Commons License BY-NC-SA" style="border-width:0; margin-bottom:12px;" src=""></a>"""

You might want a specific message (i.e., license, copyright, contact e-mail address) in the footer of every page, and this is where to do it. In this case I've added a Mailchimp link so that readers can subscribe to my page.

CONTENT_FOOTER = '''
<center>
''' + MAILCHIMP_SIGNUP + '''
<br>
Contents © {date} <a href="mailto:{email}">{author}</a>
<a href=""><i class="fab fa-dev" title="mattioo's DEV Profile"></i></a>
- Powered by <a href="" rel="nofollow">Nikola</a> {license}
- favicon <a href="">FlatIcon</a>
</center>
<br>
'''

13) Rendering math equations

I have enabled KaTeX because it's prettier with the $...$ syntax, as that's more similar to LaTeX.
USE_KATEX = True
KATEX_AUTO_RENDER = """
delimiters: [
    {left: "$$", right: "$$", display: true},
    {left: "\\\\[", right: "\\\\]", display: true},
    {left: "\\\\begin{equation*}", right: "\\\\end{equation*}", display: true},
    {left: "$", right: "$", display: false},
    {left: "\\\\(", right: "\\\\)", display: false}
]
"""

14) Implementing Google tools

Google search

I've enabled a Google search form to search within my site.

<div class="form-group">
    <input type="text" name="q" class="form-control mr-sm-2" placeholder="Search">
</div>
<button type="submit" class="btn btn-secondary my-2 my-sm-0">
    <i class="fas fa-search"></i>
</button>
<input type="hidden" name="sitesearch" value="%s">
</form>
""" % SITE_URL

Google Analytics

Google Analytics can be added to the bottom of <body> to function.

</script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', '<YOUR GOOGLE ANALYTICS IDENTIFIER>');
</script>
"""

15) Customizing your blog

Theme & Template customization

To create a new theme, we can use the following command, which creates a new folder in themes called brainsorting. It uses the mako templating engine, and the parent theme is bootstrap4. We don't necessarily want to create a theme from scratch, so we base it off the bootstrap4 theme (or whatever theme you want) and make the adjustments that we want.

We can also copy over any templates from the parent theme that we want to modify, using the following command:

If you want to examine all the components of the parent theme (i.e., bootstrap4 in my case), the following command gives you the path to the parent theme for you to explore.

The full list of templates is shown below:

.
├── authors.tmpl
├── base_helper.tmpl
├── base.tmpl
├── gallery.tmpl
├── index_helper.tmpl
├── listing.tmpl
├── pagination_helper.tmpl
├── post.tmpl
├── tags.tmpl
└── ui_helper.tmpl

For example, if you want to make the nav bar sticky at the top, so that readers can still access the menu bar when they scroll down, you need to update the base.tmpl file as shown below with the class sticky-top. To get the base.tmpl file in your template folder, use nikola theme --copy-template=base.tmpl.

Setting your favicon

Pick an icon and store it in the folder 'files/', then edit conf.py as follows:

Tweaking the CSS

This is quite easy and can be done by dropping a custom.css file at files/assets/css/custom.css. It will be loaded from the <head> block of the site when built. If you really want to change the pages radically, you will want a custom theme.

Files and Listings

These two folders are used to transfer any file or code file to the output folder (your generated website). By default, anything in the files folder will be available at the root of your website. Anything in listings or a subfolder will be available in output/listings. This last folder lets users view and download any code file you put in it.

Date formatting

You can customize how timestamps are displayed on your blog posts. I like to set it so dates are displayed in a more human-friendly way, like "3 months ago". Hovering over the date with the mouse displays the exact date.

Using Mailchimp so users can subscribe

Mailchimp allows you to run e-mail campaigns and contact subscribers when you have new content on your site. See Getting Started with Mailchimp for more detailed instructions. After you have created your account you can create your signup form and get a code that looks like this one.
Create a MAILCHIMP_SIGNUP variable in conf.py and paste this code:

<form action="<YOUR MAILCHIMP IDENTIFIER>" method="post" id="mc-embedded-subscribe-form" name="mc-embedded-subscribe-form" class="validate" target="_blank" novalidate>
<div id="mc_embed_signup_scroll">
<label for="mce-EMAIL">Subscribe</label>
<input type="email" value="" name="EMAIL" class="email" id="mce-EMAIL" placeholder="email" required>
<!-- real people should not fill this in and expect good things - do not remove this or risk form bot signups-->
<div style="position: absolute; left: -5000px;" aria-hidden="true"><input type="text" name="b_3ffb6593478debd1efe5bf3e7_e432d28210" tabindex="-1" value=""></div>
<div class="clear"><input type="submit" value="Subscribe" name="subscribe" id="mc-embedded-subscribe" class="button"></div>
</div>
</form>
</div>
<!--End mc_embed_signup-->
"""

To make use of the pace.js loading library, I added this code in the base_helper.tmpl file:

# below the <title> in the base_helper.tmpl file
{% if use_pace %}
<script src="/assets/js/pace.min.js"></script>
<link href="/assets/css/pace.css" rel="stylesheet" />
{% endif %}

I then activate it in the conf.py settings.

Remove .html suffix from archive.html

This is just a small annoyance, but by default the archive is located at /archive.html. If you want it to be at /archive/, add the following lines to your conf.py:

Remember to also fix the navigation links:

NAVIGATION_LINKS = {
    DEFAULT_LANG: (
        ("/pages/resume/", "My Resume"),
        ("/pages/cheatsheet/", "Cheat Sheet"),
        ("/archive/", "Archive"),
    ),
}

Short blog post teaser in index page

The index.tmpl will generate a list of the posts associated to the tag/category/year/author. This index can show either the entire posts or just teasers.
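Several of the conf.py fragments referenced in the sections above (favicon, human-friendly dates, pace.js activation, the /archive/ path) did not survive extraction. As a hedged sketch: the setting names below exist in Nikola's sample conf.py, but the exact values are assumptions, not the author's originals:

```python
# conf.py -- hedged reconstruction of fragments elided above; values are assumptions.

# "Setting your favicon": icon dropped under files/ so it lands at the site root.
FAVICONS = (
    ("icon", "/favicon.ico", "16x16"),
)

# "Date formatting": fuzzy dates such as "3 months ago" (exact date on hover).
DATE_FANCINESS = 2

# "pace.js": expose the flag tested by the template snippet in base_helper.tmpl.
GLOBAL_CONTEXT = {
    "use_pace": True,
}

# "Remove .html suffix": serve the archive at /archive/ instead of /archive.html.
PRETTY_URLS = True
ARCHIVE_PATH = "archive"
ARCHIVE_FILENAME = "index.html"
```

These are plain Python assignments, so they can be pasted anywhere in conf.py; Nikola reads the module's top-level names at build time.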
To show just a teaser of each post, set conf.py as follows:

Don't forget to mark in your post file where the teaser ends. In Markdown, HTML or ipynb, do it like this:

In reStructuredText, mark the end of your teasers with:

If you are using teasers, the default is a "Read more…" link to access the full post. To make it more informative, you can have statements such as "XX minute read…", set in conf.py as shown below.

INDEX_READ_MORE_LINK = '<p class="more"><a href="{link}">{reading_time} minute read…</a></p>'
FEED_READ_MORE_LINK = '<p><a href="{link}">{read_more}…</a> ({min_remaining_read})</p>'

16) Optimizing your blog

Filters

I want to be sure that all the files on my blog are optimized (such as .html, .js, .jpeg, .png, .css), so I make use of the filters functionality. In my conf.py I have the following FILTERS settings:

FILTERS = {
    ".html": ["filters.typogrify"],
    ".css": ["filters.yui_compressor"],
    ".jpg": ["jpegoptim --strip-all -m75 -v %s"],
    ".png": ["filters.optipng"]
}

For them to work I need to have typogrify, yui_compressor, jpegoptim, and optipng installed on my machine. These instructions are for a Mac using Homebrew:

brew install yuicompressor
brew install optipng
brew install jpegoptim
poetry add typogrify # or "pip install typogrify" if you don't use poetry

Posting automatically on Medium

To publish your Nikola posts on Medium there is this plugin. You just need to install it with this command:

Add a medium.json file in your blog folder with a generated access token that you can get here:

Then simply add the metadata medium : yes in the blog post you want to publish on Medium and run the command:

Posting automatically on Dev.to

To publish your Nikola posts on Dev.to there is this plugin.
You just need to install it with this command:

Add a devto.json file in your blog folder with a generated access token that you can get here:

Then simply add the metadata devto : yes in the blog post you want to publish on Dev.to and run the command:

Other plugins I didn't try:
- similarity: finds posts that are similar to the one being read. Requires installing Natural Language Processing packages such as gensim.

Conclusion

That's it for the big tutorial. I hope I was precise enough to give you all the tools to make a blog very much the way you want it. Some parts of this article were heavily inspired by (if not copy/pasted from, in places) the resources below, which helped me a LOT to understand everything I needed in order to reach a result that I like with my blog. My next step will be to automate as many things as possible; you'll read about it in a new article once I have achieved it 😄

Google group to discuss Nikola:

Resources

Themes
https://www.brainsorting.com/posts/create-a-blog-with-nikola/
Conservapedia talk:Conservapedian mathematics/Archive1

Contents
- 1 Coffee
- 2 category theory
- 3 Complex numbers
- 4 Proof by Contradiction
- 5 Conway's Game of Life
- 6 At LemonPeel or whoever knows
- 7 This article is great
- 8 Roots of zero
- 9 Too much serious
- 10 multiplication
- 11 FBI incident redux
- 12 Tychonoff
- 13 just some notes
- 14 Some other guesses why he hated complex numbers
- 15 Re: How long does it take to earn $40?
- 16 Cover story status
- 17 Is there a good place to put "39+4=44"....
- 18 Fuck's sake

Coffee[edit]

I got the coffee cup - donut but could you just go on from there please, please? Susan... miaow ... 16:15, 21 November 2007 (EST)

category theory[edit]

we really need something about this. the complete lack of reference in the "added reference to abstract nonsense" is particularly special. in my opinion, he's read an article about Paul Erdös (pronounced airdish(!)) and his concept of "proofs from the book", ie that the best proofs were simple and beautiful and had vast and powerful ramifications, and twisted it to the nth degree (but kept it in the real numbers). airdish 03:35, 23 November 2007 (EST)

- Wow, circular definition much? That's an awesome (pair of) link(s) (if you click on the abstract nonsense link), and belongs not only in WIGO, methinks, but "best of CP" as well. human 14:40, 25 November 2007 (EST)

As a side note, the Erdos entry is quite funny as well. They are desperate to hold up examples of successful homskullers. But they make only a passing, half-joking mention that he was an atheist, and don't make any mention that he was a homeless drug-addict. All three not very high on the list of things CP likes. Antifly 16:22, 29 June 2008 (EDT)

Complex numbers[edit]

Someone else seems to have a beef with them. Unless this is Andy's sock. The objection seems to be related to this. --Bayesupdate 15:19, 25 January 2008 (EST)

- Probably a parodist channeling teh assfly.
Andy thinks complex numbers are teh ebil because they steam up his glasses, no matter how many times teh roger assfly explains them to him. human 16:34, 25 January 2008 (EST)

OK, having an EE background, Andy S's gripes with complex numbers simply blow my mind. The article says they're used in Fourier analysis... true enough, but Andy's lunacy is deeper than that, because they're used in elementary AC circuit analysis. I mean, Andy's like a plumber who's decided to lash out at pipe wrenches. MrG 71.208.28.238 (talk) 01:10, 23 May 2011 (UTC)

Proof by Contradiction[edit]

I can kind of see Andy's point about proof by contradiction. I mean, proving something by showing that its opposite is NOT true? That's just crazy. On the other hand, Andy seems to have dedicated his life to proving creationism by disproving Darwinism. TWIST! User:64.106.84.253

- Proof by contradiction works by showing that denying the hypothesis leads to contradictions, not necessarily (though usually) its opposite. Example. NightFlareSpeak, mortal 18:06, 26 January 2008 (EST)

- It was just over 30 years ago when my Uni Math tutor criticised my use of Proof by Contradiction, saying that Proof by Contradiction should only be used as a last resort, for a reason which would apply to any proof, not just to existential claims. It is that it requires the conjecture to be proved to be either true or false, whereas it might be meaningless. The classic example: take the propositions A: "B is true" and B: "A is false". What are their truth values? There is no combination of True or False that can be attached to them that does not generate a contradiction. A non-exhaustive examination of deductions, however, could apparently spuriously "prove" one of them by contradiction. Unsigned below seems much more knowledgeable than me, but I think their argument is flawed.
As they correctly observe, a PBC that proves P by showing that P' => Q', where Q is known to be true, is indeed no more "dangerous" than a normal proof, and a proof may always be possible directly. However, what about a PBC that proves P by assuming P' but then nevertheless deducing P from it? This kind of proof is subject to the kind of objection above, and also it may be possible to furnish a proof in this fashion where there is none, or at least none discovered, by any other method, not least in the case of meaningless conjectures such as the above. This surely is the kind of PBC that all the fuss is about, and which I at least hope Andy is objecting to. (Am I naive!) PardreObe (talk) 17:32, 15 December 2009 (UTC)

There actually have been serious philosophical discussions in the foundations of mathematics, both about proof by contradiction and about elementary methods. Proof by contradiction is generally disfavored by mathematicians (at least for proofs of existential claims), for the obvious reason that a proof of an existential claim by contradiction generally gives no information relevant to an actual construction of the object that is proven to exist. However, for about 99% of mathematicians, this is just a mild sort of disfavoring. There is a school of (mainly Dutch) mathematicians called "constructivists" or "intuitionists" who follow Brouwer (and Kronecker before him), and to some extent also the famous French analysts Borel, Lebesgue, and Hadamard, in rejecting non-constructive proofs of this sort, on the grounds that mathematics is about constructions (since it clearly isn't about physical objects) and therefore an existence proof must be constructive.
Most mathematicians reject these claims about the subject matter of mathematics (instead treating math either as a purely formal game with symbols, or as being about an independently existing but totally abstract realm) and are therefore fine with non-constructive proofs, even though constructive ones are often more illuminating. So there is a real controversy here, but it's not exactly central.

As for elementary methods, people have at various points considered this a very important idea in mathematics. Certainly in the 19th century, people were often quite surprised when statements not involving the complex numbers seemed to effectively require complex numbers in their proofs. (Technically the complex numbers aren't required, because they are a conservative extension of the real numbers, but duplications of Riemann's and Chebyshev's famous arguments basically repeat the complex-number constructions in a slightly more complicated way.) So there was a lot of anticipation of some day finding an elementary proof of the Prime Number Theorem - but when Selberg and Erdös finally gave one, people realized that it was really no more illuminating than the complexified argument was. I believe there are some interesting research programs around the notion of elementary proofs, but certainly nothing very mathematically central. For further reference on the topic, I would look at the work of the philosophers of mathematics Andrew Arana and Mic Detlefsen.

Finally, there is one sense in which the complex numbers are a strange structure - there is no mathematical property belonging to i that doesn't also belong to -i. Therefore, there is no way to pick out which one is which, which causes problems for a theory of reference, if we think mathematical language somehow refers to independently existing entities.
One way around this is to work with a particular construction of the complex numbers (say, as ordered pairs of reals), which gives us non-mathematical properties that distinguish i and -i. But this is fairly inelegant. The complex numbers are however far from unique in this way - just consider the free group on a single generator, and realize that there is no distinguishing g and -g in this group. Or better yet, consider the unique group of 7 elements, and notice that no non-zero element in this group is distinguished from any other. But since we normally work with particular instantiations of this group structure, rather than the abstract group, this doesn't cause any actual mathematical problems.

- I think this article could do a lot better in defending proof by contradiction than it does at present. Saying it has been accepted for a long time and is widely accepted today are strong arguments, but I think we can do more. I freely admit that I am no expert on logic and may well have this wrong, but I think that there is no essential difference between proof by contradiction and any other kind of proof. If you prove a statement P by contradiction you assume the negation of P, P' say, and you get a contradiction, i.e. you get P' => Q' where Q' is known to be false (Q is true), hence P' must be false by the truth table for =>. Now any argument like this can be turned into a normal proof by taking contrapositives: we know Q is true, and we use the reverse argument to get Q => P. Using contrapositives we have (P' => Q') <=> (Q => P). So these arguments are equivalent; one cannot be more controversial than the other. In fact, contradiction is only used for ease: in some cases it is easier to write P' => Q' than Q => P. There is no such thing as a theorem which can only be proved by contradiction; the ones that are proved using it are just easier to write that way.
- Andy stated that his objection to proof by contradiction was that, as ZFC might be inconsistent, any contradiction reached might not mean the assumption is false. i.e. we have P' => Q', Q' is false, so we conclude P is true; however, unknown to us Q' is also true (it is a contradiction), so our conclusion is false. This isn't as dangerous as it seems: from experience of assuming false things by mistake, making false conclusions usually leads you to the mistake (or contradiction in this case) faster. Anyway, as I mentioned before, this difficulty is not unique to contradiction. If we assume Q and prove P as in a normal proof, but Q' is also true, then our conclusion is again false and normal proofs are just as dangerous. Again, we have Andy talking with authority and jumping to conclusions about things he is ignorant of.

- Perhaps someone more concise than me could add this to the article.

- On the point of constructivism, that is in the realm of the philosophy of maths as to what sorts of proofs are acceptable. It is a perfectly valid opinion, but most mathematicians don't accept it, and I think that is the correct decision to make. Again, the problem has nothing to do with contradiction anyway. If you prove that something exists by showing that its non-existence leads to a contradiction, then that is equivalent to assuming the negation of the contradiction and working back to the conclusion that this thing exists. You still have a non-constructive proof of existence, but without using contradiction.

- About complex numbers, the canonical way to define them is with ordered pairs of reals; this puts them on a footing as well defined and rigorous as any set and is a very elegant way to do it. Saying i is a "number" such that i^2 = -1 and letting complex numbers be a+bi is not a rigorous construction of C and shouldn't be used in formal books. Under this construction there is a difference between i and -i: Im(i) = 1, Im(-i) = -1.
If you think this is artificial then you could say the same about 1 and -1. No set is well defined up to a relabelling of its elements anyway. Consider f:R->R, f(x) = -x. We relabel R by swapping the sign, so we can't "really" tell which number "is" 1 and which "is" -1. Anything you can say about iR you can say about R, because iR is just a relabelling of R. In this respect we can't tell the difference between 1 and i. These kinds of uniqueness issues aren't a problem for any mathematicians as we always work up to isomorphism. Rant over. Sorry! — Unsigned, by: 195.112.46.6 / talk / contribs

In computer science there is a thing called the Curry-Howard correspondence, according to which many notions in logic have a precise counterpart in the realm of typed lambda calculus. Most fundamentally, propositions are types, and proofs are programs. Each type system corresponds to a different logic. Most functional languages correspond to inconsistent logics because every type has a corresponding term, even the always-false proposition - this is actually what allows them to have unrestricted recursion and thus be Turing-complete. But there are type systems that correspond to predicate logic, such as Martin-Löf's "intuitionistic type theory" (also called Martin-Löf type theory, or sometimes even, rather pretentiously, just "type theory"), and (technically non-Turing-complete) programming languages that use it. For example, if you had a type representing the real numbers (not just IEEE floating point numbers but actual arbitrary precision real numbers), you could write a program of the corresponding type.

The Πs correspond to universal quantifiers, and the Σs to existential quantifiers. The whole type corresponds to the proposition "for every countable sequence of real numbers, there exists a real number n such that n is not in that sequence", i.e. Cantor's diagonalization theorem.
A lambda term of that type would correspond to a constructive proof of the theorem - but it's also a program; specifically, it's a function whose arguments are a sequence of real numbers and a proof that that sequence is countable, and whose result is a pair of 1) the number that isn't in the sequence and 2) a proof that it isn't in the sequence.

The important point is that the proof is constructive. The logic doesn't include a primitive (i.e. axiom) of the type ((A → ⊥) → ⊥) → A (or more simply ¬¬A → A, i.e. double negation elimination) because there is no way, in general, to produce an A from a ¬¬A. There might be, for some A, but not without knowing something about A. As a consequence, it's impossible to formulate proofs by contradiction in intuitionistic type theory.

Now there are proof assistants that allow you to assume double negation elimination and/or excluded middle as an axiom, but using it removes the possibility of creating a program from such a proof. Most mathematicians aren't intuitionists and will be happy to write proofs using double negation elimination, as long as the proof isn't needed in the computation. Thrawcheld (talk) 00:56, 10 February 2016 (UTC)

Conway's Game of Life[edit]

I think this section needs a little rewriting. For one thing, the Conservapedia quote is partially correct, in that "the slightest change in self-sustaining patterns... usually destroys them." Sufficiently complex structures like guns and rakes generally aren't created by random patterns - in fact, it is quite difficult to create prior states ("synthesis") for most of these complex structures at all! And, as they state, once these complex structures are created, an interaction with a glider can often take them down. On the other hand, our sentence following this, which states that "[r]andom patterns in the Game of Life frequently produce both gliders and stationary self-sustaining patterns," is true, because those objects are very, very simple.
A glider, for instance, consists of only five lit cells, as opposed to the more complex structures CP describes, which typically sport dozens, if not hundreds, of lit cells moving in a precise fashion. Furthermore, the Game of Life being described by both parties cannot be used to make an accurate statement about biological evolution at all, because (aside from random initial configurations) it does not invoke randomness at all! Much more useful for such a demonstration would be a probabilistic version of the Game (I found one here, though it seems to have since been taken down), in which probabilistic "cell has 70% chance of staying lit"-type rules simulate the randomness of mutations. Then, of course, we'd need something to model natural selection, but that's beyond me... Anyway, I thought I'd point this out. 71.50.80.77 20:14, 5 August 2008 (EDT)

- I think the argument we are making is with the sentence "Only those patterns created by human beings (or discovered and preserved by them) have any chance of being perpetuated." emphasis mine. I gotta go check that link now; adding random mutations and selection pressure (other than the human observer saying "that's pretty!") to Life is a neat idea... ħuman 20:21, 5 August 2008 (EDT)

- Dead link, can you help? ħuman 20:22, 5 August 2008 (EDT)

- Unfortunately not. It was the only probabilistic version I found after hours of Googling one boring day earlier this summer, and I don't know of any other such sites. Sorry. 71.50.80.77 20:35, 5 August 2008 (EDT)

At LemonPeel or whoever knows[edit]

Does anybody know exactly what was added to the natural logarithm page that got it deleted? Does anyone mind asking LemonPeel at CP? I've been curious. NightFlarei haz a talk page. 06:42, 21 August 2008 (EDT)

- I am about 95% certain that Lemonpeel is the same person as Mathoreilly, so try hitting him on his talkpage here. 06:58, 21 August 2008 (EDT)

- Done. NightFlarei haz a talk page.
07:34, 21 August 2008 (EDT)

- I'm pretty sure he is not. Gauss 20:44, 3 September 2008 (EDT)

This article is great[edit]

That is all. Jellyfish! Sock of OmesGn 10:09, 21 August 2008 (EDT)

Roots of zero[edit]

A while ago I added "non-zero" to the part that said every number had two square roots but, on second thought, zero might be included for a reason I don't understand, and a few minutes of searching doesn't give a specific answer to this, so whoever knows better please say if it's better or not. I'm leaving the "non-zero" there only because it's technically correct either way. Thanks in advance. NightFlarei haz a talk page. 10:58, 21 August 2008 (EDT)

- Whatever the answer is, it's also there at the end of the section. NightFlarei haz a talk page. 11:20, 21 August 2008 (EDT)

You were correct in the change you made. Gauss 21:11, 28 August 2008 (EDT)

- A victory for intuition! NightFlarei haz a talk page. 00:00, 2 September 2008 (EDT)

- I remember, before being introduced to the concept of complex numbers, learning that a quadratic had zero, one, or two roots, depending on whether the discriminant was -ve, 0, or +ve. A friend who knew about them said a quadratic always had two roots: if the discriminant was -ve, the roots were complex. "What if the discriminant is zero, doesn't it still have just one root?" "No, then they're identical." The idea of one equation having two solutions, it's just that they are identical, is reminiscent of things like triune gods and such. Like angels on a pinhead, couldn't there be a thousand on there, all just happening to be identical? The idea of a quadratic always having two roots, but in the case of a zero discriminant their being identical, actually necessitates the concept of all numbers having two square roots; just in the case of zero, they are identical.... PardreObe (talk) 16:54, 15 December 2009 (UTC)

Too much serious[edit]

Here, have some geometry humour.
--Kels 23:17, 28 August 2008 (EDT)

multiplication[edit]

I don't know where else PeD may have done this, but the gross simplification and stupidification of the multiplication article should be mentioned... ħuman 21:07, 3 September 2008 (EDT)

FBI incident redux[edit]

I am starting to feel like this is heading the direction the FBI incident did, with people going over there, taunting Ed, getting banned to see themselves added here, or adding themselves here. If it is alright with everyone I will remove the latest blocks, and unless there is some major event (the topology category disappears or something) this needs no more updating, unless what occurs is Ed's doing. 06:30, 4 September 2008 (EDT)

- I suspect EPauper, Belgian (and HPoirot, HerculeP, EdmundP, EdwardP,...) to be one and the same user (me perhaps?), who got mad at Ed for reasons not related to the Mathematics articles. He had no aspiration to be mentioned here or in WIGO; I think he didn't even know about this article. I agree that they shouldn't be mentioned.

- Instead, JohnI is a wonderful parodist (and not me). He should be mentioned: his block's edit summary by Ed Poor wonderfully contrasts with "building on JohnI's last entry - thanks, John!" by the same Ed Poor, for this parody. Editor at CPLiar at RP! 06:57, 4 September 2008 (EDT)

- I hope JohnI is a parodist; that statement he wrote is incorrect in so many ways. Ed is not qualified to teach even addition to 5-year-olds if he calls multiplication a function and thinks there are only two commutative operators. 07:03, 4 September 2008 (EDT)

- If this is not a parodist, I don't know who is... His maths entries are indeed so incorrect - and Ed can't be qualified to teach maths to anybody, no way. I hope it's just his CP-wiki impersonation and he doesn't really teach anybody. Maybe he's just envious of Andy's 56 homeschoolers. Editor at CPLiar at RP! 08:32, 4 September 2008 (EDT)

- Ooo, big mistake.
Less than 50 edits and he's already throwing "liberal" around. Don't people realise the best way to allay suspicion is to get yourself known around the place, then slowly start using the lingo as though it is growing on you? You shouldn't start the "liberal" and "this is our site" crap until you have been there about a month. 09:03, 4 September 2008 (EDT)

Tychonoff

The Tychonoff theorem implies the AC, as shown by Kelley. Have a look here. --LArron 08:26, 5 December 2008 (EST)

just some notes

OK, I just wanted to add that this sentence: "In fact, the much-touted industrial might of the United States depends on our superb math and science educational system." is honestly (at least at the secondary education level) slightly exaggerated. The US consistently falls behind most other developed nations in the PISA studies and is not in the top 20 in any of the three categories (Math, Science, and Reading). Oh, and as a student who has experienced the German educational system for years (and has minor experience with other European schooling systems) and is currently studying at a US high school: it just does not compare (yep, even AP courses). Please don't be offended (it's odd how Americans often take criticism too personally); I just wanted to spread the truth. Postsecondary institutions are world-class and often at the same or a better level than leading European and Asian colleges and universities.
- OK, I've changed it. (No, I'm not offended.) The intention was not to say that our educational system is superior (which, of course, it isn't) but to say that this alleged superiority is a central tenet of the CP mindset.
Gauss 17:49, 14 February 2009 (EST)

Some other guesses why he hated complex numbers

- With his putative electrical engineering background (one wonders how he got through electrical engineering without complex numbers), i is reserved for current, j is for current density, and k is for arbitrary constants, so all the letters used by quaternions (and, as such, complex numbers) are used for something else (mostly it is i or j for the unit imaginary vector). Thieh 15:18, 1 May 2009 (UTC)
- Good point, although many engineers manage to avoid reality in their schooling and training. Engineers make some of the worst amateur scientists for this reason: they think they have "learned" how science works, but they have never done any science. Anyway, yes, good point; you can't do any AC engineering without ω. ħuman 02:59, 2 May 2009 (UTC)
- Interesting idea. My father was an electrical engineer, keen amateur mathematician, and almost professional religious maniac, and so was perhaps in some ways not dissimilar to Andy. I can remember him telling me of the different uses of "i", although isn't current normally represented by capital I? I realize that I am in the company of much sharper mathematical minds than mine, but I can't help wondering whether, though Andy's wording might be poor, his comments get harsher treatment than they deserve. Take his references to "a unique root of -1", "consistent results", etc. When referring to roots of positive reals, it is routine to ask "What is the root of..." and to answer giving only the positive root, as if there were only one, and he is essentially only following that convention. The "consistent results" remark is just another way of questioning the legitimacy of the concept, and the reference to "assumptions" refers to the fact that an elementary proof dispenses with the need for the concept. Please don't slay me; I'm just being "liberal" and going easy on the guy.
:) PardreObe (talk) 16:41, 15 December 2009 (UTC)

Re: How long does it take to earn $40?

The answers/edits on CP have been removed; may need images as backup. Thieh 01:44, 2 May 2009 (UTC)
- If they're gone, it's too late (try the Wayback Machine?). We learned eventually to screencap everything due to this evidence-burning habit they have. ħuman 02:56, 2 May 2009 (UTC)
- Can we just vape the section? It's just random, unsourced trivia. Peter tanquam ex ungue leonem 05:49, 7 April 2012 (UTC)

Cover story status

This article may have degenerated somewhat, but I don't know why this is a cover story. Its formatting is all over the place, it has external links that should be refs rather than sitting in the article itself, and it generally reads badly. Can we rescind its status pending a rewrite? - π 13:01, 14 May 2010 (UTC)
- I am against it on the general principle that it is a CP: article. Should we be saying this wherever it was first discussed? ħuman 08:32, 29 May 2010 (UTC)
- Apparently it was never discussed. Thumbs down, says me, anyway. ħuman 08:33, 29 May 2010 (UTC)
- Also, Pi, note that (as far as I can see) the link to the abstract is broken here. ħuman 08:34, 29 May 2010 (UTC)
- Ah, that is the namespace. I would like to see this restored to cover story status, but a rewrite would be needed. - π 08:35, 29 May 2010 (UTC)
- If people do get around to rewriting this, an explanation of how imaginary numbers relate to, and extend, the real numbers needs to happen. Just posting Euler's formula and Euler's identity should be good enough. The Wikipedia articles on both of these and on imaginary numbers are pretty easy for non-mathematicians to grasp. Imaginary numbers have a place any time you are expressing anything in polar coordinates or working with trigonometry at all; they aren't limited in usefulness to just Fourier transforms, as the article seems to imply. — Unsigned, by: 173.26.196.66 / talk / contribs

Bump.
- π 12:34, 19 August 2010 (UTC)
- I have had vague intentions for a while to go through this one. The section straight after the intro should be Schlafly's seven crank delusions, as listed by some RW user on their user page. What page was that? - David Gerard (talk) 14:32, 19 August 2010 (UTC)
- Ah, User:SamHB - David Gerard (talk) 15:33, 20 August 2010 (UTC)

Is there a good place to put "39+4=44"....

In this article or elsewhere? Source (img). K61824 (What is going on?) 07:41, 14 September 2010 (UTC)

Fuck's sake

I was about to "de-bronze" this crap when I realized why I clicked it into a tab: cover story? This is shite, and boring shite at that. 24 hours of conversation before I demote it to no metal brains (since it is lame, and CP-centric) and kill the cover story thing. Please convince me. ħuman 06:54, 24 September 2010 (UTC)
- I think this ties in with the de-CPing of RW. It's a bit tl;dr. I have no prob with it being de-brained. --PsyGremlin Praat! 07:11, 24 September 2010 (UTC)
- Silver is fine. But this is a useful and important article that I cite to outsiders to show just how incredibly on crack Conservapedia is. - David Gerard (talk) 10:22, 17 May 2011 (UTC)
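The mathematical claims debated in the threads above (every non-zero number has two square roots while zero's two roots coincide, Euler's identity, and the polar-form link between complex numbers and trigonometry) can all be checked numerically. This sketch uses only Python's standard cmath module and is an editor-added illustration, not part of the original discussion:

```python
# Numerical checks of the complex-number facts discussed above,
# using only Python's standard library.
import cmath
import math

# Every non-zero number has two square roots; for zero they coincide.
for z in (4, -9, 0):
    r = cmath.sqrt(z)
    roots = {r, -r}        # the second root is just the negation
    print(z, roots)        # for z = 0 the set collapses to one root

# Euler's identity, e^(i*pi) + 1 = 0, holds to floating-point precision.
residual = cmath.exp(1j * math.pi) + 1
print(abs(residual))       # ~1.2e-16

# Polar form ties complex numbers to trigonometry:
# 1+i has modulus sqrt(2) and argument pi/4.
mag, ang = cmath.polar(1 + 1j)
print(mag, ang)
```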
This file describes significant changes made to Igor Pro and supporting files since the initial public beta of Igor Pro 6.1. Here are the updates in reverse chronological order:

- November 20, 2009: Update Igor 6.12A
- November 3, 2009: Update Igor 6.12
- August 27, 2009: Update Igor 6.11
- June 29, 2009: Update Igor 6.10A
- June 22, 2009: Release Igor 6.10
- May 22, 2009: Igor 6.10B07
- March 23, 2009: Igor 6.10B06
- February 11, 2009: Igor 6.10B05
- November 25, 2008: Igor 6.10B04
- October 27, 2008: Igor 6.10B03
- September 11, 2008: Igor 6.10B02
- September 9, 2008: Igor 6.10B01

Update Igor 6.12A

Note: Igor 6.12A is a Windows-only revision. Igor.exe's file version is now 6.1.2.1.

Igor Application

NEW FEATURES

- Added DisplayProcedure /L=lineNo.

BUGS FIXED

- Windows: PNG images in notebooks are no longer drawn at the wrong size when printing.
- Windows: Dashed lines in Legend annotations have the correct color when using New Graphics (the default).
- Windows: Window hook functions were getting two moved events (event code 6) if the hooked window was moved using MoveWindow.
- The WaveMin and WaveMax functions returned strange values if the range examined contained nothing but NaN.
- Windows: The Dashed Lines dialog now draws an indicator above the ruler showing the current dash position.
- If you showed an info box on a control panel and then resized the control panel, the info box didn't move to stay aligned with the edge of the control panel.

Procedures

- Fixed bug in ColorWaveEditor.ipf that caused problems with the initial layout of controls in a client-mode editor panel.
- Fixed bugs in AppendContourToGizmo.ipf and Extract Contours as Waves.ipf: Now the Gizmo procedures don't automatically add Extract Contours menu items to the Graph menu, and now if you do intentionally include Extract Contours as Waves.ipf in an independent module with menus=1, the Graph menu items work.
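The WaveMin/WaveMax fix above concerns the degenerate all-NaN case. As a rough analogue (NumPy, not Igor code), nanmin shows the same boundary condition: NaNs are skipped while at least one finite value remains, but an all-NaN range has no well-defined minimum and yields NaN:

```python
import warnings
import numpy as np

mixed = np.array([np.nan, 2.0, 5.0, np.nan])
print(np.nanmin(mixed))       # 2.0 -- NaNs are ignored

all_nan = np.full(4, np.nan)
with warnings.catch_warnings():
    # NumPy warns "All-NaN slice encountered" for this degenerate case
    warnings.simplefilter("ignore")
    result = np.nanmin(all_nan)
print(result)                 # nan -- no well-defined minimum
```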
Update Igor 6.12

Igor Application

NEW FEATURES

- NewPanel/FLT=n; n can now be 2 to ask for a floating panel without a close box.
- Changes to movie creation and reading:
  - Windows only: New flag for NewMovie, /A, causes a native .avi to be created and does not need or use QuickTime. By default, the compression used is Microsoft Video 1. On Mac, /A is ignored. On Windows, if QuickTime is not present, the /A flag is not needed and AVI files will be created.
  - Windows only: New keywords for PlayMovieAction: open=fullPath opens an AVI file using native support (no QuickTime) for reading frames. The new ref keyword must be used when accessing an AVI file. To get more info if an error occurs, use: SetIgorOption VerboseMode=2
  - Windows only: If QuickTime is not present, PlayMovie will call the operating system's movie player with the specified file.
- You can now create movies from pictures in the picture gallery using the new /PICT flag with NewMovie and AddMovieFrame. This can be used to create movies from page layout windows.
- You can now cause SavePICT to store the picture in Igor's picture gallery by using the magic path name "_PictGallery_" with the fileName set to the desired picture name. For example:
  SavePICT/E=-5/B=72/P=_PictGallery_ as "myPicture"
- Added experimental user function name colorization. Turn it on via:
  SetIgorOption colorize,UserFuncsColorized=1
  and set the color via:
  SetIgorOption colorize,userFunctionColor=(r,g,b)
- Added the following optional flag syntax to Concatenate:
  /NP=dim
  Suppresses promotion and appends data along the specified dimension (0=rows, 1=columns, etc.). It is an error if the other dimensions in the wave source list do not match each other. For example, using /NP=0 on matrix waves would append rows as illustrated here:
  Make/O/N=(4,3) mat1= 1 + p + 10*(q+1)
  Make/O/N=(2,3) mat2 = 1.1 + p + 10*(q+1)
  Concatenate/O/NP=0 {mat1,mat2}, dest
  Print/F dest
- Added a way for wfprintf to print waves into strings.
- IgorInfo(5) returns (as a string) the serial number of the program, if registered, or "_none_" if the program isn't registered.

CHANGED BEHAVIOR

- Mouse wheel very near an axis end now acts like it is exactly at the end.
- Changed DataFolderRefStatus to return 0 for a killed global data folder.
- Improved appearance of histogram bars when the height is less than the frame thickness. Can revert via SetIgorOption useNewZeroBar=2.
- To avoid overflow, Integrate now redimensions integer type waves to a larger data type: /B (byte) and /W (16 bit) waves are redimensioned to /S (float), and /I (32 bit) waves are redimensioned to /D (double).
- Differentiate redimensions unsigned integer type waves to a larger data type, as for Integrate. Signed integer waves are not redimensioned.
- Changed FIFO routines to support very large files. Max size is 2G times the size of a data sample; i.e., 2 channels of 32 bit float means a max file size of 2*2*4 = 16 GB.
- For XOP programmers: ParseOperationTemplate uses "const char*" instead of "char*" in the RegisterOperation function of the starter code to avoid Xcode 3.2 warnings.

BUGS FIXED

- FunctionList now returns user-defined DFREF or WAVE functions when VALTYPE is 16 or 32.
- FunctionList now returns the correct list when both string functions and a number of parameters are specified.
- Windows only: Reverted to old grid drawing code due to problems with MS Office.
- Fixed waterfall hidden lines in new graphics mode.
- Fixed editing a wave when both offset and multiplier are in use.
- Fixed decolorization of fills when exporting graphics with the color checkbox off.
- Added error check for free waves and the like being used in controls.
- Added check for waves in use when moving a global data folder to free.
- Fixed problems with the Data Browser when a free data folder is moved global (see Free Data Folders).
- Fixed logic for Duplicate in functions when an existing wave is found in a destination WAVE ref variable.
- Fixed ImageGenerateROIMask to support rotated objects.
- Made passing a function taking a DFREF input to ThreadStart illegal.
- Fixed crash when FFT is used in a free data folder (see Free Data Folders). Also applies to other operations that may generate and kill temporary waves.
- Now do a better job supporting KillWaves in a free data folder (previously, the wave was not killed until the data folder went out of scope and was killed).
- Fixed experiment load byteswapping for reference waves.
- Fixed up Duplicate and Extract to support overwrite when the destination is a free wave.
- Windows: Several dialogs (e.g., DoAlert dialogs) were changed to display ampersand characters correctly. Thanks to Holger Taschenberg.
- Windows: The File→Recent Experiments and File→Recent Files menus now correctly display paths containing multiple ampersands.
- Fixed a crash in page layouts when objects were set to "low fidelity". This would affect mostly Windows users.
- IgorVersion was returning 6.1, not 6.11. (Now it returns 6.12, of course.)
- Fixed some bugs involving embedded tables in page layouts:
  - The right-most used column was sometimes blank when the layout was at 50% magnification.
  - Changing layout magnification from 200% to 100% caused the embedded table size to change radically.
  - Right-clicking on an embedded table did not work when the layout was at 50% magnification.
  - Dragging column widths resulted in incorrectly-sized unused columns when the layout was at 50% magnification.
  - Undo after changing a column width caused the table to be displayed incorrectly when the layout was at 50% magnification.
- Fixed the Slider control's pointer being off by 1 pixel when in the horizontal position.
- Windows: Fixed Magnification→Set as Default in text areas in the help browser and dialogs.
- Windows: The "Ternary Diagram Help.ihf" file was corrupted and has been replaced with an intact version.
- Windows: Fixed bug in which external panels were created slightly too small.
- Fixed bugs in date/time display with fractional seconds:
  - On Macintosh, the displayed time could be one second off.
  - On Macintosh and Windows, date/times before 1904 with fractional seconds were wrong.
- Fixed the Save/F operation to correctly save 3D and 4D waves. A side-effect of the fix is that, when saving multi-column waves (1D complex waves or multi-dimensional waves), the table format for the first data column of a given wave is used for all file columns for that wave.
- Fixed a bug in MatrixOP, including a bad order of execution on compound expressions that included convolve() or correlate() that resulted in wrong answers.
- Fixed the /TRIM flag in StatsQuantiles.
- Fixed a potential crash in StatsRankCorrelationTest when the input contained multiple NaN values.
- Fixed an obscure crash triggered by changing a control's user data during code that causes calls to the control's action procedure. If this was done more than once before the queued action procedure calls were executed, a crash resulted from accessing stale user data.
- FunctionInfo no longer crashes if the function name is null.
- Macintosh Japanese: The Macintosh Japanese version of Igor Pro 6.0x did not support ODR curve fitting. This is now fixed. See Errors in Variables: Orthogonal Distance Regression.

XOPs

- Windows only: Added an XOP that supports reading of the telegraph outputs of the Axon (now Molecular Devices) MultiClamp 700A and 700B patch clamp amplifiers. See AxonTelegraph XOP for details.
- Windows only: FindPeaks.xop's operations can now be called from a function (just like with the Macintosh version).
- SndLoadSaveWave.xop: Bug fix for "no sound data in movie". On Windows, the "seconds" and "samples" popup items are no longer reversed.
- Fixed a problem in Gizmo that affected conversion from a rotation matrix to a quaternion.

Procedures

- Corrected WMMenus.ipf so that the Window Browser displays its window when you select either Misc→Packages→Window Browser or Windows→Control→Packages→Window Browser.
- Fixed InsertSubwindowInGraph.ipf to work correctly when only one graph exists and to show only graph window names in the insert button.
- Fixed Resize Controls.ipf and Resize Controls Panel.ipf to ignore subwindows that don't contain controls.
- Enhanced the Window Browser package (WindowBrowser.ipf):
  - If you select Kill from the Act On menu, it kills only the selected subwindow, not the window's parent.
  - Added a refresh function that is used in a window hook to make sure the browser is up-to-date when it is activated. It also refreshes any time you use one of the menu actions that can change the contents.

Update Igor 6.11

Igor Application

NEW FEATURES

- Macintosh: Sound input and output now uses Core Audio and supports hardware with more than two channels, high sampling rates, and 32-bit floating point data. See SoundInStatus, SoundInStartChart and PlaySound.
- Now support WAVE w= ModuleName#waveRefFunc() syntax.
- Now support external subwindows when the host is a panel window and the host is resized.
- Added to ImageAnalyzeParticles a new flag (/CIRC) which allows you to control the range of circularity of the detected particles.
- Added to ImageAnalyzeParticles a new flag (/EBPC) which allows you to exclude particles that have one or more pixels on any of the image boundaries.
- Added a new function, Variance(inWave), that returns the variance of real input waves.
- The DrawText dialog has a 1-degree text rotation combo box.
- Added the /FREE flag to the Extract operation.

CHANGED BEHAVIOR

- Clicking Stop in the Debugger stops at the currently executing macro line instead of the first line of the macro.
- Limited the length of automatically-generated table titles to 40 characters, as it was prior to Igor Pro 6.1. Otherwise long table titles mess up the Windows→Table submenu on Windows. Explicitly set titles can still be up to 255 characters.

BUGS FIXED

- Macintosh: Put in a workaround for a bug in a pre-release version of Mac OS X 10.6 which caused the Igor custom controls in the Print Graph dialog to fail to work.
- Macintosh: Fixed a problem that caused file reads and writes to fail for files greater than 2 GB.
- Windows: Fixed ValDisplay appearance with frame=5 missing the right side of the frame.
- Prevented crash when KillWaves is used on a free wave.
- Fixed bug in color as f(z) when the wave has NaN at point 197 and when it has very long stretches of the same color.
- Fixed doubling up on the last point of wave[0,inf] += xx when executed from the command line.
- sscanf now returns 0 rather than -1 if passed an empty string.
- Added better error messages for attempts to use function-only flow control statements in macros.
- FunctionPath now works correctly from an independent module even if SetIgorOption IndependentModuleDev=0 (the default setting).
- Fixed a crash in ImageRegistration and MatrixSVD/B.
- Fixed a problem in DSPPeriodogram that returned an error in default mode (no intervals).
- SetVariable valueBackColor now works on Windows. Now valueBackColor is used even if the frame is off, but only if valueBackColor isn't 0. On native Macintosh, removed the white border and color splash outside the frame.
- Fixed an independent module bug that resulted in changing SetWindow hook=$"" to imName# when executed in an independent module with #pragma independentModule=imName.
- Macintosh: The appearance of a control placed inside a filled drawing rectangle is no longer marred by little white lines.
- Fixed the SetVariable dialog not accepting a row dimension label change for a wave value.
- Fixed a bug that caused various file-related errors if the name of the boot volume contained accented characters. One manifestation of this is that it prevented Igor from recognizing that it was correctly licensed.
- Fixed a bug introduced in Igor Pro 6.10B01 that caused errors when loading Igor text files.
- Fixed a crash if you tried to search help files while in the debugger.
- Fixed a crash in the Save operation if a non-existent wave was specified. This crash was introduced in 6.10B05.
- Fixed an error when the Save Waves dialog generated a command that was exactly 400 characters before counting the CR at the end of the command.
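The DSPPeriodogram fix above concerns Igor's periodogram operation. For readers unfamiliar with what a periodogram computes, here is a minimal NumPy sketch (a generic analogue, not WaveMetrics code): the squared magnitude of the FFT of a windowed signal, whose peak locates the dominant frequency. The sampling rate and tone frequency are made-up test values:

```python
import numpy as np

fs = 1000.0                        # sampling rate, Hz (made-up test values)
n = 1024
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 100.0 * t)  # 100 Hz tone

# Periodogram: squared magnitude of the FFT of the windowed signal
window = np.hanning(n)
spectrum = np.abs(np.fft.rfft(x * window)) ** 2
freqs = np.fft.rfftfreq(n, d=1 / fs)

peak_freq = freqs[np.argmax(spectrum)]
print(peak_freq)                   # close to 100 Hz (within one FFT bin)
```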
XOPs

- Updated the MLLoadWave XOP so that it will work with Matlab 2009a.
- Updated the Macintosh Data Browser to support horizontal scrolling with the two-finger trackpad gesture.

Procedures

- Corrected WMMenus.ipf so that the Window Browser displays its window when you select either Misc→Packages→Window Browser or Windows→Control→Packages→Window Browser.
- The Split Axis package now does the right thing with axes that participate in Swap XY (either through traces with the /VERT flag or via ModifyGraph swapXY). It also now restores the reversed-ness of reversed axes when un-splitting.
- Updated Image Particle Panel.ipf.
- Added optional reporting of the background level and the ratio of peak height to background in Multipeak Fit 2.
- Fixed AppendContourToGizmo.ipf so that the contour and surface popup menus work.
- KBColorizeTraces.ipf revised to improve the panel's appearance on Windows (control placements).

Examples

- Most of the sound input examples were modernized to support the new Core Audio support on Mac.
- Added a Visual Studio 2008 version of the IgorClient example Windows Automation client program. This is in "Igor Pro Folder\Miscellaneous\Windows Automation\Automation Server Examples.zip".
- Windows: Updated the demo experiment for the Direct Show XOP (DSXOP).

Update Igor 6.10A

Note: This is a Windows-only revision. Igor.exe's file version is now 6.1.0.9.

Igor Application

Changed Behavior

- Starting in Igor Pro 5, a double-click on a listbox cell resulted in two cell select events (event 4), which represented both a redundant call to your action procedure and a redundant selection. The extra selection and event have been removed.

Bugs Fixed

- Windows: Launching a second instance of Igor (by starting Igor.exe while holding down the Ctrl key) no longer makes Igor take a very, very long time to start. This bug was introduced after 6.10B07 and was in the 6.10 release.
- Restored the behavior of Duplicate in macros to ignore the type flags (/B, /D, etc.), which are useful only in functions.
Release Igor 6.10

Igor Application

Changed Behavior

- The /ITIF flag of ImageSave (which forces TIFF file saving using Igor's code) has been changed to /Igor.
- Previously, DIB pictures in notebooks were exported as WMF when exporting as RTF. Now they are exported as 4x PNGs.
- Previously, Igor wrote two versions of EMF, PNG, and JPEG pictures when exporting a notebook as RTF. The first version was intended for normal use and the second version (PICT on Macintosh, WMF on Windows), called a "compatibility picture", was intended for use by old programs. As of Igor Pro 6.10, Igor no longer writes compatibility pictures.

New Features

- Igor can now export as RTF notebooks containing PDF pictures (Mac only) and TIFF pictures. EPS pictures can also be exported in RTF, but only the EPS preview is exported. All of these types are exported as PNGs at 4x screen resolution.

Bugs Fixed

- NumberByKey now correctly interprets hex number values like "0xFF23" in the same way that str2num always has.
- FilterIIR now designs better notch filters by altering the pole angle slightly towards zero compared to the corresponding zero's angle. This improves the unity gain at DC and Nyquist. Thanks to Tilman Schaeffer for an improved algorithm. The old design can be accomplished by using a negative value for notchQ.
- Restored the ability to run Igor as a limited-functionality demo by clicking that button in the License dialog.
- Fixed DisplayProcedure/W=$wintitle nameOfFunction to work inside an independent module.
- DisplayHelpTopic and clicks on links now search the Igor Pro User Files folder as well as the Igor Pro Folder.
- Fixed a bug introduced around 6.10B04 that messed up loading pictures from RTF files.
- The Debugger was evaluating struct.ucharArray[64]= "Fred" as "Fred\000\000\000\000"... instead of just plain "Fred".
- Fixed drawing objects not showing up inside panel tab controls and group boxes on Windows and Mac OS X 10.4 or earlier.
- Exists no longer erroneously finds any global function of the same name when a function in a module is specified; the function must exist in the specified module.

XOPs

- Revised the JCAMPLoadWave XOP.
- The VDT2 now supports 128,000 and 256,000 baud. This should work on Windows but may not work on Macintosh.

Procedures

- Added a new Ternary Diagram package, which you will find under Windows→New→Packages→Ternary Diagram.
- Sonogram.ipf implements a "0 dB Max" feature to normalize the image to a maximum of 0 (dB).
- Fixed a bug in Transpose Waves in Table.ipf: it failed to handle liberally-named waves.
- Fixed a bug in PieChart.ipf: it failed to handle liberally-named waves.
- Fixed a bug in Resize Controls.ipf where a height resize was skipped if the control didn't also change width.
- Rewrite Controls Position.ipf ignores known WaveMetrics-specific userdatas for controls.

Examples

- Revised Curve Fitting:Multi-variate Fit Demo.pxp.
- Added Graph Techniques:Ternary Diagram Demo.pxp.
- Added Visualization:VolumesGridDemo.pxp.
- Added Statistics:Circular Statistics:Circular Two Sample Test.pxp.
- Added Learning Aids:Tutorials:Using Igor Documentation.pxp.
- Added Learning Aids:Tutorials:3D Graphics Tutorial.pxp.

Igor 6.10B07

Igor Application

Documentation

- Added a Built-in Structure Reference section to the Igor Reference help file. This means you can right-click the name of a built-in structure, such as WMWinHookStruct, and choose "Help For WMWinHookStruct" from the contextual menu.

New Features

- Added an optional dfr parameter to GetDataFolder; when supplied, it is used in place of the current data folder.
- When DrawPICT is used in an independent module and you need to access the picture gallery, you can now do so using a new prefix name, GalleryGlobal. For example:
  DrawPICT 0,0,1,1,GalleryGlobal#PICT_0
- Added a new /C (continue) flag for PauseForUser for when you have something to do while waiting.
  After handling any events, PauseForUser returns immediately after setting V_Flag to the truth that the target window exists. Typical use would be automatic continuation after some delay.
- You can now position a mirror axis at a fraction of the normal distance using the new ModifyGraph mirrorPos(ax)=f keyword.
- You can now create and pass in a free wave to a user-defined function using {a,b,...} syntax, as illustrated here:
  Function foo(w)
      WAVE w
      print w
  End
  Function bar()
      foo({1,2,3})
  End
- Log axes now support rounding to nice values. SetAxis/A/N=2 is the same as SetAxis/A/N=1 for now.
- You can now use structures defined in independent modules in global procedures using imName#structName.
- You can disable interpretation of escape sequences when loading text using LoadWave. Set bit 3 of the loadFlags parameter of the /V flag.
- Additional keywords for coloring most controls. Now all user-defined control titles accept styled text commands. The control dialogs have an Insert popup to make this easier.
- You can now display an Open File dialog that supports selecting multiple files. See Displaying a Multi-Selection Open File Dialog.
- GetSelection for procedure windows works just like for notebooks, except that the name is actually a title wrapped in $"".
- Added the GetErrMessage function (which is pretty useful in combination with Execute).
- IntegrateODE now has a /STOP flag that allows you to specify a wave with stopping conditions on the Y values and derivatives. Your derivative function can also request a stop by returning 1 from the function.

Changed Behavior

- On Macintosh, the Igor Pro User Files folder's default location is now <user>:Documents:WaveMetrics:Igor Pro 6 User Files. Previously it was at <user>:Library:Application Support:WaveMetrics:Igor Pro 6 User Files.
- On Macintosh and Windows, the user can now specify where the "Igor Pro User Files" folder is located using the Miscellaneous Settings dialog.
- The "Testing & Misc" folder of examples has been split into "Feature Demos 2" and "Testing".
- In ThreadSafe user-defined functions, accessing.
- The menu bar location of an XOP target menu is now the same as for any other target window menu: just to the left of the Misc menu.
- Notebook subwindows in control panels now save their normal ruler (formatted text notebooks) or text formats (plain text notebooks) in recreation macros.
- Added an error message if you try to add confidence bands or prediction bands to an ODR (Orthogonal Distance Regression) fit. See also: Errors in Variables: Orthogonal Distance Regression.

Bugs Fixed

- Fixed the \u axis label when units have escape codes.
- Fixed a graphing problem where bars plotted on one axis could intrude on another axis area.
- Fixed PDF export to embed all fonts except the standard fonts; previously, even fonts other than the standard ones were not being embedded.
- Added overlap between fills for graph fill modes when exporting PDF or EPSF to work around a problem in some viewers where white lines appear between fills, or muted colors in f(z) mode.
- Fixed a Modify Axis dialog manual range bug: if you set the manual range to the same value it had when manual mode was entered, it didn't update the graph in live mode.
- Fixed a crash in Concatenate if a source was the same as the destination wave and the /O flag was used.
- Mac OS X 10.5: Fixed Listbox and ValDisplay indented frames rendering extra lines because of a bug in Apple's DrawThemeListBoxFrame API.
- Fixed a bug where a formatted notebook appeared corrupted if you opened it as a procedure file and then saved it. The next time you tried to open it as a notebook, Igor said "The file does not appear to be a formatted text file". As part of fixing this, it is now an error to try to open a formatted notebook as a procedure file.
- Fixed a bug that caused packed procedure file names to be truncated when adopted.
- Fixed a bug in CurveFit and FuncFit: using the /F parameter flag (confidence bands option) in a threadsafe function could cause a crash or a "BUG paramstack dif" message.
- An XOP can now successfully call GetFunctionInfo on a function in an independent module when the global module is in an uncompiled state.
- Fixed ColorScale reversed axis range not recreating properly.
- MacroList now returns results when called from an independent module: it always returns the macros in ProcGlobal.
- The Debugger's Expression pane now properly escapes printed strings, fixing a bug where a string with a tab in it did not display anything after the tab.
- FilterIIR/Z no longer stops execution.
- Fixed a bug that caused an error in the Search Igor Files help browser pane when searching a packed experiment if a packed file had a long name.

XOPs

- The Load General Binary dialog now properly escapes backslashes in UNC file paths (\\Server\Share) on Windows.
- The SndLoadSaveWave XOP reads files with long names on Mac OS X and properly escapes backslashes in UNC file paths (\\Server\Share) on Windows.
- The JCAMPLoadWave XOP has been recompiled to support long file names.
- Macintosh: Universal versions of the NIGPIB and NIGPIB2 XOPs.
- Rebuilt the HDF5 XOP using HDF5 library version 1.8.2. This was done to keep in sync with the Windows version, which was updated in Igor Pro 6.10B06.
- Added a new error message to JCAMPLoadWave.xop stating that XYXY data is not supported and suggesting that it can be loaded as Delimited Text with a comma delimiter.

Procedures

- Fixed CopyImageSubset.ipf, broken in a previous beta of Igor 6.1, and also upgraded it slightly to use all of the ModifyImage recreation settings in the created image plot.
- A new version of Global Fit 2.ipf has a new data set selector panel. Hopefully it will make it easier to select waves when working with lots of data sets.
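Among the 6.10B07 additions above, IntegrateODE's /STOP flag halts integration when a stopping condition on the Y values crosses zero. Here is a minimal sketch of that idea in Python (a generic RK4 illustration with invented helper names, not WaveMetrics code):

```python
import numpy as np

def rk4_with_stop(f, y0, t0, t1, dt, stop):
    """Integrate y' = f(t, y) with RK4, halting when stop(y) crosses
    zero (the idea behind IntegrateODE's /STOP flag)."""
    t, y = t0, np.asarray(y0, dtype=float)
    while t < t1:
        k1 = f(t, y)
        k2 = f(t + dt / 2, y + dt / 2 * k1)
        k3 = f(t + dt / 2, y + dt / 2 * k2)
        k4 = f(t + dt, y + dt * k3)
        y_next = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        if stop(y) > 0 >= stop(y_next):
            # linearly interpolate to the zero crossing and halt
            frac = stop(y) / (stop(y) - stop(y_next))
            return t + frac * dt, y + frac * (y_next - y)
        t, y = t + dt, y_next
    return t, y

# Projectile launched upward from height 1 m at 20 m/s; stop at the ground.
deriv = lambda t, y: np.array([y[1], -9.8])
t_hit, y_hit = rk4_with_stop(deriv, [1.0, 20.0], 0.0, 10.0, 1e-3,
                             lambda y: y[0])
print(t_hit)  # about 4.131 s
```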
Igor 6.10B06

Igor Application

Documentation

- Added the following sections:
  - Wave References
  - Data Folder References
  - Automatic Parallel Processing With Multithread
  - Free Waves (for advanced programmers)
  - Free Data Folders (for advanced programmers)
- Revamped the following sections:
  - Window Hook Functions
  - PopupMenu
  - IntegrateODE
  - Open

New Features

- DisplayProcedure now groks procedure name paths like WMGP#GizmoBoxAxes#DrawAxis to display static functions (even in an independent module).
- Added a /D=2 flag to the Open operation to make it easier to create utility routines that take pathName and fileName parameters. See Displaying an Open File Dialog and Using Open in a Utility Routine for an example.
- Added a method to force discrete pixels in PDF export of an image, mainly for use on Mac. For example, on a Mac:
  Make/N=(50,40)/O jack=x*y
  NewImage jack;MoveWindow 121,91,682,537
  ModifyImage jack ctab= {*,*,Rainbow,0}
  Now copy to the clipboard as a Quartz PDF and read it into Preview.app via New From Clipboard. Then execute SetIgorOption MaxDimForImagePixels=41 and do it again.
- #include <IgorRGBtoCMYKPanel>: See the WM Procedures Index help file for details.
- New mousewheel event codes for SetVariable: see events 4 and 5.
- You can now create free waves in functions using the new /FREE flag with either Make or Duplicate. This flag is not allowed on the command line and is not allowed with $ or folder syntax.
- GetWindow can now tell you if a window is maximized or active.
- Textbox "\\W1dd" is a marker with no line stroke.
- Added the GuideNameList function.
- Added cd and pwd operations.

Changed Behavior

- Changed wave[val]= expr to round val in user functions (as has been done in macros forever).
- Reverted the behavior of Variable/G and String/G in user functions to perform no name conflict checking, due to the problems that caused. You can now turn on checking using SetIgorOption PerformVariableNameCheck= 1 or 2. Use 2 to just check against waves, 1 to do full checking.
The length of the objectNameList used with SaveData/J is now unlimited. Now support save and restore of DataFolderRef (DREF) and wave reference waves in packed experiments. These types can not be stored in unpacked experiments or via SaveData and they will not show up in the data browser. They will be ignored by previous versions of Igor. Added run time error when writing to a text wave fails. Improved the way the Debugger displays DREFs, data folder waves and waves of waves. Checking the Auto-compile menu also compiles the procedures. On Windows, activate and deactivate events are sent to window hook functions (see SetWindow) even when a panel or graph is minimized. Improved feedback when the user tries to change a locked wave in a table. Bugs Fixed Intel Mac only: Fixed color of pasted pictures when exporting as EPS or Igor PDF. Fixed DataFolderRef function input parm ref counting. Also, fixed DataFolderRef function structure copy ref counting. Fixed image load crash when file name is very long. Fixed hide and show of cursors when entering and leaving draw modes (in minimal redraw mode). Macintosh: Fixed problem where an XOP failed to load if there were two volumes with the same name on Mac OS X. The error message you would get is "Can't load the executable file in the XOP package". Windows: Fixed problem of minimized windows walking up the screen when you resized the Igor frame window. Windows: Bad computation of number of digits to display in certain dialog boxes could cause a crash. Fixed a crash when an XOP calls CallFunction to call a thread-safe user-defined function. Fixed a crash when a hook function installed by SetIgorHook is called if the hook function is marked ThreadSafe. Adjusted frame of ColorScale to cover bleeding on Mac PDF export. Fixed Execute of global function from an independent module. Fixed problem in B05 where using a null WAVE ref var did not automatically look up wave of same name in current DF.
Reinstated obsolete /R flag for Make lost in B05. Now support null function call on command line for DF and WAVE return types. /W flag for DoWindow no longer rejects notebook and layout windows. Fixed crash in layout where a draw object is deleted followed by SetDrawLayer/K. Improved compatibility of new data folder reference support to avoid "Expected data folder reference" error. Tags attached to axes no longer spontaneously turn into Textboxes. ImageInfo no longer repeats itself. Fixed compile of user-defined menus in independent modules where no execution text was defined. A Listbox control now deselects the cell the user tabbed out of. SetIgorOption poundDefine and SetIgorOption poundUnDefine now properly set up procedures to be recompiled. ControlInfo/crashTooLong no longer crashes. Fixed obscure BugMessages resulting from use of OpenProc/V=0. Fixed TabControl and GroupBox controls' drawing of enclosed drawing objects (tab was double-drawing using new and old graphics, GroupBox wasn't using new graphics). Macintosh: Fixed a bug in Secs2Date that caused the day-of-week to be off by one. Macintosh: Restored ability of Date2Secs to handle days that exceed the number of days in the month. Made GetWindow hide work correctly with procedure windows. Fixed Histogram crash in a ThreadSafe function. Editing polygons works with waves containing NaN breaks. XOPs Procedures colorSpaceConversions.ipf now contains a function that replicates Igor's RGB→CMYK conversion and a function that approximates a CMYK→RGB conversion. New IgorRGBtoCMYKPanel.ipf implements a GUI for editing root:M_IgorRGBtoCMYK and for gathering colors used in the top graph. Brand-new revised version of Split Axis package has a nice control panel GUI, handles images and contours, keeps track of its axes so that they can be listed in menus sensibly, and can be removed easily. See the Graph menu's Package submenu. Examples Revised the Resize Panel and List Demo to use #include <SaveRestoreWindowCoords>.
Revised the Split Axis demo experiment; it now resides in Examples:Graphing Techniques. Igor 6.10B05 Igor Application New Features The revamped Smooth dialog now has Percentile, Min, and Max algorithms and a few more options for Loess smoothing. The replacement value interface is hopefully more obvious. Added /AUTO and /NODC options to the Correlate operation, and revamped the Correlate dialog accordingly. Programmers have a new means to make procedure files invisible. See the #pragma hide in Invisible Procedure Files. Programmers can now use a shortcut when creating a WAVE reference variable while using $<str>. See the /WAVE=<name> discussion in Automatic Creation of WAVE References. New MultiThread keyword provides automatic parallel processing of wave assignment statements. See Automatic parallel processing with MultiThread. New built-in wave reference function, NewFreeWave. Now allow an unlimited number of datafolders to be queued via ThreadGroupPutDF. Added the MandelbrotPoint function. Added calculation of P-values in all four tests in StatsCircularTwoSampleTest. In the wfprintf operation, if refNum is 1, Igor will print to the history area instead of to a file. This is provided for debugging purposes. Changed the Pictures dialog's list of names to one column, putting any error or commentary text below the list. Added a Copy Picture button. Fixed the Convert to PNG button not enabling. The Optimize and FindRoots operations are now thread-safe, with the exception of FindRoots for polynomial roots (/P flag). The window hook "modified" event is now sent from notebook windows or subwindows. It is an error to try to kill the notebook window or its parent window during a window hook "modified" event. See SetWindow. Changed Behavior Changed DateTime to include fractional seconds. Improved the ability to export huge graphics by automatically scaling back resolution as numeric limits are reached.
The From Target checkbox in the Insert Points and Delete Points dialogs no longer changes the values in the Dimension, First Point and Number of Points controls. The page layout fidelity setting no longer has any effect on drawing graph objects. The foreground color of cells in a listbox control now follows the color specified by selWave even if the cells are selected. Previously the foreground color was ignored for selected cells. The Debugger's breakpoints follow the text (mostly). SetIgorOption poundDefine=name and SetIgorOption poundUndefine=name enable the procedure window's Compile button. See Conditional Compilation. The IndexedFile function ignores dot-underscore files (e.g., "._wave0.ibw") created by Apple's SMB software unless the specified file type is "????" (any file type). Bugs Fixed Fixed lack of name conflict check for global variables in functions. Fixed bug involving DrawRRect and axis coordinate systems. Fixed ThreadReturnValue for complex type. Fixed initial info display for click and hold on trace. Fixed access to V_ vars in ThreadSafe functions. Fixed crash in ThreadSafe function involving wave lookup from name. Fixed thread spawning while already in preemptive thread by adding mutex lock around thread group list. Fixed problems involving window updates while a ThreadSafe function is running in the main thread. Fixed add to next display mode calculations when display is using subranges. Fixed font problems in EPS export using new graphics. Fixed bug in Curve Fit dialog: changing Y wave after selecting mask or weight wave could result in "/M=" or "/W=" with missing wave name in the generated command. ModifyGraph zColorMax and zColorMin values are no longer ignored in associated f(z) colorscales. ModifyImage maxRGB and minRGB values are honored even for the very last value or first value in the range.
(Previously the very last color of a non-reversed or first color of a reversed color table was replaced with the maxRGB or minRGB color.) Fixed subtle bug in image plot and colorscale color index mapping for last indexed color and after-last-color. In some cases, this slightly tweaks the displayed colors when a color index wave is used for image plots and f(z) trace colors. Windows: Fixed a bug that caused clipping when printing a page layout with a large page (e.g., 54x42 inches at 600 DPI). The Macros menu wasn't being properly hidden or shown by HideIgorMenus or ShowIgorMenus calls. Windows: Fixed problem of the Macros menu showing up twice. If you set the From Target checkbox in the New Graph dialog, it would also show only waves from target in the Append Traces to Graph dialog, even though the From Target checkbox is not shown there. If you set the From Target checkbox in the New Graph dialog, then you brought up the dialog again with a non-target window (like a control panel) as the top window, no waves would be available, but the From Target checkbox was disabled. Fixed crash when DisplayHelpTopic was called with a very long TopicString parameter. Fixed bug in FindLevels involving /M=minWidthX and /EDGE that incorrectly used rejected edges to influence the enforcement of minWidthX. Fixed crash resulting from a window's deactivate hook's killing of the host window provoked by ShowTools. Fixed bug in user-defined menus where items defined with a command that invoked the Missing Parameter Dialog would cause the rest of the definition line to be improperly interpreted, usually resulting in an error. Fixed Adjust Indentation (Edit menu) to not lose characters at the start of the line when the code flow control is improperly nested. A disappearing menu item definition no longer fails when followed by Help for User Menus. See Optional Menu Items. Fixed crash when a function contained an extremely large structure: Igor now reports a "stack exhausted" error instead. 
LoadData and the Browse Waves dialog now detect and reject dot-underscore files (e.g., "._wave0.ibw") created by Apple's SMB software. Fixed bug: trying to call a non-threadsafe user-defined fit function from FuncFit running in a threadsafe function would crash. It now returns an error telling you that you can't do that! Calling Tag/C/N=nameOfNonexistantTag without parameters specifying what the tag is attached to no longer generates a Tag that doesn't correctly generate a recreation command. Now an unattached Tag is converted into a Textbox. XOPs GBLoadWave: The Open File dialog on Windows is resizeable. XLLoadWave: Added the /NAME flag which allows you to specify explicit names for the waves created by XLLoadWave. Fixed a bug that caused an error if you tried to load a file from a Macintosh File Vault directory. Fixed a bug on Macintosh that caused an error if you tried to load a file whose name included a mutated vowel (accented vowel). Improved recognition of Excel date, time and date/time columns. The Open File dialog on Windows is resizeable. Procedures Changed FITS Loader.ipf to fix an endian problem on Intel Mac. Fixed problems in Gizmo Procedures relating to independent modules. Technical Notes Many of the technical notes have been updated, including fixing missing graphics on Windows. Examples Added Statistics:Circular Two Sample Test.pxp. Added MultiThreadMandelbrot.pxp to demonstrate the effect of the new MultiThread feature. Igor 6.10B04 Igor Application Bugs Fixed Fixed bug introduced by fix for /W=$("name1#name2") in interpret mode, which broke the $string1#$string2 syntax. Fixed pcsr when the cursor was attached to an image plot of complex data. Fixed image display using new graphics with panel on left. Fixed Legend when fill pattern and new graphics. Fixed memory handles left behind when exiting drawing and edit modes due to new minimal redrawing technique.
Fixed a bug in the Windows version of the GBLoadWave XOP relating to very large file support. Fixed some mostly asymptomatic problems related to special character names in copy/paste and delete/undo. These problems could generate an error or result in garbage special character names under rare circumstances. Fixed bug in FindLevels when /N=numLevels and /EDGE that caused only half of the requested level crossings to be found. Fixed crash involving thread start in one procedure after another is modified. Compile-link related. Fixed fit function checking especially when involving FUNCREF and independent modules. Changed Features Fixed axis mousewheel to support log axes. LoadWave/J now accepts date/time values in ISO-8601-style: <date>T<time>. It does not support ISO-8601 time zone designations. Changed Behavior. New Features New function-only syntax for some operations that create waves. If an operation creates automatic wave reference variables in functions when a simple name is provided for the destination then it can now also create a wave reference variable when $str or full or partial path syntax is used. Append /WAVE=name after the wave designation where name is for the desired wave reference. This syntax is allowed on the command line but does nothing. The syntax is not allowed after a simple name. So

Make $str
WAVE w= $str

can now be written as

Make $str/WAVE=w

Here is an example using several such operations:

Function test()
	Make/O $"jack"/WAVE=w1= sin(x/8),root:sam/WAVE=w2= cos(x/3)
	display w1,w2
	Duplicate/O w1,$"jackdup"/WAVE=wd1
	wd1= x
	Display wd1;AutoPositionWindow
	Extract/O w1,$"jackex"/WAVE=we1,x>50
	Display we1;AutoPositionWindow
	Concatenate/O/NP {w1,w2},$"w1w2"/WAVE=w1w2
	Display w1w2;AutoPositionWindow
	Differentiate w1/D=$"diffjack"/WAVE=dj
	Display dj;AutoPositionWindow
	MatrixOp/O $"mato"/WAVE=mo = w1
	Display mo; AutoPositionWindow
	FFT/DEST=$"jackfft"/WAVE=jf w1
	Display jf;AutoPositionWindow
end
Tags can now specify an arrow pointing back at the tag, or arrows in both directions, using /L=3 or 4. New rgbMult keyword for ModifyImage. Direct color values are multiplied by this. New window hook event #23, spinUpdate, called for progress windows during execution of user code. Allows semi-automatic progress updates. See SetWindow and Progress Windows for details. Added /DIML flag to Sort and IndexSort. Added /MPCT flag to Smooth to compute percentile, min, and max value in the smoothing window. Added the YYYY-MM-DD format to Secs2Date. Added a special-purpose feature for Bela Farago whereby text sent to the history area is carbon-copied to a notebook. See History Carbon Copy. Added /KILL flag to SavePackagePreferences operation. In the SaveData operation, Save Graph Copy and Save Table Copy, a message is now added to the saved experiment's history containing the parent experiment name and date/time. The Igor Help Browser's Search Igor Files tab has a new checkbox for searching the Igor Pro 6 User Files folders. Added 12.5 percent and 6.5 percent zoom levels for page layouts. This is intended to make it easier to work with very large page sizes. XOPs Added a workaround to the SQL XOP for compatibility with SQLite. Igor 6.10B03 Igor Application Bugs Fixed Fixed an obscure problem where an operation called from XOPCommand set function local variables instead of global variables. GetFileFolderInfo/P=pathName "subfolderName" was locating the folder associated with pathName, not the sub folder inside. Same for SetFileFolderInfo. Fixed crashing bug in CopyFolder with no destination specified. An error was generated if a packed experiment file was greater than 2GB in size and it contained a packed notebook or procedure file. Fixed a crash when saving a plain text file as RTF. Fixed bug that occurs when calling a triple name button proc when the same module name exists in ProcGlobal. Fixed rare RemoveFromList crash when removing an item at the end of the list that didn't have a trailing separator.
Windows: RemoveFromList("X13;13","#X13;13") no longer crashes. Changed Features Instance numbers are no longer limited to #999. They're now limited to #9999999. Window titles can now be up to 255 characters instead of up to 40. Changed axis mousewheel to expand about the mouse location. On Macintosh, changed behavior of cmd-H and cmd-E, added spelling menu and enabled services menu. Changed how the color checkbox for Export Graphics is saved. See User-Interface Changes for a description. On Macintosh using new graphics mode, the transparency of imported PNG or TIFF pictures is honored except for Igor PDF and EPS export formats. Now support free rotation of tick mark labels and axis labels in graphs and ColorScale annotations. Can now put super and subscripts in wave units. ListBox special kind=1 now supports tables in addition to graphs. New mouse wheel event for named window hooks. SaveExperiment/P=<path> now presets the Save File dialog folder. Debugging on Error breaks into the debugger on stack exhaustion. Changing a global variable using a control now marks the experiment as being modified. On Macintosh, eliminated the limit on number of files that Igor can open at one time. New Features Added new modes for image display of complex data. The Display, Edit, Layout, NewLayout, NewNotebook and NewPanel operations now accept /K=3 to mean hide the window instead of killing it. This is intended for advanced packages. If you use this feature, the only way to kill the window is via DoWindow/K. and Procedure info dialogs. Examples Added Trace Graph.pxp. Procedures Added IgorThief.ipf. XOPs Windows: Fixed a bug in GBLoadWave that prevented it from working with very large files (>2GB). Igor 6.10B02 Igor Application Bugs Fixed Windows: Fixed NewMovie failing if it had to display Save File dialog. Macintosh: Fixed bug where using the open file dialog would later cause dialog wave lists to appear behind the dialog. Updated help files. The Igor 6.10B01 release included out-of-date help files.
New Features Added MatrixOP clip() function. Igor 6.10B01 Bugs Fixed Rewrote interactive code for draw/edit poly/wave. This was mainly to fix a hang on Windows Vista but, because it required a complete rewrite, it may also impact Macintosh. Improved handling of hide/show of notebook and table subwindows. Fixed Exists to look only for waves and variables if a path is given. Fixed a bunch of memory leaks. Windows only: Fixed a problem where the /W=winName flag can be very slow. Windows only: Fixed a problem where creating controls in a floating panel can be very slow. Windows only: Fixed image colors when monitor is in 16 bit mode. Windows only: Fixed Symbol font embedding using Igor PDF and EPS. Intel Mac only: Fixed image export colors using Igor PDF and EPS. Fixed crash resulting from a control procedure running after a failed compile of an independent module. Static fixed-name Functions (User-Defined Hook Functions) now work in independent modules, as do functions supplied to SetIgorHook. See The IndependentModule Pragma. Fixed the problem with the native Titlebox control initially appearing with a too-small frame surrounding it. Macintosh: Previously if you dropped a folder on the Igor Pro icon, Igor reported "the file or folder could not be found". Now it reports "Igor can not open a folder". Windows: Fixed an obscure problem where you could run out of file reference numbers if you did an Open/Close on multiple files more than 32000 times. Windows: Fixed obscure bug that caused the Windows→Close menu item to lose its command-key equivalent (Ctrl-W) on Windows if an XOP window was open. Fixed a crash that happened during startup if the default Igor font was uninstalled or disabled. GetLastUserMenuInfo now correctly sets V_Value to the marker number, not the zero-based menu item index.
It was possible to make a formatted text notebook or help file appear to be corrupted by trying to open it as a procedure file using the Open or Load File dialog (press Shift while opening the file by double-clicking or dragging onto the Igor icon). If this happened and you subsequently opened the file without using the Open or Load File dialog, Igor would say that the file did not appear to be a valid formatted text file. Now the Open or Load File dialog will not let you select the incorrect type when opening a formatted text file. If you have existing files in this condition, you can open them using the correct type via the Open or Load File dialog. Fixed MarcumQ to correctly handle the case where a=0.
GDA stands for GNOME Data Access; it is a library that wraps database connections and their data using GObject classes, letting you execute queries and much more. VDA stands for Vala Data Access; it is a library providing a heavily object-oriented API to access a database's data. The API developed for VDA is going to be: - Heavily object oriented - Asynchronous almost by default - Able to map GObjects to the database Object Oriented GDA uses a lot of structures; they are hard to make introspectable, and so hard to use outside C and Vala. Providers are now Connection objects, so you instantiate them, call open, and catch the opened signal to know whether the connection is up. SQL statements are now Query objects, created from a string and, soon, from structured objects with a simple API; they can call execute on themselves to get a TableModel, AffectedRows, or any other Result object. Objects for almost all kinds of SQL commands will be added, with a simple, easy-to-use API providing the data they need to be executed over the Connection. There are models for tables, rows, and columns, some of them implementing GLib.ListModel, so you can iterate over their members, like rows in a table or columns in a row. Asynchronous Database operations can take time to execute on servers, local or remote, so all Query execution is now done through async methods, as is opening connections. API As you may notice, some of the API is still in development for VDA, so you can use the parts already working, or access GDA's Connection objects if you are using Providers from it. Eventually the whole API will be implemented by native Connection objects, without binding to GDA. The goal is to provide an easy and fast way to access a database's introspection data. Easy API for New Database Servers Currently, GDA's Provider mechanism is hard to implement for new database servers.
VDA provides new, easy-to-implement interfaces for new database servers, so it is possible to extend VDA by creating new libraries, without depending on plugins. Map objects to databases Recently, VDA gained the Vda.DataObject interface. It maps your object to a database table's row, where the row's columns are mapped to the object's properties and back. Vda.DataObject supports: - Getting data from the database, through a SELECT query execution - Setting data in a table's row using an UPDATE command - Creating new rows in the table using an INSERT command - Removing a row in the table using a DELETE command Best of all, you just need to: - Implement Vda.DataObject with just 3 properties - Mark the object's properties you want to map to the table's columns, using each property's nick with text like: @Property Name::id. That is: your fields in the database can have any supported name, including spaces; we use @ to mark a property as to be mapped and ::id to mark it as an ID property used to query the row from the database. All queries to read/write data to the database will be calculated automatically for you. Your class should set a Vda.Connection and the table's name, through the Vda.DataObject.database_connection and Vda.DataObject.database_table_name properties, the latter just at construction time. This is an example of how your code could look. Pay attention to the initialization() method: it was added here to show how the table is created in the database and how the data is mapped using compatible types, in this case string to variant. In the near future, it could be possible to add automatic table creation if the table doesn't exist yet.
public class Client : Object, Vda.DataObject {
    // Database mapping
    [Description (nick="@id::id")]
    public string id { get; set; }
    [Description (nick="@name")]
    public string name { get; set; }
    [Description (nick="@description")]
    public string description { get; set; }
    [Description (nick="@phone")]
    public string phone { get; set; }

    construct {
        database_table_name = "clients";
        id = GLib.Uuid.string_random ();
    }

    public async string initialization () throws GLib.Error {
        var qct = database_connection.parse_string ("CREATE TABLE IF NOT EXISTS clients (id varchar(50), name varchar(50), description varchar(50), phone varchar(50))");
        yield qct.execute (null);
        var qi = database_connection.parse_string ("INSERT INTO clients (id, name, description, phone) VALUES ('"+id+"','"+name+"','"+description+"','"+phone+"')");
        yield qi.execute (null);
        return id;
    }

    // DataObject
    public string database_table_name { get; construct set; }
    public Vda.Connection database_connection { get; set; }
    public Cancellable cancellable { get; set; }
}
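The mapping Vda.DataObject derives from marked properties can be sketched in Python; this is a simplified, hypothetical analogue written only for illustration (parse_mapping and build_update are not part of VDA's API):

```python
# Sketch of Vda.DataObject-style mapping: properties whose nick starts
# with "@" name a column, and "::id" marks the key column.
# All names here are illustrative, not VDA's actual implementation.

def parse_mapping(nicks):
    """nicks: {property_name: nick} -> (columns dict, id property name)."""
    columns, id_field = {}, None
    for prop, nick in nicks.items():
        if not nick.startswith("@"):
            continue                      # unmarked properties are skipped
        column, _, flag = nick[1:].partition("::")
        columns[prop] = column
        if flag == "id":
            id_field = prop
    return columns, id_field

def build_update(table, columns, id_field):
    """Generate the UPDATE statement such a mapper would compute for one row."""
    sets = ", ".join(f"{col} = :{prop}"
                     for prop, col in columns.items() if prop != id_field)
    return f"UPDATE {table} SET {sets} WHERE {columns[id_field]} = :{id_field}"
```

Given the Client class above, the mapper would see nicks like "@id::id" and "@name" and emit an UPDATE keyed on the id column without the programmer writing any SQL.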
NAME
fwrite - binary output

SYNOPSIS
#include <stdio.h>

size_t fwrite(const void *ptr, size_t size, size_t nitems, FILE *stream);

DESCRIPTION
The fwrite() function writes, from the array pointed to by ptr, up to nitems members whose size is specified by size, to the stream pointed to by stream. The file-position indicator for the stream (if defined) is advanced by the number of bytes successfully written. If an error occurs, the resulting value of the file-position indicator for the stream is indeterminate.

The st_ctime and st_mtime fields of the file will be marked for update between the successful execution of fwrite() and the next successful completion of a call to fflush() or fclose() on the same stream or a call to exit() or abort().

RETURN VALUE
The fwrite() function returns the number of members successfully written, which may be less than nitems if a write error is encountered.

ERRORS
Refer to fputc().

EXAMPLES
None.

APPLICATION USAGE
Because of possible differences in member length and byte ordering, files written using fwrite() are application-dependent, and possibly cannot be read using fread() by a different application or by the same application on a different processor.

FUTURE DIRECTIONS
None.

SEE ALSO
ferror(), fopen(), printf(), putc(), puts(), write(), <stdio.h>.

CHANGE HISTORY
Derived from Issue 1 of the SVID.
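The specification gives no example, so here is a minimal fwrite()/fread() round trip; it is an illustration only, not part of the specification, and the function name is invented for this sketch:

```c
#include <assert.h>
#include <stdio.h>

/* Write five doubles with fwrite(), read them back with fread(), and
 * return the member count that fwrite() reported (5 on success). */
static size_t demo_fwrite_roundtrip(void)
{
    double out[5] = {1.0, 2.0, 3.0, 4.0, 5.0};
    double in[5] = {0.0};
    FILE *fp = tmpfile();                /* scratch file, deleted on close */
    if (fp == NULL)
        return 0;

    size_t written = fwrite(out, sizeof out[0], 5, fp);

    rewind(fp);                          /* back to the start before reading */
    size_t read_back = fread(in, sizeof in[0], 5, fp);
    fclose(fp);

    if (read_back != 5 || in[0] != 1.0 || in[4] != 5.0)
        return 0;
    return written;
}
```

Note that, per APPLICATION USAGE above, such a file is only guaranteed readable by the same application on the same processor.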
So, I am writing a Rails web application which has a JSON API for mobile apps. For example, it sends a JSON POST request to example.com/api/orders to create an order.

{id: 1, order: { product_name: "Pizza", price: 10000}}

You can pass a custom status code by using the status option when rendering the response.

def create
  @order = ...
  if @order.save
    render json: @order
  else
    render json: { message: "Validation failed", errors: @order.errors }, status: 400
  end
end

I usually tend to return HTTP 400 on validation errors. The message is a readable status response, and the errors are also attached. This is a response example:

{ message: "Validation failed", errors: [ ... ] }

You can also embed additional attributes.
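The shape of that error payload can be shown in plain Ruby, without Rails; the method name and return structure below are illustrative only, not a Rails API:

```ruby
require 'json'

# Build the JSON body the controller above returns, paired with
# HTTP status 400 (Bad Request) for validation failures.
def validation_error_response(errors)
  {
    status: 400,
    body: { message: "Validation failed", errors: errors }.to_json
  }
end
```

A mobile client can branch on the status code and, when it sees 400, read the errors array for field-level messages.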
🌟 Introducing Dash 🌟 Create Reactive Web Apps in pure Python Dash is an Open Source Python library for creating reactive, Web-based applications. Dash started as a public proof-of-concept on GitHub 2 years ago. We kept this prototype online, but subsequent work on Dash occurred behind closed doors. We used feedback from private trials at banks, labs, and data science teams to guide the product forward. Today, we're excited to announce the first public release of Dash that is both enterprise-ready and a first-class member of Plotly's open-source tools. Dash can be downloaded today from Python's package manager with pip install dash — it's entirely open-source and MIT licensed. You'll find a getting started guide here and the Dash code on GitHub here. Dash is a user interface library for creating analytical web applications. Those who use Python for data analysis, data exploration, visualization, modelling, instrument control, and reporting will find immediate use for Dash. Dash makes it dead-simple to build a GUI around your data analysis code. Simple. Dash app code is declarative and reactive, which makes it easy to build complex apps that contain many interactive elements. Here's an example with 5 inputs, 3 outputs, and cross filtering. This app was composed in just 160 lines of code, all of which were Python. Every aesthetic element of the app is customizable: The sizing, the positioning, the colors, the fonts. Dash apps are built and published in the Web, so the full power of CSS is available. Here's an example of a highly customized, interactive Dash report app, in the brand and style of a Goldman Sachs report. While Dash apps are viewed in the web browser, you don't have to write any Javascript or HTML. Dash provides a Python interface to a rich set of interactive web-based components.
import dash_core_components as dcc
dcc.Slider(value=4, min=-10, max=20, step=0.5,
           labels={-5: '-5 Degrees', 0: '0', 10: '10 Degrees'})

Dash provides a simple reactive decorator for binding your custom data analysis code to your Dash user interface.

@dash_app.callback(Output('graph-id', 'figure'), [Input('slider-id', 'value')])
def your_data_analysis_function(new_slider_value):
    new_figure = your_compute_figure_function(new_slider_value)
    return new_figure

When an input element changes (e.g. when you select an item in the dropdown or drag a slider), Dash's decorator provides your Python code with the new value of the input. Your Python function can do anything that it wants with this new input value: It could filter a Pandas DataFrame, make a SQL query, run a simulation, perform a calculation, or start an experiment. Dash expects that your function will return a new property of some element in the UI, whether that's a new graph, a new table, or a new text element. For example, here's a simple Dash application that updates a text box as you interact with the Graph element. The application code filters data in a Pandas DataFrame based off of the currently selected point. This Dash application displays meta information about drugs as you hover over points in the Graph component. The application code also appends rows to the Table component when elements are added to the multi Dropdown component. Through these two abstractions — Python components and reactive functional decorators — Dash abstracts away all of the technologies and protocols that are required to build an interactive web-based application. Dash is simple enough that you can bind a user interface around your Python code in an afternoon. Architecture Flask and React Dash applications are web servers running Flask and communicating JSON packets over HTTP requests. Dash's frontend renders components using React.js, the Javascript user-interface library written and maintained by Facebook. Flask is great.
It's widely adopted by the Python community and deployed in production environments everywhere. The underlying instance of Flask, and all of its configurable properties, is accessible to Dash app developers. For advanced developers, Dash apps can be extended through the rich set of Flask plugins as well.

React is fantastic too. At Plotly, we've rewritten our entire web platform and our online chart editor with React. One of the incredible things about React is how prolific and talented the community is. The open source React community has published thousands of high quality interactive components, from dropdowns to sliders to calendar pickers to interactive tables.

Dash leverages the power of Flask and React, putting them to work for Python data scientists who may not be expert web programmers.

From React.js to Python Dash Components

Dash components are Python classes that encode the properties and values of a specific React component and that serialize as JSON. Dash provides a toolset to easily package React components (written in JavaScript) as components that can be used in Dash. This toolset uses dynamic programming to automatically generate standard Python classes from annotated React propTypes. The resulting Python classes that represent Dash components are user friendly: they come with automatic argument validation, docstrings, and more.

Here's an example of the dynamically generated argument validation:

```python
>>> import dash_core_components as dcc
>>> dcc.Dropdown(valu=3)
Exception: Unexpected keyword argument `valu`
Allowed arguments: id, className, disabled, multi, options, placeholder, value
```

and an example of the dynamically generated component docstrings:

```python
>>> help(dcc.Dropdown)
class Dropdown(dash.development.base_component.Component)
 |  A Dropdown component.
 |  Dropdown is an interactive dropdown element for selecting one or more
 |  items.
 |  The values and labels of the dropdown items are specified in the `options`
 |  property and the selected item(s) are specified with the `value` property.
 |
 |  Use a dropdown when you have many options (more than 5) or when you are
 |  constrained for space. Otherwise, you can use RadioItems or a Checklist,
 |  which have the benefit of showing the users all of the items at once.
 |
 |  Keyword arguments:
 |  - id (string; optional)
 |  - className (string; optional)
 |  - disabled (boolean; optional): If true, the option is disabled
 |  - multi (boolean; optional): If true, the user can select multiple values
 |  - options (list; optional)
 |  - placeholder (string; optional): The grey, default text shown when no option is selected
 |  - value (string | list; optional): The value of the input. If `multi` is false (the default)
 |    then value is just a string that corresponds to the values
 |    provided in the `options` property. If `multi` is true, then
 |    multiple values can be selected at once, and `value` is an
 |    array of items with values corresponding to those in the
 |    `options` prop.
 |
 |  Available events: 'change'
```

The full set of HTML tags, like <div/>, <img/>, <table/>, are also rendered dynamically with React, and their Python classes are available through the dash_html_components library. A core set of interactive components like Dropdown, Graph, and Slider will be maintained by the Dash core team through the dash_core_components library. Both of these libraries use the standard open-source React-to-Dash toolchain that you could use if you were to write your own component library.

You're not tied to using the standard Dash component library. The Dash component libraries are imported separately from the core Dash library. With the React-to-Dash toolchain, you can easily write or port a React.js component into a Python class that can be used in your Dash application. Here's the tutorial on building your own components. Or, the Dash core team can build one for you.
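The generated argument validation boils down to checking keyword arguments against the component's declared prop names. Here is a rough, stdlib-only sketch of that mechanism; the class layout and attribute names are illustrative, not the actual generated code:

```python
class Component:
    """Mimics Dash's generated components: reject unknown keyword args."""
    _prop_names = []

    def __init__(self, **kwargs):
        for key in kwargs:
            if key not in self._prop_names:
                raise Exception(
                    "Unexpected keyword argument `%s`\n"
                    "Allowed arguments: %s" % (key, ", ".join(self._prop_names))
                )
        # Store validated props as attributes, like the generated classes do.
        self.__dict__.update(kwargs)


class Dropdown(Component):
    _prop_names = ["id", "className", "disabled", "multi",
                   "options", "placeholder", "value"]


# Valid props are accepted; a typo like `valu` raises immediately.
ok = Dropdown(value="a", multi=True)
try:
    Dropdown(valu=3)
    caught = False
except Exception:
    caught = True
```

The point of failing at construction time is that prop typos surface as a readable Python exception rather than as a silently ignored attribute in the browser.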
Concurrency — Multi-User Applications

The state of a Dash application is stored in the front-end (i.e. the web browser). This allows Dash apps to be used in a multitenant setting: multiple users can have independent sessions while interacting with a Dash app at the same time. Dash application code is functional: your application code can read values from the global Python state but it can't modify them. This functional approach is easy to reason about and easy to test: it's just inputs and outputs with no side effects or state.

CSS and Default Styles

CSS and default styles are kept out of the core library for modularity, independent versioning, and to encourage Dash app developers to customize the look and feel of their apps. The Dash core team maintains a core style guide here.

Data Visualization

Dash ships with a Graph component that renders charts with plotly.js. Plotly.js is a great fit for Dash: it's declarative, open source, fast, and supports a complete range of scientific, financial, and business charts. Plotly.js is built on top of D3.js (for publication-quality, vectorized image export) and WebGL (for high-performance visualization).

Dash's Graph element shares the same syntax as the open-source plotly.py library, so you can easily switch between the two. Dash's Graph component hooks into the plotly.js event system, allowing Dash app authors to write applications that respond to hovering, clicking, or selecting points on a Plotly graph.

Open Source Repositories

You can check out the code yourself across a few repositories:

- Dash backend:
- Dash frontend:
- Dash core component library:
- Dash HTML component library:
- Dash component archetype (React-to-Dash toolchain):
- Dash docs and user guide:, hosted at
- Plotly.js — the graphing library used by Dash:

Prior Art

Dash is new in the Python ecosystem, but the concepts and motivation behind Dash have existed for decades in a variety of different languages and applications.
If you're coming from Excel, then your head is in the right place. Both Dash and Excel use a "reactive" programming model. In Excel, output cells update automatically when input cells change. Any cell can be an output, an input, or both. Input cells aren't aware of which output cells depend on them, making it easy to add new output cells or chain together a series of cells. Here's an example Excel "application":

There's an Excel analogy for Dash. Instead of cells, we have rich web-based components like sliders, inputs, dropdowns, and graphs. Instead of writing Excel or VBA script, we're writing Python code. Here is that same spreadsheet application, rewritten in Dash:

```python
app.layout = html.Div([
    html.Label('Hours per Day'),
    dcc.Slider(id='hours', value=5, min=0, max=24, step=1),

    html.Label('Rate'),
    dcc.Input(id='rate', value=2, type='number'),

    html.Label('Amount per Day'),
    html.Div(id='amount'),

    html.Label('Amount per Week'),
    html.Div(id='amount-per-week')
])

@app.callback(Output('amount', 'children'),
              [Input('hours', 'value'), Input('rate', 'value')])
def compute_amount(hours, rate):
    return float(hours) * float(rate)

@app.callback(Output('amount-per-week', 'children'),
              [Input('amount', 'children')])
def compute_amount_per_week(amount):
    return float(amount) * 7
```

I like this example a lot because Excel still reigns supreme, even in technical computing and quantitative finance. I don't think that Excel's dominance is just a matter of technical ability. After all, there are legions of spreadsheet programmers who have learned the nuances of Excel, VBA, and even SQL. It's more that Excel spreadsheets are frequently easier to share than Python programs, and Excel cells are easier to edit than command line arguments. Yet modelling in Excel has well-known limits: these spreadsheets often outgrow themselves. They become too large or fragile to migrate into a production environment, peer review, test, and maintain. Remember the 2013 pro-austerity Excel typo?
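The chained callbacks in the Dash version of the spreadsheet form a tiny dependency graph: the weekly amount recomputes whenever the daily amount does, which recomputes whenever hours or rate change. Because the callbacks are pure functions, that propagation can be sketched without Dash at all; all names here are illustrative:

```python
# Sketch of Excel/Dash-style reactivity: outputs recompute when inputs change.
def compute_amount(hours, rate):
    return float(hours) * float(rate)

def compute_amount_per_week(amount):
    return float(amount) * 7

def propagate(hours, rate):
    """Re-run the dependency chain, the way Dash does after an input event."""
    amount = compute_amount(hours, rate)
    return amount, compute_amount_per_week(amount)

amount, weekly = propagate(hours=5, rate=2)    # initial values
amount2, weekly2 = propagate(hours=8, rate=2)  # user drags the slider
```

Just as in Excel, the input side (hours, rate) knows nothing about which outputs depend on it; adding a new derived value is just another pure function on the chain.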
I hope that Dash makes it easier for developers to use Python for their data projects. By sharing the same functional and reactive principles, it's almost as easy to write a Dash app as it is to write an analytical spreadsheet. It's certainly more powerful and presentable.

If you develop in the R programming language, you're in luck. Shiny is a reactive programming framework for generating web applications in pure R. It's great! You can even create interactive graphics with Shiny and Plotly's R library. Dash and Shiny are similar, but Dash does not aim to be a replica of Shiny. The idioms and philosophies between Python and R are different enough to warrant a different syntax.

If you program in MATLAB, then you may be familiar with MATLAB's user interface library "GUIDE". Mathworks was one of the true original innovators in technical computing — GUIDE was written in 2004, 13 years ago!

If your data is structured in a database, then you may be using Tableau or one of the other BI tools. Tableau is incredible. They've set a new expectation in the industry that end users should have the autonomy and the tools to explore their organization's data. They've also helped popularize the concepts of "drilling down" and cross-filtering. Dash is complementary to BI tools like these. These tools work great for structured data, but when it comes to data transformation and analytics, it's hard to beat the breadth and flexibility of programming languages and communities like Python. Dash abstracts away a lot of the complexities in building user interfaces, enabling you to build a beautiful front-end for your custom data analytics backend.

Finally, I'd like to give a shout-out to Jupyter widgets. Jupyter provides a really nice widget framework inside their notebook interface. You can add sliders to your graphs in the Jupyter notebooks that you run locally. The widgets in Dash are similar to the widgets in Jupyter.
In Jupyter Notebooks, you can add widgets directly alongside your code. In Dash, your controls and application are kept separate from your code. Dash is aimed more towards sharable apps than sharable code and notebooks. You can always mix and match the tools, and write your Dash apps in the Jupyter Notebook environment. We're also big fans of the nteract project, which is really lowering the barrier to entry of Python and Jupyter Notebooks by wrapping up Jupyter Notebook as a desktop application.

Licensing and the Open Source Business Model

Plotly is a VC-backed startup. We were founded in 2013, and we open sourced our core technology, plotly.js, in 2015 (MIT license). We maintain open source libraries in Python, R, and MATLAB that interface with plotly.js, and a web app for creating these charts and connecting them to databases (the connectors are also open source). We provide subscriptions to our chart hosting and sharing platform, and to our chart editing and database querying app. This platform is available on the web (plot.ly) and on-premise.

We're applying a similar model to Dash. Dash is MIT licensed. It's free to use and to modify. For companies, we're offering Dash Enterprise, a deployment server for easily publishing and provisioning Dash apps behind your firewall. Our goal with Dash Enterprise is to make sharing a Dash app internally as easy and secure as possible. No dev-ops required. Dash Enterprise handles the URL routing, the monitoring, the failure handling, the deployment, the versioning, and the package management. Dash apps deployed with Dash Enterprise can be provisioned through your company's Active Directory or LDAP user accounts.

If you're using the open source version locally, there are no restrictions. You can manage deployment of Dash apps yourself through platforms like Heroku or Digital Ocean. If you have the resources, consider purchasing a support plan to get one-on-one help from a Plotly engineer.
If you need more specialized help, or would like to fund specific feature development, reach out to our advanced development program.

Open source is still a new idea for product companies, yet at the end of the day, we're able to dedicate more than half of our staff to open source products. Huge thanks to everyone who has supported us so far ❤️

Thanks for checking out Dash. I'll be giving a talk about Dash at SciPy this summer in Austin and next fall at Plotcon NYC. If you'll be at either of those events, please say hi! Otherwise, I'll see you on GitHub ✌️🏼

Further Resources and Footnotes

- Our Dash documentation is hosted at
- All of our open source work is in our GitHub organization at
- If you'd like to fund specialized features, reach out to our Advanced Development team: plot.ly/products/consulting-and-oem/
- You can find us on Twitter at @plotlygraphs.
- If you're looking for inspiration in user interfaces for technical computing, I highly recommend Bret Victor's essay on What Can A Technologist Do About Climate Change? In particular, the sections on technical computing and media for understanding situations.
- Related: if you find the intersection between technical computing and interfaces interesting, you might like Explorable Explanations.
- You can reach out to me directly at chris@plot.ly or on Twitter at @chriddyp
https://medium.com/@plotlygraphs/introducing-dash-5ecf7191b503
It is very simple to enable a business object to track changes: just add the annotation [ChangeHistory]. The system will automatically track the changes made to the marked elements. You can add the annotation at:

- Business object level: use this if you want to track changes for all fields under the root node and associations under the root node. Please note: sub node elements are not tracked.
- Element level: you can also choose to track selected fields, instead of all fields in the business object, by adding the change history tag to specific fields. Transient fields are not supported, as they are not persisted.
- Node level: all elements under the sub node are tracked.
- Association: associations based on alternative keys can be tracked.

Make sure to save and activate the business object after adding the annotation.

Now that the business object is enabled to record change history, let us see how to enable this on the UI. We will use the standard embedded component SAP_BYD_APPLICATION_UI/Reuse/ChangeHistory/ChangeDocuments_EC. Drag and drop it into our TI facet as shown below.

Let us bind the embedded component. Before binding, prepare the outport and parameters, and make sure you have the node ID from the root node bound in the data model. Create an outport and pass the nodeID as a parameter.

Let us go ahead with the binding: click the bind button on the embedded component and create a simple navigation flow. Under Navigation Configuration -> Navigations, create a new navigation, map the outport and inport from source and target respectively, and bind the parameters as shown below.

Maintain a few parameters on the properties of the embedded component:

- Business Object Name: name of the object where we enabled change history.
- Business Object Namespace: you can find this in the UI data model.
- ECO Name: same as the business object name.
- ECO Namespace: same as the business object namespace.
- IsAttributeSearchEnabled = X

The last step is to make sure that the outport is fired when the facet is loaded. Create an event handler with the operation FireOutport and select the outport configured in the step above. Assign the event handler to the OnClick event of the facet.

Now that the UI part is done, make sure that the BC deployment is done. This is a very important step. Make sure the business object is saved and active. From the solution explorer, right click on the solution and select "Deploy Business Configuration".

And finally, we can test: create a new instance, save it, open the TI, make some updates, save, and navigate to the changes facet.

Change History API

You can also get the change history in the backend using a standard API. ChangeHistory.Read provides all the changes logged for a business object instance. You can get specific changes by applying further filters based on node, from/to date, or changed-by user. Please note that Read will only work if the standard object has already been enabled to track changes, and, for a custom object, if the annotation is added as shown in the section above.

Import parameters:

- BusinessObjectName [Mandatory] – name of the business object along with the namespace.
- NodeID [Mandatory] – UUID of the root node instance.
- NodeName [Optional] – to filter changes of a specific node, specify the name of the node.
- FromChangeDateTime [Optional] – from date-time in GDT format.
- ToChangeDateTime [Optional] – to date-time in GDT format.
- ChangerUUID [Optional] – to filter further based on changes by a specific user.

Export structure:

- ElementName – name of the element whose value was changed.
- NewContent – new, updated value.
- OldContent – old value.
- ChangeDateTime – timestamp recorded when the change was triggered.
- ChangeIdentityUUID – UUID of the user who triggered the change.
- ChangeIdentityName – name of the user who triggered the change.
- NodeUUID – UUID of the node.
- NodeName – name of the node the element is part of.
```
import AP.Common.GDT as GDT;

var BOName = "AP.CRM.Global:Opportunity";
var NodeID : GDT:UUID;
NodeID = this.UUID;
var NodeName = "Root";
var FromDateTime : GLOBAL_DateTime;  // e.g. 2016-04-19T15:30:00Z
var ToDateTime : GLOBAL_DateTime;    // e.g. 2017-04-19T15:30:00Z
var ChangedByUUID = Context.GetCurrentIdentityUUID();

var OppChange = ChangeHistory.Read(BOName, NodeID, NodeName,
                                   FromDateTime, ToDateTime, ChangedByUUID);
```

Hello Pramodh, thanks for the detailed explanation, amazing blog. Going to test this soon. Kind Regards, Johannes
https://blogs.sap.com/2017/05/22/change-history-for-custom-objects/
Chronicles of a .NET Test Ninja

Many of you have probably heard that we've released ASP.NET Ajax Preview 5 on CodePlex, and it's available here. Aside from all the cool updates to the codebase, Preview 5 also includes some updated samples, as well as support for UpdatePanel when using ASP.NET 3.5 SP1. Previously, this didn't work because of updates to the scripts for compatibility with 4.0. Now, with this fixed, you can easily add Ajax Preview 5 functionality to existing sites and enjoy continued operation. There is a very simple example included with the Preview 5 samples that demonstrates this functionality (8_UpdatePanel.aspx under the 1_Basic_DataView folder). I'll quickly cover here how to get your existing UpdatePanel working with the new Preview bits.

Basically, all you need is an additional ScriptReference to the included MicrosoftAjaxWebForms.js file for Preview 5 to work with the UpdatePanel. So your ScriptManager should look something like this:

```
<asp:ScriptManager ID="ScriptManager1" runat="server">
    <Scripts>
        <asp:ScriptReference Path="MicrosoftAjaxWebForms.js" />
    </Scripts>
</asp:ScriptManager>
```

This will allow you to use any UpdatePanels you have on the page in exactly the same way you did in 3.5 SP1, while providing the flexibility for you to include other Ajax Preview scripts and start using those features side by side.

I would argue one step further, however, and state that in many cases where you were using an UpdatePanel before, you can now move to using ADO.NET web services coupled with the Preview 5 scripts. To illustrate this, let's take a look at an old-school sample using UpdatePanel and GridView. This sample illustrates using the UpdatePanel and GridViews to create a simple read-only employee name entry system. A screenshot is shown below:

We're going to put this sample to shame using Preview 5. There's 146 lines of markup in this page, and every time you hit "Insert", you're looking at a partial-page postback, which has to hit the server to do processing, pull down the data for the new page, and then update the appropriate portions.
If instead we use a DataView hooked up to an ADO.NET data context, we can build a similar application which will be much more efficient (dealing with JSON instead of full sets of page data on the wire), much shorter, and much simpler. Let's begin.

Since we already have the samples, let's create a new .aspx page, Employees.aspx, under the 1_Basic_DataView folder. Let's set up the following ScriptManager:

```
<asp:ScriptManager ID="ScriptManager1" runat="server">
    <Scripts>
        <asp:ScriptReference Path="MicrosoftAjax.js" />
        <asp:ScriptReference Path="MicrosoftAjaxTemplates.js" />
        <asp:ScriptReference Path="MicrosoftAjaxDataContext.js" />
    </Scripts>
</asp:ScriptManager>
```

Users of previous previews might recognize that there is a new file here, MicrosoftAjaxDataContext.js, which now contains the DataContext and AdoNetDataContext classes. Of course, these classes become more useful in read/write scenarios, but I'm going to use them in this read-only example for illustration purposes.

Let's also take this opportunity to set up our <body> tag for DataView use by adding the appropriate namespaces:

```
<body xmlns:sys="javascript:Sys"
      xmlns:dv="javascript:Sys.UI.DataView">
```

Also remember to add the .sys-template style to your <head> section:

```
<style type="text/css">
    .sys-template { display: none }
</style>
```

So now we're ready to add our AdoNetDataContext. Let's set up our pointer to the service:

```
<script type="text/javascript">
    var myDC = new Sys.Data.AdoNetDataContext();
    myDC.set_serviceUri("../Services/ImagesDataService.svc");
</script>
```

So here I've created a new AdoNetDataContext and pointed its serviceUri to my ADO.NET Data Service. Now I'm going to set up a DataView to query this service for People so that I can get a list.
So I enter the following markup:

```
<div id="inputTable">
    First Name: <input id="firstNameInput" type="text" /><br />
    Last Name: <input id="lastNameInput" type="text" /><br />
    <a href="#" onclick="insertPerson()">Insert</a>
    <a href="#" onclick="cancelPerson()">Cancel</a>
</div>
<br />Employees:<br />
<div id="employeeView" class="sys-template"
     sys:attach="dv"
     dv:dataprovider="{{ myDC }}"
     dv:autofetch="true"
     dv:fetchoperation="People">
    {{FirstName}} {{LastName}}<br />
</div>
```

So here I'm setting my DataView's dataprovider to the AdoNetDataContext that I created, turning on autofetch, and specifying the fetchoperation to query for "People" from the database. I've also set up a simple UI which includes links to the insertPerson() and cancelPerson() JS functions, which I'm going to write now:

```
function insertPerson() {
    var firstName = $get("firstNameInput").value;
    var lastName = $get("lastNameInput").value;
    if (firstName != "" && lastName != "") {
        var myObject = {
            FirstName: firstName,
            LastName: lastName
        };
        myDC.insertEntity(myObject, "People");
        var data = $find("employeeView").get_data();
        Sys.Observer.insert(data, data.length, myObject);
        $get("firstNameInput").value = "";
        $get("lastNameInput").value = "";
    } else {
        alert("You must enter a first and last name!");
    }
}

function cancelPerson() {
    $get("firstNameInput").value = "";
    $get("lastNameInput").value = "";
}
```

So basically in insertPerson(), I'm creating a person object based on the first and last name that were entered by the user (assuming they weren't blank), and inserting an entity into my AdoNetDataContext. For the purposes of this example, this isn't strictly necessary, but I do it here for illustration purposes (in case you want to add read/write later). Then, I simply need to update the rendered data on the client, and I do so using the insert method of the Sys.Observer class, which allows me to insert the person object in a way that is recognized by the DataView. Then I clear the input fields for the next person to be entered.
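The Sys.Observer.insert call is what keeps the DataView in sync: instead of re-rendering the whole list, you insert into the observed array and any listeners react. Stripped of the MicrosoftAjax plumbing, the pattern boils down to something like this plain-JavaScript sketch (names are illustrative, not part of the framework):

```javascript
// Minimal stand-in for the observable-collection pattern: listeners are
// notified when an item is inserted, the way Sys.Observer notifies a DataView.
function makeObservable(array) {
  var listeners = [];
  return {
    data: array,
    addListener: function (fn) { listeners.push(fn); },
    insert: function (index, item) {
      array.splice(index, 0, item);           // mutate the underlying array
      listeners.forEach(function (fn) { fn(item); }); // notify observers
    }
  };
}

var people = makeObservable([{ FirstName: "Tom", LastName: "S" }]);
var rendered = [];
people.addListener(function (p) { rendered.push(p.FirstName + " " + p.LastName); });

// Like insertPerson(): append and let the "view" update itself.
people.insert(people.data.length, { FirstName: "Ada", LastName: "L" });
```

The key design point is that the code doing the insert never touches the DOM; the view subscribes once and stays current.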
In cancelPerson(), I'm doing something similar, where I simply clear the input fields. Of course, it's easy to add read/write scenarios to this sample. I encourage you to check out the ImageOrganizer sample and associated code there for further examples.

So there you have it. Although it doesn't do exactly what the UpdatePanel sample does, it's essentially the same idea. Final line count: 67. Win :)

Selected reader comments:

"Great news about Preview 5, but how many more are there going to be? Will the final release be with ASP.NET 4.0?"

"I'm using Asp.Net Ajax 4.0 preview 5. Which way is correct?

Way #1 (from this post):
var myDC = new Sys.Data.AdoNetDataContext();
myDC.set_serviceUri("WebDataService.svc");

Way #2 (found in the Preview5Samples):
var myDC = new Sys.Data.AdoNetServiceProxy('WebDataService.svc');

Both are giving me errors. The service does work: I can call it from the browser and get output. Any ideas, or any material I can read to better understand these objects (AdoNetDataContext & AdoNetServiceProxy)? Thank you, William Apken"

"Is it possible to use scripts from the CDN when there's a script manager on the page?"
http://weblogs.asp.net/jimwang/archive/2009/09/11/asp-net-ajax-preview-5-and-updatepanel.aspx
Our Xamarin Portable Class Library (PCL) is the best way to integrate analytics into your Xamarin application. It lets you record analytics data from your C#, F#, and .NET code, and supports PCL Profile 4.0 - Profile136, which targets the following platforms:

- .NET Framework 4 or later
- Windows Phone 8 or later
- Silverlight 5
- Windows 8
- Windows Phone Silverlight 8
- Windows Store apps (Windows 8)
- Xamarin.Android
- Xamarin.iOS

The library issues requests that hit our servers, and then we route your data to any analytics service you enable on our destinations page. This library is open source, so you can check it out on GitHub.

Note: Since Xamarin requires our library to be portable to different builds, we can only enable server-side destinations, as opposed to bundling select native SDKs like we do for iOS and Android. Look for the "Server" icon when selecting destinations. For tools for which we offer both bundled and server-side destinations, like Mixpanel, Amplitude, and Google Analytics, our Xamarin library will only be able to leverage their server-side functionality. Read this help article for more information.

Getting Started

Clone Analytics.Xamarin from GitHub with git clone. Import the Analytics.Xamarin project into Xamarin Studio, and add it as a reference to your code. Now you'll need to initialize the library:

```
using Segment;

// initialize with your Segment source write key
Analytics.Initialize("YOUR_WRITE_KEY");
```

Identify

identify lets you tie a user to their actions and record traits about them. It includes a unique User ID and any optional traits you know about them. We recommend calling identify a single time when the user's account is first created, and only identifying again later when their traits change.
Example identify call:

```
Analytics.Client.Identify("019mr8mf4r", new Traits() {
    { "name", "Tom Smykowski" },
    { "email", "tom@initech.com" },
    { "friends", 29 }
});
```

This example call identifies Tom by his unique User ID (the one you know him by in your database) and labels him with name, email, and friends traits. Find details on the identify call's fields and payload in our Spec.

Track

track lets you record the actions your users perform. Every action triggers what we call an "event", which can also have associated properties. You'll want to track events that are indicators of success for your site, like Signed Up, Item Purchased or Article Bookmarked. To get started, we recommend tracking just a few important events. You can always add more later!

Example track call:

```
Analytics.Client.Track("019mr8mf4r", "Item Purchased", new Properties() {
    { "revenue", 39.95 },
    { "shipping", "2-day" }
});
```

This example track call tells us that your user just triggered the Item Purchased event with a revenue of $39.95 and chose your hypothetical '2-day' shipping. track event properties can be anything you want to record. Find details on the track call's fields and payload in our Spec.

Screen

screen lets you record whenever a user sees a screen of your mobile app, along with any properties about the screen. Example screen call:

```
Analytics.Client.Screen("019mr8mf4r", "Register", new Properties() {
    { "type", "facebook" }
});
```

Find details on the screen payload in our Spec.

Group

group lets you associate an identified user with a group. A group could be a company, organization, account, project or team! It also lets you record custom traits about the group, like industry or number of employees. This is useful for tools like Intercom, Preact and Totango, as it ties the user to a group of other users.

Example group call:

```
Analytics.Client.Group("userId", "groupId", new Traits() {
    { "name", "Initech, Inc." },
    { "website", "" }
});
```

Find more details about group, including the group payload, in our Spec.

Alias

alias is how you associate one identity with another.
This is an advanced method, but it is required to manage user identities successfully in some of our destinations. In Mixpanel it's used to associate an anonymous user with an identified user once they sign up. For KISSmetrics, if your user switches IDs, you can use alias to rename the userId.

    Analytics.Client.Alias(..., "identified@gmail.com");

    // the identified user is identified
    Analytics.Client.Identify("identified@gmail.com", new Traits() {
        plan: "Free"
    });

    // the identified user does actions
    ...
    Analytics.Client.Track("identified@gmail.com", "Identified Action");

For more details about alias, including the alias call payload, check out our Spec.

Options

An Options object lets you:

Selecting Destinations

The alias, group, identify, page and track calls can all be passed an object of options that lets you turn certain destinations on or off. By default all destinations are enabled. Here's an example identify call with the options object shown. Destination flags are case sensitive and match the destination's name in the docs (i.e. "AdLearn Open Platform", "awe.sm", "MailChimp", etc.).

Note: Available at the business level, filtering track calls can be done right from the Segment UI on your source schema page. We recommend using the UI if possible since it's a much simpler way of managing your filters and can be updated with no code changes on your side.

Historical Import

You can import historical data by adding the timestamp argument to your identify and track calls. Note: If you're tracking things that are happening right now, leave out the timestamp and our servers will timestamp the requests for you.

    Analytics.Client.Track("sadi89e2jd", "Logged Workout", new Properties() {
        { "distance", "10 miles" },
        { "city", "Boston" },
    }, new Options()
        .SetTimestamp(new DateTime(2010, 1, 18))
    );

Context

If you're running a web server, you might want to send context variables such as userAgent or ip with your page or screen calls. You can do so by setting the Context in the Options object.
    Analytics.Client.Page("019mr8mf4r", "Login", new Properties() {
        { "path", "/login" },
        { "title", "Initech Login" }
    }, new Options()
        .SetContext(new Context() {
            { "app", "Education App 2" }
        }));

Learn more on the Context page.

Anonymous ID

By default, the Xamarin library requires all messages to have a userId. If you would like to use an anonymousId, ...

Analytics.Xamarin can run on a web server that's serving hundreds of requests per second. By default (in async mode), this library will start a single separate thread on initialization, and flush all messages on that thread. That means every method you call does not result in an HTTP request, but is queued in memory instead. Messages are flushed in batch in the background, which allows for much faster operation.

How do I turn batching off? Sometimes you might not want batching (e.g. ...). Check out these gizmos:

    Analytics.Initialize("YOUR_WRITE_KEY", new Config()
        .SetAsync(true)
        .SetTimeout(TimeSpan.FromSeconds(10))
        .SetMaxQueueSize(10000));

Logging

Analytics.Xamarin has detailed logging, which you can enable by attaching your own handler, like so:

    using Segment;

    Segment.Logger.Handlers += Logging_Handler;

    void Logging_Handler(Level level, string message, Dict args)
    {
        if (args != null)
        {
            foreach (string key in args.Keys)
            {
                message += String.Format(" {0}: {1},", "" + key, "" + args[key]);
            }
        }
        Console.WriteLine(String.Format("[Analytics] [{0}] {1}", level, message));
    }

If you have any questions, or see anywhere we can improve our documentation, please let us know!
https://segment.com/docs/sources/mobile/xamarin/
This is from the VSoup documentation and should be valid on Win32 too (for VSoup95/NT of course):

--------------------------------------------------------------------------------

Give me sample scripts for simple VSoup IO operation.

Simple Reception:

Because this is a simple approach, NEWSRC & SCORE resides in the %HOME% directory.

- Change to a directory for IO operation, e.g. c:\vsoup.
- Call VSoup, e.g.

      vsoup -M nntp://your.news.server pop3://your.pop3.server

  The output of the VSoup operation will be in the current directory, i.e. in c:\vsoup in this example. If your 'internet settings' are set up correctly by your dialer, you could omit the nntp:// and pop3:// specifications in the command line.
- Feed the received news/EMails and the status mail generated by VSoup into the database of your newsreader. For Yarn the import program will be used (e.g. import -u). If you are using different programs for handling news/EMails, you could do two sequential invocations of VSoup:

      vsoup -Mm nntp://your.news.server
      handle_news_import
      vsoup -Mn pop3://your.pop3.server
      handle_mail_import

  Of course these VSoup instances could also be started in parallel through e.g.:

      start do_news_reception
      start do_mail_reception

  This approach requires a little bit more effort than the sequential one. Check YarnIo as an example.

Simple Transmission:

- Change to the directory where your reply packets (e.g. reply.zip) reside. If they are zipped (or packed in another way), you have to unzip/unpack them before VSoup is called (e.g. unzip -oq reply.zip & del reply.zip).
- Call VSoup, e.g.

      vsoup -Ms nntp://your.news.server smtp://your.mail.gateway

  Failure of transmission should be handled in a proper way, e.g. if .\REPLIES still exists, you have to rezip the not transmitted news/mails (for Yarn IO you have to invoke zip -0m reply.zip replies news.msg mail.msg). If your 'internet settings' are set up correctly by your dialer, you could omit the nntp:// and smtp:// specifications in the command line.
- Handle the status mail generated by VSoup (e.g. import -u for Yarn IO). If you are using different programs for handling news/mail, you could do two sequential invocations of VSoup or you could start two VSoup in parallel (see above).

IO for Yarn with simple scripts
-------------------------------

We assume that the IO subdirectory is at c:\vsoup, the news/EMail reader/writer is Yarn, the server URLs are taken from the 'internet settings', a status mail is generated by VSoup, the reply packet is stored by Yarn to c:\vsoup\reply.zip, and the NEWSRC & SCORE (if one exists) are located in %HOME%:

Script for simple reception:

    c:
    cd \vsoup
    vsoup -M
    import -u

Script for simple transmission:

    c:
    cd \vsoup
    unzip -oq reply
    del reply.zip
    vsoup -Ms
    import -u
    zip -0m reply replies news.msg mail.msg

--------------------------------------------------------------------------------

Hardy

--
Hardy Griech, Ernetstr. 10/1, D-77933 Lahr
http://www.vex.net/yarn/list/199802/0064.html
How to count number of sentences in a text file in Java

If you are wondering how many sentences a .txt file contains, you will get your question answered here! In this tutorial, we are going to study how to count the sentences present in a text file in Java. Using a simple Java program you will get your solution.

In this tutorial, you will get to learn:

- How to open a file in read mode in Java.
- Use of regular expressions in Java.

Let's get started! Basically, this whole idea can be divided into the following steps:

- Open the file in read mode.
- Read the content of the file line by line.
- Iterate till the end of the file.
- Using a regular expression, look for the pattern and find a match (sentences are those which end with '.', '?' or '!' followed by a space).
- Finally, count the sentences present in the file.

Count the number of sentences in a text file in Java

Let's start coding these steps!

- Import the java.io package, as we are handling files, and the java.util.regex package for the regular expression.

      import java.io.*;
      import java.util.regex.*;

- Create an object of BufferedReader, as it is a way to read the content of the file. But we can't communicate directly with the file; we need some reader object. Generally, FileReader is used to communicate with the file, therefore create an object of FileReader within the BufferedReader.

      BufferedReader br = new BufferedReader(new FileReader("filename.txt"));

- Read the content of the file line by line and store it in a string variable.
- Using the regular expression, search for the pattern, e.g. a full stop followed by a space (". "), a question mark followed by a space ("? "), or an exclamation mark followed by a space ("! "). If a match is found, increment the counter.

      int stmt = 0;
      String s;
      Pattern p = Pattern.compile("[.?!][ ]");   // compile once, outside the loop
      while ((s = br.readLine()) != null) {
          Matcher m = p.matcher(s);
          while (m.find()) {
              stmt++;
          }
      }

- Finally, print the counter value, which will give the number of sentences.
      System.out.println("number of sentences: " + stmt);

Count number of sentences in Java

Below is the given code for our task:

    import java.io.*;             // import io packages
    import java.util.regex.*;     // import package for regular expressions

    public class count_sentences {
        public static void main(String[] args) throws IOException {
            // open the file in read mode
            BufferedReader br = new BufferedReader(new FileReader("filename.txt"));
            int stmt = 0;
            String s;
            // '.', '?' or '!' followed by whitespace or the end of the line,
            // so the last sentence on a line is counted too
            Pattern p = Pattern.compile("[.?!](\\s|$)");
            // read the content line by line and iterate till the end of the file
            while ((s = br.readLine()) != null) {
                Matcher m = p.matcher(s);
                while (m.find()) {
                    stmt++;
                }
            }
            System.out.println("number of sentences: " + stmt);
            br.close();   // close the file
        }
    }

Content of the file

    What is the full form of W.H.O.? This is statement. This seems amazing! This is another statement.

Output

    number of sentences: 4

This is how we can find the number of sentences. I hope you find this tutorial useful.
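If you are on Java 11 or newer, the same count can be done in a single pass over the whole file. This is only a sketch — the class name, helper method and fallback sample text are illustrative, not part of the original tutorial:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CountSentencesCompact {
    // '.', '?' or '!' followed by whitespace or the very end of the input
    static final Pattern SENTENCE_END = Pattern.compile("[.?!](\\s|\\z)");

    static int countSentences(String text) {
        Matcher m = SENTENCE_END.matcher(text);
        int count = 0;
        while (m.find()) {
            count++;
        }
        return count;
    }

    public static void main(String[] args) throws Exception {
        // read the whole file at once (Java 11+); fall back to a sample string
        String text = args.length > 0
                ? Files.readString(Path.of(args[0]))
                : "This is one sentence. And here is another!";
        System.out.println("number of sentences: " + countSentences(text));
    }
}
```

Because the pattern also matches at the end of the input (\z), a sentence that ends exactly at the end of the file is counted without needing a trailing space.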
https://www.codespeedy.com/count-number-of-sentences-in-a-text-file-in-java/
This C++ program checks whether the year entered is a leap year. A year is a leap year if it is divisible by 4, except for century years (divisible by 100), which are leap years only when they are also divisible by 400.

Here is the source code of the C++ program which checks if the year entered is a leap year. The C++ program is successfully compiled and run on a Linux system. The program output is also shown below.

    /*
     * C++ Program to Check if the Entered Year is a Leap Year
     */
    #include <iostream>
    using namespace std;

    int main()
    {
        int year;
        cout << "Enter the year in yyyy format : ";
        cin >> year;
        // divisible by 4 but not by 100, or divisible by 400
        if ((year % 4 == 0 && year % 100 != 0) || year % 400 == 0)
            cout << year << " is a leap year";
        else
            cout << year << " is not a leap year";
    }

    $ g++ main.cpp
    $ ./a.out
    Enter the year in yyyy format : 2013
    2013 is not a leap year
    $ ./a.out
    Enter the year in yyyy format : 2012
    2012 is a leap year

Sanfoundry Global Education & Learning Series – 1000 C++ Programs. If you wish to look at all C++ Programming examples, go to C++ Programs.
http://www.sanfoundry.com/cpp-program-to-check-if-year-is-leap/
Icons

The RadMenu allows you to display an icon for each of the menu items. It can be done by setting the Icon property of the RadMenuItem. This property is of type Image, so you have to provide an element of type Image for it. As there are two ways to populate the RadMenu with data, this topic will explain how to set this property in both of them.

Setting the Icon of a Static Item

To learn more about this way of populating the RadMenu with static data, take a look at the Using Static Items topic. By using static items you can directly access the Icon property of each item. Respectively, you can set it directly:

XAML

    <telerik:RadMenuItem>
        <telerik:RadMenuItem.Icon>
            <Image Source="/Images/newFile.png" Stretch="None" />
        </telerik:RadMenuItem.Icon>
    </telerik:RadMenuItem>

Here is a snapshot of the result.

Setting the Icon of a Dynamic Item

To learn more about this way of populating the RadMenu with dynamic data, take a look at the Binding to Dynamic Data topic. When using dynamic items, you have to bind the Icon property to a property of the data item. To see how to do this read here. The specific thing here is that your data item should expose a property of type Image, so the Icon property can be bound properly. In most cases you will use this property with the RadMenuItem, so you can make it a read-only property that wraps another property of your data item. For example, here is an IconUrl property of type Uri that is wrapped inside an Icon property of type Image.

C#

    public class MenuItem
    {
        //...
        public Uri IconUrl { get; set; }

        public Image Icon
        {
            get
            {
                return new Image() { Source = new BitmapImage(this.IconUrl) };
            }
        }
        //...
    }

VB.NET

    Public Class MenuItem
        '...
        Public Property IconUrl() As Uri

        Public ReadOnly Property Icon() As Image
            Get
                Dim img As New Image()
                img.Source = New BitmapImage(Me.IconUrl)
                Return img
            End Get
        End Property
        '...
    End Class

The result is the same.
http://docs.telerik.com/devtools/wpf/controls/radmenu/features/icons
Thank Mone for your help...I have been fixed this problem.

what about the snapshot setting? If you open the application in a completely different browser instead of opening it in another tab, does it receive data?

snapshot settings??? even if I open... I don't see anything... But it have control request in server

This is alert from server when i open new tab. 07.Aug.09 15:33:51,036 < INFO> Serving request: /lightstreamer/create_session.js...

When I open more than 1 tab in IE(Chrome or Firefox) then only tab can be streaming data. Others tab is not streaming data. Why? This is my code in class StockBoardAdapter: public class...

thanks Mone so much!

Example: about 1 minute. IE automatically but i don't refresh webpage.

This is a error: 11.Aug.09 09:41:22,041 <ERROR> Exception caught while subscribing to item 'VIC' com.lightstreamer.adapters.remote.RemotingException: Generic exception on remote side:...

11.Aug.09 09:28:15,879 <ERROR> Exception caught while subscribing to item 'item9 5' com.lightstreamer.adapters.remote.RemotingException: Timeout while waiting for an answer to request...

It's mean web browser refresh page automatically. Thanks for your help!

Why did Web Client automatically refresh??? once a minute.???

In Ex, I have 10 line(10 stock name). And now, i need to write x line. Adapter will sent to client x line. How to web client write x line in html???

EX: I have 100 stock name from Database and I use Adapter to post to server. In web client, how to get number of stock name to write in html. Thank you so much!

hello everybody, I have a problem with javascript. I don't know how to get the number of stock name. I want to generate automatically name stock. Ex: I have x stock name. and i want to...
https://forums.lightstreamer.com/search.php?s=592c93d2c4df409950ca1e8384ba0bed&searchid=743161
In this hands-on lab, we will install Helm and configure the standard repository. Once that is complete, we will release a chart to ensure everything is working properly. After the release is verified, we will use Helm to clean up our cluster and remove the resources we created.

Learning Objectives

Successfully complete this lab by achieving the following learning objectives:

- Install and Configure Helm

  On the Kubernetes primary server, download and install Helm from its packaged binary. Once it is installed, configure the repo and ensure it is named stable and is up-to-date.

- Create a Helm Release

  Ensure Helm is working by creating a Helm release in the default namespace. The release should be named test and WordPress should be the application that is released.

- Verify the Release and Clean Up

  Check the status of the release to ensure the release succeeded, then verify in kubectl that the resources were created. Once you are satisfied that the resources were created, remove the resources from the cluster. Then, confirm there are no resources present in the default namespace.
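The objectives above map to a handful of commands. This is only a sketch: the Helm version number is a placeholder, the chart name assumes the classic stable repository, and the exact flags can differ between Helm versions (Helm 3 syntax shown):

```shell
# Install Helm from its packaged binary (substitute the current version)
wget https://get.helm.sh/helm-v3.9.0-linux-amd64.tar.gz
tar -zxvf helm-v3.9.0-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm

# Configure the standard repository, name it "stable", and update it
helm repo add stable https://charts.helm.sh/stable
helm repo update

# Create a release named "test" running WordPress in the default namespace
helm install test stable/wordpress

# Verify the release, then clean up and confirm nothing is left
helm status test
kubectl get all
helm uninstall test
```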
https://acloudguru.com/hands-on-labs/installing-helm
Opened 6 years ago
Closed 6 years ago

#19895 closed Bug (fixed)

Second iteration over an invalid queryset returns an empty list instead of an exception

Description

As a part of #17664 it was discovered that an invalid queryset only raises exceptions during the first iteration. When iterating over the queryset again, an empty list is returned, i.e. the following test case would fail:

    def test_invalid_qs_list(self):
        qs = Article.objects.order_by('invalid_column')
        self.assertRaises(FieldError, list, qs)
        self.assertRaises(FieldError, list, qs)

Attachments (3)

Change History (19)

comment:1 Changed 6 years ago by
comment:2 Changed 6 years ago by
comment:3 Changed 6 years ago by
comment:4 Changed 6 years ago by

As suggested by jacobkm on IRC, here's the updated patch:

comment:5 Changed 6 years ago by
comment:6 Changed 6 years ago by

That commit is causing a serious ORM memory leak in one of my applications. It may be that my code is not the cleanest, but anyway, I consider this a serious regression.

comment:7 Changed 6 years ago by

Attached is a minimalistic test case that will show the memory leak. The case is simple - have enough objects that one ITERATOR_CHUNK_SIZE will not convert all the objects (that is, more than 100 objects in the queryset). Do bool(qs). This will result in a memory leak when this ticket's patch is applied, but will not leak if this ticket's patch isn't applied.

The reason for the leak is a bug in Python itself. The gc.garbage docs say that:

    """
    A list of objects which the collector found to be unreachable but could
    not be freed (uncollectable objects). By default, this list contains only
    objects with __del__() methods. Objects that have __del__() methods and
    are part of a reference cycle cause the entire reference cycle to be
    uncollectable, including objects not necessarily in the cycle but
    reachable only from it. ...
    """

However, no __del__ method is defined anywhere, so there should not be any uncollectable objects.
Also, pypy collects the garbage, so this is another thing pointing to a bug in Python. I have tested this with Python 2.7.3 and Python 3.2.3, and both of those will leak. Pypy 1.8.0 collects the garbage correctly.

Steps to reproduce: unpack the attachment, run tester.py, see if gc.garbage has a reference to _safe_iterator.

Even if this is a bug in Python, this has to be fixed in Django itself. The memory leak can be bad. It seems just reverting the commit is the right fix. Interestingly enough, doing this change in Query.iterator() is enough to cause the leak:

    try:
        iterator() code here...
    except Exception:
        raise
One solution is wrapping the iterator in another method, the other is putting the required try/catch in the iterator() method itself, which pushes the indentation to six levels deep maximum.
https://code.djangoproject.com/ticket/19895
The Q3CanvasPixmap class provides pixmaps for Q3CanvasSprites. More...

#include <Q3CanvasPixmap>

This class is part of the Qt 3 support library. It is provided to keep old source code working. We strongly advise against using it in new code. See Porting to Qt 4 for more information.

The Q3CanvasPixmap class provides pixmaps for Q3CanvasSprites.

If you want to show a single pixmap on a Q3Canvas use a Q3CanvasSprite with just one pixmap.

When pixmaps are inserted into a Q3CanvasPixmapArray they are held as Q3CanvasPixmaps. Q3CanvasSprites are used to show pixmaps on Q3Canvases and hold their pixmaps in a Q3CanvasPixmapArray. If you retrieve a frame (pixmap) from a Q3CanvasSprite it will be returned as a Q3CanvasPixmap.

Q3CanvasPixmap can have a hotspot which is defined in terms of an (x, y) offset. When you create a Q3CanvasPixmap from a PNG file or from a QImage that has a QImage::offset(), the offset() is initialized appropriately, otherwise the constructor leaves it at (0, 0). You can set it later using setOffset(). When the Q3CanvasPixmap is used in a Q3CanvasSprite, the offset position is the point at Q3CanvasItem::x() and Q3CanvasItem::y(), not the top-left corner of the pixmap.

Note that for Q3CanvasPixmap objects created by a Q3CanvasSprite, the position of each Q3CanvasPixmap object is set so that the hotspot stays in the same position.

See also Q3CanvasPixmapArray, Q3CanvasItem, Q3CanvasSprite, QtCanvas, and Porting to Graphics View.

- Constructs a Q3CanvasPixmap that uses the image stored in datafilename.
- Constructs a Q3CanvasPixmap from the image image.
- Constructs a Q3CanvasPixmap from the pixmap pm using the offset offset.
- Destroys the pixmap.
- Returns the x-offset of the pixmap's hotspot.
- Returns the y-offset of the pixmap's hotspot.
- Sets the offset of the pixmap's hotspot to (x, y). Warning: Do not call this function if any Q3CanvasSprites are currently showing this pixmap.
http://doc.trolltech.com/main-snapshot/q3canvaspixmap.html
Effective Matplotlib

Yellowbrick generates visualizations by wrapping matplotlib, the most prominent Python scientific visualization library. Because of this, Yellowbrick is able to generate publication-ready images for a variety of GUI backends, image formats, and Jupyter notebooks. Yellowbrick strives to provide well-styled visual diagnostic tools and complete information. However, to customize figures or roll your own visualizers, a strong background in using matplotlib is required.

With permission, we have included part of Chris Moffitt's Effectively Using Matplotlib as a crash course into Matplotlib terminology and usage. For a complete example, please visit his excellent post on creating a visual sales analysis! Additionally we recommend Nicolas P. Rougier's Matplotlib tutorial for an in-depth dive.

Figures and Axes

This graphic from the matplotlib faq is gold. Keep it handy to understand the different terminology of a plot. Most of the terms are straightforward but the main thing to remember is that the Figure is the final image that may contain 1 or more axes. The Axes represent an individual plot. Once you understand what these are and how to access them through the object oriented API, the rest of the process starts to fall into place. The other benefit of this knowledge is that you have a starting point when you see things on the web. If you take the time to understand this point, the rest of the matplotlib API will start to make sense.

Matplotlib keeps a global reference to the global figure and axes objects which can be modified by the pyplot API. To access this, import matplotlib as follows:

    import matplotlib.pyplot as plt
    axes = plt.gca()

The plt.gca() function gets the current axes so that you can draw on it directly. You can also directly create a figure and axes as follows:

    fig = plt.figure()
    ax = fig.add_subplot(111)

Yellowbrick will use plt.gca() by default to draw on.
You can access the Axes object on a visualizer via its ax property:

    from sklearn.linear_model import LinearRegression
    from yellowbrick.regressor import PredictionError

    # Fit the visualizer
    model = PredictionError(LinearRegression())
    model.fit(X_train, y_train)
    model.score(X_test, y_test)

    # Call finalize to draw the final yellowbrick-specific elements
    model.finalize()

    # Get access to the axes object and modify labels
    model.ax.set_xlabel("measured concrete strength")
    model.ax.set_ylabel("predicted concrete strength")
    plt.savefig("peplot.pdf")

You can also pass an external Axes object directly to the visualizer:

    model = PredictionError(LinearRegression(), ax=ax)

Therefore you have complete control of the style and customization of a Yellowbrick visualizer.

Creating a Custom Plot

The first step with any visualization is to plot the data. Often the simplest way to do this is using the standard pandas plotting function (given a DataFrame called top_10):

    top_10.plot(kind='barh', y="Sales", x="Name")

The reason I recommend using pandas plotting first is that it is a quick and easy way to prototype your visualization. Since most people are probably already doing some level of data manipulation/analysis in pandas as a first step, go ahead and use the basic plots to get started.

Assuming you are comfortable with the gist of this plot, the next step is to customize it. Some of the customizations (like adding titles and labels) are very simple to use with the pandas plot function. However, you will probably find yourself needing to move outside of that functionality at some point. That's why it is recommended to create your own Axes first and pass it to the plotting function in Pandas:

    fig, ax = plt.subplots()
    top_10.plot(kind='barh', y="Sales", x="Name", ax=ax)

The resulting plot looks exactly the same as the original but we added an additional call to plt.subplots() and passed the ax to the plotting function. Why should you do this?
Remember when I said it is critical to get access to the axes and figures in matplotlib? That's what we have accomplished here. Any future customization will be done via the ax or fig objects. We have the benefit of a quick plot from pandas but access to all the power from matplotlib now. An example should show what we can do now. Also, by using this naming convention, it is fairly straightforward to adapt others' solutions to your unique needs.

Suppose we want to tweak the x limits and change some axis labels? Now that we have the axes in the ax variable, we have a lot of control:

    fig, ax = plt.subplots()
    top_10.plot(kind='barh', y="Sales", x="Name", ax=ax)
    ax.set_xlim([-10000, 140000])
    ax.set_xlabel('Total Revenue')
    ax.set_ylabel('Customer');

Here's another shortcut we can use to change the title and both labels:

    fig, ax = plt.subplots()
    top_10.plot(kind='barh', y="Sales", x="Name", ax=ax)
    ax.set_xlim([-10000, 140000])
    ax.set(title='2014 Revenue', xlabel='Total Revenue', ylabel='Customer')

To further demonstrate this approach, we can also adjust the size of this image. By using the plt.subplots() function, we can define the figsize in inches. We can also remove the legend using ax.legend().set_visible(False):

    fig, ax = plt.subplots(figsize=(5, 6))
    top_10.plot(kind='barh', y="Sales", x="Name", ax=ax)
    ax.set_xlim([-10000, 140000])
    ax.set(title='2014 Revenue', xlabel='Total Revenue')
    ax.legend().set_visible(False)

There are plenty of things you probably want to do to clean up this plot. One of the biggest eyesores is the formatting of the Total Revenue numbers. Matplotlib can help us with this through the use of the FuncFormatter. This versatile function can apply a user defined function to a value and return a nicely formatted string to place on the axis.
Here is a currency formatting function to gracefully handle US dollars in the several hundred thousand dollar range:

    def currency(x, pos):
        """ The two args are the value and tick position """
        if x >= 1000000:
            return '${:1.1f}M'.format(x*1e-6)
        return '${:1.0f}K'.format(x*1e-3)

Now that we have a formatter function, we need to define it and apply it to the x axis. Here is the full code:

    fig, ax = plt.subplots()
    top_10.plot(kind='barh', y="Sales", x="Name", ax=ax)
    ax.set_xlim([-10000, 140000])
    ax.set(title='2014 Revenue', xlabel='Total Revenue', ylabel='Customer')
    formatter = FuncFormatter(currency)
    ax.xaxis.set_major_formatter(formatter)
    ax.legend().set_visible(False)

That's much nicer and shows a good example of the flexibility to define your own solution to the problem. The final customization feature I will go through is the ability to add annotations to the plot. In order to draw a vertical line, you can use ax.axvline() and to add custom text, you can use ax.text(). For this example, we'll draw a line showing an average and include labels showing three new customers. Here is the full code with comments to pull it all together.

    # Create the figure and the axes
    fig, ax = plt.subplots()

    # Plot the data and get the average
    top_10.plot(kind='barh', y="Sales", x="Name", ax=ax)
    avg = top_10['Sales'].mean()

    # Set limits and labels
    ax.set_xlim([-10000, 140000])
    ax.set(title='2014 Revenue', xlabel='Total Revenue', ylabel='Customer')

    # Add a line for the average
    ax.axvline(x=avg, color='b', label='Average', linestyle='--', linewidth=1)

    # Annotate the new customers
    for cust in [3, 5, 8]:
        ax.text(115000, cust, "New Customer")

    # Format the currency
    formatter = FuncFormatter(currency)
    ax.xaxis.set_major_formatter(formatter)

    # Hide the legend
    ax.legend().set_visible(False)

While this may not be the most exciting plot it does show how much power you have when following this approach. Up until now, all the changes we have made have been with the individual plot.
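The currency formatter above can be sanity-checked on its own, without matplotlib, by calling it the way the tick machinery would — with a value and a tick position:

```python
def currency(x, pos):
    """The two args are the value and tick position"""
    if x >= 1000000:
        return '${:1.1f}M'.format(x * 1e-6)
    return '${:1.0f}K'.format(x * 1e-3)

# values below one million are shown in thousands, above in millions
print(currency(115000, 0))    # -> $115K
print(currency(2500000, 0))   # -> $2.5M
```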
Fortunately, we also have the ability to add multiple plots on a figure as well as save the entire figure using various options. If we decided that we wanted to put two plots on the same figure, we should have a basic understanding of how to do it. First, create the figure, then the axes, then plot it all together. We can accomplish this using plt.subplots():

    fig, (ax0, ax1) = plt.subplots(nrows=1, ncols=2, sharey=True, figsize=(7, 4))

In this example, I'm using nrows and ncols to specify the size because this is very clear to the new user. In sample code you will frequently just see variables like 1,2. I think using the named parameters is a little easier to interpret later on when you're looking at your code.

I am also using sharey=True so that the y-axis will share the same labels. This example is also kind of nifty because the various axes get unpacked to ax0 and ax1. Now that we have these axes, you can plot them like the examples above but put one plot on ax0 and the other on ax1.

    # Get the figure and the axes
    fig, (ax0, ax1) = plt.subplots(nrows=1, ncols=2, sharey=True, figsize=(7, 4))
    top_10.plot(kind='barh', y="Sales", x="Name", ax=ax0)
    ax0.set_xlim([-10000, 140000])
    ax0.set(title='Revenue', xlabel='Total Revenue', ylabel='Customers')

    # Plot the average as a vertical line
    avg = top_10['Sales'].mean()
    ax0.axvline(x=avg, color='b', label='Average', linestyle='--', linewidth=1)

    # Repeat for the unit plot
    top_10.plot(kind='barh', y="Purchases", x="Name", ax=ax1)
    avg = top_10['Purchases'].mean()
    ax1.set(title='Units', xlabel='Total Units', ylabel='')
    ax1.axvline(x=avg, color='b', label='Average', linestyle='--', linewidth=1)

    # Title the figure
    fig.suptitle('2014 Sales Analysis', fontsize=14, fontweight='bold');

    # Hide the legends
    ax1.legend().set_visible(False)
    ax0.legend().set_visible(False)

When writing code in a Jupyter notebook you can take advantage of the %matplotlib inline or %matplotlib notebook directives to render figures inline.
More often, however, you probably want to save your images to disk. Matplotlib supports many different formats for saving files. You can use fig.canvas.get_supported_filetypes() to see what your system supports. Since we have the fig object, we can save the figure using multiple options:

    fig.savefig('sales.png', transparent=False, dpi=80, bbox_inches="tight")

This version saves the plot as a png with opaque background. I have also specified the dpi and bbox_inches="tight" in order to minimize excess white space.
http://www.scikit-yb.org/en/latest/matplotlib.html
CC-MAIN-2018-39
refinedweb
1,708
59.4
How much will it cost, and how long does it take by taxi? ..thanks

If I'm not mistaken, it's RM79.00 per way by Airport Taxi, and it takes about 1 hour. For the shuttle van, it's only RM50.00 for a return transfer both ways.

How do you arrange the shuttle van? Do they have a counter at the airport?

The best option... you have to book in advance so that the driver will pick you up at the terminal. Happy Travels

Overland-Transfer: on our last trip to Kuala Besut, we booked before and got a good price, RM60.00 one way.

Hi daniella, which taxi service did you use to the Kuala Besut jetty? Do you have any contact no.? :)

Hi, yes, I would like to know how to book? Any contact? Thanks.

Are the charges per person or per taxi?

By airport taxi, 79 MYR per taxi, and by shuttle van, 120 MYR per way. If you are going to the island, perhaps you can check with the resort/agent as they can arrange a shuttle van transfer. The cost can range from RM60-90/pax for a return (2 ways), minimum 2 pax. It's an hour's journey. That's the info I gathered, as I'm planning a trip to Perhentian Island next month. Hope it helps. :)

Hi, Overland Transfer, I would like to know how to book? Any contact? We go in a group of 12 persons
https://www.tripadvisor.com/ShowTopic-g298285-i9667-k5191105-Kota_Bharu_airport_to_Kuala_Besut_Jetty-Kota_Bharu_Kelantan.html
CC-MAIN-2016-40
refinedweb
235
86.6
Hi guys! I have a problem here. I have a domain on a host just to send e-mails. Now I have to publish a website (asp.net-mvc-3) on the same domain, but on a different host. Question: how can I have the same domain () on 2 different hosts?

If all the first host is doing is email sending, then pointing www and domain.com to the second host is no problem. If the first host is also receiving email, then this is what you use MX records for. The MX record will take care of routing the email while you point the main A record to the web server. If the first host also has webservices or something for checking your email, then I'd probably use a sub-domain for that, such as mail.example.com.

If you have different webservices on each server and want both under the namespace, then you probably want to set up a reverse proxy on one of the hosts. It's also no problem to create an A record with both IP addresses; however, this is probably not what you want for this situation. You would do that for simple load balancing.

Edit your DNS zone to name your mail host mail.example.com and your web host web.example.com (or something similar).
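As a concrete sketch, the split the answer describes might look like this in a BIND-style zone file. The IP addresses and host names here are illustrative placeholders, not values from the question:

```
$ORIGIN example.com.
@      IN  A   203.0.113.10      ; web server (the second host)
www    IN  A   203.0.113.10
@      IN  MX  10 mail.example.com.
mail   IN  A   198.51.100.20     ; mail server (the first host)
```

Web traffic follows the A records to the second host, while the MX record keeps routing mail to the first.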
http://serverfault.com/questions/318738/1-domain-2-ips?answertab=active
CC-MAIN-2014-52
refinedweb
241
76.62
Hi everyone! We currently have a C++ plugin that we are trying to port to CS5 using ActionScript and CS Extension Builder. In the plugin, I intercepted the export of the PDF and, by using the IID_IMETADATACCESS interface on the document, I could insert a new XMP namespace with several values needed inside the metadata report in the generated PDF. We want to do the same configuration in the new extension. Is there a way to create a new namespace in the metadata packet and insert my list of needed values?

This question might get more traction in the InDesign scripting forums. In general, if you can do it with ExtendScript, you should be able to do it with ActionScript. Also, you might want to look into the InDesign Host Adapters, which allow you to get the XMP from a document; here's a recent question on the topic:. Does that help any?
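I can't speak to the Extension Builder API specifics, but the underlying operation, adding a custom namespace with some values to an XMP (XML/RDF) packet, can be sketched generically with Python's standard library. The myplugin namespace URI and the jobID property below are made up for illustration:

```python
import xml.etree.ElementTree as ET

RDF_NS = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"

# Register prefixes so the output serializes readably.
ET.register_namespace("x", "adobe:ns:meta/")
ET.register_namespace("rdf", RDF_NS)

# A minimal XMP-like packet; a real one carries more boilerplate.
packet = ET.fromstring(
    '<x:xmpmeta xmlns:x="adobe:ns:meta/">'
    '<rdf:RDF xmlns:rdf="%s"><rdf:Description/></rdf:RDF>'
    '</x:xmpmeta>' % RDF_NS
)

# A custom namespace (hypothetical URI) and a property under it.
MY_NS = "http://example.com/ns/myplugin/1.0/"
ET.register_namespace("myplugin", MY_NS)

desc = packet.find(".//{%s}Description" % RDF_NS)
job = ET.SubElement(desc, "{%s}jobID" % MY_NS)
job.text = "12345"

xmp = ET.tostring(packet, encoding="unicode")
print(xmp)
```

The same shape (a Description element gaining namespaced children) is what an XMP library would produce; only the mechanism for handing it back to InDesign differs.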
https://forums.adobe.com/thread/921262
CC-MAIN-2017-30
refinedweb
153
60.65
Learning Number Theory and Haskell: More QuickCheck and Divisors

This is part two of my series Learning Number Theory and Haskell. Last time, we implemented a division algorithm according to an exact specification from Jones' Elementary Number Theory. We used the QuickCheck testing library to make ourselves fairly confident that the code was actually correct (in other words, behaved according to certain expectations).

More QuickCheck

Since QuickCheck may be feeling pretty new right now, let's play around with it for a bit. Exercises 1.1 and 1.2 of the Jones text give some interesting theorems about the remainders of perfect squares. We will verify these theorems with a hundred or so random test cases each. (Note that I've now stopped using the jonesQuot and jonesRem functions defined earlier. It was nice to be able to reproduce that behavior, but it's nicer to become skilled at using the standard library functions instead of inventing our own replacements. We'll just be careful about negative numbers from here on.)

Prelude> :qc \n -> (n^2 `rem` 4) `elem` [0, 1]
+++ OK, passed 100 tests.
Prelude> :qc \n -> (n^2 `rem` 3) `elem` [0, 1]
+++ OK, passed 100 tests.
Prelude> :qc \n -> (n^2 `rem` 5) `elem` [0, 1, 4]
+++ OK, passed 100 tests.
Prelude> :qc \n -> (n^2 `rem` 6) `elem` [0, 1, 3, 4]
+++ OK, passed 100 tests.

Yep, they all check out. Of course, you should still write proofs because, as you're probably quite familiar by now, QuickCheck is suitable for contradicting things or providing evidence in favor, but not for proving things.

Generalizing the Proof Method

I was interested in this concept, so I extended it to arbitrary divisors with a simple Haskell program that generalizes the method of the proofs in the text. If you're following along to learn Haskell, this makes use of two shorthands for creating lists. The innermost list [0 .. d - 1] is an arithmetic sequence, and hopefully it's fairly obvious what that does.
The outer list is built using a list comprehension. The vertical bar should still be read as "such that", and in this context, <- is read as "element of". So the outermost list is the list of all (n^2) `rem` d such that n is an element of [0 .. d - 1]. These techniques make a lot of mathematical code in Haskell very concise, and keep it closer to math syntax. Finally, nub just removes the duplicates from a list. You can :edit NumTheory.hs again, and add the following:

import Data.List (nub)  -- Add this at the top of the file, by the other import

squareRems d = nub $ [(n^2) `rem` d | n <- [0 .. d - 1] ]

We can check this, but we'll need to be careful about negative values of the divisor since I used rem instead of jonesRem.

Prelude> :l NumTheory
[1 of 1] Compiling NumTheory ( NumTheory.hs, interpreted )
Ok, modules loaded: NumTheory.
*NumTheory> :qc \n d -> (d > 0) ==> ((n^2) `rem` d) `elem` squareRems d
+++ OK, passed 100 tests.

This algorithm can be supported by a proof that is not given in the text. The theorem is that the remainder of a^2 when divided by d is equal to the remainder of some k^2, where 0 <= k < d. This is proven by writing a as qd + r by the division algorithm. Then a^2 = q^2 d^2 + 2qdr + r^2. In turn, r^2 can be written as pd + s by the division algorithm. Then a^2 = q^2 d^2 + 2qdr + pd + s, which in turn is equal to d(q^2 d + 2qr + p) + s. Thus, a^2 gives a remainder of s, but so does r^2, and we know that 0 <= r < d. It should, therefore, be no surprise that the QuickCheck test succeeds.

Divisors

The next set of theorems from the text deal with divisors. If we'd like to verify these theorems using QuickCheck, we need a way to decide if one number divides another one or not. Following Jones, we'll say (b | a), read as "b divides a," iff there is some integer q such that a = qb. Recall that for any pair of numbers a and b, with b not equal to 0, there is a unique pair of integers (q, r) such that a = qb + r and 0 <= r < b.
In the special case that b is not zero, it's clear that b | a iff r = 0 for the unique values of q and r given by the division algorithm.

b `divides` a = a `rem` b == 0

This leaves out the possibility that b = 0. The function above will fail if that occurs, because rem a b is undefined when b = 0. So we go back to the definition, which is that b divides a iff there is a q so that a = qb. If b = 0, that equation becomes a = 0b, which is only true if a = 0. Adding that as a special case, the function looks like this:

0 `divides` a = a == 0
b `divides` a = a `rem` b == 0

(This is a definition by pattern matching, which is one of the nicest syntax aspects of the Haskell language. The order is significant here, as the second version of the function will match all possible parameters that aren't matched beforehand.)

I've convinced myself that the function is correct, but it's time for some good old QuickCheck. The text gives a few corner cases explicitly at the top of page 4, so we can test them.

*NumTheory> :qc \n -> n `divides` 0
+++ OK, passed 100 tests.
*NumTheory> :qc \n -> 1 `divides` n
+++ OK, passed 100 tests.
*NumTheory> :qc \n -> n `divides` n
+++ OK, passed 100 tests.

Excellent! Now we'd like to test the general case so we can be really sure that divides is correct... but how? All we have is that b divides a iff for some q, a = qb. We need to know something more about q than it merely being an integer to make this work. A convenient property is given in exercise 1.3 (d) from the Jones text. If a satisfactory q exists, it must divide a itself, so it is between -a and a. The case not covered by this reasoning is when a = 0, but then q = 0 will definitely do the trick! Let's write a test.

*NumTheory> :qc \a b -> (b `divides` a) == any (\q -> a == q * b) [(- abs a) .. abs a]
+++ OK, passed 100 tests.

Here, I build a list of all possible values for q, and then use the built-in any function to see if any of them work.
Notice that the first parameter to any is another function, which I've written using the anonymous lambda syntax. This is far too slow to be used for an implementation, but it works fine as a test.

Testing a Property of Divisors

Let us now make use of our new function. Exercise 1.4 of the text asks whether it is the case that if a | b and c | d, then (a + c) | (b + d). If you're following along with me in the text, which I hope you are, you should convince yourself of the answer ahead of time by more mundane means. QuickCheck easily gives the answer. (Here's a comprehension quiz: the mere fact that QuickCheck can provide a definite answer should tell you what the answer is. Why?)

*NumTheory> :qc \a b c d -> ((a `divides` b) && (c `divides` d)) ==> (a + c) `divides` (b + d)
*** Failed! Falsifiable (after 5 tests and 7 shrinks):
1
1
1
0

So not only do we get an answer (no, the conjecture is false!), but we get a counter-example. If a = 1, b = 1, c = 1, and d = 0, then it is true that a | b because b = (1)a, and c | d because d = (0)c, but it is not true that (a + c) | (b + d). a + c is 2, and b + d is 1, and clearly there is no integer q such that 1 = 2q.
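All of the Haskell above translates readily to other languages. Here is an illustrative Python sketch (my own translation, not from the post) of squareRems, divides, and an exhaustive search for an exercise 1.4 counterexample; the function names are mine:

```python
def square_rems(d):
    # nub [ (n^2) `rem` d | n <- [0 .. d - 1] ], as a sorted list.
    return sorted({(n * n) % d for n in range(d)})

def divides(b, a):
    # True iff b | a. Handle b == 0 as a special case, mirroring
    # the Haskell pattern match; otherwise b | a iff a mod b == 0.
    return a == 0 if b == 0 else a % b == 0

def find_counterexample(limit=3):
    # Search small (a, b, c, d) with a|b and c|d
    # but where (a + c) does not divide (b + d).
    rng = range(-limit, limit + 1)
    for a in rng:
        for b in rng:
            if not divides(a, b):
                continue
            for c in rng:
                for d in rng:
                    if divides(c, d) and not divides(a + c, b + d):
                        return (a, b, c, d)
    return None

print(square_rems(6))                 # [0, 1, 3, 4]
print(divides(2, 10), divides(0, 5))  # True False
print(find_counterexample())          # some tuple refuting the conjecture
```

Like the Haskell list-of-q check, the search is brute force: fine as a test, far too slow as an implementation.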
https://cdsmith.wordpress.com/2007/06/03/learning-number-theory-and-haskell-more-quickcheck-and-divisors/?like=1&source=post_flair&_wpnonce=2221fbe573
CC-MAIN-2016-26
refinedweb
1,351
80.41
Recap (1:14) with Jeremy McLain

Let's review what we've learned so far about namespaces, classes, and methods.

…cleaned up our code a little and added a namespace, we'll get back to writing our program. But first, let's do a quick recap of what we've learned. We've used two different methods from the Console class. If we were to look at the code for the Console class that comes with the .NET framework, we'd see something like this. So far, we've learned about namespaces, classes, methods, parameters, return values, and strings. We have the System namespace, then we have the Console class; here we have two methods. The Write method takes our string as a parameter and prints it. Remember, printMe is the name of the parameter, and the type of the parameter is string. This method only takes a single parameter. If it took more than one parameter, we'd see their types and names listed here between the parentheses, separated by commas. It doesn't return anything; remember, that's what the word void represents. The ReadLine method doesn't take any parameters, but it returns the string that the user entered. Of course, these comments here are just placeholders for the actual code that would be in these methods. We don't need to know about that code. All we need to know is that the Write method prints to the console and the ReadLine method reads from it.
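The two method shapes described in this recap, one that takes a string and returns nothing, and one that takes nothing from the caller and returns a string, can be sketched outside of C# as well. This is an illustrative Python analogue, not the video's code; the source parameter is added only so the example is self-contained:

```python
def write(print_me):
    # Like Console.Write(string printMe): one string parameter,
    # no return value (Python's None plays the role of void).
    print(print_me, end="")

def read_line(source):
    # Like Console.ReadLine(): returns the string that was entered.
    # 'source' stands in for the console's input stream here.
    return next(source)

fake_console = iter(["Jeremy"])
write("What is your name? ")
name = read_line(fake_console)
print(name)  # Jeremy
```

The signatures carry the same information as the C# ones: what goes in (the parameter list) and what comes out (the return value).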
https://teamtreehouse.com/library/recap-4
CC-MAIN-2018-17
refinedweb
321
82.44
NFS futures at LOSUG

By user12625760 on Apr 19, 2007

We were privileged to have the legendary Calum Mackay talk at the London Open Solaris User Group last night on the topic of NFS futures, including everything that is in the upcoming NFS v4.1 specifications:

Parallel NFS, aka pNFS
Directory delegations
Sessions

He also covered the rest of what is going on with the NFS v4.0 work, in particular the namespace work that he has been doing, which will provide mirror mount support and referrals.

Mirror mounts will change the way a client behaves when it encounters a directory that is a mount point for another file system on the server. For example, given two file systems:

/export/home/cjg
/export/home/cjg/Documents

Currently, if you mount /export/home/cjg from a server using NFS v2 or v3 on an NFS client and look in the Documents directory, you should see an empty directory, which, if you then write into it, can cause loads of confusion and potentially more serious consequences. However, with NFSv4 and mirror mounts you now see something different: the client automatically mounts the sub file systems without recourse to the automounter. This is kind of cool, as the layout I describe above is exactly what I want for my home directory. That way, when GNOME or Firefox or Thunderbird goes pop and corrupts its startup file, I can roll back to the last snapshot without it messing up my data that is in Documents.

Referrals have the potential to be as useful, and I suspect also the potential to be as dangerous, as symbolic links. They allow you to move a file system onto another server and yet the client application can continue to access the original path. The client kernel gets sent a referral and mounts the file system from the new location.

All in all, an excellent evening.

Tags: opensolaris nfs zfs losug
https://blogs.oracle.com/chrisg/tags/losug
CC-MAIN-2015-27
refinedweb
324
59.94
POE::Component::Client::opentick::Constants - Constants for the opentick POE Component.

use POE::Component::Client::opentick::Constants;

This module contains all of the constants. It exports the following methods into the using package's namespace:

Return the value of the named constant.

Return the command name from the constant command number.

Return a list of all symbolic <CommandType> names.

Return one of the default values by name.

Return the named pack() template.

Return a hashref of the command IDs that the supplied $cmd_number can cancel, mapped like so:

{ $cmd_number => $TRUE, $cmd_number2 => $TRUE, ... }

It's a hashref for O(1) lookups instead of O(n) list grepping.

Return the canceller command ID of the appropriate cancel request for the supplied command ID.

Return a $string representing the datatype for the supplied constant.

Return the $count of expected response packets to the specified command. Possible values and their meanings are:

undef : Unknown or unlisted number of responses.
0 : No response is generated (OT_HEARTBEAT).
1 : Only one reply packet is generated.
2 : A finite number of response packets are generated.
3 : The stream is continuous until told to shut up.

Return TRUE if the value is a valid CommandStatus.

Return TRUE if the value is a valid MessageType.

Return the actual POE $eventname for a symbolic event name constant.

The reverse of the above.

Return the POE event to issue for a particular $cmd_number response, e.g. OTEventByCommand( OTConstants('OT_LOGIN') ) returns 'ot_on_login'

Return a list of all actual OTEvent names ( values( %$OTEvents ) ).

Return the replacement $cmd_id, if the supplied $cmd_id refers to an opentick-deprecated command. Only requestTickStream and requestOptionChain are deprecated right now.

Return TRUE if the value specifies End-Of-Data for <DataType>.

Return the $command_number for the specified PUBLIC $api_name.

Return the description of the supplied Indicator code from an EQUITY quote, or undef if not found.
Return the description of the supplied Indicator code from a TRADE quote, or undef if not found. Does the reverse of the above. Return a list of field numbers that are actually supposed to be 64-bit integers for this $cmd_id. This is to simulate 64-bit ints on a 32-bit perl. Returns the empty list if we're compiled with 64-bit ints, or the $cmd_id doesn't require any 64-bit simulation. Basically, it's used internally and useless for anything else. Return TRUE if official opentick libraries were found in @INC. This module attempts to include the official 'opentick' perl library paths from @INC, to retrieve constant values, and carps the following warning if it is not present: Official opentick lib not found; using built-in constants. It is better to use the official opentick library constant values, if you can. I am sure they will strive to be backward-compatible, but I cannot guarantee the values contained herein will always work in the future. They can be downloaded from: Install at least opentick::OTConstants to a path in your @INC, and this module will find them. (Read perldoc -q "include path" if you are unsure how to modify your @INC path.).
http://search.cpan.org/~infidel/POE-Component-Client-opentick-0.21/lib/POE/Component/Client/opentick/Constants.pm
CC-MAIN-2016-26
refinedweb
510
59.5
We will experiment with running a Brian tutorial online. The first tutorial of this kind will take place on **Friday, August 7th 2020** from **2pm-6pm BST** (UTC+1, see [here for other timezones]()). Free (but mandatory) registration [here]().

We will run the tutorial as a [Zoom]() meeting – registering with the link will give you the URL (please don't share so we can avoid zoombombing). We will record the meeting and if everything goes reasonably well, we will upload the videos later.

If you participate, it would be really helpful if you could download and install Brian before the tutorial so that you can work along with it as we go. Instructions are:

1. Download and install the [Anaconda Python 3 distribution]()
2. Open a command prompt and run the following lines:

       conda create -n brian_tutorial -c conda-forge python=3 brian2 matplotlib notebook nb_conda_kernels
       conda activate brian_tutorial
       pip install brian2tools

3. You can now verify this is working by starting a Jupyter notebook server with:

       jupyter notebook

4. Your browser should open with the Jupyter notebooks interface. Now create a new notebook and put the following code in an empty cell:

       from brian2 import *

5. Run that cell by pressing Ctrl+Enter. If that works without any errors (you might see a warning) then you're good to go.
6. If that doesn't work or you want to use a different system than Anaconda, take a look at our [detailed installation instructions]().

If you have trouble installing, don't worry. You can use the [Brian installation on Binder]() or [Google Colab]() instead. For Colab, just make the first cell as follows:

    !pip install brian2
    !pip install brian2tools

Looking forward to seeing you all on Friday!
https://briansimulator.org/posts/2020/brian-online-tutorial/index.md
CC-MAIN-2020-40
refinedweb
285
65.93
These are chat archives for SmingHub/Sming

/opt/esp-open-sdk-1.4.0/esptool/esptool.py -p /dev/tty.SLAB_USBtoUART -b 961000 write_flash -ff 40m -fm dio -fs 4m 0x00000 out/firmware/0x00000.bin 0x09000 out/firmware/0x09000.bin 0x38000 out/firmware/spiff_rom.bin

#include <user_config.h>
#include <SmingCore/SmingCore.h>

#define LED_PIN 9

Timer procTimer;
bool state = true;

void blink()
{
    digitalWrite(LED_PIN, state);
    state = !state;
    debugf("blink!");
}

void init()
{
    pinMode(LED_PIN, OUTPUT);
    procTimer.initializeMs(1000, blink).start();
}

I would change the stuff about installing in /opt. Install prebuilt ESP Open SDK. Get Sming Core: because El Capitan does NOT allow you to do this:

cd /opt/
sudo git clone

I would suggest installing to user-friendly places.

I would also add a picture in Eclipse of the global params (environment) for the Configure Environment Variables section (sometimes this step does not work).

After "Or build it yourself" I would add a maniacal laugh (just kidding, but it's close to impossible, so thanks for building it for us) lol

cd, and then sudo git clone?

brew install sming

void HandleServoMovementComplete() {
    //whatever
}
https://gitter.im/SmingHub/Sming/archives/2015/11/04?at=5639c5650800da954de6a76b
CC-MAIN-2019-43
refinedweb
180
57.77
XML and MATLAB: Navigating a Tree

Posted by Michael Katz,

This week I'm posting the third part in my series on using XML. Since I've had a request to cover this topic, I've moved it up in the schedule. We'll be back to the new MATLAB R2010b features next week.

Last time in my XML in MATLAB series I explained the steps needed to create an XML DOM structure and build up an XML tree. This week I answer the question: now that I have a tree, how can I extract data from it?

I'll continue to use the AddressBook example from the last post. Remember, you can create a new tree as shown last time, or read one into MATLAB using the xmlread function. For your reference, here are the other parts in the series:

- Part 1: Using XML in MATLAB.
- Part 2: Simple XML Node Creation.

There are at least two ways to navigate the tree in MATLAB. Both of the ways I describe here once again take advantage of the Java environment that runs with MATLAB. The first way makes use of the structure of the tree and the relationships of the nodes; the second uses the XPath language to precisely pick out a node. Once again, here is the example tree:

<?xml version="1.0" encoding="utf-8"?>
<AddressBook>
   <Entry>
      <Name>Friendly J. Mathworker</Name>
      <PhoneNumber>(508) 647-7000</PhoneNumber>
      <Address hasZip="no" type="work">3 Apple Hill Dr, Natick MA</Address>
   </Entry>
</AddressBook>

Let's say I want to find Friendly's phone number. To do this I'm going to start at the root node, "AddressBook." From there I will walk down the tree to AddressBook/Entry/PhoneNumber and get the text of the PhoneNumber node.
% Get the "AddressBook" node addressBookNode = docNode.getDocumentElement; % Get all the "Entry" nodes entries = addressBookNode.getChildNodes; % Get the first "Entry"'s children % Remember that java arrays are zero-based friendlyInfo = entries.item(0).getChildNodes; % Iterate over the nodes to find the "PhoneNumber" % once there are no more siblinings, "node" will be empty node = friendlyInfo.getFirstChild; while ~isempty(node) if strcmpi(node.getNodeName, 'PhoneNumber') break; else node = node.getNextSibling; end end phoneNumber = node.getTextContent phoneNumber = (508) 647-7000 The getChildNodes() method returns a list of nodes. There are several ways to navigate the returned node list. In the above example I used getFirstChild() which returns the first child (in this case, the Name node). Then using the getNextSibling() method, I can walk through all the other child nodes to find the one I’m looking for, in this case it’s PhoneNumber. I used the getNodeName() method to get the string value of the node in order to compare it with “PhoneNumber.” If you’re looking at the methods of a node, the getNodeName() method is redundant with the getTagName() method. Once I have the desired node, I used the getTextContent() method to get the text inside the <PhoneNumber></PhoneNumber> tags. Note that if there are multiple PhoneNumber child nodes of the Entry, this will stop after finding the first one. Another way to iterate over the children is to use item() method. Note that since this is a Java array, the array indices go from 0 to size-1. for i=0:friendlyInfo.getLength - 1 if strcmpi(friendlyInfo.item(i).getTagName, 'PhoneNumber') phoneNumber = friendlyInfo.item(i).getTextContent end end phoneNumber = (508) 647-7000 Instead of iterating to find the PhoneNumber node, we can use the ElementsByTagName method to find all the elements in the subtree that have a certain name. 
This then returns a list of matching nodes, which we can iterate, but since I know there's only one PhoneNumber I just grabbed the 0th element:

phoneNumber = friendlyInfo.getElementsByTagName('PhoneNumber').item(0).getTextContent

phoneNumber =

(508) 647-7000

Using XPath

XPath is a language for finding nodes in an XML document, and comes with Java. It works similarly to Java's regular expression engine, in that you create a string that represents the nodes you want to match, compile that to an internal representation, and then evaluate it on your document. It's an advanced step, and I can't think of anything in regular MATLAB that works the same way.

XPath expressions can start either from the top of the tree or anywhere within a document or document fragment. Node paths are represented like directory paths, in that ".." goes up a level, "." is the same level, and nodes are separated by forward slashes, "/". In our example, the first phone number of the first entry would be "AddressBook/Entry/PhoneNumber". "//" represents anywhere in the document, so "//PhoneNumber" would also match the same nodes.

To use XPath, you first need to create an XPath object from the XPath factory. In the example below, I've first imported the xpath package to make it easier to type out all these various Java classes. Once you have an XPath object, you can then compile and evaluate the expression.

% Get the xpath mechanism into the workspace
import javax.xml.xpath.*
factory = XPathFactory.newInstance;
xpath = factory.newXPath;

% Compile and evaluate the XPath expression
expression = xpath.compile('AddressBook/Entry/PhoneNumber');
phoneNumberNode = expression.evaluate(docNode, XPathConstants.NODE);
phoneNumber = phoneNumberNode.getTextContent

phoneNumber =

(508) 647-7000

In the above example, the evaluate() method of the compiled XPath expression takes the document node and an XPathConstant. This constant tells the expression what type of result to return.
In this case, we've asked for a NODE, and so we get back the matching node object. But if we change the constant to STRING, we get back the text of the matched node directly, as in the next example. You can also ask for NODESETs, NUMBERs, and BOOLEANs.

phoneNumber = expression.evaluate(docNode, XPathConstants.STRING)

phoneNumber =

(508) 647-7000

XPath is a complicated topic and probably worthy of its own follow-up post. The language is rich enough to precisely pick out any node, entity, attribute, or other piece of data from an XML document, starting anywhere in the tree. This has been a meatier post than most for me, so please ask lots of follow-up questions or leave comments.

Reference

- Xerces DOM API
- XPath Tutorial

26 Comments, Oldest to Newest

Hi Michael, Thanks for your information on getting text from XML files; however, I am unsure (being a beginner Matlab user), once I obtain the text, how do I then export the information into a text or Excel file? I look forward to hearing from you! Cassandra

Hi Michael, Sorry to be a pain, I just worked it out by using char(), which converts it from a java.lang.String to a character... This has been bugging me and I'm so thankful to have worked it through!! Thanks Cassandra

Hello, thanks for this great help. But I have a question concerning this topic. I have an XML file that contains nodes at the same level with the same name; just the attribute 'ID' is different. This would be, in your example, a second node 'Entry'. The nodes have attributes 'ID="1"' and 'ID="2"' respectively ( ....). Is there a way to navigate through the XML by these attributes?
Thanks, Thomas

Thomas, if you're trying to retrieve an element, say "<PhoneNumber>", with a specific value in its ID attribute, say "work", something like this might work:

There are a number of XML-related entries on the File Exchange, including one by Matthew Simoneau that shows an example using a NODESET in XPath, which you would probably use if you wanted to retrieve ALL of the nodes that had an "ID" attribute for processing:

Hope that helps. Rich

Let's say that I am trying to get a specific number out of the XML file for a specific variable to be used in another function. I have been able to extract it as text so that I can see the number, but not in a way that I can use it as an input in another function. Please let me know how I can accomplish this.

@MM, Do you just need to convert from string to a numeric type? Take a look at the STR2NUM function. E.g.

Nice article, but what if I want to use XPath with an XML file that includes a namespace designation?

Epic fail. xpath and Matlab – great for basic structures but completely fails when you introduce namespaces to the xml!

Hi Michael! I wonder what 'docNode' is and how do you get it? It is not obvious from your example! Thank you in advance, Regards /Nasser Hosseini

@Dave, I'm not sure what you mean. Can you provide an example? Namespaces should be addable to the nodes.

@Nasser, Thanks for pointing out that oversight. I explained it in the previous part about creating nodes, but for reference here it is for the example:

Hi, I am having problems with retrieving data from an XML file. I think I have followed the instructions here. When I try the following code, the coordinate node returns an empty element. Why is the node not identified? I'm a beginner, so I don't know if I've imported and set up the factory correctly, or if there is something else I don't understand.
Thanks, Charlie —; % compile and evaluate the XPath Expression expression = xpath.compile(‘Document/Placemark/LineString/coordinates’); coordinateNode = expression.evaluate(documentNode, XPathConstants.NODE) data = coordinateNode.getTextContent Here is also the xml description Canale centreline.kml Canale centreline 10.09984114068684,45.80687219300302,0 10.10001647099475,45.80695514950003,0 That xml code didn’t come out right. Let me try again Canale centreline.kml Canale centreline #m_ylw-pushpin 1 10.09984114068684,45.80687219300302,0 10.10001647099475,45.80695514950003,0 I couldn’t get to the bottom of how to use xpath unfortunately, so I read the xml data using the example shown in the xmlread documentation. This is not ideal, but at least I could get it to work. I couldn’t work out how to learn how the java language worked. Suggestions would be very welcome. @Charlie, It’s hard to say what is going on without understanding the XML. Unfortuanately with our blog software you’d have to replace <‘s and >’s with &lt; and &gt to get them to show up here. One thing that you might try is replacing the expression with something like Since you’re applying the expression on the document node. I’m not sure what the expectation is with your document, or what it’s structure is. Also, you might want to try using XPathConstants.NODESET instead of XPathConstants.NODE. To get return set of all the matching nodes. Thanks Michael. I tried your suggestions. Using // made no difference: I still got the output coordinateNode = [] ??? Attempt to reference field of non-structure array. I also used different nodes (for example //coordinates) but this gave no improvement. I tried your second suggestion of using NODESET, and got the following response: >> coordinateNode = expression.evaluate(docNode, XPathConstants.NODESET) coordinateNode = net.sf.saxon.dom.DOMNodeList@c3e000 >> data = coordinateNode.getTextContent ??? 
No appropriate method, property, or field getTextContent for class net.sf.saxon.dom.DOMNodeList. >> data = coordinateNode.getLength data = 0 This is essentially the same response: the node coordinates are not being picked up by the function. I am struggling to debug this because I don’t really know how the java classes and methods work. Presumably I need a single node to be able to use the getTextContent method. Here is the essential parts of the xml tree with your suggested replacement. Hopefully this will help you see what is going on here. To give some context, this data is produced by google earth when exporting a set of locations as a .kml file. <?xml version=”1.0″ encoding=”utf-8″?&gt <kml xmlns=”″ xmlns:atom=”” xmlns:gx=”″ xmlns:kml=”″&gt <Document&gt <name&gtCanale centreline.kml</name&gt <Placemark&gt <name&gtCanale centreline</name&gt <styleUrl&gt#m_ylw-pushpin</styleUrl&gt <LineString&gt <tessellate&gt1</tessellate&gt <coordinates&gt 10.09984114068684,45.80687219300302,0 10.10001647099475,45.80695514950003,0 10.10009815060466,45.80700378862793,0 10.10014860229519,45.80703578631482,0 10.10022811504785,45.80709873377793,0 10.10031010039278,45.80713198033737,0 10.10039209567001,45.80716523014203,0 10.10043060939518,45.80721155298124,0 10.10059165048882,45.80731526414375,0 10.10067084675329,45.80738580284769,0 10.10077437654077,45.80742018705158,0 10.10087407286891,45.80749903339635,0 10.10094496915137,45.80753930965439,0 10.10105627273914,45.8076036518462,0 10.10113576682277,45.80766657410192,0 10.10119471893764,45.80772115465624,0 10.10126359962793,45.80778359269938,0 10.10136426386506,45.80784743536302,0 10.1014112702849,45.80791630897777,0 10.10148214037166,45.80795656414318,0 10.10154185964934,45.80800378862803,0 10.10161282555237,45.80804408470648,0 10.10169305938987,45.8080996247984,0 10.10176464185466,45.8081325076318,0 10.10183422692097,45.80818754113409,0 10.10190302533531,45.80824989951307,0 10.1019724151388,45.80830484378664,0 
10.10207356450982,45.808361196677,0 10.10214093307033,45.80843827920521,0 10.10218914996636,45.80849228359421,0 10.10229299490651,45.80851911841853,0 10.10237499281356,45.8085523932947,0 10.10242320989466,45.80860639765308,0 10.10253629440558,45.80864846068262,0 10.10261559800962,45.80871125354677,0 10.10273070367636,45.80873117766292,0 10.10279152902868,45.80876351326417,0 10.10288478779878,45.80878987758641,0 10.1029859390649,45.80884622983381,0 10.10308911157812,45.80888044325135,0 10.10321278369886,45.80892297497288,0 10.10330469593155,45.80896409855487,0 10.10338602169821,45.80900475219077,0 10.1035831279132,45.80905795013989,0 10.10369554080581,45.8091073915239,0 10.10380930078051,45.80914207396828,0 10.1039243924581,45.80916198901878,0 10.10410953675727,45.80922946173238,0 10.1042655887779,45.80926599718044,0 10.10448377732788,45.80932009535048,0 10.10464581905896,45.80937169685852,0 10.10519473741717,45.80950956495738,0 10.10612207616,45.80976848013496,0 10.10668667174301,45.80991931642902,0 10.10735617652176,45.81011904635547,0 10.10755079422222,45.81019061814287,0 10.10780348654802,45.81027639187093,0 10.10803597017285,45.81037553650121,0 10.10821951016236,45.81046710032802,0 10.10838297427577,45.8105858071117,0 10.10848814678001,45.81069003563514,0 10.10861187605076,45.8108356424888,0 10.10876378535779,45.81102960623957,0 10.10879264406763,45.81105054198222,0 10.10880175134389,45.81107809248236,0 10.10883039778439,45.81110587300541,0 10.10885887582951,45.81114051062271,0 10.10886831282799,45.81115434247666,0 10.10890672473159,45.8111822327723,0 10.10893519903836,45.8112168662463,0 10.1089540707507,45.81124452682191,0 10.10897294201804,45.8112721866723,0 10.10899181299001,45.81129984645426,0 10.10901068384033,45.81132750526101,0 10.10903931941305,45.81135527398825,0 10.1090678193733,45.81138299496511,0 10.10908672014586,45.81140374559718,0 10.10910543183352,45.8114313447435,0 10.1091438470374,45.81145922269521,0 10.10916306040502,45.8114869991964,0 
</coordinates&gt </LineString&gt </Placemark&gt </Document&gt </kml&gt Ah, I get it now. This is because you have a xmlns in your docnode. The Java XPath implementation doesn’t know how to do this on its own, so you need to supply a NamespaceContext. Try the following. Save this code as KMLNamesspaceContext.java: Then in MATLAB, complile this to a class: Now use that with your expression: Hi Michael, Thanks again for the help. I still get the same error of coordinateNode = [] ??? Attempt to reference field of non-structure array. Error in ==> xpath_setup at 34 data = coordinateNode.getTextContent . I have implemented your suggestion as !/opt/sunjava-native/jdk/bin/javac KMLNamesspaceContext.java javaaddpath(pwd) nc = KMLNamesspaceContext; xpath.setNamespaceContext(nc); expression = xpath.compile(‘//kml:Documents’) coordinateNode=expression.evaluate(docNode, XPathConstants.NODE) data = coordinateNode.getTextContent I have a work around based on I use this example to write turn the xml file into a struct from which I can extract the data I want. This is not ideal and xpath looks much smoother, if I could get it to work. @Charlie, Sorry my example did not fully cover your need. You’ll probably need something like: But I’m not too sure. If you’re still having trouble, please contact Technical Support. They will be able to give you more assistance than I am able to do in the comments, here. Thanks Michael. Maybe this function was a bit of a leap for me at the moment. Could you suggest somewhere else that I could find documentation and examples of how to use the xpath functions? The pages are pretty much unintelligible to me. For example, how could I find out what arguments expression.evaluate() requires? There is no help in the matlab documentation I can see. @Charlie, I’ve just added a submission to the file exchange to hopefully provide an easy way to access nodes via XPath. 
It should support namespaces as well, though I’ve never tried it with Google Earth output I hope you find it useful. Let me know what you think! Hi Mike, Thanks for the article. Really helped as I could easily create a new xml file. However, navigating through an existing xml file, I was wondering if we could edit the text content of a node, or the node name itself for that matter. @Nikhil, You can edit the text content with the setTextContent method. To make minor edits, use getTextContent to copy the String, modify it, and then replace it back. As for changing the node name, the DOM API does not specify a way to do this. You’ll have to create a new element with the desired name and move the first node’s children to the new element. And replace the first node with the new one. Hey thanks a lot Mike! It worked out just well. I was initially editing with the tree structure, but XPath was so much smoother. Cheers. Hello, I have this: How I can get data from this with xpath??? Hello, I have this: <TestStep comp=”GELE” datatype=”Number” group=”[010] Seq_Short Test” limhi=”0.002″ limlo=”-0.002″ measid=”010_272″ measname=”PinCheck ST_O_OUT_1″ start=”88079.574242300005″ status=”Passed” stepid=”ID#:MOsSkDksp0KkG29Tfvr4MC” stepname=”[010_272] PinCheck ST_O_OUT_1″ steptype=”NumericLimitTest” time=”0.0034241″ unit=”ampere” value=”-0.0000037545442″/>’ How I can get data from this with xpath??? Thanks for the series Michael. I have trouble duplicating your code in this piece. I start with the file saved in the working folder, then: clear; clc xmlFileName = ‘phoneBook.xml’; docNode = xmlread(xmlFileName); addressBookNode = docNode.getDocumentElement; entries = addressBookNode.getChildNodes; friendlyInfo = entries.item(0).getChildNodes; node = friendlyInfo.getFirstChild; while ~isempty(node) if strcmpi(node.getNodeName, ‘PhoneNumber’) break; else node = node.getNextSibling; end end phoneNumber = node.getTextContent except for the first 3 lines, this is identical to your sample. 
matlab reports this error on line 7: ??? No appropriate method, property, or field getFirstChild for class org.apache.xerces.dom.CharacterDataImpl$1. if I run through line 5 and evaluate the first part of line 6, I get: >> friendlyInfo = entries.item(0) friendlyInfo = [#text: ] but evaluating all of line 6 produces: org.apache.xerces.dom.CharacterDataImpl$1@6231ed What is going on? Thanks, John
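For anyone landing on this thread from the pure-Java side, here is a minimal, self-contained sketch of the fix Michael describes: binding a prefix to the document's default namespace through a NamespaceContext before evaluating the XPath expression. Note the hedges: the blog software stripped the namespace URIs from the posted KML, so the `kml` URI below is the standard Google Earth/OGC one and may need to be replaced with whatever your own file declares, and the element names are a trimmed-down stand-in for the real document.

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.Iterator;
import javax.xml.XMLConstants;
import javax.xml.namespace.NamespaceContext;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;

public class KmlXPathDemo {
    // Minimal KML-like document with a default namespace, as in the thread.
    // The URI is an assumption; use the one declared in your own file.
    static final String KML =
        "<kml xmlns='http://www.opengis.net/kml/2.2'>"
        + "<Document><Placemark><LineString>"
        + "<coordinates>10.0998,45.8068,0 10.1000,45.8069,0</coordinates>"
        + "</LineString></Placemark></Document></kml>";

    public static String readCoordinates(String xml) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true); // crucial: without this, namespaces are ignored
        Document doc = dbf.newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));

        XPath xpath = XPathFactory.newInstance().newXPath();
        // Bind the "kml" prefix to the document's default namespace URI,
        // so that prefixed steps can match elements in that namespace.
        xpath.setNamespaceContext(new NamespaceContext() {
            public String getNamespaceURI(String prefix) {
                return "kml".equals(prefix)
                        ? "http://www.opengis.net/kml/2.2"
                        : XMLConstants.NULL_NS_URI;
            }
            // Reverse lookups are not needed for evaluate(), so left unimplemented.
            public String getPrefix(String uri) { return null; }
            public Iterator<String> getPrefixes(String uri) { return null; }
        });
        // Returns the text content of the first matching node.
        return xpath.evaluate("//kml:coordinates", doc).trim();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(readCoordinates(KML));
    }
}
```

The same idea is what the compiled KMLNamesspaceContext class above is doing for the MATLAB callers: once the prefix-to-URI binding exists, `//kml:coordinates` matches elements that an unprefixed `//coordinates` silently misses.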
http://blogs.mathworks.com/community//2010/11/01/xml-and-matlab-navigating-a-tree/
Run-cmd Execute a script file or run a Windows program. run [option] argument This set of statements will work properly with the run command: if (condition) { // True condition statements } else { // False condition statements } This set of statements will NOT work properly with the run command: if (condition) { // True condition statements } else { // False condition statements } The run.section() object method does not have line-by-line limitation and provides greater flexibility for script file execution. Syntax: run scriptfileName Run the script file scriptfileName. This is an outdated way to run scripts and it SHOULD NOT BE USED. To run script files or portions of script files, use the run object. syntax: run -au value [UID] Minimum Origin Version Required: 8.5.1 When not specifying UID, value can be: When specifying UID, change the recalculate mode only for the operation specified with UID. run -au 1; // Set recalculate mode to Auto run -au 0 779; // Set operation with UID 779 to None(delete it) but keep outputs To get the UID of an operation, keep the Script Window open, click on the green lock associated with this operation and select Show Info context menu. The returned message contains the UID for this operation. syntax: run -aub value [UID] Block or unblock recalculation for a operation specified with UID.. If UID is omitted, it will work on all operations. value = 0: unblock, value = 1: block. range aa=[book1]sheet1!col(2); //suppose recalculation on col(2) is blocked Uid=aa.getop(); run -aub 0 $(Uid); You can get the UID number of an operation by method getop(). syntax: run -auc [UID] Open dialog to change parameters for a operation specified with UID. run -auc 804; /// 804 is operation UID You can get the UID number by running list op/opi/opo command introduced in this page. If the active worksheet is a hierarchical sheet, UID can be omitted. 
Since the hierarchical sheet contains single operation only, running run -auc; directly will open that operation dialog in the active sheet. Syntax: run -auf n [m] Minimum Origin Version Required: 2015 SR1 where n can be 0,1,2 where m can be 0,1 run -auf 1 1; //It is the same as "page -uo" //Trigger recalculation for all the operations in the active Workbook . Note: syntax: run -cr syntax: run -cr filename Removes the specified file from the Temporary and User folders in Code Builder. If filename is specified, it must include the full path. If no filename is given, all files are removed from the Temporary and User folders. syntax: run -cra Clean Temporary, User and User[AutoLoad] folders, plus any existing [OriginCAuto] section in the user's Origin.ini file (A section named as [OriginCAuto] is added to Origin.ini when you add files to User[AutoLoad] in Code Builder). Makes no changes to the System folder. Syntax: run -e text or -ep text Both will execute the following text as if it were typed into the Windows Run dialog box (from the Windows Start menu). If a program is in the Windows system PATH then no path is needed. Since Explorer is the default shell for Windows, -ep is the same as -e. // Notepad is generally in Windows PATH so no path needed // Open Origin's INI file in Notepad. Script waits for Notepad to close. 
run -ep notepad %YOrigin.ini; ty -a Notepad closed.; // Open your User Files Folder run -e explorer %Y; // -ep would behave the same as -e ty -a My UFF is open in Explorer.; // Word is not usually in Windows PATH so path must be included string strFile$ = system.path.program$ + "TXTemplate.rtf"; run -ep C:\Program Files (x86)\Microsoft Office\Office12\winword.exe %(strFile$); ty Done with Word.; Syntax: run -lk "range://RangeName" or "http: //URL" or "https: //URL" or "help://HelpPage" or "help://quick/Keyword" or "notes://NotesWindowName" This command supports hyperlinking to open an internal window, an external link, or a help document page Range://: a range inside the project. http:// or https://: an external link. help://: link to the Origin Help document. help://quick/Keyword: open the quick help window with the option to specify keyword(s) for search (e.g. run -lk "help://quick/descriptive statistics";). notes://: link to the Notes window. See Link Notations in the LabTalk Reference section. Syntax: run -oc func arg This command will invoke an Origin C function in LabTalk. This is needed because of possible name conflict between X-Function, ogs file, LabTalk Functions etc. Also, there are some Origin C functions that were written to be callable only by the run -oc command, via the #pragma labtalk(2), to prevent the name polution problem for the LabTalk environment. All Origin C functions compiled with #pragma labtalk(2) will not be visible to LabTalk directly, you can use them only via run -oc Syntax: run -p au; Any script which follows waits for all pending operations in the Origin project to update to completion. Syntax: run -p aub; Any script which follows waits for all pending operations in the active book to update to completion. Syntax: run -p aubd; Any script which follows waits for all pending operations in the active book, plus any descendants, to update to completion. 
Minimum Origin Version Required: 9.1 SR2 Syntax: run -p aui UID The operation with the specified UID will be forced to update to completion before executing any script that follows. //Force the operation with UID 803 to recalculate run -p aui 803; //Force any pending operation to recalculate run -p au; Syntax: run -p auid UID; Any script which follows waits for operation UID, plus any descendants, to update to completion. Syntax: run -p auw; Any script which follows waits for all pending operations in the active sheet to update to completion. Syntax: run -p auwd; Any script which follows waits for all pending operations in the active sheet, plus any descendants, to update to completion. Minimum Origin Version Required: 2015 SR0 Syntax: run -py Python Statement(s) Run Python statement(s) (i.e. command lines). The Python Statements should be a string variable with the Python command lines to be executed. You can directly call the Python command: run -py print('Hello Origin'); // It runs the Python command print('Hello Origin') // It should output the string "Hello Origin" in the Script Window Also, you can pass the Python command line(s) to a string variable in LabTalk: // Define the Python command line(s) as a LabTalk string variable // %(CRLF) is the LabTalk keyword to indicate carriage return, i.e. start new line str$ = "a = 2 ** 8%(CRLF)print(a)"; // Execute Python command lines run -py %(str$); // ** is the Exponent operator in Python //It should return 256, i.e. 2 to the power 8 It is equivalent to the object method run.python("Python Statements", 0). (i.e. if the option is set to 0 or default) Minimum Origin Version Required: 2021 Syntax: run -pyb objectName; Run Python code entered on the Programming tab of the text object. Normally, the run -pyb command would be executed from a button object (second text object set to trigger on Button Up). 
run -pyb For example, make a text object then open its Properties dialog and on the Programming tab set Name = "abc" and enter this into the script box: print ("Hello World!"); Make a second text object (name unimportant), open Properties > Programming tab and set Script Run After to Button Up and enter this into the script box: run -pyb abc; Clicking the button should trigger code attached to the text object and print "Hello World" to the Script Window. Syntax: run -pye Python Expression Evaluate Python expression(s) . The Python Expression should be a string variable with the Python expressions to be evaluated. For example, run -pye 2 ** 5; // It evaluates the Python expression 2 ** 5, ** is the Exponent operator in Python // It outputs the evaluated value 32, which is 2 to the power 5 It is equivalent to the object method run.python("Python Expression", 1). (i.e. if the option is set to 1) Syntax: run -pyf Python File Path and Name [arg1 arg2 arg3 arg4 arg5] Run Python file, the Python File Path and Name should be a string variable with the .py file path and name to be executed. Note that if the .py file locates in the User Files Folder, it is not needed to specify the file path, if not, always need to specify the full file path. For example, suppose there is a test.py file under the User Files Folder of Origin, you can run the following LabTalk script: // Make sure script execution is set to LabTalk run -pyf "test.py"; // This will execute the script within the file test.py Sometimes you will need to specify the full file path for the .py file: // Define a string for the file path and name of the .py file string str$ = system.PATH.PROGRAM$ + "\Samples\Python\ListMember.py"; // Run the .py file run -pyf %(str$); // This .py file list all provided functions in PyOrigin module It is equivalent to the object method run.python("Python File", 2). (i.e. if the option is set to 2) Arguments can be added in the call. 
Suppose you have a Python file "test.py" in the User Files Folder. import sys if __name__ == '__main__': print(len(sys.argv)) print(sys.argv[1]) print(sys.argv[2]) Variables can be passed from Labtalk to Python file by: double var1 = 123.456; string var2$ = 'hello'; run -pyf test.py "$(var1)" "%(var2$)"; Syntax: run -pyp Python File Name The Python File Name should indicate a Python file which is attached to the Origin project, i.e. which has been added to the Project folder of Code Builder. The file extension (.py) can be included but is not required. As an example, you can open the SendDataToWks.opj file under <Origin EXE Folder>\Samples\Python\ path and run the script: run -pyp SendDataToWorksheet.py; // Alternatively, run the following is equivalent // run -pyp SendDataToWorksheet; Note: The script above is associated with the Run SendDataToWorksheet.py button in worksheet. Syntax: run -w or -wu // the following script print out the time it takes to run // formula of col(2) in Book1's active sheet sec; run -wu Book1 2; watch; Syntax: run -ws Execute the script in worksheet script panel. Syntax: run -xf xfname Execute the X-Function named xfname. Normally, you can execute a LabTalk callable X-Function directly, but since the current working directory might have an ogs file with the same name, the run -xf command allows for reliable way to call an X-Function. Example 1 The following script opens Notepad and loads the Origin.INI file. run -e notepad %Yorigin.ini; Notepad does not need a path because it is typically found in Windows default PATH. The %Y tells Notepad the path to your Origin.INI. Example 2 The next script goes to column 1, row 10 in the Book1 sheet1. run -LK "Range://[Book1]sheet1!col(1)[10]";
https://www.originlab.com/doc/LabTalk/ref/Run-cmd
a grep through allegro's include files showed me the use of the variable name 'aspect' in the following file : matrix.h. This makes allegro completely unusable by the aspectc++ weaver () since aspect is the keyword for defining aspects (as class is a keyword for defining classes in regular C++). this is a big bummer for my thesis-project You can try this: #define aspect _ugly_aspect #include <allegro.h> #undef aspect it's quite ugly, but it might work. -- "Do not meddle in the affairs of cats, for they are subtle and will pee on your computer." -- Bruce Graham Very clever of them creating an extension, which defines a word as new keyword likely being used in existing code I mean, why didn't they call it $aspect or something? Anyway, try this: [Edit: beaten by KittyCat] Or if there's no way to work around it like this, I wouldn't mind a patch against Allegro to change it, after all we had to change paramters named "y1" and "index" for similiar reasons (although libc is a much stronger reason to change things than aspect). --"Either help out or stop whining" - Evert Ever tried compiling RPG written in C using C++? 'class' (and 'new', too) are quite common there. ________[ My LD48 entry ] [ My SpeedHack`04 entry ] [ TEAM ALLEGRO ] [ My TINS'05 entry - Snake ] Thats why you always compile C code with a C compiler. -- were you, I'd just sever the matrix code from Allegro. Do you really use it at all? [My site] [Tetrominoes] Personally, I think we should put all the 3d + matrix code into an addon. But then, what about backwards compatibility and so on? Dropping backwards compatibility here would be a precedence for dropping it in other places, and so we'd end up trying to do Allegro 5, and history would repeat itself.. [Edit: That is, just removing it in his local copy might be a good idea..] But then, what about backwards compatibility and so on? Why not drop it into a 'semi addon', i.e. 
an addon included in the core Allegro distribution and installed by default but one that may be disabled by issueing the correct commandline to make? The only precedent you'd set would be for people being able to not build the library with the bits they don't want if those bits are easy to sever. Yes, making it modular. Definitely what we should do. In fact, it should already work.. simply don't include allegro.h, but directly the things you need, like allegro/base.h, allegro/graphics.h, and not allegro/matrix.h and allegro/3d.h and so on. Another way of doing it would be to predefine whatever #defines the headers define(e.g ALLEGRO_3D_H) one don't want to include prior to #including allegro The attached patch is enough, isn't it? I would akkept it. Ever tried compiling RPG written in C using C++? 'class' (and 'new', too) are quite common there. Rotfl While I think it's nice to have a workaround for this, I do think that it's not really Allegro's fault for not working on a platform it doesn't support at the moment. Just dump it in the compatibility layer for 4.3 and be done with it. the #define aspect _blah_aspect-trick indeed works! tnx for that...I had allready changed the sources myself. the only files needing change are :matrix.hmath3d.cin functions :void get_camera_matrix(...);void get_camera_matrix_f(...);get_scaling_matrix_f(...); Now there's still some other stuff necessary to make ac++ work with my allegro-program, but none of it is allegro's "fault"... Tnx for the help...Hans EDIT: In the meantime I have an allegro-program working together with aspectc++, but had some problems also with _asm used in \ALLEGRO\include\allegro\platform\al386vc.h. I've been told the _asm keyword is obsolote and MSVC only supports it for compatibility with previous versions (I used the free VC2003Toolkit (VC 7.0) command line compiler). supported is : asm, _asm or _asm so i heard. 
this can ofcourse be easily solved by adding #define _asm __asm Does anyone know which versions of MSVC support which of the _asm, _asm, and _asm variants? MSVC6 supports _asm and __asm. -----sig:“Programs should be written for people to read, and only incidentally for machines to execute.” - Structure and Interpretation of Computer Programs"Political Correctness is fascism disguised as manners" --George Carlin
https://www.allegro.cc/forums/thread/483246/483246
Ok so i guess what your saying is in order to put this; //Perform Conversions cTemp = fTemp * CS; mDist = fDist * ME; kWeight = pWeight * KL; K... Type: Posts; User: XxDarkstarxX Ok so i guess what your saying is in order to put this; //Perform Conversions cTemp = fTemp * CS; mDist = fDist * ME; kWeight = pWeight * KL; K... I understand that..I meant does it go before the } while, or would it go after the } while. I appreciate your help greatly I just am stupid at this as it is so new to me. import java.util.*; public class InputValidation { public static void main(String[] args) { int value=-1, fTemp, fDist, pWeight;//Declare variables double cTemp, mDist, kWeight, K, I,S;... import java.util.*; public class InputValidation { public static void main(String[] args) { Scanner inputScanner = new Scanner(System.in); do { System.out.println("Enter a Fahrenheit... The loop is for extra credit. So should I do something like the fahrenheit to celsius on one line and the fahrenheit to kelvin later on in the code? Or put the conversion somehow about the loop? That would be the validate input part of pseudo code. As for when it is executed, it does almost everything i need. If I enter 250 it states that it is out of range, if I enter abc it states that... I did, Im just trying to configure them in the correct place to get it all working. Do I add the other information in the do loop or do I add them around the loop? the A comment is the ending of the method. the B comment is the end of the main which I thought I had to put. But all the { } are correct. I just found a video on how to do exactly what I needed and... Ok let me try a different approach to this one step at a time. Could you tell me if my methods are correct, their separate. just the methods none of my conversion equations on here yet. public... I know i just wrote that as an example. 
My first method is //Integer.parse which at the end of the method or at least what i think is the end just before the return isValid. I have }//end... Does this read any better? Declare constants Declare variables get input method 1 method 2 perform conversion display report Sorry this is taking so long, Im a visual learner rather then... Ok I will try that. Now once I figure that out will my methods like this code be ok still or do I need to move them around more? import java.util.Scanner; public class Project2 { private... Its due tomorrow night so I got to figure this out. I understand that there has to be the same amount { as }. Which i just noticed im missing two } somewhere in this thing. Am I on the right track? Yeah anything I do doesnt not work for me. I still get an error somewhere. Honestly Ive got no clue, when I say that Im new to this I mean this is the second thing I ever tried coding.. Where or How am I still defining a method inside another method? I also still get the insert a } on the same semicolon after moving a } to the next line. S = pWeight * 0.0714285714; } this is the line with the error and its the semi colon. Tells me to insert } to complete the block but once i do, the same error is still there. As for being correct... Ok Im getting tired of this project and Im sure your getting tired of answering my questions but is this correct now yes or no? import java.util.Scanner; public class Project2 { private static... Yeah I am.. I'm gonna read up on methods and see if I can figure it. In eclipse to get the source codeeould i run it eoth yhe errors and in the problrms tab copy that?out. This is all a foregin... Alright hopefully my last question as im sure this is kind of annoying to you, but heres my code. Now the only errors I am getting now is in bold and bigger font. I put it in the pseudo code that you... Could you by any chance give me an example of doing this the correct way? It needs to print that message in the results. 
I need the methods to be called on which apparently i havent done right. My methods are after the display report, does that matter? I tried to compile... import java.util.Scanner; public class Project2 { private static final Object False = null; private static final int number = 0; private static final int CS = 0; private static final... I still am getting errors on everything I do. I honestly dont know. I thought after i put //Methods and then the codes.
http://www.javaprogrammingforums.com/search.php?s=8dca7b9865675fb149b4cb7f55484e05&searchid=1584091
The Extensible Style Language - XSLby Norman Walsh January 19, 1999 Styling XML Documents From the earliest days of the Web, we've been using essentially the same set of tags in our documents. Web pages written in HTML use HTML tags and the meaning of those tags is well understood: <H1> makes a heading, <IMG> loads a graphic, <OL> starts an ordered list, and so on. The number of tags has slowly grown, and there have been numerous browser-compatibility issues, but the basic tag set is still the same. There's a significant benefit to a fixed tag set with fixed semantics: portability. A Web page that uses the standard tags can be viewed by just about any browser, anywhere in the world. However, HTML is very confining; Web designers want more control over presentation and many processes would benefit from more descriptive tagging. Enter XML. With XML, we can use any tags we want. We can write documents using our own tag namesnames that are meaningful in the context of our subject matter and offer the possibility of far greater control over presentation. But this freedom comes at a price: XML tag names have no predefined semantics. An <H1> might just as legitimately identify a tall hedge as a first-level heading. Is <IMG> an image, or an imaginary number? Who knows? The style sheet knows.. (Visit to view the Working Draft for yourself.) By the time this article is published, a second Working Draft may be available. It doesn't seem likely that any of the topics covered here will change substantially between the first and second Working Drafts, but it's always possible. What Does a Style Sheet Do? In simplest terms, a style sheet contains instructions that tell a processor (such as a Web browser, print composition engine, or document reader) how to translate the logical structure of a source document into a presentational structure. Style sheets typically contain instructions like these: - Display hypertext links in blue. - Start chapters on a new, left-hand page. 
- Number figures sequentially throughout the document. - Speak emphasized text in a slightly louder voice. Many style-sheet languages augment the presentation of elements that have a built-in semantic meaning. For example, a Microsoft Word paragraph style can change the presentation of a paragraph, but even without the style, Word knows that the object in question is a paragraph. The challenge for XSL is slightly greater. Because there's no underlying semantic to augment for XML, XSL must specify how each element should be presented and what the element is. For this reason, XSL defines not only a language for expressing style sheets, but also a vocabulary of "formatting objects" that have the necessary base semantics. For the purpose of this article, we're going to consider a simple XML document, shown in Example 1: Example 1: A simple XML document. <?xml version='1.0'?> <doc><title>My Document</title> <para>This is a <em>short</em> document.</para> <para>It only exists to <em>demonstrate a <em>simple</em> XML document</em>.</para> <figure><title>My Figure</title> <graphic fileref="myfig.gif"/> </figure> </doc> This document contains only a few elements: - doc defines document element; - title defines titles; - para defines paragraphs; - em indicates emphasis; - figure and graphic define external graphics. How Does XSL Work? Before discussing XSL in more detail, it's necessary to consider the XSL processing model. An XSL processor begins with a style sheet and a "source tree." The source tree is the tree representation of the parsed XML source document. All XML documents can be represented as trees. Conceptually, the XSL processor begins at the root node in the source tree and processes it by finding the template in the style sheet that describes how that element should be displayed. Each node is then processed in turn until there are no more nodes left to be processed. 
(In fact, it's a little more complicated than this because each template can specify which nodes to process, so some nodes may be processed more than once and some may not be processed at all. We'll examine this later.) The product of all this processing is a "result tree." If the result tree is composed of XSL formatting objects, then it describes how to present the source document. It's a feature of XSL that the result tree doesn't have to be composed of XSL formatting objectsit can be composed of any elements. One common alternative to XSL formatting objects will be HTML element names. When HTML is used in the result tree, XSL will transform an XML source document into an XML document that looks very much like HTML. It's important to realize, however, that the result is XML, not HTML. In particular, empty elements will use the XML empty-element syntax, and it's impossible to produce documents that are not well-formed XML. What Does XSL Look Like? XSL style sheets are XML documents. A short XSL style sheet can be seen in Example 2. This style sheet transforms source documents like the XML document in Example 1 into HTML. A style sheet is contained within a style sheet element and contains template elements. (Style sheets can contain a small handful of elements in addition to the template, but most style sheets consist of mostly templates.) Example 2: A simple XSL style sheet that generates HTML from XML. <xsl:stylesheet xmlns: <xsl:template <HTML> <HEAD> <TITLE>A Document</TITLE> </HEAD> <BODY> <xsl:process-children/> </BODY> </HTML> </xsl:template> <xsl:template <H1> <xsl:process-children/> </H1> </xsl:template> <!-- this stylesheet handles only a subset of the sample document --> </xsl:stylesheet> Don't worry if this looks a little confusing at first. There's a lot going on. We'll revisit this style sheet in the "Understanding XSL" section. One thing that stands out in an XSL style sheet is the use of namespaces. 
(covered in two articles in this issue of XML.com), namespaces are what all the colon-delimited prefixes are about. In XSL, there can be no reserved element names, so it's necessary to use some other mechanism to distinguish between elements that have XSL semantics and other elements. This is the problem that namespaces were designed to solve. If you're not familiar with namespaces, here are some simple guidelines: The prefix is significant when comparing element names; therefore xsl:template and template are different. The prefix string is arbitrary. What's important is the association of a prefix string with a URI. That's the function of the "xmlns:" attribute on the stylesheet. The attribute xmlns:xsl="http://"associates the namespace prefix "xsl" with the URI that follows it: (" WD-xsl").If it were instead xmlns:xyzzy=" TR/WD-xsl"then the prefix xyzzy: would replace every instance of xsl: in the example, and the style sheet would be exactly the same. From the preceding points, it follows that xsl:template and xyz:template are different (unless the two namespace prefixes are associated with the same URI).
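One practical aside for readers trying the style sheets in this article today: the Working Draft syntax of Example 2 (xsl:process-children, no version attribute) predates the finalized XSLT 1.0 recommendation, so a modern processor will not accept it verbatim. The sketch below runs the same kind of XML-to-HTML-ish transformation through Java's built-in JAXP machinery, using the finalized xsl:apply-templates syntax in place of xsl:process-children; the single-template style sheet is a deliberately minimal stand-in, not the full Example 2.

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class XslDemo {
    // A tiny style sheet in the spirit of Example 2: it turns <title>
    // elements from the source tree into <H1> elements in the result tree.
    static final String XSL =
        "<xsl:stylesheet version='1.0' "
        + "xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
        + "<xsl:template match='title'>"
        + "<H1><xsl:apply-templates/></H1>"
        + "</xsl:template>"
        + "</xsl:stylesheet>";

    public static String transform(String xml) throws Exception {
        // Compile the style sheet, then apply it to the source document.
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(XSL)));
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(xml)), new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(transform("<doc><title>My Document</title></doc>"));
    }
}
```

Elements with no matching template fall through to XSLT's built-in rules, which simply recurse into children, which is the finalized counterpart of the draft's default processing described in the article.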
http://www.xml.com/pub/a/1999/01/walsh1.html
If you want to read a CSV file using the file adapter, file content conversion is the first option which strikes your mind. If you are comfortable with Java coding, here I present you with an alternate solution: I will show you how to use Java mapping to achieve the same. Below is the content of the CSV file which I want to read:

1,rahul,siemens,mumbai
2,consultant,12032005
1,viswanath,sisl,hyderabad
2,systemeng,23052005

'1' is used to identify the header-level record and '2' the item level. Now you have to write the Java mapping program which will read this data and create a target XML structure. (The element names Name, Company, Place, Desgn and Since below are taken from the original post; its output-writing code is garbled in this copy, so the root element name and the exact string concatenation are reconstructed.)

package TXTMapping;

import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.util.Map;

import com.sap.aii.mapping.api.StreamTransformation;

public class TMapping implements StreamTransformation {

    private Map map;

    public void setParameter(Map param) {
        map = param;
    }

    public void execute(InputStream in, OutputStream out) {
        try {
            out.write("<?xml version=\"1.0\" encoding=\"UTF-8\"?><MT_Employee>".getBytes());

            // Read the whole input into a char buffer.
            InputStreamReader reader = new InputStreamReader(in);
            StringBuilder content = new StringBuilder();
            char[] buffer = new char[4096];
            int n;
            while ((n = reader.read(buffer)) != -1) {
                content.append(buffer, 0, n);
            }

            // Each line is either a header record ('1') or an item record ('2').
            for (String line : content.toString().split("\r?\n")) {
                if (line.trim().length() == 0) {
                    continue;
                }
                String[] fields = line.split(",");
                if ("1".equals(fields[0])) {
                    out.write(("<Record><Name>" + fields[1] + "</Name><Company>" + fields[2]
                            + "</Company><Place>" + fields[3] + "</Place>").getBytes());
                } else if ("2".equals(fields[0])) {
                    out.write(("<Desgn>" + fields[1] + "</Desgn><Since>" + fields[2]
                            + "</Since></Record>").getBytes());
                }
            }

            out.write("</MT_Employee>".getBytes());
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

(.xls or .pdf or .txt) and do the conversion in the mapping program. insertNode(nodename, nodecontent) etc.

anton

In particular, it's a very bad idea to first claim that the output is UTF-8 and then to use String.getBytes() (because that one will use whatever the system encoding happens to be). Best regards, Julian

You are correct. I am not a Java developer and the code written is just a sample example; I just wanted to highlight that it can be done using Java mapping. Regards, Rahul Nawale

Why do you want to read it in a mapping? You can do it via an adapter module (in the file adapter) or via a Java proxy. If you really need to use Java to read a flat file, why did you choose to do it in a mapping?
Regards, michal

Rahul, seems that you are from the Borland Delphi camp 😉 Java has other naming conventions… Valery

Anyway, a good, simple explanatory blog. Good attempt. I had the same case with a receiver adapter that needed to send a CSV file; I used Java code and attached that to the module. Anyway, thanks. Piyush 🙂

Yes, an adapter module was always the option, but I just wanted to avoid writing the EJB. Regards, rahul

Yes, looks like a good one for beginners. One might just go in for Java rather than EJBs. Regards, Vishal

I don't think it is a good idea to go for Java mapping just for reading a file, because you have taken a very simple file structure to do this. If the structure becomes complex, then content conversion is easier than this (if you want to do mapping after that). Also, if we need to change the target structure, then we need another mapping for the same, which will definitely affect the performance of the scenario. Regards, Prasad U

You can mail me your doubt on nawale_rahul@yahoo.com. I will surely try to answer those… have a nice day. Regards, Rahul Nawale

I want to create a subnode while doing the content conversion — is this possible?

Since I can see many explaining that content conversion can be done in the channel instead of in the mapping, you could also mention, for better clarity, that in cases where HTTP (or any channel where content conversion is not possible) is the sender channel and the data from the sender application is comma-delimited, we are left with no choice but to do the content conversion using a Java mapping as shown in your blog. And also, couldn't we use a string tokenizer in your code? Is there any particular reason you have chosen a char array? Just curious 🙂

First of all, thanks a zillion for this wonderful blog. We are trying to follow your instructions to read from a binary file.
I quote a text from your post: "Compile the above java mapping code. Zip the class file which is created and import it in the imported archive. Now while doing the interface mapping, specify the type of the mapping program as java class and the imported archive as the mapping program." In the case of interface mapping we need a source XML structure and a target XML structure. The target XML structure is being generated by your Java mapping code, but how and what do we specify as the source XML structure? Since we are not using FCC, how are we going to get the source XML structure? Could you please kindly help us in this regard. Regards, Anupam
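For reference, the sample CSV at the top of the post, run through a mapping of this shape, would produce a target document along these lines. The field element names (Name, Company, Place, Desgn, Since) come from the post's code; the root element and Record grouping are illustrative assumptions, since the original post's exact message type is not preserved in this copy:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<MT_Employee>
  <Record>
    <Name>rahul</Name>
    <Company>siemens</Company>
    <Place>mumbai</Place>
    <Desgn>consultant</Desgn>
    <Since>12032005</Since>
  </Record>
  <Record>
    <Name>viswanath</Name>
    <Company>sisl</Company>
    <Place>hyderabad</Place>
    <Desgn>systemeng</Desgn>
    <Since>23052005</Since>
  </Record>
</MT_Employee>
```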
https://blogs.sap.com/2006/07/18/java-mapping-an-alternate-way-of-reading-a-csv-file/
rctl's idea of cputime is unreasonably high with lots of process turnover.

Fix: Attached patch seems to help.

How-To-Repeat:

# jail -c command=sh
jail# while true; do id > /dev/null; done

meanwhile:

# dtrace -n 'rusage:add-cred/args[0]->cr_prison->pr_id != 0 && args[1] == 0/{printf("%d: jail %d cputime %d", pid, args[0]->cr_prison->pr_id, args[2])}'
  5  57139  rusage:add-cred 37375: jail 5 cputime 124211
  5  57139  rusage:add-cred 37375: jail 5 cputime 6330
  5  57139  rusage:add-cred 37375: jail 5 cputime 51237828
  5  57139  rusage:add-cred 37375: jail 5 cputime 173602
  5  57139  rusage:add-cred 37375: jail 5 cputime 6834680
(...)

Sorry, please ignore my patch; I guess the problem is just that p_prev_runtime is never initialized. I'm not sure why it exists, but removing it makes things work as expected.

Responsible Changed From-To: freebsd-bugs->trasz
I'll take it.

Since this problem has not been fixed for a long time, it may be best to add one more line to the 'BUGS' section of rctl(8). At the moment (end of 2017) this problem has not been fixed in any supported FreeBSD version; it makes it impossible to get correct statistics on a jail and makes it dangerous for people who use a billing system based on RACCT.

This problem also affects the 'pcpu' metric (%CPU, in percent of a single CPU core) and is easy to reproduce on a single core:

1) Run jail1
2) Try to execute any fast/light external command (e.g. /bin/ls) in a loop.
Or compile this sample as /root/a.out in the jail:

---
#include <stdio.h>

int main() {
    return 0;
}
---

Write an execution loop and drop it into the jail, e.g. /root/run.sh:

---
#!/bin/sh
while [ 1 ]; do
    /root/a.out > /dev/null
done
---

Run this script inside the jail via: cpuset -c -l 0 /bin/sh /root/run.sh

After this we can see in 'top -P':

---
182 processes: 2 running, 180 sleeping
CPU 0: 34.1% user, 0.0% nice, 65.9% system, 0.0% interrupt,  0.0% idle
CPU 1:  0.5% user, 0.0% nice,  0.0% system, 0.0% interrupt, 99.5% idle
CPU 2:  3.1% user, 0.0% nice,  1.2% system, 0.0% interrupt, 95.7% idle
CPU 3:  0.0% user, 0.0% nice,  0.4% system, 0.0% interrupt, 99.6% idle
CPU 4:  1.2% user, 0.0% nice,  0.8% system, 0.0% interrupt, 98.1% idle
CPU 5:  0.8% user, 0.0% nice,  0.4% system, 0.0% interrupt, 98.8% idle
CPU 6:  1.2% user, 0.0% nice,  0.0% system, 0.0% interrupt, 98.8% idle
CPU 7:  0.4% user, 0.0% nice,  0.4% system, 0.0% interrupt, 99.2% idle
...
  PID USERNAME  THR PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
41437 root        1  76    0 11408K  2232K CPU0    0   0:07  12.79% sh
...
---

Only one core is busy. However, if we look at RACCT from the host side, we see the following picture:

freebsd:~ # rctl -u jail:jail1 | grep pcpu
pcpu=25600
freebsd:~ # rctl -u jail:jail1 | grep pcpu
pcpu=25600
freebsd:~ # rctl -u jail:jail1 | grep pcpu
pcpu=25600

Unfortunately this is not an unlikely corner case; you can see similar behavior in real life very often, for example at the configuration stage, when a large number of commands are executed. Try to execute in the jail, for example:

env BATCH=no make -C /usr/ports/misc/mc clean configure

and you will see the statistics problem again.

As an aside, you might want to look at /usr/bin/sa for accounting.

(In reply to Allan Jude from comment #4) Yes, I know about sa(8), but there are other problems (there is no jail support, only CPU metrics). Ideally each component of FreeBSD (jail, racct, ...) would have an active maintainer.
But today FreeBSD is a hobby OS with a catastrophically small number of developers; fixes can take several years to arrive (or never arrive at all). Therefore, if we cannot fix the bug, it should be described in the man pages.

PS: the openfiles metric also gives abnormally high values (by a few dozen) via RACCT (compared to fstat/lsof). But without a subsystem maintainer, and without entering such information into the man page, I'm not sure that it makes sense to write a PR for this. Unfortunately, I can only help with testing (from a practical point of view) and reporting issues, not fixing them ;-) Thanks.

(In reply to olevole from comment #5) The maintainer of RACCT is still active, just busy. For the open-files count, remember that every socket, pipe, and other special type of file descriptor counts as an open file. I'll try to get someone to look at this.

Be aware that my problem was with the cputime measurement, which seemed to be fixed not long after this bug. Thanks!

(In reply to ben from comment #7) Indeed, there is no problem with cputime now. Probably this PR should be closed. Nevertheless, the problem with pcpu is real.

(In reply to Allan Jude from comment #6) Allan, thanks! Should I register a new PR?

The original problem on which this PR was opened is already fixed. The problem with 'pcpu' is very similar: 'pcpu' values are normal for steady processes (e.g. install net-p2p/cpuminer and run: minerd --benchmark) but incorrect for cases like the reproduction above.

For bugs matching the following conditions:
- Status == In Progress
- Assignee == "bugs@FreeBSD.org"
- Last Modified Year <= 2017
Do
- Set Status to "Open"

(In reply to olevole from comment #9) Please create a new PR for the pcpu issue so this one can be closed. Thanks.

done

The original bug is fixed per comment 7. There is a patch for PR 235556 which will land shortly.
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=171811
To Beat Spam Filters, Look Like A Spammer?

Soulskill posted about 6 months ago | from the hello-sir-madam dept.

What would Bennie do without /.? (5, Insightful)
Anonymous Coward | about 6 months ago | (#45206555)

Get yer own blog, Bennie!

Re:What would Bennie do without /.? (0)
Anonymous Coward | about 6 months ago | (#45206595)

Also, someone needs to do something fun with this: [mccullagh.org]

Re:What would Bennie do without /.? (1)
homey of my owney (975234) | about 6 months ago | (#45207069)

Re:What would Bennie do without /.? (1, Insightful)
seebs (15766) | about 6 months ago | (#45206629)

I don't know, but it would probably be less damaging. The world does not benefit from this guy getting a ton of high-visibility options for advertising his militant refusal to even consider trying to comprehend anything about email or spam.

Re:What would Bennie do without /.? (0)
Anonymous Coward | about 6 months ago | (#45207351)

You're very articulate. Now, is there a statement in the article you think is incorrect?

Re:What would Bennie do without /.? (0)
bennetthaselton (1016233) | about 6 months ago | (#45207777)

You're very articulate. Is there a statement in the article you think is incorrect?

Re:What would Bennie do without /.? (1)
Anonymous Coward | about 6 months ago | (#45206763)

Especially when this is like the third whiny rant about his mailing list being blocked by spam filters.

Bennett Haselton is a spammer (1)
Anonymous Coward | about 6 months ago | (#45206801)

This is just the latest in a series of Slashdot posts in which he explains why spam is that which he does not do. He's a spammer. Hence he's recommending that spammers do the kinds of things spammers do.

Re:Bennett Haselton is a spammer (2)
MightyMartian (840721) | about 6 months ago | (#45206885)

But it's different, because he advocates one singular bit of good netiquette. He's like a serial killer who's kind enough to sterilize the knife between each stabbing.

Re:What would Bennie do without /.?
(3, Funny) RogueyWon (735973) | about 6 months ago | (#45207053):What would Bennie do without /.? (2) lgw (121541) | about 6 months ago | (#45207261) Let's be honest here: YouTube comments would be a step up from Bennie's tripe. It ain't spam filters blocking your email lists, bud, it's the fact no one cares for anything you have to say. Re:What would Bennie do without /.? (1) ottothecow (600101) | about 6 months ago | (#45207125) Re:What would Bennie do without /.? (0) Anonymous Coward | about 6 months ago | (#45207215) That's benjfowler's and cold fjord's jobs. Web-bugs (0) Anonymous Coward | about 6 months ago | (#45207709) "all that a web bug does, is tell the sender whether you opened their message" Actually, it tells much, much more: the IP address, approximate geographic location of the receiver and precise times when the email was opened; his operating system, browser and other technical data that can be used to infer demographics and even mount a cyberattack against him, or further refine a social engineering attack. Web-bugs will also link two otherwise disparate email aliases, say petraeus.d@army.mil and loverboy69@aol.com, thereby compromising privacy. Web-bugs are a form of malware in that they exploit a vulnerability in the recipient's user agent software in order to subvert control of his computer, make it submit personal data the recipient might not agree submitting, while hiding this fact. Re:Web-bugs (1) penix1 (722987) | about 6 months ago | (#45207791):What would Bennie do without /.? (1) synaptik (125) | about 6 months ago | (#45208573) Spam filtering is not a solution. (2, Insightful) intermodal (534361) | about 6 months ago | (#45206615). Re:Spam filtering is not a solution. (0, Troll) girlintraining (1395911) | about 6 months ago | (#45206701) Spam filtering not a solution. The same can be said of antivirus. The problem here isn't the filtering, the problem is the people running the filters are, frankly, assholes. 
Spamhaus insists it doesn't make mistakes, but it makes them all the time. It's the same with the RBL and similar technology. Whenever you automate something like this, you're going to get false positives -- that's the nature of the game. Denying this happens makes you part of the problem.

Re:Spam filtering is not a solution. (4, Insightful)
key134 (673907) | about 6 months ago | (#45206767)

(4, Funny)
mysidia (191772) | about 6 months ago | (#45208069)

Re:Spam filtering is not a solution. (2)
Kjella (173770) | about 6 months ago | (#45209263)

whitelisted, sending the requirements both as email headers (for automated calculation) and in the body, as well as a link to a hash calculator. Example using "user@fromdomain.com" to "user@todomain.com":

Hash-algorithm: SHA1
Hash-collision-strength: 25
Hash-base: user@fromdomain.com->user@todomain.com

2) You either
a) Go to a website that uses JavaScript to calculate the answer
b) Use a local application to calculate the answer
c) Have an email client that does this for you
d) Have a webmail provider who does this for you

Hash-solution: user@fromdomain.com->user@todomain.comA3BHG
Hash-value: 007afcd67d58c76d786c

3) The hash is verified to be a 25-bit collision with 00000000000000000000; the message is delivered and the sender is whitelisted.

Some nice things:
1) No protocols need to change; one server can start
2) The sender only needs a CPU to do the work
3) Difficulty is adjustable based on server/account settings
4) It could eventually become entirely standard and automated
5) The sender must exist and receive the response
6) You can do it even for non-existing email addresses
7) One base per sender/receiver pair, no easy way to cheat
8) The whitelisting is only valid for that sender, not all the spammer's friends

The obvious downsides:
1) Some people won't figure this out or won't do it; you might have to use a regular email address if you absolutely can't afford to miss any mail.
However, the market for "semi-public" email addresses to use in forums and mailing lists should be huge to get it off the ground, and eventually it should become something your email client does in the background.
2) Lots of unnecessarily burned CPU time (but less than spam filters today? maybe not)

Bennett Haselton? (0)
Anonymous Coward | about 6 months ago | (#45206621)

Re:Bennett Haselton? (0)
Anonymous Coward | about 6 months ago | (#45206725)

Re:Bennett Haselton? (0)
Anonymous Coward | about 6 months ago | (#45207595)

Back when I was still living at the Geek Compound, he visited one weekend (he was a friend of timothy or michael, I think) and sucked *everyone's* dick. -- HeUnique

Unsubscribe (0)
Anonymous Coward | about 6 months ago | (#45206645)

I know I'd unsubscribe from Bennet if I could.

Thanks Slashdot! (0)
Anonymous Coward | about 6 months ago | (#45206665)

I was wondering what to dress up as for Halloween! I'll be the low sodium one!

not really a problem (1)
asmkm22 (1902712) | about 6 months ago | (#45206677)

Boarding schools (1)
tepples (727027) | about 6 months ago | (#45206909)

Re:Boarding schools (1)
asmkm22 (1902712) | about 6 months ago | (#45207009)

They can complain to their parents for sending them there. They're kids. Sorry if I don't shed a tear over it.

Re:Boarding schools (1)
H0p313ss (811249) | about 6 months ago | (#45207025)

Re:Boarding schools (0)
Anonymous Coward | about 6 months ago | (#45207099)

Should they really be using the college account on the college equipment for garbage (from the school's point of view)? We have similar restrictions at work, but should we really let all of our users browse Facebook on your dime (I work in local government)?

Re:Boarding schools (1)
penix1 (722987) | about 6 months ago | (#45207703)

it poorly. A better solution is to allow specific times that the system can be used for personal use, such as lunch or, in the case of the school, after the last class has dismissed.
Still block the most egregious sites (porn) but allow the others at those specific times. The employees know exactly when they can do outside personal business (such as banking, shopping, chatting with friends and family, etc) which will make a happier employee. And a happy employee is far more productive than an unhappy one. Re:Boarding schools (1) BitZtream (692029) | about 6 months ago | (#45208151):not really a problem (1) khasim (1285) | about 6 months ago | (#45206993) Or post your spam on /. as an "article". FTspammyA: Just from that sentence, there is no way I would ever do business with them. Sounds like I should mentally replace "email deliverability industry" with "SPAM industry". I wonder how many times Mr. Fuck You has subscribed to their lists. Setup your system to they are processed automatically. It is 2013. This is /. Please submit an "ask Slashdot" if you require assistance with that. And most people I know will lie when asked that kind of information because we do not trust the people running the list to NOT SELL THAT AS OFTEN AS THEY CAN. If my email system detects a web bug then it is more likely to be flagged as spam. How about you only subscribe them for a set time period? If they're really interested in your messages then they'll read them and see that they have X more messages before they're automatically unsubscribed. Again, "ask Slashdot" if you need advice on how to do that. Which will immediately be added to every spammer's database. Which will almost as quickly be added to the anti-spam rule sets. If you don't want your "newsletters" to be flagged as spam then do not act like a spammer. That includes "advertisements" and "opportunities" and such. Re:not really a problem (1) Tom (822) | about 6 months ago | (#45207223) > :not really a problem (0) bennetthaselton (1016233) | about 6 months ago | (#45207719) Or post your spam on /. as an "article". FTspammyA: Just from that sentence, there is no way I would ever do business with them. 
Email "deliverability" does not necessarily refer to spammers. As long as legitimate senders are getting blocked too, there's every reason for them to need "deliverability" services to help avoid being blocked by spam filters. Re:not really a problem (1) gl4ss (559668) | about 6 months ago | (#45209149):not really a problem (1) Chibi Merrow (226057) | about 6 months ago | (#45207737) people who waste time at work, or trash school computers. Hmmm... (-1) Anonymous Coward | about 6 months ago | (#45206711) It's kind of difficult to take anyone seriously when they still mention Hotmail as if it still exists. Invisible Hands Don't Get Carpal Tunnel Syndrome (1, Insightful) cervesaebraciator (2352888) | about 6 months ago | (#45206715) Where'd that come from? Last I checked, "free-market forces" weren't capable of programming anything. Programmers do. Nothing's preventing anyone from making a better filter. The "free-market forces" non-sequitur bespeaks an author with an ax to grind. Re:Invisible Hands Don't Get Carpal Tunnel Syndrom (1) ScottCooperDotNet (929575) | about 6 months ago | (#45206815):Invisible Hands Don't Get Carpal Tunnel Syndrom (1) RogueyWon (735973) | about 6 months ago | (#45207067) I don't think Bennie's quite ready to be trusted with an ax of his own. I'm not even sure he's allowed metal spoons, since The Unfortunate Incident At Dinner. 1999 called, it wishes its faddish words returned. (0) Anonymous Coward | about 6 months ago | (#45206779) The webinar begins with some recommendations that are actually good netiquette. Is this webinar on the Information Super Highway? Re:1999 called, it wishes its faddish words return (0) Anonymous Coward | about 6 months ago | (#45206835) Woah there! Don't start a flame war, buddy. Re:1999 called, it wishes its faddish words return (1) gmhowell (26755) | about 6 months ago | (#45208713) The webinar begins with some recommendations that are actually good netiquette. 
Is this webinar on the Information Super Highway? Can you work the 'cyber' prefix in there somehow? What is this? (5, Insightful) IamTheRealMike (537420) | about 6 months ago | (#45206781):What is this? (0) Anonymous Coward | about 6 months ago | (#45206983) But if he did that everyone would click no, and he wouldn't be able to advertise his proxy websites. If you don't want your shit blocked, send emails people want to get. They will white list you if they want to receive it. Re:What is this? (1) nullchar (446050) | about 6 months ago | (#45207131) on client and settings). Re:What is this? (1) bennetthaselton (1016233) | about 6 months ago | (#45207561) I was hoping this would mean that the user's mail client would see that they're already "communicating with" my email address, and would be less likely to block messages from me as spam. Unfortunately, it doesn't always work. Re:What is this? (0) bennetthaselton (1016233) | about 6 months ago | (#45207547) For one thing, my domains got blacklisted almost immediately after they were mailed out. If it had been a spammer looking for a web proxy, they would have been far more likely to use one of the existing web proxies out there that was easily findable with Google. The fact that Spamhaus blacklisted them right away, is more consistent with the explanation that someone falsely reported the domain to them as "spam" and they blacklisted it without checking, or else that they subscribed an address to our list (going through the confirmed-opt-in process) and then blacklisted new domains sent to that address. Regardless, if Spamhaus's system said "Mails containing this domain name are probably spam", then they made an error, and what they should have done afterwards is come clean about how the domain got incorrectly blacklisted, and whether they were doing anything to avoid the problem in the future. Basically, I don't think the problems are unfixable. 
But part of the solution is to call out groups like Spamhaus that are making errors and refusing to acknowledge the errors as a matter of policy. Re:What is this? (1) penix1 (722987) | about 6 months ago | (#45208043) them where it counts.... The wallet when their users leave in droves because their emails are blocked nine ways to hell. Re:What is this? (1) bennetthaselton (1016233) | about 6 months ago | (#45209245) networks, were blacklisted 3) as soon as I submitted the domains in a form on Spamhaus's website, the form said, "OK, these domains have been un-blacklisted". Which I was happy about, of course, but they wouldn't have done that if they had had a good reason for blacklisting them in the first place. As I said, any confusion could have been avoided if Spamhaus had just said why the domains got blacklisted, and owned up to the error and made changes to avoid similar screw-ups in the future. I was never a fan of MAPS, but at least when you looked up an IP address on their site and they said it was blacklisted, they said why (and if you were blacklisted because you shared your network with the actual guilty party, the lookup form would tell you who that guilty party was and show you the evidence that they had been spamming) Re:What is this? (0) Anonymous Coward | about 6 months ago | (#45207909) I've not seen such rambling nonsense for a long time. At least not since Bennett's last submission. Re:What is this? (0) Anonymous Coward | about 6 months ago | (#45210127) I'm thinking of opting in to his mailing list so I can find out what I need to block across my systems! Good job Bennet! :) Is it just me or.... (2) fatboy (6851) | about 6 months ago | (#45206787) is this a non-problem? Re:Is it just me or.... (2) H0p313ss (811249) | about 6 months ago | (#45207005) to send it to them. I suspect the vast majority of newsletter recipients don't want them. Perhaps he also has a side business in buggy whips and household coal furnaces? Re:Is it just me or.... 
(-1, Troll) bennetthaselton (1016233) | about 6 months ago | (#45207599) If most people don't want to use email to receive content any more, fine. But for people who do, outfits like Spamhaus should not recklessly label people "spammers" when they know that their own classification system makes a lot of mistakes. Re:Is it just me or.... (1) H0p313ss (811249) | about 6 months ago | (#45208865) So why are you sending it in the clear? PGP Re:Is it just me or.... (1) bennetthaselton (1016233) | about 6 months ago | (#45210047) (1) Bite The Pillow (3087109) | about 6 months ago | (#45208109). So (0) Anonymous Coward | about 6 months ago | (#45206807) Is that why all of his posts make it through the firehose? wall o text (0) Anonymous Coward | about 6 months ago | (#45206851) Given past experiences with slashdot front page posts consisting of a wall of text, I'd have to assume that this is a nobody spouting insightless drivel or ranting against a cautionary principle he clearly doesn't understand. That said, beating spam filters is easy. Ordinary non-spammy emails get through fine. It's only when you doing something borderline spammy that the spam filter catches you. In this case, the asshole was running a mailing list. His post should have been deleted (2) imatter (2749965) | about 6 months ago | (#45206881) I liked the article despite its lack of answers (1) themushroom (197365) | about 6 months ago | (#45206889)igslist can wind up in the spam folder while a legitimate reply can make it to me, seeing they both have the same subject line, a legit-looking email address (some of the time), and part of the body content. Opt-in direct mailing shouldn't be affected by spam filters because despite being sent in bulk no one receiving it is complaining, and you'd think cloying titles like "Are we breaking up?" would trip filter triggers (or at least human brain triggers) quicker than "Weekly Report for 10/21/13". 
Re:I liked the article despite its lack of answers (0) Anonymous Coward | about 6 months ago | (#45206939) Opt-in direct mailing shouldn't be affected by spam filters because despite being sent in bulk no one receiving it is complaining, Are you sure? There are people who use "report as spam" as an unsubscribe button. From the email provider's perspective, there may very well be someone complaining, even if the mailing list owner doesn't know about it. Re:I liked the article despite its lack of answers (1) Cramer (69040) | about 6 months ago | (#45207259) from) -- this inspite of the "it's f'ing spam" button; deleting "without" reading does not make something spam, clicking the "it's f'ing spam" button makes it spam. Re:I liked the article despite its lack of answers (1) bmo (77928) | about 6 months ago | (#45207649):I liked the article despite its lack of answers (1) Cramer (69040) | about 6 months ago | (#45207699) That's deleting them from a pre-filtered folder, not the inbox. I have a few filters like that myself, and there's a checkbox along the lines of "this shit is NEVER spam". Re:I liked the article despite its lack of answers (0) Anonymous Coward | about 6 months ago | (#45210117) Outlook has the same mis-feature, and it can't be turned off. When I have a mail from home, with subject "Buy milk" and no body, I have no reason to open it before deleting, so once in a while, Outlook will decide that all mails from home are spam. Don't rely on just email (1) JanneM (7445) | about 6 months ago | (#45206933):Don't rely on just email (0) Anonymous Coward | about 6 months ago | (#45206999) Re:Don't rely on just email (1) nullchar (446050) | about 6 months ago | (#45207087) The point is most people who receive the proxy list by email cannot simply view the website or RSS feed showing proxies. 
Re:Don't rely on just email (1) JanneM (7445) | about 6 months ago | (#45208853):Don't rely on just email (1) RogueyWon (735973) | about 6 months ago | (#45207111) Don't go offering practical solutions; you might get in the way of a perfectly good uninformed moan. Re:Don't rely on just email (0) bennetthaselton (1016233) | about 6 months ago | (#45207637) Re:Don't rely on just email (1) Rockoon (1252108) | about 6 months ago | (#45208059) helping folks circumvent. Hes gotta look at every email going to that former employees box. That man is showing you as much respect as you showed him. Don't like the lack of respect? Show some yourself. The free market is working fine. Its your willingness to face the consequences of your own impact that isn't working fine. Re:Don't rely on just email (1) bmo (77928) | about 6 months ago | (#45208679) I believe you nailed this entire thing down to its actual causes and why he is clueless and whining. Also, as his stuff is recognized by various employers, filter rules are implemented to make sure that future ... mailings... don't go to other employees. -- BMO Re:Don't rely on just email (1) bennetthaselton (1016233) | about 6 months ago | (#45209267) I was about to read this... (0) Anonymous Coward | about 6 months ago | (#45207055) Re:I was about to read this... (1) RogueyWon (735973) | about 6 months ago | (#45207179) You know, War and Peace is actually rather good. Long and heavy going? Sure. But if you put the effort into it, it's a rewarding read. The same can't be said of this article. So this story is complaining about how hard it is (0) Anonymous Coward | about 6 months ago | (#45207065) to get past spam filters to allow kids to look at porn at school. Brilliant Web bugs are more invasive than he says. (0) Anonymous Coward | about 6 months ago | (#45207083) They expose the location and user agent of the readers location to the sender. The are also vulnerable to surveillance by anyone between the reader and the sender. 
See story number 3: or... (2) Tom (822) | about 6 months ago | (#45207183):or... (1) bennetthaselton (1016233) | about 6 months ago | (#45207671) had borrowed our proxy site and used it in their own spams, was just guessing. And there was no reason to think that guess was correct, since if a spammer wanted to do that, they would just use one of many web proxies already out there, instead of signing up to get the new ones. Re:or... (1) Rockoon (1252108) | about 6 months ago | (#45208141) replied to your confirmation message, and another theory is that the only victim is the service owner making you one of the villains. Re:or... (1) bennetthaselton (1016233) | about 6 months ago | (#45209201) subscribe to their list using an email address where someone else had the authority to make decisions about that email address, and didn't want that person joining any lists. Re:or... (1) gl4ss (559668) | about 6 months ago | (#45209305) site had nothing new on it worth sending email out about. Long-time, no chat (1) SethJohnson (112166) | about 6 months ago | (#45208197) Hey, it's been a while. Remember me? We were friends on MySpace a few years back. I've moved on to a new social service. Do you want to join me on Friendster? Take care, Seth Re:or... (0) Anonymous Coward | about 6 months ago | (#45208379) Your a moron who has no clue. He isn't a spammer. He's sending emails to people who requested them!!!! The problem is that anti-spam filters are kicking in because they are violating the privacy of users. Lehk228 (705449) | about 6 months ago | (#45207605) Bennett Haselton (1) BitZtream (692029) | about 6 months ago | (#45208107). STOP POSTING BENNETT'S SPAM! (0) Anonymous Coward | about 6 months ago | (#45208275) That will go a long way to stopping spam on /. TL;ADD (1) Kuranes (610880) | about 6 months ago | (#45208687) And my spam filters aren't filtering those. Jon Katz 2.0 (1) gmhowell (26755) | about 6 months ago | (#45208729). 
(1) Jamie Ian Macgregor (3389757) | about 6 months ago | (#45208971)

You can't solve an economic problem this way (1) damn_registrars (1103043) | about 6 months ago | (#45209003)

Good for you! (0) Anonymous Coward | about 6 months ago | (#45209749)
I give people the option of replying with the word "unsubscribe", even though that creates some hassle for me to process those requests manually, because many of our users are on censored networks and cannot access the unsubscribe link on the peacefire.org website
Oh, if all mailing lists were so insightful. Besides, not all of your users are reading your mail "in the browser".

But here you lost me: (0) Anonymous Coward | about 6 months ago | (#45209795)
Following up on myself: You do not appear to use web bugs in your mailing list messages. A wise choice: web bugs are malware [...] I think this is over the top -- all that a web bug does is tell the sender whether you opened their message -- but, whether this opinion is valid or not, some people out there feel that way, and using web bugs in your email might piss them off.
Well, I think it's not over the top, but as far as I'm concerned, I never "open" any mail, since my MUA can't load images or any other links and can't do active content. Heck, my browser's javascript is disabled by default most of the time. "Are we breaking up?" Yes, it seems we have already.

tl;dr (0) Anonymous Coward | about 6 months ago | (#45209937)
That litany just got flagged by my internal filter... [$MaxLength >> x]

hey spammers! (2) martin-boundary (547041) | about 6 months ago | (#45210597)
http://beta.slashdot.org/story/193303
The TLDR is: How do I import a .js file without webpack trying to lint/compile it with the boilerplate provided?

I am having trouble importing libraries I've manually included (as in, not installed through a package manager such as NPM). I would like to be able to import 'src/statics/library' with a .js file I've included in the directory structure. The issue is not how I would do this; I have already been able to successfully import files I created. The problem is with the loaders that attempt to lint the file or compile it from ES2015 to something the browsers can understand. I've added an exclude property to the eslint-loader, babel-loader and vue-loader in the following way:

exclude: /(node_modules|src\/statics)/

The file library.js is put in the src/statics directory for context.

- rstoenescu Admin last edited by

I made progress on this but never managed to figure it out. I think the issue is with babel and not the webpack configuration in the boilerplate app (so I think I'm asking for help in the wrong place/being off-topic). Thanks for the reply though! Here is some more information for people who might be having similar problems; hopefully this might help someone.

Exclude will only tell webpack not to use a loader on files that matched with test. This causes a problem with babel, as we are loading files into babel through both the babel-loader and the vue-loader. We instead need to configure babel directly to tell it which files it should leave alone. You can do this with either the ignore or only properties in the options object. The best way to pass these into babel would be through the babelrc but, unfortunately, this doesn't seem to work.
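For anyone hitting the same wall: the idea described above (configuring babel itself rather than webpack) would look roughly like this as a babel-loader rule. The src/statics path is from this thread, and whether ignore behaves with your babel version is exactly the open question here:

```js
// webpack rule sketch: pass `ignore` to babel through babel-loader's options,
// so babel (not just webpack's loader matching) leaves pre-built files alone
{
  test: /\.js$/,
  loader: 'babel-loader',
  exclude: /node_modules/,
  options: {
    // babel's own ignore list; path pattern is this thread's example
    ignore: ['src/statics/**']
  }
}
```

Note that the vue-loader invokes babel separately, so a webpack-level exclude on babel-loader alone never covers .vue files; that is why the thread steers toward babel's own options.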
https://forum.quasar-framework.org/topic/178/using-your-own-modules-in-the-boilerplate
Hi all, I am using jdev12c. I tried to create the following class:

package view;

import java.awt.Dimension;
import java.util.ListResourceBundle;

public class Resource extends ListResourceBundle {
    protected Object[][] getContents() {
        return new Object[][] = {
            //
        };
    }
}

The code is copied from java documentation: ListResourceBundle (Java Platform SE 7). Looks like a documentation bug where "=" has to be removed.

"The code is copied from java documentation ListResourceBundle (Java Platform SE 7)"
No - that code is not a copy of that Java documentation. The code at that link compiles just fine. You have modified the code.

"Looks like a documentation bug where "=" has to be removed"
No - looks like you have added "=" and caused the error you are probably complaining about. The code in the API is a constructor; it is constructing a new object. You don't use "=" in a constructor; you use "=" when you are making an assignment.

Review the 'Anonymous Classes' trail in The Java Tutorials:

"The following example, HelloWorldAnonymousClasses, uses anonymous classes in the initialization statements of the local variables frenchGreeting and spanishGreeting, . . ."

HelloWorld frenchGreeting = new HelloWorld() {
    String name = "tout le monde";
    public void greet() {
        greetSomeone("tout le monde");
    }
    public void greetSomeone(String someone) {
        name = someone;
        System.out.println("Salut " + name);
    }
};

That 'new HelloWorld() {' is constructing a new instance of an anonymous class. Note that the ONLY time "=" is used in that example is for the assignments.

Message was edited by: Moderator. No SHOUTING please
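To make the fix concrete, here is the poster's class with the stray "=" removed. The "greeting" entry and the widened access modifier are mine for illustration, not from the thread:

```java
import java.util.ListResourceBundle;

// The fix: getContents() returns an array literal, so there is no "=".
// "=" belongs to assignments, not to a return statement.
class Resource extends ListResourceBundle {
    @Override
    public Object[][] getContents() {  // widened from protected for easy use
        return new Object[][] {
            { "greeting", "Hello" },   // illustrative key/value pair
        };
    }
}
```

Instantiated directly, ResourceBundle's lookup methods then work as expected, e.g. new Resource().getString("greeting") yields "Hello".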
https://community.oracle.com/message/11329475
Here's my problem - We migrated from SVN to Git a while back, but it has been hard to keep my team from merging to master before something goes to production. As a result, I want to set up something which would force the user to merge to master only from a release/* or a hotfix/* branch. Is there any way (even a custom hook would help) that I can restrict the source branch to release/* and hotfix/* when the target branch is master? Please pardon my lack of knowledge on writing the hooks, and I'd really appreciate it if someone could actually give me a hint on how to write this hook. The model I am trying to implement is something like this -

Here's a snippet from my Foxtrot Merge Blocker to help get you started:

public class MyHook implements RepositoryMergeRequestCheck {
    @Override
    public void check(@Nonnull RepositoryMergeRequestCheckContext context) {
        MergeRequest mr = context.getMergeRequest();
        PullRequest pr = mr.getPullRequest();
        PullRequestRef fromRef = pr.getFromRef();
        PullRequestRef toRef = pr.getToRef();
        Repository tR = toRef.getRepository();
        Repository fR = fromRef.getRepository();
        String fromBranch = fromRef.getDisplayId();
        String toBranch = toRef.getDisplayId();

        // etc... your logic goes here.
        // keep an eye out for PR's coming in from forks, though!

        mr.veto("bad-source-branch", "THOU SHALL NOT MERGE");
    }
}

Setting up your pom.xml and atlassian-plugin.xml is also a pain, but the atlassian hook tutorial should help get you started, or study some of the open source hooks out there (e.g., yacc).

p.s. I'm the original author of the add-on Bit-Booster for Bitbucket Server.

Thanks a lot for the headstart. I would play around with this and see where I.
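The part the skeleton leaves open ("your logic goes here") is plain string matching on the branch names. The release/ and hotfix/ prefixes come from the question; class and method names below are mine:

```java
import java.util.regex.Pattern;

class BranchPolicy {
    // merges into master are only allowed from release/* or hotfix/*
    private static final Pattern ALLOWED_SOURCES =
            Pattern.compile("^(release|hotfix)/.+");

    static boolean isMergeAllowed(String fromBranch, String toBranch) {
        if (!"master".equals(toBranch)) {
            return true;  // the policy only guards master
        }
        return ALLOWED_SOURCES.matcher(fromBranch).matches();
    }
}
```

In the hook, call isMergeAllowed(fromBranch, toBranch) with the display IDs already extracted in the snippet above, and invoke mr.veto(...) when it returns false.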
https://community.atlassian.com/t5/Bitbucket-questions/Implementing-Git-flow-branching-model/qaq-p/425649
Here’s an interesting bit of history from Julian Havil’s new book The Irrationals. In 1593 Francois Vièta discovered the following infinite product for pi:

2/π = (√2 / 2) × (√(2 + √2) / 2) × (√(2 + √(2 + √2)) / 2) × ···

Havil says this is “the earliest known.” I don’t know whether this is specifically the oldest product representation for pi, or more generally the oldest formula for an infinite sequence of approximations that converge to pi.

Vièta’s series is based on the double angle formula for cosine. The first series for pi I remember seeing comes from evaluating the Taylor series for arc tangent at 1:

π/4 = 1 − 1/3 + 1/5 − 1/7 + ···

I saw this long before I knew what a Taylor series was. I imagine others have had the same experience because the series is fairly common in popular math books. However, this series is completely impractical for computing pi because it converges at a glacial pace. Vièta’s formula, on the other hand, converges fairly quickly. You could see for yourself by running the following Python code:

from math import sqrt

prod = 1.0
radic = 0.0
for i in range(10):
    radic = sqrt(2.0 + radic)
    prod *= 0.5*radic
print(2.0/prod)

After 10 terms, Vièta’s formula is correct to five decimal places.

Related posts (more sophisticated and efficient series for computing pi):
- Computing pi with the AGM
- Ramanujan series for pi
- Algorithm for world record pi calculations

6 thoughts on “Oldest series for pi”

It converges in fewer iterations, sure, but square roots are a pain to calculate without a computer. I wouldn’t be surprised if the arctangent method ends up being faster by hand.

I propose the following exercise for the readers. How many reciprocals do you suppose you could compute in the time it takes to compute a square root? Say it’s k = 100, or whatever number you prefer. Compute N terms of Vièta’s formula and k*N terms of the alternating harmonic series. Compare the results. The latter may be more accurate for small N (like N = 1), but at some point (depending on your value of k) the former will be more accurate.
(Although there are lots of square root signs in Vièta’s formula, you only need to compute one new square root at each step since each new expression contains the previous expression, as in the code above.) In A History of Pi Petr Beckmann states (pp. 94,95): “Second, Vièta was the first in history to represent pi by an analytical expression of an infinite sequence of algebraic operations”. and just a bit later: “Vièta’s expression, in fact, is the first known use of an infinite product, whether connected with pi or not”. and a few paragraphs later “Vièta himself did not use it for his calculation correct to 9 decimal places; he used the Archimedean method without substantial modifications by taking a polygon of 393,216 sides…” The “glacially slow” convergence of the alternating series makes it a nifty example to demonstrate acceleration techniques like Richardson Extrapolation For example, if you start with the approximations from 10, 20, 40, and 80 terms (the first three gotten for free while computing the fourth), then the individual estimates are all pretty far off (the 80 term estimate is 3.129…), but extrapolate three times and you get 3.14159274… (Certainly this is still nowhere near as fast as other techniques)
http://www.johndcook.com/blog/2012/07/20/oldest-series-for-pi/
Template inheritance docs – thanks and notes I'm not pretty sure where to raise the issue so I'll post it here. I've just read the new template inheritance docs and I want to express big thanks to @DavidGrudl as that is just what I was waiting for so long – now I pretty much understand all these include stuff and what they are for. Though, I have few things I want to mention, hopefully to improve it a bit further. These are my notes from reading the docs. First, I was pretty much confused by {include block $name} syntax. I somewhat remembered what it means from a topic on this forum, but in docs it appears out of nowhere in one place and is presented together with another feature (block name passed by variable) so I was confused what the code example is related to, how does it relate to what is before and how the block thing it is different from simply {include xyz}. It was also presented as a must, I've made a PR fix so that it is presented as an option. A small improvement would be to write a use-case for Dynamic Block Names as there is just mentioned it is there, but have no clue what that could be good for (and I would like to!). As per horizontal inheritance, I'm not pretty sure what does the inheritance word (whether in Czech or English) mean in here. I am probably just not able to see some concept the word came from, because importing blocks from different template does not seem like class inheritance to me – maybe rather like trait. I think talking about reusability fits more here. The word reuse is used there alongside inheritance in there – what is the relationship between them actually? Finally, when I saw embed explained there finally, a question raised in my head – why do we need {layout}, {include} and {import} any more? Couldn't we just unite it into {embed} and have 3 tags less, which would make things much clearer? Honestly, I work with Latte for years and until now I had no clear idea of the differencies between all these tags. 
Thus my joy of this docs article, but still, I can not say, if {include file.latte} and {import file.latte} make the same thing. Do they? Anyway thank you once more for your work and hopefully this can help a bit. I did not make another PR because these things need more skilled man to process. :-) P.S.: looking forward to the day when I will be able to work with Latte API for creating macros and understand Forms API to create fully featured custom controls. These are my two favourite bits which I miss so much in the docs. :-) Last edited by dakur (2020-12-01 13:51) ad horizontal inheritance: The term horizontal reuse will maybe be better. In common with inheritance it has overriding. So when you import some blocks using {import} to the main template, you can have the same named blocks in that main template, which will overwrite them. And you can use {include parent} in that blocks to include original imported blocks. embed vs layout vs include: {embed} can be used instead of {include} and {layout}, but it is pair macro a and you probably won't want to write {embed file.latte}{/embed} instead of {include file.latte} and wrap the whole child template in {embed 'layout.latte'} ... {/embed} instead of single {layout layout.latte} in the header. import vs include: include is like include 'header.phtml' in PHP, import is like import from module in JavaScript, the first prints the contents, the second loads the blocks, it is not interchangeable. I tried to rename some terms and headings I added example for Dynamic Block Names. @dakur Are there still some unclear things? I tried to rename some terms and headings Yes, I think it is better, even though I've read the changes only in commit diff, not in context of whole docs chapter. Horizontal reuse still does not sound well to me as I imagine nothing under the term, but I don't know of any other better now.. I added example for Dynamic Block Names. 
It's probably OK for people who desperately need such feature and are looking for it in the docs. But for those who scan docs for features or just randomly read it like me, some real world example would be better. I'm still confused what is it for as I can omit {block} and the result is still the same, isn't it? So maybe an example which points out the difference between it. {foreach [Peter, John, Mary] as $name} Hi, I am {$name}. {/foreach} Last edited by dakur (2020-12-07 13:01) Result of your example is (of course): Hi, I am Peter. Hi, I am John. Hi, I am Mary. Result of dynamic block example is: Hi, I am Peter. Hello. I am John. Hi, I am Mary. How can you not see the difference? :) We'll get to that. I will try to modify the example to make it clear that we are talking about two files. parent.latte {foreach [Peter, John, Mary] as $name} {block $name}Hi, I am {$name}.{/block} {/foreach} child.latte {block John}Hello. I am {$name}.{/block} Can you think of another way to achieve the same result? Yeah, I understood it from the previous post that there are two files. And no, I don't know how to achieve it in another way. Just trying to say that I don't know why I would even need to do it at all. From this example in docs I know Latte has such feature but I don't know what I could use the feature for – what other people use it for, why it has been even implemented. Mention about its purpose would help me to know when to use it and understand Latte deeply in its intentions. If we do not understand each other yet we will probably have to switch to Czech as it is pretty difficult to have such discussion in English when every word meaning matters. :-D Last edited by dakur (2020-12-08 08:26) I understand. But I'm probably not able to give a real world example in that documentation chapter. I added it ten years ago because it's a great solution for some use cases, like form rendering etc. {block $input->name} {input $input} {/block} child.latte: {block description} ... 
{/block} (And no, this is absolutely not good example for Latte-only docs)
https://forum.nette.org/en/34014-template-inheritance-docs-thanks-and-notes#p213355
Hi All, I'm new to Java. I'm using JDK 6 update 18, Netbeans 6.8 and Glassfish v3 to try to create a project that calls a web service. My intention is to create a web service proxy project. ie many web services to call one other webservice. The Java layer should be very thin and literally just pass messages back and forth between caller and web service client. In order to simulate this I have created a web service project. This web service returns a string, real basic. I build, deploy and test this and it works fine. Yay. I then created another project. I create a web service client pointing to the WSDL that GlassFish tells me is available, (this can be reached through the browser). In this project I also create a web service. I drag the operation from the web service client, (Arun Gupta stylee) into the new operation in the web service and it creates the necessary code etc... I build, deploy and run this only to find that I cannot test the web service. Why? Code : WARNING: Servlet web service endpoint 'NewWebClient' failure java.lang.IllegalArgumentException: class org.glassfish.webservices.JAXWSServlet has neither @WebService nor @WebServiceProvider annotation Code : @WebService() public class NewWebClient { @WebServiceRef(wsdlLocation = "WEB-INF/wsdl/localhost_8080/TestService/WebService1Service.wsdl") private WebService1Service service; /** * Web service operation */ @WebMethod(operationName = "clienttest") public String clienttest() { //TODO write your implementation code here: try { // Call Web Service Operation server1.WebService1 port = service.getWebService1Port(); // TODO process result here java.lang.String result = port.helloworldservice(); //System.out.println("Result = "+result); } catch (Exception ex) { // TODO handle custom exceptions here } return ""; My code is above. (relevant bits anyway) Can anyone tell my what I might be doing wrong? 
I've not really done anything special, I have a few plugins installed, those I thought I might muck about with (REST webservices etc) but other than that the installation should be standard and like I said I don't get any build errors, just the loads of stack trace info about these missing annotations. Any help would be great, Ta Jim
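Not certain this is the poster's root cause, but a common trigger for this exact GlassFish warning is a web.xml whose <servlet-class> names the container's org.glassfish.webservices.JAXWSServlet instead of the annotated endpoint class. If servlet entries are present at all, they should reference the endpoint itself (here NewWebClient, which sits in the default package in the posted code):

```xml
<!-- web.xml sketch: point the servlet at the @WebService class itself;
     GlassFish substitutes its own JAXWSServlet at deployment time -->
<servlet>
    <servlet-name>NewWebClient</servlet-name>
    <servlet-class>NewWebClient</servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>NewWebClient</servlet-name>
    <url-pattern>/NewWebClient</url-pattern>
</servlet-mapping>
```

Alternatively, remove the servlet entries for the endpoint from web.xml entirely and let the container generate the mapping from the @WebService annotation.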
http://www.javaprogrammingforums.com/%20whats-wrong-my-code/3457-trying-call-web-service-failing-printingthethread.html
The key cannot be null, but a value can be.

If Count is less than the capacity of the Hashtable, this method is an O(1) operation. If the capacity needs to be increased to accommodate the new element, this method becomes an O(n) operation, where n is Count.

The following example shows how to add elements to the Hashtable.

using System;
using System.Collections;

public class SamplesHashtable
{
    public static void Main()
    {
        // Creates and initializes a new Hashtable.
        Hashtable myHT = new Hashtable();
        myHT.Add( "one", "The" );
        myHT.Add( "two", "quick" );
        myHT.Add( "three", "brown" );
        myHT.Add( "four", "fox" );

        // Displays the Hashtable.
        Console.WriteLine( "The Hashtable contains the following:" );
        PrintKeysAndValues( myHT );
    }

    public static void PrintKeysAndValues( Hashtable myHT )
    {
        Console.WriteLine( "\t-KEY-\t-VALUE-" );
        foreach ( DictionaryEntry de in myHT )
            Console.WriteLine( "\t{0}:\t{1}", de.Key, de.Value );
        Console.WriteLine();
    }
}

/* This code produces the following output.

The Hashtable contains the following:
    -KEY-   -VALUE-
    two:    quick
    three:  brown
    four:   fox
    one:    The
*/

Available since 10
.NET Framework
Available since 1.1
https://msdn.microsoft.com/en-us/library/system.collections.hashtable.add.aspx
I wrote a program calling a few different methods and it is all working, but I wanted to get feedback towards my own comments on the code. I read a link (posted just below) to help get me started on the path of proper documentation. Tips for maintainable Java code In supplement to general feedback, I would like specific comments on if any of the code is well explained, or if any descriptions are vague. Through out the program there are various description lines which I typically draft before writing my code, each only a few words, and they act as the total of my design process. Do these little things just get in the way or are they helpful? public class CouponCollector { public static void main(String[] args) { // 1. Display the purpose of the program for the user System.out.print("This program draws four cards until each suit is present." + " After each\n draw, the deck is reshuffled with the drawn cards.\n"); int[] deck = countingArray(52); //Grabbing a unopened deck of cards int[] suits = countingArray(4); /* 2. Place holder array for the four suits (0-3) *this will be used in the divideSetMethod. *(52 cards) / (13 cards per set) = Q + R * Q = suit of card */ int[] rank = countingArray(13); /* 3. Place holder array for the 13 ranks *this is currently reserved for future *modification (2/1/2014). */ int numberOfDraws = 0; //4. Purpose of program is to test the number of draws //5. until a complete set of each suit is formed boolean match = false; //6. reshuffle deck and draw Until all four suits are drawn together. while (match == false) { //7. Shuffle deck deck = shuffleArray(deck, 3); //8. Draw cards from deck numberOfDraws++; int[] drawnCards = stackDrawlArray(deck, 4); System.out.print(numberOfDraws + ": "); for (int printDrawnCards: drawnCards) { System.out.print(printDrawnCards + " "); } System.out.println(""); //9. 
Determine if one of each suit is present match = divideSet(drawnCards, suits, 13); } System.out.println("Number of matches until complete collection: " + numberOfDraws); } //10. countingArray generates an array of elements each incremented by one public static int[] countingArray(int numberOfElements) { //11. initialize array int[] countingArray = new int[numberOfElements]; //12. Fill array in increments of 1; for (int i = 0; i < numberOfElements; i++) countingArray[i] = i; //13. Return array return countingArray; } //14. arrayMultiplier multiplles the element of any array by a number public static int[] arrayMultiplier(int[] array, int multiplier) { //15. Multiply the array for (int i = 0; i < array.length; i++) array[i] = (array[i] * multiplier); //16. Return array return array; } //17. shuffleArray randomly sorts the elements of an array public static int[] shuffleArray(int[] array, int numberOfShuffles) { //18. Shuffle array for (int shuffle = 0; shuffle < numberOfShuffles; shuffle++) { int swap; int swapPosition; for (int i = 0; i < array.length; i++) { swap = array[i]; swapPosition = (int)(Math.random() * array.length); array[i] = array[swapPosition]; array[swapPosition] = swap; } } //19. Return shuffled array return array; } //20. stackDrawlArray collects numbers from top of an array public static int[] stackDrawlArray(int[] array, int drawCount) { //21. Prepare array to contain elements int[] topOfStack = new int[drawCount]; //22. Take top of the elements for (int i = 0; i < drawCount; i++) topOfStack[i] = array[(array.length - 1 - i)]; //24. Return top of stack return topOfStack; } /*25. modulusDivideSet allows the creation of a dynamic sorting technique of integers. *The suit of a card is sorted by: suit = (int) array[0-51] / divider(13) = quotient *Quotient indicates which suit the card is: * (Q = 0 = Spade), (Q = 1 = Heart) * (Q = 2 = Club), (Q = 3 = diamond) */ public static boolean divideSet(int[] array, int[] modulusParameters, int divider) { //26. 
Generate a boolean array for comparison boolean[] modulusPass = new boolean[modulusParameters.length]; //27. Compares an array to its modulus parameters for (int i = 0; i < array.length; i++) { int modulusCheck = (int) (array[i] / divider); for (int j = 0; j < modulusParameters.length; j++) { if (modulusPass[j]) continue; if (modulusCheck == modulusParameters[j]) modulusPass[j] = true; } } //28. Check to see if modullusPass are all true indicating a complete collection for (int k = 0; k < modulusParameters.length; k++) { if (modulusPass[k]) continue; //29. One of the checks did not pass. else return false; } //30. All of the checks passed return true; } }
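One thing a reviewer would likely flag beyond the comments themselves: the swap-everything loop in shuffleArray is not uniformly random (and the numberOfShuffles parameter papers over that). The standard alternative is the Fisher–Yates shuffle; a sketch, with names that are mine rather than the post's:

```java
import java.util.Random;

class ShuffleSketch {
    // Fisher–Yates: a single backward pass, where position i is swapped
    // with a uniformly chosen position j in [0, i]; every permutation of
    // the array is then equally likely.
    static int[] shuffle(int[] array, Random rng) {
        for (int i = array.length - 1; i > 0; i--) {
            int j = rng.nextInt(i + 1);  // 0 <= j <= i
            int tmp = array[i];
            array[i] = array[j];
            array[j] = tmp;
        }
        return array;
    }
}
```

Because it touches each position exactly once, one pass suffices; the result is still a permutation of the input, so the deck never gains or loses cards.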
https://www.javaprogrammingforums.com/cafe/35532-comments-java.html
Patent application title: Method and Apparatus for Providing Retirement Income Benefits Inventors: Jeffrey K. Dellinger (Fort Wayne, IN, US) Stephen H. Lewis (Fort Wayne, IN, US) Denis G. Schwartz (Fort Wayne, IN, US) Jason H. Rickard (Fort Wayne, IN, US) IPC8 Class: AG06Q4000FI USPC Class: 705 36 R Class name: Automated electrical financial or business practice or management arrangement finance (e.g., banking, investment or credit) portfolio selection, planning or analysis Publication date: 2011-06-02 Patent application number: 20110131149 Abstract: A computerized method and system for administering an unannuitized variable annuity plan having a guaranteed minimum withdrawal payment feature associated with a systematic withdrawal program includes the steps of and system for storing data, determining an amount of a scheduled payment, periodically determining the account value, making the scheduled payment by withdrawing that amount from the account value, monitoring for an unscheduled withdrawal made under the plan and adjusting the amount of the scheduled payment in response to the unscheduled withdrawal. Scheduled payments will be made to the owner under the plan for the period of benefit payments, even if it is determined by the computerized method and system that the account value is or will be exhausted before all payments have been made. Payments made before such determination are made with the aid of the computer, and payments made thereafter may be made with or without the aid of the computer. Claims: 1. 
A computerized method for administering an unannuitized variable annuity plan having a guaranteed minimum payment feature associated with a systematic withdrawal program, and for periodically determining an amount of a scheduled payment to be made to the owner under the plan, comprising the steps of:

a) using a computer, storing data relating to a variable annuity account, including data relating to an account value, a withdrawal rate, and a period of benefit payments;
b) using the computer, determining an amount of the scheduled payment;
c) using the computer, periodically determining the account value associated with the plan and making the scheduled payment by withdrawing that amount from the account value;
d) using the computer,, wherein scheduled payments made after the account value is exhausted can be made without the aid of the computer.

2. The method of claim 1, wherein the amount of the scheduled withdrawal payment is determined by the following formula:

Scheduled Payment = Account Value_θ × WD Rate

Where:
Scheduled Payment = dollar amount of the scheduled payment
Account Value_θ = initial account value or account value as periodically determined at a subsequent time
WD Rate = predetermined % rate established as part of the annuity plan.

3. The method of claim 1, wherein the account value is periodically determined by the following formula:

Account Value_{t+1} = Max[(Account Value_t − Withdrawal), 0] × (1 + i)

Where:
Account Value_{t+1} = account value at time t+1
Account Value_t = account value at time t
Withdrawal = dollar amount of the scheduled payment at time t
i = net fund performance during period t to t+1.

4.
The method of claim 1, wherein the scheduled payment is adjusted in response to an unscheduled withdrawal according to the following formula:

Scheduled Payment′ = Scheduled Payment × (1 − US Withdrawal_t / Account Value_t)

Where:
Scheduled Payment′ = scheduled payment after an adjustment for an unscheduled withdrawal
Scheduled Payment = scheduled payment prior to an adjustment for an unscheduled withdrawal
US Withdrawal_t = unscheduled withdrawal made at time t
Account Value_t = account value at time t, prior to the unscheduled withdrawal.

5. The method of claim 1, further comprising the additional step of creating a master record for the variable annuity account, and wherein said storing steps include storing data on said master record.

6. The method of claim 5, wherein the step of creating a master record comprises the steps of providing an input screen having fields for entry of data relating to the owner, the type of annuity plan, relevant dates and amounts, and data relating to interest and mortality guarantees, entering data in the fields, and checking the data for validity and completeness.

7. The method of claim 6, further comprising the additional step of displaying the master record for visual checking by an operator, and storing the master record if the data is deemed to be satisfactory.

8. The method of claim 1, further comprising the additional step of generating a report, and forwarding the report to the owner.

9. The method of claim 1, further comprising the additional steps of generating at least one report, and storing data in at least one of an accounting file for use in preparing process and accounting records, a valuation file for use in establishing reserves, a payment center file for use in preparing benefit checks and reports for the owner, and a customer service file for use in preparing screens for use by customer service personnel.

10. The method of claim 1, wherein the period of benefit payments is a lifetime period.
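Claims 2 through 4 above are plain arithmetic, so they compose into a small sketch (function and variable names are mine, not the patent's):

```python
def scheduled_payment(account_value, wd_rate):
    """Claim 2: the payment is a fixed percentage of the account value."""
    return account_value * wd_rate

def next_account_value(account_value, withdrawal, net_return):
    """Claim 3: floor the post-withdrawal balance at zero, then
    credit net fund performance for the period."""
    return max(account_value - withdrawal, 0.0) * (1 + net_return)

def adjusted_payment(payment, unscheduled, account_value):
    """Claim 4: scale the scheduled payment down pro rata after an
    unscheduled withdrawal (unscheduled/account_value of it is gone)."""
    return payment * (1 - unscheduled / account_value)

# e.g. a $100,000 account, a 5% withdrawal rate, and a 10% net return
pay = scheduled_payment(100_000, 0.05)
print(pay, next_account_value(100_000, pay, 0.10))
```

Iterating scheduled_payment and next_account_value period by period, with adjusted_payment applied whenever an unscheduled withdrawal occurs, is the administration loop the claims describe.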
Description:

RELATED APPLICATIONS

[0001] This application is a continuation application of U.S. patent application Ser. No. 11/520,411 filed on Sep. 13, 2006, which is a divisional application of U.S. patent application Ser. No. 09/804,667 filed Mar. 12, 2001, now issued U.S. Pat. No. 7,376,608 which is a Continuation-in-Part of U.S. patent application Ser. No. 09/406,290 filed on Sep. 24, 1999, now issued U.S. Pat. No. 7,089,201, (which is the subject of a Certificate of Re-examination to issue on Dec. 7, 2010) which claims priority to U.S. Provisional Applications: Ser. No. 60/101,883 filed on Sep. 25, 1998; and Ser. No. 60/115,570, filed on Jan. 12, 1999, the complete disclosures of which are hereby expressly incorporated herein by this reference thereto.

FIELD OF THE INVENTION

[0002].

BACKGROUND OF THE INVENTION

[0003] Annuities typically serve the useful function of providing economic protection against the risk of longevity, in that an annuitant has the option of electing a life-contingent retirement income, thereby transferring the risk of outliving one's accumulated assets to an insurer.

[0004]. [0005]. [0006].

[0007] Non-life-contingent annuity benefit options are also available. For example, an annuity benefit that makes monthly payments for a specified period of time, such as thirty years, and then terminates is available.

[0008]. [0009].

[0010]-tei in holding period. The table and graph of FIG. 1 illustrate annuity contract values as a function of time for both variable and fixed annuities. The fixed annuity contract of FIG. 1 illustratively earns 5% annually.

[0011]. [0012]. [0013]. [0014]).

[0015] For variable annuities, "annuity units" are the measure of value during the distribution phase. "Annuity units" work very much like accumulation units, with one exception.

[0016].

[0017] The table and graph of FIG.
2 illustrate the growth of accumulation unit value and annuity unit value, assuming a 9% gross investment return and a 5% AIR in the annuity unit value, for 15 contract years.

[0018] Variable annuity benefit options of sufficiently long duration have historically provided an inflation hedge to retirees superior to that available under fixed annuities.

[0019]. [0020]. [0021].

BRIEF SUMMARY OF THE INVENTION

[0022] FIG. 3 illustrates this aspect of the invention. FIG. 3 illustrates variable annuity payouts with a simple floor guarantee and a program administered by a method that funds current deficiencies (without interest) from future payments. Another aspect of the invention is the provision of alternative techniques (including a retrospective method and a prospective method) of implementing such a program.

[0023]. [0024]. [0025]. [0026]. [0027]. [0028].

[0029] The invention described is intended primarily to apply to variable annuities and mutual funds. Nonetheless, the invention can also be applied to fixed annuities.

[0030] Other goals, advantages and novel features of the present invention will become apparent from the following detailed description of the invention when considered in conjunction with the accompanying drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

[0031] FIG. 1 shows a table and graph illustrating annuity contract values as a function of time for both variable and fixed annuities.
[0032] FIG. 2 shows a table and graph illustrating the growth of accumulation unit values and annuity unit values over a 15 year term.
[0033] FIG. 3 shows a chart illustrating variable annuity payouts with a simple floor guarantee and a program that repays current deficiencies from future payments in accordance with one aspect of the invention.
[0034] FIG.
4 shows a table comparing a normal variable annuity benefit under an annuity contract to the benefit payable under a contract which incorporates a retrospective method of benefit determination, in accordance with one aspect of the present invention. [0035] FIG. 5 shows a table illustrating a reduction in units per payment under a program that guarantees a minimum payment and accounts for any shortfall by reducing the number of units used to calculate future benefit payments, in accordance with one aspect of the present invention. [0036] FIG. 6 shows a table illustrating the operation of a systematic withdrawal program, in accordance with one aspect of the present invention. [0037] FIG. 7 shows a graph illustrating variable payments made during and after a liquidity period, in accordance with one aspect of the present invention. [0038] FIG. 8 shows a graph illustrating the cash surrender value and death benefits in affect before and after annuitization for a program of the type illustrated in FIG. 7. [0039] FIG. 9 shows a flow chart illustrating the data collection and entry steps of the computerized method of the present invention. [0040] FIG. 10 illustrates a portion of a computerized method which utilizes a retrospective approach to annuity benefit calculation. [0041] FIG. 11 shows a flow chart which is a continuation of the flow chart of FIG. 10. [0042] FIG. 12 shows a flow chart which illustrates a portion of a computerized method which utilizes a prospective approach to annuity benefit calculation. [0043] FIG. 13 shows a flow chart which is a continuation of the flow chart of FIG. 12. [0044] FIG. 14 shows a flow chart illustrating a portion of a computerized method for implementing a systematic withdrawal program. [0045] FIG. 15 shows a flow chart which is a continuation of the flow chart of FIG. 14. [0046] FIG. 
16 shows a flow chart illustrating a computerized method which provides for scheduled and unscheduled withdrawals in an investment program, in accordance with one aspect of the present invention. DETAILED DESCRIPTION OF THE INVENTION [0047]: Benefit(t+1) = Benefit(t) × (1 + i)/(1 + AIR) [0048] where: [0049] Benefit(t+1) = dollar amount of variable annuity benefit at time t+1 [0050] Benefit(t) = dollar amount of variable annuity benefit at time t [0051] i = actual fund performance during period t to t+1 (as a %) [0052] AIR = assumed investment rate [0053]. [0054] As an example, if the benefit payment at time t is $1,000, the AIR is 5%, and actual fund performance is 10%, the subsequent variable annuity benefit payment is determined as follows: Benefit(t+1) = Benefit(t) × (1 + i)/(1 + AIR) = $1,000 × (1 + 0.10)/(1 + 0.05) = $1,047.62 [0055]. [0056]. Account Value(t+1) = (Account Value(t) - Benefit(t)) × (1 + i) × (1/p(y)) = (Account Value(t) - Benefit(t)) × (1 + i) + ((1 - p(y))/p(y)) × (Account Value(t) - Benefit(t)) × (1 + i) = Normal Account Value Progression + increment for survivorship, where: Account Value(t+1) = account value at time t+1; Account Value(t) = account value at time t; Benefit(t) = dollar amount of variable annuity benefit at time t = maximum{Preliminary Benefit, Guaranteed Minimum Benefit}, where Preliminary Benefit(t) = Account Value(t)/(attained age annuity factor); i = actual fund performance during period t to t+1 (as a %); p(y) = probability annuitant age y survives to age y+1. [0057] The "Normal Account Value Progression" is for an active (unannuitized) deferred annuity contract from which withdrawals, including those under a form of systematic withdrawal program, are being made. [0058]. [0059] The table of FIG.
4 compares the normal variable benefit typically payable under an annuity contract to the benefit payable under a contract which incorporates the retrospective method of this example where the guaranteed minimum payment is equal to the initial payment. The total payments under the retrospective method exceed those under the normal benefit. The insurer pays all amounts after the account value is exhausted. [0060]. [0061]. [0062] In this example, whenever fund performance would cause a variable annuity benefit payment to be less than $1,000, a portion of the variable annuity benefit reserve held by the insurer will be liquidated in the exact amount to cover the shortfall. [0063] Under this approach to a guaranteed floor under variable annuity benefit payments, the following formula would govern the series of annuity benefit payments: Benefit(t+1) = Benefit(t) × (1 + i)/(1 + AIR) × (1 - S/R) [0064] where: [0065] Benefit(t+1) = dollar amount of variable annuity benefit at time t+1 [0066] Benefit(t) = dollar amount of variable annuity benefit at time t [0067] i = actual fund performance during period t to t+1 (as a %) [0068] AIR = assumed investment rate [0069] S = shortfall (below floor) [0070] R = reserve prior to adjustment for shortfall [0071]. [0072]. [0073]. [0074] The table of FIG. 5 shows the reduction in units per payment under a program that guarantees a minimum payment of $1,500 and accounts for any shortfall by reducing the number of units used to calculate future benefit payments. [0075] Other variations of the system and method of the present invention include, but are not limited to, the following: [0076] Non-level variable benefit floors--For example, a floor which starts at $1,000 and increases by a fixed dollar amount (e.g. $40) per year or by a fixed percentage (e.g.
4%) per year [0077] Benefit(t+1) = Benefit(t) × (1 + i)/(1 + AIR) × (1 + X/R) [0078] where: [0079] Benefit(t+1) = dollar amount of variable annuity benefit at time t+1 [0080] Benefit(t) = dollar amount of variable annuity benefit at time t [0081] i = actual fund performance during period t to t+1 (as a %) [0082] AIR = assumed investment rate [0083] X = excess (above ceiling) [0084] R = reserve prior to adjustment for excess [0085]. [0086] Non-level variable benefit ceilings. For example, a ceiling which starts at $1,200 and increases by a fixed dollar amount (e.g. $40) per year or by a fixed percentage (e.g. 4%) per year. [0087].) [0088] In addition to distribution methods associated with true annuitizations, distributions associated with withdrawal programs--including systematic withdrawal programs--from active (unannuitized) deferred annuity contracts are also encompassed by this invention. [0089]. [0090]. [0091]. [0092]. [0093]. [0094]. [0095] The table of FIG. 6 illustrates the operation of this aspect of the invention. In the illustration of FIG. 6, the initial account value is $100,000, the withdrawal guarantee is 7.5% of the highest account value attained, the investment return is assumed to be as illustrated, and the term is 15 years. [0096]. [0097]. [0098] Account Value(t+1) = (Account Value(t) - Withdrawal(t)) × (1 + i), where: Account Value(t+1) = account value at time t+1; Account Value(t) = account value at time t; Withdrawal(t) = dollar amount of variable withdrawal benefit at time t = Withdrawal(t-1) × (1 + i)/(1 + AIR), where AIR = assumed investment rate; i = actual fund performance during period t to t+1 (as a %). [0099]. [0100]. [0101]. FIG. 7 illustrates variable payments made during and after the liquidity period in a program of this type. FIG. 8 illustrates the cash surrender value and death benefits before and after annuitization for a program of this type.
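As a rough numerical sketch of the variable benefit recursion Benefit(t+1) = Benefit(t) × (1 + i)/(1 + AIR) and a simple guaranteed-floor variant discussed above — this is an illustration with hypothetical function names and inputs, not the patent's actual implementation, and it does not model how a particular method funds the shortfall (reserve liquidation, unit reduction, or repayment from future payments):

```cpp
#include <cassert>

// Next-period variable annuity benefit:
// Benefit(t+1) = Benefit(t) * (1 + i) / (1 + AIR)
double nextBenefit(double benefit, double i, double air) {
    return benefit * (1.0 + i) / (1.0 + air);
}

// Simple floor guarantee: the payment never drops below floorAmt.
double flooredBenefit(double benefit, double i, double air, double floorAmt) {
    double b = nextBenefit(benefit, i, air);
    return (b < floorAmt) ? floorAmt : b;
}
```

With a $1,000 benefit, a 5% AIR, and 10% fund performance, nextBenefit returns about $1,047.62, matching the worked example in paragraph [0054]; with -20% performance the floored variant pays the $1,000 floor instead of roughly $761.90.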
[0102]. [0103] Subsequent withdrawals are adjusted up or down exactly as payments are adjusted under normal variable annuitization. [0104] For example, assuming an n-year liquidity period and a life only annuity at the end of that period, the special annuity factor is calculated as follows: [0105] Special annuity factor = Σ v^t + Σ v^t · (t-n)p(x+n) [0106] where: [0107] v = 1/(1 + AIR) [0108] n = number of years in the liquidity period [0109] Σ v^t = the present value of payments from t = 1 to t = n [0110] Σ v^t · (t-n)p(x+n) = the present value of payments from t = n+1 to the end of the mortality table, where each payment depends on the probability that the owner lives from duration n to duration t. [0111] A second method for arriving at the initial withdrawal sets the special annuity value equal to the value of an annuity certain for the chosen liquidity period, divided by (1-d), where d is the decimal equivalent of the percentage a payment under the annuity certain must be reduced to provide enough unused principal (accumulated to the end of the liquidity period at the AIR) to provide for the chosen annuity at the end of the liquidity period. [0112] For example, assuming an n-year liquidity period and a life only annuity at the end of that period, the special annuity factor is calculated as follows: Special annuity factor = Σ v^t / (1 - d), where: d = percentage decrease in annuity certain payment, as a decimal = a(x+n) / [Σ (1 + i)^t + a(x+n)]; n = number of years in the liquidity period; Σ v^t = the present value of payments from t = 1 to t = n; a(x+n) = a life only annuity to the annuitant at the end of the liquidity period; Σ (1 + i)^t = the accumulation of payments from t = 1 to t = n at the AIR. [0113]. [0114]. [0115]. [0116]. Description of the Flow Charts [0117] FIG. 9 is a flow chart which illustrates a portion of a computerized method of practicing the present invention.
More particularly, FIG. 9 is an illustrative embodiment of the steps which are taken to collect data which is used in the remainder of the process, as described in more detail below. For a new annuity, the data collected through the individual steps illustrated in FIG. 9 may be entered manually at a computer terminal or equivalent input device, or electronically, or in any other manner which is customary at present or in the future. For an existing annuity, the data will generally be retrieved from an existing contract master record, or other file. [0118] FIG. 9 (block 16). [0119] FIG. 9. [0120] FIG. 9. [0121] FIG. 10 illustrates the next step in the overall process of the present invention. That step is calculation of an annuity benefit using information from the master record, as created or updated in the process of FIG. 9 and other retrieved data. More particularly, the flow charts of FIGS. 10 and 11 illustrate one embodiment of a computer-based process for calculating an annuity benefit in accordance with retrospective approach to benefit calculation. [0122] The first step in the flow chart of FIG. 10 is to retrieve additional data relating to annuity factors (block 46), survivor factors (block 48) and annuity unit factors (block 50). These data are typically stored in files used for other purposes, although duplicate or dedicated purpose files may be created to hold such information for use in the calculation process. The process of FIG. 10 then checks to determine whether the particular calculation at hand involves a new or existing annuity (block 52). If the calculation involves a new annuity, processing proceeds by deducting the premium load (if any) from the amount of money available for purchasing the annuity (block 54). Following this step, the minimum benefit is determined. 
This calculation uses the net money available for purchasing the annuity, the appropriate annuity factor for the age, sex and type of annuity, and the appropriate annuity unit value to determine the minimum benefit. The minimum benefit may also be adjusted according to other terms of the contract (e.g., multiplied by 0.8, or other factor) (block 56). [0123] For an existing annuity, the system calculates the investment return (i) for the recent period using annuity unit values (block 58). The results of step 58 are then used to update the account value (block 60). [0124]. [0125]). [0126] Processing in accordance with the retrospective approach continues as illustrated by the flow chart of FIG. 11. Generally, the flow chart of FIG. 11 illustrates the steps of using the benefit amount determined in the process of FIG. 10 to update files and make adjustments needed for the benefit calculations to be performed on the next benefit payment date. Also illustrated in FIG. 11 are steps relating to the generation of reports and updates for the benefit of both the annuity payer and the annuitant. [0127] With reference to FIG. 11, the benefit determined in step 66 or 68 is used to reduce the Account Value by the amount of the benefit (block 70). The system then checks to see if the Account Value is less than zero (block 72). If so, the Account Value is then set to equal zero (block 74). In either event, the system then proceeds to update the master record (block 76). All appropriate data and information entered or affected by the processing to this point are captured on the master record. This data would include such items as the amount of the benefit determined in step 66 or 68, the new account value or remaining units, payment date(s) of benefit(s), the next benefit due date, and similar information. Following the updating of the master record (and any other related files), the system generates reports (block 78). 
Reports may be generated for internal use, as well as for the annuitant. Representative usages are illustrated in FIG. 11. These include: accounting file (block 80) for use in preparing process and accounting records (block 82); a valuation file (block 84) for use in establishing reserves (block 86); a payment center file (block 88) for use in preparing benefit checks and reports to annuitants (block 90); a customer service file (block 92) for use in preparing screens for the use of customer service personnel in responding to inquiries from annuitants and related entities; and other files (block 96) for use in any other activities (block 98) which might be useful to the annuity payer or annuitant. [0128] FIGS. 12 and 13 illustrate one embodiment of a computerized process which utilizes a prospective approach to determining benefit payments under a variable annuity contract. As indicated by the connecting letter "A" at the top of FIG. 12, the data collection process illustrated in FIG. 9 is applicable to, and precedes, the process of FIG. 12. Following collection and storage of the data per FIG. 9, the system retrieves additional data, as indicated by blocks 100 and 102 and FIG. 12. The additional data includes annuity factors and annuity unit values which are typically stored in files used for other purposes, and which are useful in the calculations to follow. The system then determines whether the particular annuity of interest is a new or existing annuity (block 104). [0129] FIG. 10. In the case of an existing annuity, processing proceeds from step 104 to calculation of an investment return (i) (block 110). The investment return calculated is for the most recent past period using annuity unit values retrieved in step 102. [0130] In either event (i.e., with either a new or existing annuity), the process determines a preliminary benefit (block 112) in a manner which is substantially similar to determination of a preliminary benefit in step 62 of FIG. 10. 
Moreover, comparison of the preliminary benefit to the minimum benefit (where appropriate), and setting the "benefit" equal to the greater of the preliminary and minimum benefits (blocks 114, 116, and 118) proceeds in the process illustrated by FIG. 12 substantially similarly to the process of steps 64, 66, and 68 of FIG. 10. [0131] As indicated by connecting letter "C," processing continues as illustrated in FIG. 13. The first step in this continued processing is to determine whether the benefit set in steps 116 or 118 is greater than the preliminary benefit determined in step 112 (block 120). If so, the process proceeds to calculate the excess of the benefit over the preliminary benefit (block 122). The process then proceeds to reduce the number of annuity units to be used in the determination of future benefits (i.e., calculate the number of units payable in future benefits). As described in additional detail elsewhere in this specification, the reduction of the number of units is calculated (block 124) using the amount of the excess benefit, the current annuity unit values, and the attained age annuity factors. Following this step, the process checks to see if the number of units to be used in calculating future benefits is less than zero (block 126). If so, the system sets the number of units equal to zero (block 128). In either event, the system updates the master record (block 130) to reflect the reduction or resetting of annuity units. As indicated by the flow chart of FIG. 13, if the benefit determined by the process of FIG. 12 is not greater than the preliminary benefit, the system proceeds directly to step 130 (i.e., the number of annuity benefits is not reduced). [0132] Following step 130, the system generates reports (block 132). This portion of the process is substantially similar to the portion of the process described in connection with steps 78-98 of FIG. 11, and the description of these steps will not be repeated here. [0133] FIG. 
14 is a flow chart which illustrates a computer-based process for administering an annuity contract which utilizes a systematic withdrawal approach. As indicated by the presence of the connecting letter "A" at the top of the flow chart of FIG. 14, the initial steps of collecting and storing information relating to the annuity described previously in connection with FIG. 9 may be used in the embodiment of FIG. 14. Following these steps, and with reference to FIG. 14, the system first retrieves additional information relating to accumulation unit values (block 134) and withdrawal factors (block 136). These values are typically stored in files which may also be used for other purposes. The system first checks to see whether the subject annuity is a new or existing annuity (block 138). If new, the system proceeds to determine a minimum withdrawal amount, based upon the Account Value and withdrawal factor (block 140). If the subject annuity is an existing annuity, the system calculates the investment return, (i), for the most recent period (block 142), updates the Account Value (block 144) using the results of the calculation of step 142 and checks to see if the new Account Value is greater than the prior Account Value (block 146). If so, the process proceeds to step 140 to determine the minimum withdrawal benefit. If not, the system omits this step. [0134] As indicated by the connecting letter "D," the process proceeds in accordance with the embodiment illustrated by the flow chart of FIG. 15. In general, this portion of the process makes adjustments, when appropriate, to allow benefit calculations to be made by or on the next benefit payment date. [0135] With reference to FIG. 15, the system first checks to see if the Account Value is greater than the withdrawal benefit (block 148). If so, the Account Value is reduced by the amount of the withdrawal benefit (block 150). If not, the Account Value is set equal to zero (block 152). 
Following either adjustment, the system proceeds to update the master record (block 154). As with the retrospective and prospective approaches, items updated in the master record include withdrawal benefit amount, new Account Value or remaining units, dates of payments, upcoming due dates, etc. Following updating of the master record, the system generates reports (block 156). Generation and handling of reports proceeds in substantially similar fashion to that described previously in connection with steps 78-98 of FIG. 11. Accordingly, that description will not be repeated here. In either case, the process of generating reports includes the step of updating any and all files relating to the subject benefit/withdrawal payment. [0136] FIG. 16 illustrates an alternative embodiment of an annuity-based retirement program constructed in accordance with the present invention. As indicated by the continuation letter "A" at the top of the flow chart of FIG. 16, this embodiment shares the data collection steps illustrated in FIG. 9 in common with the preceding embodiments. Similar information regarding the annuitant and account is collected in accordance with the steps described in connection with FIG. 9. Additional information specific to the present embodiment, such as length of the liquidity period, is also entered in accordance with the steps described in connection with FIG. 9. [0137] With reference to FIG. 16, the process continues by retrieving additional data (block 158), such as annuity unit values, annuity factors, and survivor factors. These values are typically stored in files which may be used for other purposes, as well. [0138]. [0139]). [0140]). [0141] As indicated in the flow chart of FIG. 
16, after completion of the appropriate steps described above, the system converts the transaction amount (i.e., the amount of the scheduled withdrawal, premium payment, deposit, or unscheduled withdrawal) into an equivalent number of units, using the current unit value (block 188). The system then adjusts the number of units in the account (block 190). The master record is then updated (block 192). As indicated by the connecting letter "E", the system then updates the files and generates reports in the same manner as described in connection with the previously discussed embodiments of the invention. [0142] From the preceding description of the preferred embodiments, it is evident that the objectives of the invention are attained. Although the invention has been described and illustrated in detail, it is to be clearly understood that the same is intended by way of illustration and example only and is not to be taken by way of limitation. The spirit and scope of the invention are to be limited only by the terms of the appended claims. Patent applications by Denis G. Schwartz, Fort Wayne, IN US; Jason H. Rickard, Fort Wayne, IN US; Jeffrey K. Dellinger, Fort Wayne, IN US; Stephen H. Lewis, Fort Wayne, IN US. Class: Portfolio selection, planning or analysis.
The following code is for a project I have to do where I receive a text file that has a student's first and last name followed by his grades. I then have to convert that into an output file that contains his name followed by his average score. The file I receive has multiple students in it separated line by line. The output should look relatively like

Rzam, Look = 0.00
Bambi, Lambi = 40.47
Coop, Jason = 27.31

#include <iostream>
#include <fstream>
#include <sstream>
#include <iomanip>

using namespace std;

struct Student
{
    string fname;
    string lname;
    double average;
};

int read(ifstream &fin, Student s[]);
void print(ofstream &fout, Student s[], int amount);

int main()
{
    const int size = 10;
    ifstream fin;
    ofstream fout;
    string inputFile;
    string outputFile;
    Student s[size];

    cout << "Enter input filename: ";
    cin >> inputFile;
    cout << "Enter output filename: ";
    cin >> outputFile;
    cout << endl;

    fin.open(inputFile.c_str());
    fout.open(outputFile.c_str());

    read(fin , s);
    print(fout, s, size);

    fin.close();
    fout.close();
}

int read(ifstream &fin, Student s[])
{
    string line;
    string firstName;
    string lastName;
    double score;
    double total;
    int i=0;
    int totalStudents=0;
    Student stu;

    while(getline(fin, line)){
        istringstream sin;
        sin.str(line);
        while(sin >> firstName >> lastName){
            stu.fname = firstName;
            stu.lname = lastName;
            while(sin >> score){
                total *= score;
                i++;
            }
            stu.average = (total/i);
        }
        s[totalStudents]=stu;
        totalStudents++;
    }
    return totalStudents;
}

void print(ofstream &fout, Student s[], int amount)
{
    ostringstream sout;
    for(int i = 0; i<amount; i++)
    {
        sout << left << setw(20) << s[i].lname << ", " << s[i].fname;
        fout << sout << setprecision(2) << fixed << "= " << s[i].average;
    }
}

You have a few bugs, which have added up to your issue: you build your output in your ostringstream and then try to write that to the file stream. Which is fine, but it is printing the address of the ostringstream buffer.
So making this change will cause it to print the contents: fout << sout.str() << setprecision(2) << fixed << "= " << s[i].average; Note the usage of .str(). Though you don't really need a temporary stream here at all... so make another change making it look like this: fout << sout.str() << setprecision(2) << fixed << "= " << s[i].average << '\n'; You need to place the ostringstream sout; inside the loop, so it is reset each time too. Otherwise you will get weirdly compounding output. You don't use the count of students calculated by your read function! so it always tries to print 10! Do something like this: int count = read(fin , s); print(fout, s, count); If no score is read, I think you'll have a divide by zero. So you should add a check. You should ensure that no more than size Students are read. Or better yet, just place them in a std::vector and return that from the function. It's simpler and less error prone. You need to reset i each time you start reading a student, or the later students will get divided by way too much. Each needs to have an independent count. I don't know if these are the only issues, but certainly it should get you started on the right track :-)
JSF portlet and Public Render Parameters By User13334247-Oracle on Oct 19, 2008 In this blog I will talk about the support for public render parameters (a new feature added in the Portlet 2.0 specification) in the JSF Portlet Bridge. In JSFPortletBridge version 1.2.3 an enhancement was added to keep track of request scoped information. Check issue 30 for more details. This feature can be used to support the sharing of render parameters among JSF portlets. I will explain how to do it. The example war and source are referenced at the end.

1. First you need to set the following initialization parameter in the portlet.xml for the portlet that wants to share the render parameter: <init-param> <name>com.sun.faces.portlet.SAVE_REQUEST_SCOPE</name> <value>true</value> </init-param> This causes the form parameters to be set as render parameters. As a result all form parameters are available in the render of the portlet.

2. Now identify the parameter to be shared and specify it in the portlet.xml. Since JSF generates the name, you need to specify the generated name in the portlet.xml. For example, consider the following snippet: <f:view> <h:form ............. <h:inputText ................. </h:form> </f:view> If you want to share the inputText "userNo", then specify the parameter "helloForm:userNo" as a supported-public-render-parameter. If you notice, I have dropped the tag <p:portletPage>. This tag causes the portlet namespace to be prepended to the parameter name. The parameter name would then be "portletnamespace:helloForm:userNo". The namespace that is generated by the portlet container will be different in each portal, so I have removed it. The parameter can be specified in the portlet.xml as follows:
<supported-public-render-parameter>helloForm:userNo</supported-public-render-parameter> </portlet> <public-render-parameter> <identifier>helloForm:userNo</identifier> <qname xmlns:x:userNumber</qname> </public-render-parameter> </portlet-app> 3. If another JSF Portlet also specifies the same parameter as supported-public-render-parameter, then it can access the parameter as FacesContext context = FacesContext.getCurrentInstance(); PortletRequest request = (PortletRequest)context.getExternalContext().getRequest(); return request.getParameter("helloForm:userNo"); This will return the value entered in the first JSF Portlet. Deploy the guessnumbersharedportlet.war on OpenPortal Portlet Container 2.0 and see the public render parameter at work. You can check the sources to see how this is done. The sample will work on Project WebSynergy and Liferay Portal also. Hi Deepak, Is it possible to use Events in JSF portlets. Could you please provide some information on this? Thank you Posted by APps on June 30, 2009 at 03:59 PM IST # Yes it is possible to use events in JSF Portlets. This ability with be available in the jsfportletbridge shortly. Posted by Deepak on July 03, 2009 at 05:26 AM IST # hello and thanks for this example. i used it, but i got an error: SCHWERWIEGEND: /home.xhtml @80,77 value="#{UserNumberBean.userNumber}": Illegal Syntax for Set Operation 19:24:28,375 INFO [lifecycle] WARNING: FacesMessage(s) have been enqueued, but may not have been displayed. sourceId=...:_viewRoot:helloForm:userNo[severity=(ERROR 2), summary= (/home.xhtml @80,77 value="#{UserNumberBean.userNumber}": Illegal Syntax for Set Operation), detail=(/home.xhtml @80,77 va lue="#{UserNumberBean.userNumber}": Illegal Syntax for Set Operation)] do you know, what could be the problem ? thx! Posted by gabe on September 01, 2009 at 04:04 PM IST # iam sorry! was only a beginner mistake. wrong input, wrong naming. 
Posted by gabe on September 01, 2009 at 04:21 PM IST # did the portlets have to be in the same web-application? Posted by gabe on September 01, 2009 at 04:40 PM IST # The portlets need not be in the same webapp. In fact the example and sample shown above used two different jsf-portlet webapplication. Posted by Deepak on September 02, 2009 at 03:40 AM IST # Hello Deepak, does the example also work on jboss-portal? Posted by guest on September 05, 2009 at 02:01 PM IST # Sorry Deepak, but the example shows a single .war with the two portlet in it...It's important to me to understand if I' m able to use this approach with portlet within different .war files... Posted by uskassat on September 15, 2009 at 08:13 AM IST # You can use this approach with portlet in different wars. As the public render parameter functionality across portlet wars is handled by the portlet container, it should work. I have not tried though. As this was done long time back i thought it was two different wars. Sorry about that. Let me know if it does not work in different wars. Posted by Deepak on September 15, 2009 at 11:06 AM IST # This approach works with portlets into different wars using the same jars, but I want to know how can I retrieve the parameter in the display portlet without importing manged bean related jars? Posted by uskassat on September 15, 2009 at 11:47 AM IST # sorry .war is not working with liferay 5.2.3! getting error : 07.04.2010 20:00:10 org.apache.catalina.core.StandardContext start SCHWERWIEGEND: Context [/guessnumbersharedportlet] startup failed due to previous errors any ideas? Posted by pfeiflo on April 07, 2010 at 06:47 PM IST # Hi Deepak I want to implement Public Render Parameters using Jboss Portlet Bridge. Can you please share if you have any reference for that. I have a requirement to pass a parameter between two different portlets located in two different pages. 
Regards Srinadh Posted by Srinadh on April 10, 2010 at 06:39 PM IST # The guessnumbersharedportlet.war didn't have jsf libraries and hence it may have failed during deployment in Liferay 5.2 on Tomcat. I had tested on Liferay 5.2.3+GlassFish. Now i have updated guessnumbersharedportlet.war with JSF libraries and it should work on Liferay+Tomcat. Clear the browser cache and download guessnumbersharedportlet.war again. Thanks for pointing out the issue. Posted by Deepak on April 12, 2010 at 05:37 AM IST # Sorry i don't have any idea on JBoss Portlet Bridge. Did you check JBoss forum. Posted by Deepak on April 12, 2010 at 05:40 AM IST # Hello, sources link doesn`t contain archive. Posted by blindbear on May 01, 2010 at 04:18 PM IST # Fixed the sources link. Thanks for pointing it out. Posted by Deepak Gothe on May 03, 2010 at 04:16 AM IST # Hello Deepak, Your blog is really great. Its has been great help to a newbie like me regd the JSR 286. I have a wustion here. I have a secnario where i have to send the params from source portlet to destination portlet and redirecting at the same time. I mean, if i select some loan type in my source portlet and click on a button, i should be redirected to target portlet with the Loan type information ? We do not intend to use portlet Wiring. Can u pls suggest us a solution ? Thanks in advance, Anil. Posted by Anil Kumar on August 24, 2010 at 03:14 AM IST #
Kerkhoff, Member. Journal: A love story: Me and my 2D engine

Another Demo

Here is the last version... tell me what you think... give me ideas... I'm a very bad layouter, eauhuah, this layout is very... very ugly, I know... but it is the best (believe me) that I can get. Just copy and paste the font file to c:\windows\fonts, please, ehehhe. Thanx... See ya

Link resolved.... (comment)

Probably yes... but only after I release the first version =D Until then... only demo .exes, ehehhe. Thanx..

Link resolved....

The last link of the game was broken... okay =D byeee

Another chance? ehehhe

Hi my friends... here is the new version of Predator Engine with some basic controls... download this file. Thanx... =D Next controls will be the check box, the check button, radio buttons, numeric up-down, other panels... give me ideas.. give me =D byeee

Update....

Just to keep you informed... I'm rewriting all of the Predator Engine... =D There were many bugs and the code organization was going to hell... I'm carefully rewriting all the components, and after the primary controls are ready... I'll post the demo here... Firstly I would like to get done: PPanel, PButton (done), PLabel (done), PTextBox, PProgressBar, with good features so that amazing effects take little work... thanx... byeee

Let's play... = ) (comment)

But this is working = ( hehehe. The only problem I had is that the last message pushed on the queue kept getting repeated until I sent another message = ( For example... if I press a key, the keyup message is sent until I move the mouse = ( But I fixed this bug... Have you found any other bug, or do you have suggestions on how I can improve the engine? thanx

Let's play... = )

Ladies and gentlemen... I present you... the Predator Engine =P (clap clap clap clap) thanx... thanx =D Ok... Download it from this address and test what we've got until now. I made the following components: PLabel, PPanel, PButton, PProgressBar. No laughs please... I haven't much time = ( Test it and send me the bugs and new ideas for me to implement... Thanx a lot... to everyone

We can do something now =P

Hi my friends... After this delay in posting here... I bring some news... The PLabel class supports outline fonts, bitmap fonts and texture-mapped fonts. The PPanel class is almost done; it's working now, except for some details that I'll implement tomorrow, like caption align, borders visible, etc... The PGraphicObjectCollection is dead... it just brought me problems...

    class PPanel : public PInputEventReceiver
    {
    private:
        PLabel* FCaption;
        SColor* FBackgroundColor;
        SColor* FBorderColor;
        std::string FTextureFileName;
        unsigned int FTexture;
        unsigned int FDisplayListRectWireFrame;
        unsigned int FDisplayListRect;
        unsigned int FDisplayListRectTexture;
        int FBorderWidth;
        int FRectType;
        bool FFilled;
    public:
        PPanel(SRect* ABounds, int ARectType, IVideoDriver* AVideoDriver);
        ~PPanel(void);
        PLabel* GetCaption(void);
        void SetCaption(PLabel* AValue);
        SColor* GetBackgroundColor(void);
        void SetBackgroundColor(SColor* AValue);
        SColor* GetBorderColor(void);
        void SetBorderColor(SColor* AValue);
        std::string GetTextureFileName(void);
        void SetTextureFileName(std::string AValue);
        int GetBorderWidth(void);
        void SetBorderWidth(int AValue);
        bool GetFilled(void);
        void SetFilled(bool AValue);
        void PrepareToDraw(void);
        void Draw(void);
    };

The IVideoDriver has some new methods too:

    virtual AUX_RGBImageRec* LoadBMPRec(std::string AFileName) = 0;
    virtual void SetBlend(bool AValue) = 0;
    virtual void SetTexture(bool AValue) = 0;
    virtual unsigned int GenerateDisplayList(int AListSize) = 0;
    virtual void CallDisplayList(unsigned int ADisplayList) = 0;
    virtual void DestroyDisplayList(unsigned int ADisplayList, int ASize) = 0;
    virtual void LoadTexture(std::string AFileName, unsigned int &ATextureBuffer, int ATextureType) = 0;
    virtual void BindTexture(unsigned int ATextureBuffer) = 0;
    virtual void DrawText(std::string AText, float ASize, SColor* AColor, SCoord* ACoord, unsigned int ADisplayList, int AFontType) = 0;
    virtual unsigned int GenerateDisplayListRectWireFrame(SRect* ABounds, SColor* ABorderColor, int ABorderWidth) = 0;
    virtual unsigned int GenerateDisplayListRect(SRect* ABounds, SColor* ABackgroundColor) = 0;
    virtual unsigned int GenerateDisplayListRectTexture(SRect* ABounds, SColor* AColor) = 0;

I'm using display lists for everything; if someone has a suggestion on how I can improve the performance... tell me... I have made a little demo with two panels and one running label to show the use of layers and what we have until now. Well... I hope you like the system... I think our next step is to make some buttons, right? see ya

News... and PLabel!!!

Hi... after some days without posting here, I show you the PLabel class...

    class PLabel : public PGraphicObject
    {
    private:
        unsigned int FDisplayList;
        GLYPHMETRICSFLOAT FGlyphs[256];
        HFONT FFont;
        SColor* FColor;
        std::string FCaption;
        std::string FFontName;
        int FSize;
        int FWeight;
        bool FWireFrame;
        bool FItalic;
        bool FUnderline;
        bool FStrikeOut;
        int GetCaptionWidth(void);
        int GetCaptionHeight(void);
    public:
        PLabel(std::string ACaption, std::string AFontName, int AFontSize, SCoord* ACoord);
        ~PLabel(void);
        std::string GetCaption(void);
        void SetCaption(std::string AValue);
        SColor* GetColor(void);
        void SetColor(SColor* AValue);
        std::string GetFontName(void);
        void SetFontName(std::string AValue);
        bool GetWireFrame(void);
        void SetWireFrame(bool AValue);
        int GetSize(void);
        void SetSize(int AValue);
        int GetWeight(void);
        void SetWeight(int AValue);
        bool GetItalic(void);
        void SetItalic(bool AValue);
        bool GetUnderline(void);
        void SetUnderline(bool AValue);
        bool GetStrikeOut(void);
        void SetStrikeOut(bool AValue);
        void BuildFont(void);
        void Draw(void);
    };

This class is quite simple, just some properties... The BuildFont method has to be called before the Draw method.

Well, like the title says... I have made some changes in our mechanism... Now we have a PGraphicObjectCollection class that has a list of PGraphicObjects (=P)...

    class PGraphicObjectCollection
    {
    protected:
        PList* FObjects;
        IVideoDriver* FVideoDriver;
    public:
        PGraphicObjectCollection(IVideoDriver* AVideoDriver);
        ~PGraphicObjectCollection(void);
        virtual void Sync(void);
    };

The Sync method just distributes the FVideoDriver component (which is unique) among its FObjects... The Sync method must be called after all objects are inserted and before calling methods that use their FVideoDriver. Okay, I'll try to be faster and keep on posting every day... See ya...

Predator namespaces!!!

Hi... This is how our 2D engine looks after the namespace organization:

    namespace Predator
    {
        namespace Common  { PDefines, PList, PStack, PStructs }
        namespace System  { PDevice, PInputDriver, PKeyboardDriver, PMouseDriver }
        namespace Drawing { IVideoDriver, PGraphicObject, PInputEventReceiver, PLayer, POpenGLDriver, PWindow }
        namespace GUI     { (PLabel, PPanel, etc...) }
    }

Well... any suggestions? =D After that, I'll start the GUI programming... See ya...

A little doubt

Hi everyone... I have spent my time reading about creating my own engine versus using an existing one... I thought about it and reached no conclusion = ( I'm here to ask you: what do you think? However... my next mission is to split my engine into namespaces; tomorrow I'll show the engine's new organization. I hope you like it... PS: please give your suggestion about my doubt

PLayer (comment)

Affirmative ;) It's a Borland coding style =) and I like it...

PLayer

Hi my friends... this is my first post in 2006 =D I present you the PLayer class:

    class PLayer : public PInputEventReceiver
    {
    private:
        PList* FGraphicObjects;
        std::string FLayerID;
        SColor* FColor;
    public:
        PLayer(std::string ALayerID, SRect* ABounds, IVideoDriver* AVideoDriver);
        std::string GetLayerID(void);
        void SetLayerID(std::string AValue);
        void Draw(void);
    };

This class will be responsible for keeping all objects on the screen. I preferred to use layers because they have the ordering benefit... For now, this class just has these few methods; I have not thought of other functionality for it. The Draw method will just draw its graphic objects =D Tomorrow I'll show the PPanel class, and after that I'll need your help to know "what path I have to follow" =P Thanx... PS: I'm sorry about the delay...
Type: Posts; User: VENDETTA

I finally got it working now. Thank you very much for your time and help!!! :D

    #include <stdio.h>
    #include <conio.h>
    #include <stdlib.h>
    #include <memory.h>

Sorry... this should be a C program... I keep on getting it mixed up with C++... I am a newbie's newbie... But the combination factor is driving me crazy... there are so many possible combinations...

To determine if 2 sets of 4-digit numbers are true or false:

    Given number 1234
    Given number 1243

From the second number, if I can find at least 3 digits in the first number ->>> this gives a...

Sorry, here is what I have done so far... Please do not laugh... Somehow the strncmp() function is not working... :( Any help will be well appreciated...

Procedure to compare 2 strings with the following criteria: Hi everyone - I hope your day is going great. Can someone please kindly help me with the coding of the following function - I have...

Thank you so much for replying, Sahir. May I kindly ask how I may use the MS Office installed API for grammar parsing? Any example code will be much appreciated. :D

Dear All, I am a newbie here. Hi, how do you do? Nice to meet you all! :D
Scenario: Download Script

You are working as an ETL developer / SSIS developer. You need to develop an SSIS package that reads all CSV files from a folder and creates a new Excel file (with a date-time stamp in its name) to load the CSV files into. Each CSV file should be loaded into a new Excel sheet. The CSV files can have the same column structure or different ones.

Log file information: in case your SSIS package fails, a log file will be created in the same folder where your Excel file is created. The date-time stamp of the log file will match that of the Excel file the package was trying to create.

Load CSV files to Excel File Dynamically in SSIS Package by using Script Task - SSIS tutorial

Solution: We will be using a Script Task to read all the CSV files from a folder and then load them into a newly created Excel file. As the number of files can vary, we need to use a loop in the Script Task to get each file and load it dynamically.

Step 1: Create Variables in SSIS Package

We are going to create the following variables:

FileDelimiter: provide the delimiter, such as comma (,) or pipe (|), whatever your files are using.
FileExtension: provide the extension of the files that you would like to read.
SourceFolderPath: the source folder path where the text files exist.
ExcelFileName: provide the name of the Excel file you would like to create.

Create variables in SSIS Package to load all the text files to an Excel File by using Script Task

Step 2: Add Script Task to SSIS Package and Map Variables

Bring a Script Task onto the Control Flow pane and open it by double-clicking. Add the SSIS package variables to it so we can use them inside.

Use SSIS Package variables in Script Task to write all text files to an Excel File dynamically in SSIS Package

Step 3: Add Script to Script Task Editor in SSIS Package to Load Each CSV File to an Excel Sheet in the Excel File

Click the Edit button and it will open the Script Task editor. Under #region Namespaces, I have added the code below:

    using System.IO;
    using System.Data.OleDb;

Under public void Main() {, I have added the code below.
    string datetime = DateTime.Now.ToString("yyyyMMddHHmmss");

    // Read the package variables. Only the ExcelFileName read survives in the
    // source fragment; the other reads are reconstructed following the same
    // pattern and the variable names from Step 1.
    string ExcelFileName = Dts.Variables["User::ExcelFileName"].Value.ToString();
    string SourceFolderPath = Dts.Variables["User::SourceFolderPath"].Value.ToString();
    string FileExtension = Dts.Variables["User::FileExtension"].Value.ToString();
    string FileDelimiter = Dts.Variables["User::FileDelimiter"].Value.ToString();

    string CreateTableStatement = "";
    string ColumnList = "";

    try
    {
        // Reading file names one by one
        string SourceDirectory = SourceFolderPath;
        string[] fileEntries = Directory.GetFiles(SourceDirectory, "*" + FileExtension);
        foreach (string fileName in fileEntries)
        {
            // Connection string for the Excel file being created. The start of
            // this statement is truncated in the source; the ACE provider is
            // implied by the "Excel 12.0 Xml" extended properties.
            string connString = "Provider=Microsoft.ACE.OLEDB.12.0;"
                + "Data Source=" + SourceFolderPath + "\\" + ExcelFileName + "_" + datetime + ";"
                + "Extended Properties=\"Excel 12.0 Xml;HDR=YES;\"";

            OleDbConnection Excel_OLE_Con = new OleDbConnection();
            OleDbCommand Excel_OLE_Cmd = new OleDbCommand();

            // ... (the per-file logic that reads the CSV with FileDelimiter,
            // builds CreateTableStatement and ColumnList, creates the sheet
            // and inserts the rows is truncated in the source) ...
        }

        Dts.TaskResult = (int)ScriptResults.Success;
    }
    catch (Exception exception)
    {
        // Create a log file with the same timestamp as the Excel file and
        // fail the task.
        using (StreamWriter sw = File.CreateText(SourceFolderPath + "\\" + ExcelFileName + "_" + datetime + ".log"))
        {
            sw.WriteLine(exception.ToString());
            Dts.TaskResult = (int)ScriptResults.Failure;
        }
    }
http://www.techbrothersit.com/2016/03/how-to-load-all-csv-files-to-excel.html
CC-MAIN-2019-04
refinedweb
500
61.56
Injecting Client-Side Script from an ASP.NET Server Control

Scott Mitchell
August 2003

Applies to: Microsoft® ASP.NET

Prerequisites: This article assumes the reader is familiar with ASP.NET.

Level of Difficulty: 2

Summary: While, technically, all of an ASP.NET server control's functionality can be performed on the server side, often the usability of a server control can be greatly enhanced by adding client-side script. In this article we'll examine two means by which server controls can emit client-side script. We'll also build two server controls that utilize these techniques: PopupGreeting, a server control that displays a client-side, modal dialog box with a specified message on a Web page's first load, and ConfirmButton, which is an enhanced Button Web control that, when clicked, prompts the user with a JavaScript confirm() dialog box before posting back the Web Form. (11 printed pages)

Download InjectingClientSideScript.msi.

Contents

Introduction
Adding Client-Side Script Blocks with RegisterStartupScript() and RegisterClientScriptBlock()
Examining IsStartupScriptRegistered() and IsClientScriptBlockRegistered()
Emitting Client-Side Script Blocks from an ASP.NET Server Control
Emitting HTML Attributes for an ASP.NET Server Web Control
Conclusion

Introduction

While, technically, all of a Microsoft® ASP.NET server control's functionality can be performed on the server side, often the usability of a server control can be greatly enhanced by adding client-side script. For example, the ASP.NET validation Web controls perform all validation checks on the server side. However, for uplevel browsers, the validation Web controls also emit client-side script so that validation can be performed on the client side as well. This means that users of those browsers get a more responsive, dynamic experience. When developing ASP.NET server controls you should ask yourself how you could enhance their usability through the use of client-side script. Once you have identified these areas, all that remains is to augment the server control so that it emits the proper client-side script. There are two types of client-side script that ASP.NET server controls can emit:
Once you have identified these areas, all that remains is to augment the server control so that it emits the proper client-side script. There are two types of client-side script ASP.NET server controls can emit: Client-side script blocks are typically written in JavaScript, and usually contain functions that are executed when certain client-side events transpire. Client-side HTML attributes provide a way to tie a client-side event with a piece of client-side script. For example, the following HTML page contains a client-side script block that contains a function called doClick(). The page also contains a button—created by the <input> HTML element—that has its onclick attribute wired up to the doClick() function. That is, whenever a user clicks the button, the client-side code in the doClick() function will execute. In this example, a popup dialog box will display (Figure 1). doClick() <input> onclick <html> <body> <form> <script language="JavaScript"> <!-- function doClick() { alert("You clicked me!"); } // --> </script> <input type="button" onclick="doClick()" value="Click Me!" /> </form> </body> </html> Figure 1 shows a screenshot of this HTML page when the Click Me! button is clicked. Figure 1. Popup dialog box that displays when Click Me! Button is clicked There are a couple of things worth mentioning in the client-side script in the HTML page above. First, note that the client-side script block is encased in HTML comments (<!-- and -->). These comments are in place because old, non-script aware browsers will simply display the contents of the <script> block if it is not encased in HTML comments. Furthermore, note that the closing HTML comment in the script block has a JavaScript comment preceding it—//. This is because older versions of Netscape would throw a JavaScript parsing exception when the --> was encountered, unless it was commented out. 
Fortunately, modern browsers do not require this extra pampering, so if you are developing Web pages for an intranet or other browser-controlled environment, you need not take such precautions.

For those unfamiliar with client-side scripting, the alert(string) function simply displays a modal popup dialog box that contains the message specified by the string parameter.

HTML elements all have a number of client-side attributes (such as onclick, onmouseover, onmouseout, onfocus, onblur, and so on) that can be assigned a piece of client-side JavaScript code. For example, in the HTML page above, the <input> element's onclick attribute is wired up to the doClick() function, thereby causing the doClick() function to execute when the button is clicked. A list of JavaScript events and their associated HTML attributes can be found in the article Introduction to Dynamic HTML. For more information on client-side JavaScript, refer to the article HTML and Dynamic HTML.

In this article we will see how to emit both client-side script blocks and HTML element attributes in ASP.NET server controls. First, we'll see how to use two methods in the System.Web.UI.Page class to add client-side script blocks to an ASP.NET Web page: RegisterStartupScript() and RegisterClientScriptBlock(). Armed with this knowledge, we'll examine building a simple server control that displays a client-side popup dialog box whenever the page is loaded. After this, we'll turn our attention to adding HTML attributes to the HTML element rendered by the ASP.NET server control. Finally, we'll put all that we've learned into practice and build a ConfirmButton Web control: one that, when clicked, prompts the user with a client-side confirm dialog box that asks if they are sure they want to proceed.
Adding Client-Side Script Blocks with RegisterStartupScript() and RegisterClientScriptBlock()

The System.Web.UI.Page class contains two methods for emitting client-side script code into the HTML rendered by the ASP.NET Web page:

- RegisterStartupScript(key, script)
- RegisterClientScriptBlock(key, script)

Both of these methods take two strings as input. The second parameter, script, is the client-side script (including the opening and closing <script> tags) to insert into the page. The first parameter, key, serves as a unique identifier for the inserted client-side script. The only difference between these two methods is where each one emits the script block. RegisterClientScriptBlock() emits the script block at the beginning of the Web Form (right after the <form runat="server"> tag), while RegisterStartupScript() emits the script block at the end of the Web Form (right before the </form> tag).

To better understand why there are two different methods for emitting client-side script, realize that client-side script can be partitioned into two classes: code that is designed to run immediately when the page is loaded, and code that is designed to run when some client-side event occurs. A common example of code that is designed to run when the page is loaded is client-side code designed to set the focus to a textbox. For example, when you visit Google, a small bit of client-side code is executed when the page is loaded to automatically set the focus to the search textbox. An example of code that is designed to run in response to a client-side event can be seen below. Specifically, in this example, a popup dialog box displays when a button is clicked:

    <html>
    <body>
      <form>
        <script language="JavaScript">
        <!--
          function displayPopup() {
            alert("Hello, world.");
          }
        // -->
        </script>

        <input type="button" value="Click Me!" onclick="displayPopup()" />
      </form>
    </body>
    </html>

Here, the onclick="displayPopup()" in the <input> tag indicates that when the button is clicked, the JavaScript function displayPopup() should run.
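The placement difference just described can be modeled with a small sketch. The renderer below is a hypothetical stand-in for the Web Form's rendering, not ASP.NET code: blocks registered the RegisterClientScriptBlock() way land right after the opening <form> tag, while blocks registered the RegisterStartupScript() way land right before the closing </form> tag.

```javascript
// Hypothetical stand-in for the Web Form's rendering: "client script blocks"
// go right after the opening <form> tag, "startup scripts" go right before
// the closing </form> tag.
function renderForm(bodyHtml, clientScriptBlocks, startupScripts) {
  return '<form>\n' +
         clientScriptBlocks.join('\n') + '\n' +
         bodyHtml + '\n' +
         startupScripts.join('\n') + '\n' +
         '</form>';
}

var page = renderForm(
  // The form body declares the textbox...
  '<input type="text" id="myTextBox" />',
  // ...an event-handler block may safely appear before it...
  ['<script>function doClick() { /* runs on demand */ }</script>'],
  // ...but a startup script that touches the textbox must come after it.
  ['<script>document.getElementById("myTextBox").focus();</script>']
);
```

In the assembled page, the event-handler function is defined before the form body, while the startup script appears only after the textbox it references has been declared, which is exactly why the two registration methods emit at different positions.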
onclick="displayPopup()" displayPopup() The RegisterStartupScript() method is useful for adding script blocks that are designed to run when the page is loaded. The script blocks added via this method appear at the end of the Web Form because the HTML element the script modifies must be defined prior to the script running. That is, if you want to use client-side script to set the focus to a textbox, you must make certain that the textbox's HTML markup appears before the script that sets the textbox's focus. For example, the following HTML will display a textbox and set the focus to the textbox: <input type="text" id="myTextBox" /> <script language="JavaScript"> <!-- document.getElementById("myTextBox").focus(); // --> </script> Whereas the following HTML will not set the focus to the textbox, because the textbox is defined after the script block: <script language="JavaScript"> <!-- document.getElementById("myTextBox").focus(); // --> </script> <input type="text" id="myTextBox" /> Therefore, the RegisterStartupScript() method places the <script> block at the end of the Web Form to ensure that all HTML elements in the Web Form have been declared by the time the client-side script is executed. The RegisterClientScriptBlock() method should be used for script code that executes in response to a client-side event. The script blocks emitted by this method are emitted at the start of the Web Form since it is not imperative that the script blocks be placed after all of the HTML elements. In addition to the RegisterStartupScript() and RegisterClientScriptBlock() methods, the Page class contains two helper methods commonly used when emitting client-side script: Page Recall that when inserting a client-side script block with either RegisterStartupScript() or RegisterClientScriptBlock(), a key is provided that uniquely identifies the script block. 
These methods, both of which take in a single input (a string key) and return a Boolean value, indicate whether or not a script block with the specified key has already been added to the page. Specifically, the methods return True if a script block with the specified key has already been registered, and False otherwise.

To understand the utility of these two methods, consider the ASP.NET validation Web controls (RequiredFieldValidator, RegularExpressionValidator, and so on). These controls rely on a common validation JavaScript file, WebValidation.js, which is found in the aspnet_client/system_web/version_number directory of an ASP.NET Web application. Therefore, each of these controls emits an identical script block that calls the appropriate JavaScript function defined in the WebValidation.js file to start the client-side validation process. These controls accomplish this by using the Page class's RegisterClientScriptBlock() method, using the key ValidatorIncludeScript.

Next consider what happens when there are multiple validation Web controls on a single ASP.NET Web page. Each of these Web controls wants to emit an identical script block with an identical key. If the RegisterClientScriptBlock() or RegisterStartupScript() method is called twice with the same key, the second call is considered a duplicate script block and is ignored. Therefore, even with multiple validation controls on a single Web page, only one instance of the common script block will be emitted. However, realize that all of the validation Web controls other than the first one that rendered will have wasted their time in building up the common client-side script to be emitted. This is where the IsStartupScriptRegistered() and IsClientScriptBlockRegistered() methods come in handy. Rather than take the time to construct the client-side code to be emitted, the validation Web controls first check to see if there already exists a script block registered with the key ValidatorIncludeScript. If there is, then the control can bypass construction of the client-side script block, as it has already been completed by some other validation control on the page.
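This check-then-register pattern can be modeled in a few lines of JavaScript. The registry below is a hypothetical model of the page's script-block bookkeeping, not ASP.NET code: a second registration under an existing key is a no-op, and the isRegistered() check lets a control skip building the shared block at all.

```javascript
// Hypothetical model of the page's script-block registry: a second
// registration under the same key is ignored, and a control can ask whether
// a key is present before building its (possibly expensive) script block.
function ScriptRegistry() {
  this.blocks = {};   // key -> script text
  this.order = [];    // keys in emission order
}
ScriptRegistry.prototype.register = function (key, script) {
  if (this.isRegistered(key)) return;   // duplicate key: the call is ignored
  this.blocks[key] = script;
  this.order.push(key);
};
ScriptRegistry.prototype.isRegistered = function (key) {
  return Object.prototype.hasOwnProperty.call(this.blocks, key);
};

// Two validation controls that share one common block: only the first one
// actually builds and registers the script.
var registry = new ScriptRegistry();
['control1', 'control2'].forEach(function () {
  if (!registry.isRegistered('ValidatorIncludeScript')) {
    registry.register('ValidatorIncludeScript',
                      '<script src="WebValidation.js"></script>');
  }
});
```

After both controls have rendered, the registry holds a single copy of the common block, mirroring how only one instance of the validation script is emitted no matter how many validators the page contains.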
Rather than take the time to construct the client-side code to be emitted, the validation Web controls first check to see if there already exists a script block registered with the key ValidatorIncludeScript. If there is, then the control can bypass construction of the client-side script block, as it has already been completed by some other validation control on the page. IsClientScriptBlock() IsStartupScript() Therefore, whenever constructing client-side script, it is always wise to first call the IsClientScriptBlock() or IsStartupScript()method to determine if generating the client-side script is necessary. We'll see examples of using the IsClientScriptBlock() and IsStartupScript()methods in tandem with RegisterClientScriptBlock() and RegisterStartupScript() in the next section. Keep in mind that the RegisterStartupScript() and RegisterClientScriptBlock() methods are methods of the System.Web.UI.Page class. Fortunately, it is easy to call these methods from an ASP.NET server control because the System.Web.UI.Control class, the class from which all ASP.NET server controls are either directly or indirectly derived, has a property called Page that contains a reference to the Page instance, which contains the server control. Therefore, in order to add a client-side script block from an ASP.NET server control, all you have to do is use the following syntax: System.Web.UI.Control this.Page.RegisterClientScriptBlock(key, script); Typically adding client-side script blocks is a task handled in the OnPreRender() method, which is the method that executes during the pre-rendering stage of the control's lifecycle. OnPreRender() Let's create an ASP.NET server control that simply displays a client-side popup dialog box. This example will illustrate how easy it is to build a control that emits client-side script. Start by creating a new Web Control Library project in Microsoft® Visual Studio® .NET. 
This will create a new project with a single class that is derived from System.Web.UI.WebControls.WebControl. However, we want to have this class derived from the System.Web.UI.Control class instead. To understand why, understand that the WebControl class was designed to support server controls that render as HTML elements, while the Control class was designed for server controls that do not result in a rendered HTML element.

Most of the built-in ASP.NET server controls emit an HTML element. For example, the TextBox Web control emits an <input> element with its type attribute set to text; the DataGrid Web control emits a <table> element, with <tr> elements for each record to be displayed and <td> columns for each field. However, not all server controls necessarily emit an HTML element. For example, the Literal control merely outputs its Text property as-is, without wrapping it in an HTML element. Similarly, the Repeater does not encase its output in an HTML element. Those server controls that render as an HTML element (TextBox, Button, DataGrid, and so on) are derived from the System.Web.UI.WebControls.WebControl class, whereas those controls that do not produce an HTML element (Literal, Repeater, and so on) are derived from the System.Web.UI.Control class.

Since the server control we'll be creating has no visual aspect (it merely emits a client-side script block that displays a popup dialog box), it would be best for this control to be derived from System.Web.UI.Control as opposed to System.Web.UI.WebControls.WebControl. This control will need only two properties:

- PopupMessage: the message to display in the client-side popup dialog box
- Enabled: whether or not the popup dialog box is displayed

In addition to these two properties, we need to override the OnPreRender() method. Here, we need to make a call to RegisterStartupScript(), passing in a key unique to the control and the suitable client-side script to display the popup dialog box. The complete code for this class can be seen below:
The complete code for this class can be seen below: using System; using System.Web.UI; using System.Web.UI.WebControls; using System.ComponentModel; namespace ClientSideScript { /// <summary> /// Summary description for WebCustomControl1. /// </summary> [DefaultProperty("Text"), ToolboxData("<{0}:PopupGreeting runat=server></{0}:PopupGreeting>")] public class PopupGreeting : System.Web.UI.Control { [Bindable(true), Category("Appearance"), DefaultValue("")] public string PopupMessage { get { // See if the item exists in the ViewState object popupMessage = this.ViewState["PopupMessage"]; if (popupMessage != null) return this.ViewState["PopupMessage"].ToString(); else return "Welcome to my Web site!"; } set { // Assign the ViewState variable ViewState["PopupMessage"] = value; } } [Bindable(true), Category("Appearance"), DefaultValue("")] public bool Enabled { get { // See if the item exists in the ViewState object enabled = this.ViewState["Enabled"]; if (enabled != null) return (bool) this.ViewState["Enabled"]; else return true; } set { // Assign the ViewState variable ViewState["Enabled"] = value; } } protected override void OnPreRender(EventArgs e) { base.OnPreRender(e); string scriptKey = "intoPopupMessage:" + this.UniqueID; if (!Page.IsStartupScriptRegistered(scriptKey) && this.Enabled && !Page.IsPostBack) { string scriptBlock = @"<script language=""JavaScript""> <!-- alert(""%%POPUP_MESSAGE%%""); // --> </script>"; scriptBlock = scriptBlock.Replace("%%POPUP_MESSAGE%%", this.PopupMessage); Page.RegisterStartupScript(scriptKey, scriptBlock); } } } } Take note of these two things: first, the properties Enabled and PopupMessage are saved in the ViewState. This allows these values to be persisted across postbacks. Next, in the OnPreRender() method, the key used for the script block is the text intoPopupMessage: concatenated with the control's UniqueID property. 
If a single, hard-coded key were used, then, if there were multiple controls on the page, only the first control would be able to register its script block, so only one popup dialog box would be displayed. By using the UniqueID in the script block key, each instance of this control is guaranteed to get its script block in. ViewState intoPopupMessage: UniqueID Before registering the script block, the code first checks three conditions: IsStartupScriptRegistered() If these three conditions pass, then the script is specified and the PopupMessage property value is inserted into the script in the proper location. Finally, the Page property's RegisterStartupScript() method is called, passing in the key and script code. The PopupGreeting code is available in a download at the end of this article. This download includes the Visual Studio .NET Solution named ClientSideControlsAndTester, which contains two projects: The compiled assembly for the ClientSideControls project is named ClientSideControls.dll. To use the PopupGreeting server control in your own ASP.NET Web application, add the ClientSideControls.dll file to your Web application's References. Next, in the Designer, right-click on the Toolbox and choose Add/Remove Items . . .. Again, select the ClientSideControls.dll file. This will add a new item to the Toolbox titled PopupGreeting. You can then drag and drop the control from the Toolbox onto the Designer. ClientSideControls.dll Figure 2 shows a screenshot of Visual Studio .NET after the PopupGreeting control has been added to the Toolbox and then added to the Designer. The PopupGreeting control in the Toolbox is circled in red, the PopupGreeting output in the Designer is circled in blue, and the properties of the PopupGreeting can be seen in the Properties pane in the right-hand side of the screenshot. Figure 2. 
The PopupGreeting Server Control has been added to an ASP.NET Web form page

Recall that there are two ways to emit client-side script through a server control:

- Adding client-side script blocks to the rendered page.
- Adding client-side script through the attributes of the HTML elements the control renders.

In the previous section we examined how to add client-side script blocks to an ASP.NET Web page using the Page class's RegisterStartupScript() and RegisterClientScriptBlock() methods. In this final section we'll see how to add HTML element attributes to the HTML element rendered by the server control.

Before we begin, realize that typically this approach will only be used for server controls that are derived from the System.Web.UI.WebControls.WebControl class, as controls derived from this class emit some HTML element. Server controls that do not emit an HTML element (like the PopupGreeting server control from the previous section) never need to write out HTML element attributes, because they do not write out an HTML element to begin with.

The WebControl class contains a method for adding HTML element attributes to the HTML element being emitted by the Web control. This method is called AddAttributesToRender() and has a single input parameter, an HtmlTextWriter instance. To add HTML attributes to the Web control you can use one of these two methods of the HtmlTextWriter:

- AddAttribute()
- AddStyleAttribute()

The AddAttribute() method adds an HTML attribute like title, class, style, onclick, and so on to the HTML element. AddStyleAttribute(), on the other hand, adds style settings to the HTML element, like background-color, color, font-size, and so on.

AddAttribute() has a few overloaded forms, but in the code we'll examine we'll use the following form: AddAttribute(HtmlTextWriterAttribute, value). The first parameter, HtmlTextWriterAttribute, needs to be a member of the HtmlTextWriterAttribute enumeration. This enumeration contains items like Align, Bgcolor, Class, Onclick, and so on.
You can see a complete listing in the .NET Framework Class Library, HtmlTextWriterAttribute Enumeration. The value input parameter specifies the value assigned to the specified HTML attribute. Finally, if you want to add an HTML attribute that is not defined in the HtmlTextWriterAttribute enumeration, you can use an alternate form of the AddAttribute() method, AddAttribute(attributeName, value). Here, both attributeName and value are strings.

To apply this information, let's create a server Web control that renders as a confirm button. A confirm button is a submit button that, when clicked, displays a popup dialog box asking the user if they're certain they want to continue. This gives the user a chance to click Cancel, which has the effect of not submitting the form. Such functionality is particularly useful when there are buttons to delete information; nothing can be more upsetting to an end user (or Web site administrator) than to have deleted an item from a database due to an accidental and unfortunate mouse click.

To save ourselves a lot of work, we can have the ConfirmButton Web control derive from the System.Web.UI.WebControls.Button class, since this class already does all the heavy lifting involved with rendering a submit button. All that we need to do in our derived class is add a property so that the user can specify the confirmation message, override the Button's AddAttributesToRender() method, and then add an attribute to handle the onclick client-side event.

Start by creating a new Web Control Library project in Visual Studio .NET, or add a new Web Custom Control to the ClientSideControls project.
The complete source code for the ConfirmButton class can be seen below:

```csharp
using System;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.ComponentModel;

namespace ClientSideControls
{
    /// <summary>
    /// Summary description for ConfirmButton.
    /// </summary>
    [DefaultProperty("Text"),
     ToolboxData("<{0}:ConfirmButton runat=server></{0}:ConfirmButton>")]
    public class ConfirmButton : Button
    {
        [Bindable(true), Category("Appearance"), DefaultValue("")]
        public string PopupMessage
        {
            get
            {
                // See if the item exists in the ViewState
                object popupMessage = this.ViewState["PopupMessage"];
                if (popupMessage != null)
                    return this.ViewState["PopupMessage"].ToString();
                else
                    return "Are you sure you want to continue?";
            }
            set
            {
                // Assign the ViewState variable
                ViewState["PopupMessage"] = value;
            }
        }

        protected override void AddAttributesToRender(HtmlTextWriter writer)
        {
            base.AddAttributesToRender(writer);

            string script = @"return confirm(""%%POPUP_MESSAGE%%"");";
            script = script.Replace("%%POPUP_MESSAGE%%",
                                    this.PopupMessage.Replace("\"", "\\\""));

            writer.AddAttribute(HtmlTextWriterAttribute.Onclick, script);
        }
    }
}
```

The first thing to notice is that this class, ConfirmButton, is derived from the Button class. Since the Button class already contains all the properties and methods a Button Web control uses, all we have to do is add the properties and methods that make the Button, when clicked, display a confirm dialog box.

We need one property, PopupMessage, which is the message that will display in the confirm popup dialog box. By default, this message is, "Are you sure you want to continue?" If you are using the ConfirmButton for verifying deletes, you might want to change the message to something like, "This action will permanently delete the selected item. Are you sure you want to do this?"

We only need to override a single method, AddAttributesToRender().
In this method we simply construct the client-side JavaScript to execute when the rendered <input> element's onclick event fires, and then add it via the AddAttribute() method of the passed-in HtmlTextWriter object. One thing to note in this method is that we must replace all instances of double-quotes in the PopupMessage property value with escaped double-quotes (namely, \"). Also, realize that AddAttribute() by default HTML encodes the characters in the second parameter. That is, an ASP.NET Web page with a ConfirmButton whose PopupMessage property is set to "Do you want to continue?" will emit the following HTML markup:

```html
<input type="submit" name="ConfirmButton1" value="Click Me!" id="ConfirmButton1"
       onclick="return confirm(&quot;Do you want to continue?&quot;);" />
```

If you are unfamiliar with JavaScript's confirm(string) function, it simply accepts a string parameter and displays a modal dialog box with the specified string. This dialog box contains two buttons, OK and Cancel. If OK is clicked, the confirm() function returns True; otherwise it returns False.

Note that the onclick event returns the result of the confirm() function call. When a form is submitted by clicking a submit button, if the submit button's onclick event returns False, the form is not submitted. Hence, the confirm() function can be used in this manner to only submit the form if the user gives his confirmation. For more information on confirm(), see Javascript Confirm Form Submission from the ASP Warrior site.

Figure 3. The ConfirmButton in action

While ConfirmButton uses inline JavaScript in the button's onclick event handler, another option is to create a function in a client-side script block in the ConfirmButton's OnPreRender() method, and then adjust the onclick attribute to call this function.

In this article we examined two methods for injecting client-side script via an ASP.NET server control.
The first method is to insert client-side script blocks using the Page class's RegisterStartupScript() and RegisterClientScriptBlock() methods. The second method is to add client-side script to an HTML element's attributes. This is accomplished by overriding the Web server control's AddAttributesToRender() method and using the HtmlTextWriter's AddAttribute() method.

We also examined in this article two simple server controls that utilize client-side script to improve their functionality. The PopupGreeting control simply displays a modal popup dialog box when the page is first loaded. Similarly, the ConfirmButton Web control prompts the user to confirm that they wish to continue when they submit a form by clicking on the button.

You can greatly improve the user's experience by inserting client-side script into your custom server controls. While the two server controls examined in this article were relatively simple and won't win any awards for usability or ingenuity, at MetaBuilders.com there is an impressive display of the capabilities that can be realized with client-side script injection from an ASP.NET server control. Specifically, at MetaBuilders.com you can find server controls that automatically add the focus to a textbox, move items between two drop-down lists, add and remove items from a drop-down list, display parent-child related data in a series of drop-down lists, and on and on. Best of all, these controls are free and include the complete source code.
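As a recap of the client-side half of the ConfirmButton pattern, here is a small plain-JavaScript sketch of the submit-guard logic. The makeSubmitGuard helper is hypothetical (the real control emits the confirm() call inline in the onclick attribute), and the confirm function is injected so the logic can run outside a browser:

```javascript
// A sketch of the submit-guard logic behind ConfirmButton's onclick attribute.
// In a browser, confirmFn would be window.confirm; here it is injected so the
// behavior can be exercised outside a browser.
function makeSubmitGuard(message, confirmFn) {
  // Returns what the onclick handler would return:
  // true lets the form submit, false cancels the submission.
  return function onClickHandler() {
    return confirmFn(message);
  };
}

// Simulate a user clicking OK, then a user clicking Cancel.
const okGuard = makeSubmitGuard("Are you sure you want to continue?", () => true);
const cancelGuard = makeSubmitGuard("Are you sure you want to continue?", () => false);

console.log(okGuard());     // true: the form submits
console.log(cancelGuard()); // false: the form is not submitted
```

Since the browser only submits the form when the handler returns true, the Cancel button effectively vetoes the postback, which is the whole point of the control.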
http://msdn.microsoft.com/en-us/library/aa478975.aspx
📣 This post originally appeared on ReedBarger.com.

In this tutorial, we'll walk through the steps involved in creating our overall app functionality. Let's get started!

Want to build amazing apps with React like this one? Join the real-world React app course series. In it, you'll learn how to build an impressive, full-stack React project every month from scratch.

Step 1: Model our Data and Create our Database

Our application consists of two major parts, our Node backend and our React frontend.

Our backend is going to be responsible for things like authentication and authorization to log in users and make sure they can access the right content. It will also be responsible for providing our video data (i.e. the video itself and whether we have liked or disliked it) and user-related data (i.e. each user's profile).

The backend is going to do all these things by interacting with our database. The database that we're going to be using is the SQL database Postgres.

Responsible for modeling that data (telling our database what data it is going to store) is a tool called Prisma. Prisma is what's known as an ORM, or object-relational mapper. It does the work of managing how our data is structured in our database, including the relationships all of our data shares, through models.

Our app will consist of six primary data models: User, Comment, Subscription, Video, VideoLike, and View.
You can see the final version of our schema below:

```prisma
// prisma.schema

datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}

generator client {
  provider = "prisma-client-js"
}

model User {
  id           String         @id @default(uuid())
  createdAt    DateTime       @default(now())
  username     String
  email        String         @unique
  avatar       String         @default("")
  cover        String         @default("")
  about        String         @default("")
  videos       Video[]
  videoLikes   VideoLike[]
  comments     Comment[]
  subscribers  Subscription[] @relation("subscriber")
  subscribedTo Subscription[] @relation("subscribedTo")
  views        View[]
}

model Comment {
  id        String   @id @default(uuid())
  createdAt DateTime @default(now())
  text      String
  userId    String
  videoId   String
  user      User     @relation(fields: [userId], references: [id])
  video     Video    @relation(fields: [videoId], references: [id])
}

model Subscription {
  id             String   @id @default(uuid())
  createdAt      DateTime @default(now())
  subscriberId   String
  subscribedToId String
  subscriber     User     @relation("subscriber", fields: [subscriberId], references: [id])
  subscribedTo   User     @relation("subscribedTo", fields: [subscribedToId], references: [id])
}

model Video {
  id          String      @id @default(uuid())
  createdAt   DateTime    @default(now())
  title       String
  description String?
  url         String
  thumbnail   String
  userId      String
  user        User        @relation(fields: [userId], references: [id])
  videoLikes  VideoLike[]
  comments    Comment[]
  views       View[]
}

model VideoLike {
  id        String   @id @default(uuid())
  createdAt DateTime @default(now())
  like      Int      @default(0)
  userId    String
  videoId   String
  user      User     @relation(fields: [userId], references: [id])
  video     Video    @relation(fields: [videoId], references: [id])
}

model View {
  id        String   @id @default(uuid())
  createdAt DateTime @default(now())
  userId    String?
  videoId   String
  user      User?    @relation(fields: [userId], references: [id])
  video     Video    @relation(fields: [videoId], references: [id])
}
```

Each of these models includes various properties with their associated data types.
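To make the relations concrete, here is a hypothetical sketch (plain JavaScript with made-up values, not real query output) of the shape Prisma Client returns for a video fetched with include: { user: true }:

```javascript
// Hypothetical record shaped after the Video and User models above.
// A query like prisma.video.findUnique({ where: { id }, include: { user: true } })
// resolves to an object of roughly this shape.
const video = {
  id: "video-1",
  createdAt: new Date("2021-01-01T00:00:00Z"),
  title: "My first upload",
  description: null, // description is String?, so it may be null
  url: "https://example.com/video.mp4",
  thumbnail: "https://example.com/thumb.jpg",
  userId: "user-1", // foreign key to the User model
  user: {
    // the included relation: the video's author
    id: "user-1",
    username: "alice",
    email: "alice@example.com",
    avatar: "",
    cover: "",
    about: "",
  },
};

// The relation fields line up: userId on the video points at user.id.
console.log(video.userId === video.user.id); // true
```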
In the first column of each model are the different fields, or individual properties, that each model consists of, such as the id (a unique identifier) or the createdAt timestamp recording when the database created a given entry. You can think of each model as a special type of JavaScript object with special properties that we are creating in our schema.

If we look at the second column, we can see what the data type of each field must be. These values largely correspond to normal JavaScript types: strings, integers, and dates. Associated types can also be other data models. For example, looking at our User model, we see it has a videos field, which has a data type of Video[], meaning it is an array of data type Video. This makes sense: every user can logically have multiple videos that they've created. The same applies to their likes, comments, subscribers, users to which they've subscribed, and their video views.

Step 2: Create Auth, Video, and User Routes

Now that we have our schema created, we can create the business logic for our backend. We're going to be using Node with the library Express to build our backend. Express makes it very easy to build powerful APIs, which is exactly what we need for our YouTube app.

The largest part of our API will be the routes, or individual endpoints, to which our React app will be making requests for data. We will have separate routing for authentication, video, and user-related resources.

I won't go through all of the individual routes that we need to create, but just to give you an idea of what one of them looks like, let's take a look at the video-related routes.

```javascript
// server/src/routes/video.js
import { PrismaClient } from "@prisma/client";
import express from "express";

const prisma = new PrismaClient();

function getVideoRoutes() {
  const router = express.Router();

  router.get("/", getRecommendedVideos);
  router.get("/trending", getTrendingVideos);
  // ... many more routes omitted

  return router;
}

export async function getVideoViews(videos) {
  for (const video of videos) {
    const views = await prisma.view.count({
      where: {
        videoId: {
          equals: video.id,
        },
      },
    });
    video.views = views;
  }
  return videos;
}

async function getRecommendedVideos(req, res) {
  let videos = await prisma.video.findMany({
    include: {
      user: true,
    },
    orderBy: {
      createdAt: "desc",
    },
  });

  if (!videos.length) {
    return res.status(200).json({ videos });
  }

  videos = await getVideoViews(videos);

  res.status(200).json({ videos });
}

async function getTrendingVideos(req, res) {
  let videos = await prisma.video.findMany({
    include: {
      user: true,
    },
    orderBy: {
      createdAt: "desc",
    },
  });

  if (!videos.length) {
    return res.status(200).json({ videos });
  }

  videos = await getVideoViews(videos);
  videos.sort((a, b) => b.views - a.views);

  res.status(200).json({ videos });
}
```

We use express.Router to append all of our subroutes to the main route (/api/v1/videos) using the function getVideoRoutes. We create an individual route by specifying what type of request can be made to it with the appropriate method: get, put, or delete. We pass to that method the endpoint we want our frontend to make the request to, as well as a function to handle any incoming requests to that endpoint.

These functions, placed below our routes and used to handle requests for each of our API endpoints, are commonly known as controllers. You can see some of the controllers that we're using here, such as getRecommendedVideos or getTrendingVideos. Their names make clear what function they perform. For example, if our React app makes a GET request to /api/v1/videos/, the getRecommendedVideos controller responds with the user's recommended videos.

Note that within each controller, we are using PrismaClient to interact with our database; the client was generated based off of the prisma.schema file we created.
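The per-video view counting that getVideoViews performs against the View table can be pictured without a database. The helper below is a hypothetical stand-in (not part of the project) that mirrors the same idea with plain arrays:

```javascript
// Stand-in for the View table: each entry records which video was watched.
const views = [
  { videoId: "v1" },
  { videoId: "v1" },
  { videoId: "v2" },
];

// Mirrors getVideoViews: attach a views count to each video object.
function countVideoViews(videos, views) {
  for (const video of videos) {
    video.views = views.filter((view) => view.videoId === video.id).length;
  }
  return videos;
}

const videos = countVideoViews(
  [{ id: "v1" }, { id: "v2" }, { id: "v3" }],
  views
);

console.log(videos.map((video) => video.views)); // [ 2, 1, 0 ]
```

The real controller does the counting with prisma.view.count per video; sorting the result by views is then exactly what getTrendingVideos does.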
For our getRecommendedVideos controller, we use the findMany method to get many videos (an array of them), where the user data for each video is included (with the include operator for the user field). And we are ordering the results by the createdAt field from newest to oldest (with desc, or descending, order).

Step 3: Protect Auth Routes with Middleware

In addition to our controllers, there is some important middleware that we need to associate with some of our routes. What is middleware? Middleware are functions that run before another function to provide a value or perform an action. In our case, middleware will run before our controller function for each route.

When a user wants to get videos that they've liked, we first need to write some middleware that will get the current user before our controller attempts to respond with the user data.

```javascript
// server/src/routes/user.js
import { PrismaClient } from "@prisma/client";
import express from "express";
import { protect } from "../middleware/authorization";

const prisma = new PrismaClient();

function getUserRoutes() {
  const router = express.Router();

  router.get("/liked-videos", protect, getLikedVideos);

  return router;
}
```

The protect middleware is placed before getLikedVideos, which means it will run first.
The code for the protect function is provided below:

```javascript
// server/src/middleware/authorization.js
import { PrismaClient } from "@prisma/client";
import jwt from "jsonwebtoken";

const prisma = new PrismaClient();

export async function protect(req, res, next) {
  if (!req.cookies.token) {
    return next({
      message: "You need to be logged in to visit this route",
      statusCode: 401,
    });
  }

  try {
    const token = req.cookies.token;
    const decoded = jwt.verify(token, process.env.JWT_SECRET);

    const user = await prisma.user.findUnique({
      where: {
        id: decoded.id,
      },
      include: {
        videos: true,
      },
    });

    req.user = user;
    next();
  } catch (error) {
    next({
      message: "You need to be logged in to visit this route",
      statusCode: 401,
    });
  }
}
```

In our protect middleware function, if we don't have a user or if the user has an invalid JSON Web Token, we use the next function to respond to the client with a 401 error. A 401 error code means the current user is not authorized to access a particular resource they are requesting.

Otherwise, if the user does have a valid token, we fetch them with our Prisma Client and pass them along to our getLikedVideos controller. We can do so by adding a property to the request (req) object and then calling the next function, which hands control to the next middleware in the chain.

Middleware is essential in our application, primarily for things like authorization to get our currently authenticated user, as well as for protecting endpoints that include secure information. Middleware is also helpful for handling errors in our backend, so that we recover from them successfully and ensure our application doesn't break when there is an error.

Step 4: Create React Client Pages and Styles

Moving on to the React frontend, we can easily create our React app to consume our Node API with the help of Create React App.
To get started with Create React App, you can simply run the following command in the root of your project folder:

```
npx create-react-app client
```

After the installation is finished, we will have a React app placed in the folder client, right next to our server code in the server folder.

The first step with our React app is to set up all the individual routes for our application. These will be placed in the App.js component and correspond with the routes that YouTube has for their app:

```javascript
// client/src/App.js
import React from "react";
import { Route, Switch } from "react-router-dom";
import MobileNavbar from "./components/MobileNavbar";
import Navbar from "./components/Navbar";
import Sidebar from "./components/Sidebar";
import { useLocationChange } from "./hooks/use-location-change";
import Channel from "./pages/Channel";
import History from "./pages/History";
import Home from "./pages/Home";
import Library from "./pages/Library";
import LikedVideos from "./pages/LikedVideos";
import NotFound from "./pages/NotFound";
import SearchResults from "./pages/SearchResults";
import Subscriptions from "./pages/Subscriptions";
import Trending from "./pages/Trending";
import WatchVideo from "./pages/WatchVideo";
import YourVideos from "./pages/YourVideos";
import Container from "./styles/Container";

function App() {
  const [isSidebarOpen, setSidebarOpen] = React.useState(false);
  const handleCloseSidebar = () => setSidebarOpen(false);
  const toggleSidebarOpen = () => setSidebarOpen(!isSidebarOpen);

  useLocationChange(handleCloseSidebar);

  return (
    <>
      <Navbar toggleSidebarOpen={toggleSidebarOpen} />
      <Sidebar isSidebarOpen={isSidebarOpen} />
      <MobileNavbar />
      <Container>
        <Switch>
          <Route exact path="/" component={Home} />
          <Route path="/watch/:videoId" component={WatchVideo} />
          <Route path="/channel/:channelId" component={Channel} />
          <Route path="/results/:searchQuery" component={SearchResults} />
          <Route path="/feed/trending" component={Trending} />
          <Route path="/feed/subscriptions" component={Subscriptions} />
          <Route path="/feed/library" component={Library} />
          <Route path="/feed/history" component={History} />
          <Route path="/feed/my_videos" component={YourVideos} />
          <Route path="/feed/liked_videos" component={LikedVideos} />
          <Route path="*" component={NotFound} />
        </Switch>
      </Container>
    </>
  );
}
```

For our Router and all of our Routes, we are using the library react-router-dom, which also gives us some helpful React hooks to access values like route params (useParams) and navigate our user programmatically around the app (useHistory).

When it comes to building out the appearance of our application, we're going to be using a library called styled-components. What's very useful about styled-components is that it is a CSS-in-JS library. The benefit of a CSS-in-JS library is that we can write CSS styles in our .js files. It allows us to use React and JavaScript features that we wouldn't be able to use in a normal CSS stylesheet.

We can pass certain values to our styled components as props, just like we would with a normal React component. So here's a look at one of our styled components, where we are conditionally setting several style rules based off of the value of the prop red. As you might have guessed, by passing the prop red with a value of true to our styled Button component, we make our button the YouTube red color.
```javascript
// client/src/styles/Button.js
import styled, { css } from "styled-components";

const Button = styled.button`
  padding: 10px 16px;
  border-radius: 1px;
  font-size: 0.875rem;
  font-weight: 500;
  line-height: 1.75;
  text-transform: uppercase;
  letter-spacing: 0.02857em;

  ${(props) =>
    props.red &&
    css`
      background: ${(props) => props.theme.darkRed};
      border: 1px solid ${(props) => props.theme.darkRed};
      color: white;
    `}
`;

export default Button;
```

Here is how we would use the Button styled component we created above, with the red prop passed to it:

```javascript
// example usage:
import React from "react";
import Button from "../styles/Button";
import Wrapper from "../styles/EditProfile";

function EditProfile() {
  const [showModal, setShowModal] = React.useState(false);

  return (
    <Wrapper>
      <div>
        <Button red onClick={() => setShowModal(true)}>
          Edit Profile
        </Button>
      </div>
    </Wrapper>
  );
}
```

Another benefit of using styled components is that it gives us scoped styles. In other words, styles written within a styled component will be applied only to the component they are used in and nowhere else in our application. This is very different from normal CSS stylesheets, which, when included in your application, are global and applied to the entire app.

Step 5: Add Client Authentication with Google OAuth

The next step is to add authentication with the help of Google OAuth. This is something that's very easy to set up with the help of a library called react-google-login. It gives us both a custom hook as well as a special React component that we can use to log in our user if they have a Google account.
Below is the code used for the GoogleAuth component, which a user can press to log in immediately using a popup modal from Google:

```javascript
// client/src/components/GoogleAuth.js
import React from "react";
import { GoogleLogin } from "react-google-login";
import Button from "../styles/Auth";
import { authenticate } from "../utils/api-client";
import { SignInIcon } from "./Icons";

function GoogleAuth() {
  return (
    <GoogleLogin
      clientId="your-client-id-from-google-oauth"
      cookiePolicy="single_host_origin"
      onSuccess={authenticate}
      onFailure={authenticate}
      render={(renderProps) => (
        <Button
          tabIndex={0}
          type="button"
          onClick={renderProps.onClick}
        >
          <span className="inner">
            <SignInIcon />
          </span>
          <span>sign in</span>
        </Button>
      )}
    />
  );
}

export default GoogleAuth;
```

Step 6: Easily Fetch Data using React Query

Once we're able to authenticate our users, we can move on to creating our pages and page content and start making requests to our API endpoints.

One of the most fully-featured and simple libraries for making HTTP requests is called axios. Additionally, the way to most easily make requests across React components is with a special library called react-query.

What is very helpful about React Query is its custom React hooks, which make it possible not only to request data, but also to cache (save) the results of each query we make, preventing us from having to refetch data that is already in our local cache. In other words, React Query is a powerful data-fetching and state-management library rolled into one.

Here's a quick example of how I used React Query to request all the recommended videos for users on the homepage.
```javascript
// client/src/pages/Home.js
import axios from "axios";
import React from "react";
import { useQuery } from "react-query";
import ErrorMessage from "../components/ErrorMessage";
import VideoCard from "../components/VideoCard";
import HomeSkeleton from "../skeletons/HomeSkeleton";
import Wrapper from "../styles/Home";
import VideoGrid from "../styles/VideoGrid";

function Home() {
  const {
    data: videos,
    isSuccess,
    isLoading,
    isError,
    error,
  } = useQuery("Home", () =>
    axios.get("/videos").then((res) => res.data.videos)
  );

  if (isLoading) return <HomeSkeleton />;
  if (isError) return <ErrorMessage error={error} />;

  return (
    <Wrapper>
      <VideoGrid>
        {isSuccess
          ? videos.map((video) => <VideoCard key={video.id} video={video} />)
          : null}
      </VideoGrid>
    </Wrapper>
  );
}

export default Home;
```

If we're in a loading state, we show a loading skeleton like the YouTube app does. If there is an error, we show an error message within the page. Otherwise, if the request was successful, we show the videos that our backend recommends to our user.

Step 7: Upload and Play User Videos

For uploading our videos, we will use the library Cloudinary. We can upload a video from React to Cloudinary by using a file input, with which we'll select our video file from our computer and then make a request to the Cloudinary API, which will give us back a url once the video is uploaded to their servers. From there, the user will be able to provide their video information. Once they hit publish, we can save their video information in our database.

When it comes to displaying videos that users have created, we're going to be using an open source library called video.js. To watch an individual video, we will need to fetch the video according to its id. After that, we'll pass the url to the video.js player, which will give the user the ability to scroll through the video, make it fullscreen, and change the volume.
```javascript
// client/src/components/VideoPlayer.js
import React from "react";
import videojs from "video.js";
import "video.js/dist/video-js.css";
import { addVideoView } from "../utils/api-client";

function VideoPlayer({ video }) {
  const videoRef = React.useRef();
  const { id, url, thumbnail } = video;

  React.useEffect(() => {
    const vjsPlayer = videojs(videoRef.current);
    vjsPlayer.poster(thumbnail);
    vjsPlayer.src(url);

    // Register a view once the video finishes playing
    vjsPlayer.on("ended", () => {
      addVideoView(id);
    });
  }, [id, thumbnail, url]);

  return (
    <div data-vjs-player>
      <video controls ref={videoRef}></video>
    </div>
  );
}

export default VideoPlayer;
```

Underneath the video, the user will be able to add comments, like and dislike the video, as well as subscribe to the video author's channel. All of these different features are going to be made possible by making network requests to our appropriate API endpoints (again, using axios).

Step 8: Protect Auth Actions with a Custom Hook

Once we've created a lot of this functionality, we need to lock down some actions for users that are not authenticated. We do not want unauthenticated users to be able to create a comment or like a video, and so on. These are all actions that only authenticated users should be able to perform.

As a result, we can create a custom hook in order to protect an authenticated action. The reason for creating this hook is easy reuse across the many components that perform authenticated actions. This custom hook will be called useAuthAction.
```javascript
// client/src/hooks/use-auth-action.js
import { useGoogleLogin } from "react-google-login";
import { useAuth } from "../context/auth-context";
import { authenticate } from "../utils/api-client";

export default function useAuthAction() {
  const user = useAuth();
  const { signIn } = useGoogleLogin({
    onSuccess: authenticate,
    clientId: "your-client-id",
  });

  function handleAuthAction(authAction, data) {
    if (user) {
      authAction(data);
    } else {
      signIn();
    }
  }

  return handleAuthAction;
}
```

The handleAuthAction function is returned from our hook and accepts the function we want to execute as an argument, such as the functions to like or dislike a video. handleAuthAction accepts that function's argument as its second argument:

```javascript
// client/src/pages/WatchVideo.js
function WatchVideo() {
  const handleAuthAction = useAuthAction();

  function handleLikeVideo() {
    handleAuthAction(likeVideo, video.id);
  }

  function handleDislikeVideo() {
    handleAuthAction(dislikeVideo, video.id);
  }

  function handleToggleSubscribe() {
    handleAuthAction(toggleSubscribeUser, video.user.id);
  }

  // rest of component
}
```

If an unauthenticated user attempts one of these actions, such as creating a comment, instead of a request being made to our API they will be prompted to log in via the useGoogleLogin hook from the react-google-login library.

Step 9: Change User Channel Data

At this point we have displayed all of the videos that our users liked, their watch history, the channels that they are following, the trending videos, and much more. Finally, we are also going to display each user's channel and make it possible for them to change their user information, such as their username, bio, avatar, and cover image.

These image uploads are going to be performed once again with Cloudinary. Users will be able to select the images that they want to use as their cover and avatar images.
We're going to make requests to the Cloudinary API to get back a URL, which we will then use to update our user's information. All of these changes are going to be made in a modal that we'll build with the package @reach/dialog, which gives us a modal made with accessibility in mind that we can style as we like.

Here is the code we will use inside our modal to upload our user's images and update their channel.

```javascript
// client/src/components/EditChannelModal.js
import React from "react";
import { useSnackbar } from "react-simple-snackbar";
import Button from "../styles/Button";
import Wrapper from "../styles/EditChannelModal";
import { updateUser } from "../utils/api-client";
import { uploadMedia } from "../utils/upload-media";
import { CloseIcon } from "./Icons";

function EditChannelModal({ channel, closeModal }) {
  const [openSnackbar] = useSnackbar();
  const [cover, setCover] = React.useState(channel.cover);
  const [avatar, setAvatar] = React.useState(channel.avatar);

  async function handleCoverUpload(event) {
    const file = event.target.files[0];
    if (file) {
      const cover = await uploadMedia({
        type: "image",
        file,
        preset: "your-cover-preset",
      });
      setCover(cover);
    }
  }

  async function handleAvatarUpload(event) {
    const file = event.target.files[0];
    if (file) {
      const avatar = await uploadMedia({
        type: "image",
        file,
        preset: "your-avatar-preset",
      });
      setAvatar(avatar);
    }
  }

  async function handleEditChannel(event) {
    event.preventDefault();

    const username = event.target.elements.username.value;
    const about = event.target.elements.about.value;

    if (!username.trim()) {
      return openSnackbar("Username cannot be empty");
    }

    const user = {
      username,
      about,
      avatar,
      cover,
    };

    await updateUser(user);
    openSnackbar("Channel updated");
    closeModal();
  }

  return (
    <Wrapper>
      <div className="edit-channel">
        <form onSubmit={handleEditChannel}>
          <div className="modal-header">
            <h3>
              <CloseIcon onClick={closeModal} />
              <span>Edit Channel</span>
            </h3>
            <Button type="submit">Save</Button>
          </div>

          <div className="cover-upload-container">
            <label htmlFor="cover-upload">
              <img
                className="pointer"
                width="100%"
                height="200px"
                src={cover}
                alt="cover"
              />
            </label>
            <input
              id="cover-upload"
              type="file"
              accept="image/*"
              style={{ display: "none" }}
              onChange={handleCoverUpload}
            />
          </div>

          <div className="avatar-upload-icon">
            <label htmlFor="avatar-upload">
              <img src={avatar} alt="avatar" />
            </label>
            <input
              id="avatar-upload"
              type="file"
              accept="image/*"
              style={{ display: "none" }}
              onChange={handleAvatarUpload}
            />
          </div>

          <input
            type="text"
            placeholder="Insert username"
            id="username"
            defaultValue={channel.username}
            required
          />
          <textarea
            id="about"
            placeholder="Tell viewers about your channel"
            defaultValue={channel.about}
          />
        </form>
      </div>
    </Wrapper>
  );
}

export default EditChannelModal;
```

Step 10: Publish our App To The Web

Once we've added all the functionality that we want, we are going to use Heroku to deploy our React and Node app to the web.

First we need to add a postinstall script to our Node package.json file that will tell Heroku to automatically build our React app upon deployment:

```json
{
  "name": "server",
  "version": "0.1.0",
  "scripts": {
    "start": "node server",
    ...
    "postinstall": "cd client && npm install && npm run build"
  }
}
```

To tell our Node backend that we want to deploy it along with the React frontend on the same domain, we need to add the following bit of code where our Express app is created, after all the middleware:

```javascript
// server/src/start.js
if (process.env.NODE_ENV === "production") {
  app.use(express.static(path.resolve(__dirname, "../client/build")));

  app.get("*", function (req, res) {
    res.sendFile(path.resolve(__dirname, "../client/build", "index.html"));
  });
}
```

The above code says: if a GET request is made to our application, but not handled by our API, respond with the built version of our React client. In other words, if we're not requesting data from the backend, send the built React client to our users.
Conclusion

Hopefully this tutorial gave you some ideas about how to structure your next React project, especially if you want to build impressive apps like YouTube. If you'd like to take a look at the starting code for the project, how it is set up, along with its dependencies and file structure, you can visit the following link.

Want to build amazing React apps like this one? At the end of every month I release a special course that shows you step-by-step how to build amazing React projects just like this YouTube clone. Click here to sign up for the waiting list if you want to build real-world apps with React which look and work like the ones you use every day.

Discussion (4)

I purchased the course on this series a month or so back and it was awesome! Thanks Reed.

What course is it? It looks interesting 😊 Thanks

GitHub link not found 🙏

Thanks for sharing :)
https://dev.to/reedbarger/how-to-build-a-youtube-clone-with-react-1m27
Field hiding in Java is a somewhat confusing feature of the language, or misfeature, depending on your opinion. Consider this toy example:

public abstract class SomeSuperClass {

    public final String something = "Just for the sake of example";

    public void printSomethingNumberOfTimes(int number) {
        if (number > 0) {
            for (int i = 0; i < number; i++) {
                System.out.println(i + ". " + something);
            }
        }
    }

}

This of course serves no practical purpose other than to help me make my point. Let's extend that class like this:

public class SomeSubClass extends SomeSuperClass {

    private double something = Math.sqrt(1.0…. …

You can find lots of advice online about several different ways to keep "secrets" like API keys and passwords out of Git repositories, but I haven't been able to find concrete examples for Java programs. This article will detail those concrete examples. There are quite a few different ways to keep API keys out of Git repositories, each with its own pros and cons. Some articles explain the pros and cons, but without examples, it's a little difficult to assess which way is the best way for your particular project. Hopefully these examples I present here will help you figure…. …

Some Java programmers seem to be almost emotionally scarred by null pointer exceptions. It's no wonder then that null safety is the most advertised Kotlin feature. Turns out that Scala had null safety from the beginning, and it goes a lot deeper than providing an optional or providing nullable types. The thing is that Scala steers you to be more deliberate in what you declare as a variable and what you declare as a constant. Using IntelliJ to write Scala, you've probably seen the warning "var could be a val." Null is necessary sometimes. I don't care about anyone's regrets…

Completion suggestions are very helpful when programming Scala on an integrated development environment (IDE) like IntelliJ IDEA.
And a REPL (read-eval-print-loop) like the local Scala REPL is a great way to quickly try out things that may or may not work. Too bad there are no completion suggestions on the local Scala REPL. Actually, there are… It requires pressing a key, specifically the Tab key. It's kind of like completion with Ctrl- or Command-Space on Eclipse and NetBeans. Well, it depends on your setup. Apparently it's possible in some setups for the local Scala REPL to mostly work but lack……

Object-oriented programming (OOP) is not the perfect way to model the world in a computer program. However, it is much easier than trying to understand the intricacies of a processor chip, a virtual machine, or even an operating system. There is a lot of OOP jargon, but none of it should discourage the student. Anything worth learning is going to have a lot of jargon associated with it. The teacher ought to explain the jargon as clearly as possible. I'll come back to the jargon later on. The basic idea of OOP is quite simple: you write classes that…

Beginners in Java often ask for good projects with which to learn Java and the principles of object-oriented design. Games in general are often given as an answer. The program requirements for a game are easy to explain and the student can get immediate feedback on their program by playing the game. Here I suggest Minesweeper on the command line in particular as a very good exercise for Java beginners to learn about object-oriented programming and the benefits of separating content from presentation. Since the primary purpose of the famous Microsoft Windows Minesweeper game was to familiarize users with the…

The test-driven development (TDD) cycle consists of fail (red), pass (green) and refactor (blue). However, refactoring tends to get short shrift in most tutorials; it's barely mentioned. Or, when it's mentioned, it feels unrealistic, especially if the intended program is of little practical value.
Roman numeral arithmetic doesn’t have much practical value, but I do think it nevertheless provides a realistic example of one way that refactoring might arise in a real world project. The problem is that sometimes, especially early on in the process, there’s no need to refactor anything. You ask yourself or your teammate whether there’s any… is a composer and photographer from Detroit, Michigan. He has been working on a Java program to display certain mathematical diagrams.
https://alonso-delarte.medium.com/
Hi, I'm sorta new to C++, so I might need things explaining, but what is the best way of reading a txt file one character at a time and writing each character to a different file?

Programming is a form of art.

Ok, but how would I do that in a DLL? It comes up saying 'ofstream' undeclared.

An ofstream undeclared error probably has nothing to do with it being a DLL. Did you #include <fstream>? Did you specify the std namespace?

It sounds like you haven't included the <fstream> header file.

Originally Posted by Xinco
does it say this at the top of your code > #include <fstream>

^^ this header is needed for FILE IO ^^
WhAtHA hell Is GoInG ON

Just look at those tutorials.

Code:
#include <fstream>
using std::ios;

int main()
{
    std::fstream dll_rw("my_libs.dll", ios::binary | ios::in | ios::out);

    if (!dll_rw)
    {
        // deal with open failure
    }
    else // "else" usually not necessary
    {
        // ....read and write to DLL file....
        dll_rw.close(); // finished -- not necessary, but tidy
    }

    return 0;
}

"If you tell the truth, you don't have to remember anything" -Mark Twain

Ok, I found the answer, but now it's saying 'invalid conversion from 'char' to 'const char*''. I think it's because I'm using ifstream read_file(argument0); but I need argument0 to be an argument, so how do I fix this?

You seem to be reading a character at a time, rather than a line at a time. A const char* is a C-style string. You should, in the else statement, read each line, preferably into a string (#include <string>), then go on to the next. You should do all this while the file has a next line.
Edit, I see you want to read it a character at a time. Since a string can be treated as an array of char, and each char can be referenced with the [] operator, you can read the string, then do a for loop that goes through each char of the string and outputs them. Replace the cout with the output file stream.

Code:
std::string s = "akdnknfknef";

for(std::string::size_type i = 0; i < s.size(); ++i)
    std::cout << s[i] << std::endl;

Last edited by indigo0086; 01-08-2007 at 10:54 AM.

But I need it to be written 1 character at a time so I can change the ASCII value. Sort of like how encryption works (I think). Unless it is possible to change the ASCII value of a whole line by a set amount per character.

If you are getting errors it is generally better to post the exact error message and your code so you can get specific help. As far as reading a character at a time, I would use get() for input, then encrypt the character, then output it. That will move things along one character at a time.

My method; I think I've got that right. If not, I'll hear about it!

Code:
int main()
{
    ifstream fin("your.dll", ios::in | ios::beg | ios::binary);
    ofstream out("new.dll", ios::out | ios::binary | ios::app);

    // verify file open success

    while(!fin.eof())
    {
        char input = 0;
        fin.unsetf(ios::skipws); // turn off whitespace skipping
        fin >> input;
        // manipulate, encrypt input (1 char)
        out << input; // write encrypted byte to output file ("new.dll")
    } // end while loop

    // close files
    return 0;
}

Might want to read this.

With my method you can write one char at a time, and manipulate it as such char by char.

Actually, I have. It just continually slips my mind.
That's an app; I need the code for programming the DLL. Here's what I got:

DLL.h: (Note, this is the header)

Code:
#ifndef _DLL_H_
#define _DLL_H_

#define export extern "C" __declspec (dllexport)

#if BUILDING_DLL
# define DLLIMPORT __declspec (dllexport)
#else
# define DLLIMPORT __declspec (dllimport)
#endif

class DLLIMPORT DllClass
{
public:
    DllClass();
    virtual ~DllClass(void);
private:
};

#endif

DLLmain.cpp: (this is the DLL C++ code)

Code:
#include "dll.h"
#include <windows.h>
#include <fstream>
using namespace std;

export double func1(char argument0, char argument1)
{
    ifstream readfile; // Here I get the error "invalid conversion from 'char' to 'const char*'"
    ofstream writefile;

    readfile.open(argument0);
    writefile.open(argument1);

    char ch;
    while (readfile.get(ch))
    {
        writefile << ch;
    }

    readfile.close();
    writefile.close();
}

I also tried this for the dllmain.cpp:

Code:
#include "dll.h"
#include <windows.h>
#include <fstream>
using namespace std;

export double func1(char argument0, char argument1)
{
    ifstream read_file(argument0); // Here I get the error "invalid conversion from 'char' to 'const char*'"
    ofstream write_file(argument1);

    char ch;
    while (read_file.get(ch))
    {
        write_file << ch;
    }

    read_file.close();
    write_file.close();
}
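The thread never shows the resolution, so here is a sketch of the fix (my own code, not from the thread): the ifstream and ofstream constructors take a C string (const char*), not a single char, which is exactly what the compiler error is complaining about. The byte-shifting line stands in for the "change the ASCII value" encryption the original poster wanted, and the dllexport decoration is guarded so the same file also compiles outside Windows.

```cpp
#include <fstream>

// export macro in the spirit of the thread's dll.h, made portable
#ifdef _WIN32
#define EXPORT extern "C" __declspec(dllexport)
#else
#define EXPORT extern "C"
#endif

// The parameters are now const char* (file names), not char.
EXPORT double func1(const char* argument0, const char* argument1)
{
    std::ifstream read_file(argument0, std::ios::binary);
    std::ofstream write_file(argument1, std::ios::binary);

    char ch;
    while (read_file.get(ch))
    {
        // shift each byte by one, a stand-in for real encryption
        write_file.put(static_cast<char>(ch + 1));
    }
    return 0.0; // the thread declares double, so return something explicit
}
```

Calling func1("in.txt", "out.txt") copies in.txt to out.txt with every byte shifted by one.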
https://cboard.cprogramming.com/cplusplus-programming/87224-reading-text-files.html
Pass by value is the default behavior of C# methods, but we can change that by passing parameters by reference. Passing by reference does not create any new storage location, so let us start from the basics so beginners can also understand this concept.

What the ref parameter in C# is

The ref keyword is used to pass parameters to a method by reference in C#. A parameter passed using the ref keyword does not get the new storage location that a normal value parameter is given. When we pass a parameter by reference, the formal argument in the called method becomes an alias for the actual element in the calling method; in other words, when you pass a value by reference, you actually pass the address of that value, not a copy of it. The following example demonstrates this concept.

using System;

namespace Usingrefandoutparam
{
    class Program
    {
        public void Multiplication(ref int a)
        {
            int d = a * a;
            Console.WriteLine("Multiplication of C*C:" + " " + d);
        }

        static void Main(string[] args)
        {
            Program p = new Program();
            int c = 10;
            Console.WriteLine();
            Console.WriteLine("Value of C before passing C reference into ref method:" + " " + c);
            Console.WriteLine();
            p.Multiplication(ref c);
            Console.WriteLine();
            Console.WriteLine("Value of C after passing C reference into ref method:" + " " + c);
            Console.ReadLine();
        }
    }
}

In the preceding program we are passing the value to the Multiplication method by reference instead of passing it directly by value. Now run the program and see the output.

What the Out parameter in C# is

The out parameter returns a result back to the calling method, similar to a return value, but the result is passed back without creating a new storage location. When a formal parameter is declared as out, the corresponding actual parameter in the calling method must also be marked with out. The out parameter is useful when our method does not have a return value but we still want to get a result back from it.
Let us see the following example that will demonstrate this concept.

using System;

namespace Usingrefandoutparam
{
    class Program
    {
        public void Getresult(int a, out int b)
        {
            b = a + a; // addition result is stored into the b output parameter
        }

        static void Main(string[] args)
        {
            Program p = new Program();
            int Result;
            p.Getresult(100, out Result);
            Console.WriteLine(Result); // reading the result from the b out parameter
            Console.ReadLine();
        }
    }
}

In the preceding program we declared one out parameter, named b, in the Getresult method; it stores the result of adding the two numbers. In the calling method we pass the values and read the result through the Result variable, which receives the value stored in the b out parameter. Now run the program; the output will be 200.

Summary

I hope this article is useful for interview prospects. If you have any suggestion regarding this article then please contact me.
https://www.compilemode.com/2015/05/ref-and-out-parameters-in-c.html
This ASP.NET application shows thumbnails and photos. It does nothing more and nothing less.

Every year we go on holiday with a couple of friends. After each holiday, we want to share our digital photos. We tried mailing a CD around, but we wanted a better solution. So I volunteered to make a website where everyone can download the photos they want. I wanted the application to be as simple as possible. Just a webpage with some photos. And no database. I ended up with one webpage with two custom controls.

The webpage itself is very simple. It is just plain HTML (with some ASP.NET of course). All formatting of the page is done in CSS. Also, the code behind the page is simple:

public class _default : System.Web.UI.Page
{
    protected System.Web.UI.WebControls.Panel pnlPhotos;
    protected Photobook.Photos Photos1;
    protected BookTree BookTree1;

    private void Page_Load(object sender, System.EventArgs e)
    {
        Photos1.Holiday = Request.QueryString["Holiday"];
        Photos1.By = Request.QueryString["By"];
    }
}

In my humble opinion, just the two custom controls need some explanation. The BookTree control writes a simple tree to the webpage using <UL> and <LI> elements. The tree is the directory structure under the Photobooks directory. This last directory is a directory under the webpage. The structure is like this:

[Photobooks]
    [HolidayName]
        [Photographer]
            Thumbnails
        [Photographer]
            Thumbnails
    [HolidayName]
        [Photographer]
            Thumbnails
        [Photographer]
            Thumbnails

The photos of a particular photographer are placed in the [Photographer] directories. In the Thumbnails directory are thumbnails of the photos, with the same name as the corresponding photo in the [Photographer] directory. In the webpage, the [Photographer] directories will be displayed as links. The HREF they point to is the same page (default.aspx in my case), but with two extra options: Holiday and By. Their values are [HolidayName] and [Photographer].

The second control is called Photos.
This control just shows the images of the photos (JPGs) in a directory. If thumbnails are available in the Thumbnails directory, they will be displayed instead of the photos themselves. If a user clicks on one of the thumbnails, the corresponding picture will be displayed in its original size, so the user can save it to his own hard disk.

The controls are in the same project as the webpage. The assembly still needs an extra registration in the webpage, however. This registers the TagPrefix:

<%@ Register TagPrefix="cc1" Namespace="Photobook" Assembly="Photobook" %>

Putting the controls onto the webpage goes like this:

<cc1:BookTree ID="BookTree1" runat="server" />
<cc1:Photos ID="Photos1" runat="server" />

If you want to have the controls in the Toolbox, you can just register them. Visual Studio will put an extra reference to the DLL under References. You should delete this extra reference.

Photobook is a very simple photo book viewer with lots of limitations. The use of the application is very simple however, and that was my intention.

This is my first article on CodeProject. I don't know if I should explain the code some more, but I thought it was not necessary in this case.
http://www.codeproject.com/KB/applications/Photobook.aspx
May 2014

Ashu Sagar, Secretary-General, Association of Oil & Gas Operators (AOGO), speaks to Neeraj Dhankher on the key issues facing the oil and gas sector and the focus areas for the new government.

What, according to you, are the key issues in the oil and gas sector which need urgent attention of the new government? How can the same be resolved?

The key issue in the Oil & Gas Sector is to create the confidence that the Government can be trusted to maintain the sanctity of contract, and that it genuinely believes that expediting the work to efficiently explore and maximize production is a National Priority and in the interests of Energy Security. During the last few years the Government has done all it could to make the industry suspect Government intent on all the above counts.

During 2013, the Government had taken proactive actions, brainstorming with industry on various issues. Jointly, some solutions were proposed to issues which had been major job stoppers during the earlier years. Many of these were halfway-house solutions but indicated a sincere desire to trace a new path. Industry welcomed the effort and the Government commitment to implement the same within 2013. During a meeting with the Oil Minister in November 2013, the industry had boldly expressed a "cautious optimism", believing the Government commitments. In actual practice nothing was implemented. When one partner loses faith in the sincerity of the other partner, the key issue is obviously to restore that faith. If it is not done at the earliest, I am afraid the future of the contracts cannot be very bright.

The deferral of gas price hike by the Election Commission has met with a lot of opposition. What are your views in this regard? Do you feel the deferral could/should have been avoided?

There is a legitimate Government in position, which has all rights to take all decisions. It shall remain there till it resigns post the submission of election results by EC.
Only in case of decisions that affect the Election process is it required to get the concurrence of the Election Commission.

Before the elections were announced, the decision to implement the Rangarajan Formula for Gas prices effective April 1, 2013 was announced and notified. The decision envisaged calculation of the Gas Price according to the announced formula. The calculation based on this formula was to be carried out by a Government Department two weeks before the start of each quarter, which therefore falls on March 15, June 15, Sept 15 & December 15 of every year. The formula calculation involved no decision. It was essentially an arithmetic exercise. As no decision of any kind was envisaged, the decision to seek EC approval was misconceived and unnecessary. It seems that the Election Commission did not realize that there was no real decision involved.

On the other hand, deciding not to announce the calculation shall effectively suspend sale on the contracts, which were valid only till March 31, 2013. It would now require the Government to take a new decision, over-ruling the earlier CCEA decision that had already been notified. Now this new decision, changing prices from the Rangarajan Formula to earlier prices, may need to be approved by the EC. Thus we are in a somewhat peculiar and uncertain position today on the legal status of various issues concerning gas prices and gas sales on the suspended contracts. This was wholly unnecessary and avoidable.

As a representative body of the upstream oil and gas industry in India, how do you assess the investor mood in India to be today?

As I have already mentioned, there is a lack of comfort in the Government honouring the sanctity of contract. Gas prices are just one of the many components of this situation. There are significant issues – to name a few: the mineral oil definition and tax, and ring fencing or continuing exploration subsequent to the start of production, which make little sense in an oil importing economy.
If the Government does not think that it needs a benchmark for decision making, and is seen to uphold it, then it is unlikely that investors can feel comfortable and be brave enough to bid for new acreages.

How do you assess the development of shale gas in India so far under ONGC and OIL? Do you feel shale gas can contribute significantly to India's energy basket?

Exploring tight rocks, whether for Gas or Oil and whether Shale or Sandstone, is a new ball game. There is no magic lamp that you can rub and utter "Open Sesame" and the Genie shall start the flow of Shale Gas. It doesn't matter whether the company prospecting is ONGC or OIL or a private company, whether Indian or Foreign. If we want to explore "Tight Rocks" on a meaningful scale, we need a "mission czar" who understands the "Macro Issues" and has the authority to move across the Ministries and Governments, and bring them to a table to solve these in a time-bound manner. The "Stakeholder" and the "Societal & Environment" issues are too large to be solved through a drift, or by any one company, as seems to be happening at present.

We don't know what contribution "Indian Shale Gas" shall make to the Indian Energy Basket. We don't know when it shall make the contribution. We also don't know anyone who seems to know the answer to it.

What kind of technological innovations and interventions are required to tap the Indian sedimentary basin for oil and gas? How can the private sector contribute in this regard?

We have not yet seen a company not being able to procure a technology provided it was willing to pay the price. What is required at this time are "Operating Environment" measures like separating facilitation, regulation and policy making, providing a stable administrative and fiscal environment, and transparency in decision making where all stakeholders can sit together and find solutions.
On the other side, it is most important to make exploration competitive. It requires strong measures to cut unnecessary costs and taxes on this part of the activity, as well as de-risking the Geology. Very little has been done in these areas, and what is proposed is on a very slow burner. Apparently we are not yet appreciating the full extent of our needs.

How do you see the LNG industry in India shaping up? There has been an effort on the part of the government to involve domestic shipyards in the manufacture of LNG ships. Do you feel it is a good move?

All we can say at this time is that as long as the USA and China keep finding and producing adequate Tight Rock Hydrocarbons, the global demand pressures shall moderate and we shall be able to import energy at relatively reasonable prices.

Companies like ONGC have not been able to increase output from their oil and gas fields for so many years, resulting in large-scale import of both crude oil and gas. Do you feel domestic reserves are enough to cater to the increasing demand, or is looking for overseas equity a better option?

First of all, India has a very challenging Geology. Second, a very small fraction of it is explored at all. Third, almost nothing can be said to be very well explored. Our well densities are amongst the lowest in the world. Fourth, there was a time when current high-production economies like China were in the same boat. They made it a mission and got out of the situation by extraordinary measures. Whether we can do it or not is an open-ended question, requiring political will to bite the bullet. There is however no doubt that it is a huge challenge that requires extraordinary measures.

Equity Oil overseas is a huge relief against the Forex impact on the country and should be pursued. It does not, however, add anything to fulfilling the domestic demand. For catering to domestic demand you need a lot of money in USD to buy oil, and hope for a sustained peace and adequate global availability of Hydrocarbons to meet the global demand.
To give equity oil the flavor of a strategic reserve, we need to diversify sources, encourage private equity in the game and, most importantly, create naval capacity to guard our sea lanes. The domestic production therefore has its own unique position, which equity oil cannot replace.

As far as private capital is concerned, it shall go where the rate of return on risk capital is most attractive. At present there are more attractive destinations.
http://www.infraline.com/infraline-energy/interviews-details/20589
[Solved] Using zbar::QZBar

I created a Qt project using a barcode encoder/decoder library, libzbarqt. When I added a private zbar::QZBar *qr to mainwindow.h and initialized it by adding qr(new zbar::QZBar) to the MainWindow constructor, it didn't work. Running it gives:

Starting /home/lily/program/qt/build-TestQZBar-Desktop-Debug/TestQZBar...
The program has unexpectedly finished.
/home/lily/program/qt/build-TestQZBar-Desktop-Debug/TestQZBar exited with code 0

Running a debugger gives a segmentation fault:

The inferior stopped because it received a signal from the Operating System.
Signal name : SIGSEGV
Signal meaning : Segmentation fault

Something went wrong by adding qr(new zbar::QZBar) to instantiate the pointer... Missing something obvious...? Will appreciate any help or pointer. The project, header, cpp files, and running environment are as follows.

In TestQZBar.pro, added the line below:

unix:!macx: LIBS += -lzbarqt

==================== mainwindow.h ====================

#ifndef MAINWINDOW_H
#define MAINWINDOW_H

#include <QMainWindow>
#include <zbar/QZBar.h>

namespace Ui {
class MainWindow;
}

class MainWindow : public QMainWindow
{
    Q_OBJECT

public:
    explicit MainWindow(QWidget *parent = 0);
    ~MainWindow();

private:
    Ui::MainWindow *ui;
    zbar::QZBar *qr;
};

#endif // MAINWINDOW_H

==================== mainwindow.cpp ====================

#include "mainwindow.h"
#include "ui_mainwindow.h"
#include <QDebug>
#include <QWidget>

MainWindow::MainWindow(QWidget *parent) :
    QMainWindow(parent),
    ui(new Ui::MainWindow),
    qr(new zbar::QZBar)
{
    ui->setupUi(this);
    qDebug() << "Testing QZBar\n";
    if (qr->isVideoEnabled()) {
        qDebug() << "Video is enabled.\n";
    } else {
        qDebug() << "Video is disabled.\n";
    }
}

MainWindow::~MainWindow()
{
    delete ui;
}

Environment:
Qt Creator 2.7.1, Based on Qt 5.0.2 (64 bit)
OS is Ubuntu 13.04
Installed libzbarqt-dev (ver.0.10+doc-8)

- Jeroentjehome

Hi,
Never worked with the QZBar library, but you might want to check out the QLibrary class. Also, is it a C++ API library or a C API library? And if it is a C++ library, what compiler is used? When you want to use a statically linked C++ library you MUST use the same compiler as the library. This is to do with name mangling by different compilers. E.g. if the library is compiled using MinGW32 and your project is compiled with MSVC2010, they won't understand each other.

In your case it appears that the 'new' command is unable to allocate memory. Maybe move the qr allocation to the code section of your constructor and place the returned pointer in a local variable. Then check with the debugger if it still reads zero (no memory allocated). The output application tab in your debugger sometimes reveals mysteries as well.

Where did you download the library? We can't really test your code without it ;-) Maybe you also need to include LIBS in your pro file to include the library.

Greetz
https://forum.qt.io/topic/31164/solved-using-zbar-qzbar
CC-MAIN-2017-51
refinedweb
608
59.5
How do I patch Flex code?stevemcl5 Apr 10, 2012 3:38 PM How do I patch Flex code? I want to patch ..\sdks\4.6.0\frameworks\projects\spark\components\Scroller.as the file is read only, what's the procedure to edit it? Do I edit it as an administrator? and will my project pick it up? or can I copy it to my project and how do I tell my project to pick it up? thanks, 1. Re: How do I patch Flex code?GordonSmith Apr 10, 2012 4:47 PM (in response to stevemcl5) Use "monkey patching": Put an edited copy of Scroller.as at spark/components/Scroller.as within your source-path directory. (If your source-path specifies multiple directories, you can use any one.) The compiler will then see two classes named spark.component.Scroller, one coming from a SWC file (the original in spark.swc) and one coming from an AS file (your monkey-patch file). The AS one will take precedence over the SWC one. Gordon Smith, Adobe 2. Re: How do I patch Flex code?Zolotoj Apr 11, 2012 9:18 AM (in response to GordonSmith) Nope, I did it and it still uses the one from SDK. 3. Re: How do I patch Flex code?GordonSmith Apr 11, 2012 10:30 AM (in response to Zolotoj) I just tried it and it worked for me. I created a Flex Project called "Monkey" in Flash Builder 4.6. I created an "mx" package and a "core" package in the project Then I used New > ActiionScript File to create a UIComponent.as file in the mx.core package. (Note: If you try to use New > ActionScript Class, Flash Builder won't let you because it knows there is already a class named mx.core.UIComponent.) I entered the following code in my version of UIComponent: package mx.core { import flash.display.Spriet; public class UIComponent extends Sprite { } } Then I cleaned the project (this could be important!) and built it. 
I got some expected errors because my monkey-patched version of UIComponent is an empty extension of Sprite:

Description Resource Path Location Type
Cannot resolve attribute 'minHeight' for component type spark.components.Application. Monkey.mxml /Monkey/src line 4 Flex Problem
Cannot resolve attribute 'minWidth' for component type spark.components.Application. Monkey.mxml /Monkey/src line 4 Flex Problem

Gordon Smith, Adobe

4. Re: How do I patch Flex code?
thx1138, Apr 25, 2013 11:49 AM (in response to stevemcl5)

If you have a library and a project that uses it, then it will not recognize your patches (if classes in them use them) unless you put the patch in both projects. This causes problems in that when you modify one you have to modify the other. It also sometimes causes null exception errors in Flash Builder. One way to solve this is to create a new Flex library called MySDKPatches and add the patches there. Then, in your project library and in the project that uses the project library, add a source path to the src directory that contains the patched classes. In Flash Builder this is at Project Properties > Flex Build Path > Source Path > Add Folder; then point to the directory that has the classes, such as "${DOCUMENTS}/MyFlexSDKPatchs/src".

5. Re: How do I patch Flex code?
thx1138, Jun 23, 2013 8:33 PM (in response to GordonSmith)

6. Re: How do I patch Flex code?
Flex harUI, Jun 23, 2013 9:30 PM (in response to thx1138)

Are you saying that when you hit the same URL from the content debugger it works? Often we hear that someone tests locally with the content debugger, and then when the app is deployed to a server and hit from the release player it gets stuck, usually because some RSL or other asset is not available.

7. Re: How do I patch Flex code?
KishoreModuga, Jun 25, 2013 7:22 AM (in response to stevemcl5)

A better option is to override whichever method you want to change in Scroller.as.

8.
Re: How do I patch Flex code?
thx1138, Jun 25, 2013 8:22 PM (in response to Flex harUI)

That's correct. I've set "merged into code" for the project, so there are no RSLs.

9. Re: How do I patch Flex code?
thx1138, Jun 25, 2013 8:23 PM (in response to KishoreModuga)

Kishore, I'm patching FlexSprite, so I can't do that.

10. Re: How do I patch Flex code?
thx1138, Jun 29, 2013 11:49 PM (in response to Flex harUI)

After some modification it appears to be working. Here is what I've changed:

FlexSDKPatchLibrary
This project contains my patched classes. In Flex Library Compiler > Additional compiler arguments I have "-locale en_US -link-report=linkreport.txt -include-inheritance-dependencies-only". The linkreport.txt may not be needed. In Build Path > Classes I chose the option "Select all classes to include in the Library" and then manually selected the classes. In Build Path > Library Path I set the Framework Linkage to Merge into code.

MyLibrary
This project contains classes used in the main project. In Flex Library Build Path > Source Path I've added a reference to "${DOCUMENTS}/FlexSDKPatchLibrary/src". In Build Path > Library Path I set the Framework Linkage to Merge into code.

MainProject
This is the main project that uses classes from MyLibrary. In Flex Build Path > Library Path I set the Framework Linkage to Merge into code. In Flex Build Path > Build Path Libraries, MyLibrary is added as a project (Add Project > Select Library). In Flex Library Build Path > Source Path I've removed the reference to "${DOCUMENTS}/FlexSDKPatchLibrary/src".
https://forums.adobe.com/thread/987907
Subject: Re: porting problem
From: Mike Nordell (tamlin@algonet.se)
Date: Sat Jan 27 2001 - 21:31:15 CST

Hollis R Blanchard wrote:
> On Thu, 18 Jan 2001, Sam TH wrote:
> >
> > ICONV_CONST is a horrible hack that I created to fix this problem of
> > incompatible iconv() prototypes. What you need to do is find where
> > ICONV_CONST is defined, and add something like this:
> >
> > #ifdef __MAC_OS__ // or whatever your platform defines
> > #define ICONV_CONST const
> > #endif
> >
> > That should fix the problem.
> >
> > This is a hacked-up way to get around our lack of configure, which
> > should eventually be remedied (once my computer works again).

No! This is *NOT* "a hacked-up way to get around our lack of configure". This is a BUG, and in the most serious sense, in the iconv interface, and we apparently have to suffer from it. I'd say we just wrap it in our own forwarding function that "just fixes it". For the platforms where it takes non-const data we perhaps should take a copy of the input data and assert equality on return (though I think that would be overly "careful", but since iconv is apparently designed and implemented by "uncareful" developers, perhaps it's better to be safe than sorry?).

> P.S. Is it "abiWORD" or "abiSUITE"? A lot of confusion there.. RH ships
> abiword, you guys build abisuite...

Not at all. We build AbiWord. AbiSuite was (is?) the vision of creating more than one "office-type" application on the same framework; the ABI framework (which is *not* Application Binary Interface, however much you'd like it :-) ). At least that's how I got it.

/Mike - don't cc

This archive was generated by hypermail 2b25 : Sat Jan 27 2001 - 21:30:45 CST
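The forwarding-function idea above might be sketched like this in C. This is a sketch only: the wrapper name is made up, and ICONV_CONST is assumed to be defined by the build system (or a platform #ifdef) as either empty or `const`, matching the local iconv() prototype:

```c
#include <iconv.h>
#include <stddef.h>

/* Assume configure (or a platform #ifdef) defines ICONV_CONST as either
 * empty or "const", matching the local iconv() prototype. */
#ifndef ICONV_CONST
#define ICONV_CONST
#endif

/* Hypothetical wrapper: callers always pass a const input buffer, and the
 * cast papers over the prototype difference. iconv() does not actually
 * modify the input data, whatever its signature claims. */
static size_t
abi_iconv(iconv_t cd, const char **inbuf, size_t *inbytesleft,
          char **outbuf, size_t *outbytesleft)
{
    return iconv(cd, (ICONV_CONST char **)inbuf, inbytesleft,
                 outbuf, outbytesleft);
}
```

On platforms whose iconv() takes char ** the cast discards the const; where it takes const char ** the cast is a no-op, so callers compile cleanly either way.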
http://www.abisource.com/mailinglists/abiword-dev/01/January/0613.html
Yours truly has moved to a new team in the Windows Developer Content org, so now I focus on DirectX and game development. Long-time readers know of my interest in computational methods for mathematical physics, so this is a great opportunity for porting some code to the GPU for high-performance number crunching. But first, I need to learn DirectX 11.1. So I started by reading Frank Luna’s Introduction to 3D Game Programming with DirectX 11. I’m about halfway through and ready to play with code, so I took his examples (d3dcoder.net) and tried to build them with Visual Studio 2012 and DX11.1. This turns out not to be a slam dunk; even though it’s only a point upgrade, DX11.1 differs significantly from DX11. Here are a few notes from along the porting path. - The docs are very helpful. Be sure to check out Getting Started with Direct3D. In particular, the docs are good about telling us which functions are deprecated and what we should use instead. For example, here’s the note from the obsolete D3DX11CompileFromFile function: Note The D3DX (D3DX 9, D3DX 10, and D3DX 11) utility library is deprecated for Windows 8 and is not supported for Windows Store apps. Instead of using this function, we recommend that you compile offline by using the Fxc.exe command-line compiler or use one of the HLSL compile APIs, like the D3DCompileFromFile API. - The DirectX SDK is now legacy; all of the DirectX header files and binaries ship with Visual Studio by default and live at %Program Files%\Windows Kits\8.0\Include\um and %Program Files%\Windows Kits\8.0\Lib\win8\um – you don’t need to add these to your project’s include and lib directories. So don’t be confused when you read somewhere that you need to “download and install the DirectX SDK”. You don’t need to do that now. - The D3DX utility library (d3dx11.lib) is no more, so functions that start with “D3DX” aren’t available. Consult the docs for DirectX 11.1 equivalents. - XNA is no more. 
The xnamath library has been moved to the DirectXMath library. Most of the type names are the same, so that's good.
- DxErr.lib and dxerr.h have gone away.
- Set the _XM_NO_INTRINSICS_ compiler flag.
- A lot of the previous DX11 types and functions have moved into namespaces, so you'll need using directives to access them, like these:

using namespace DirectX;
using namespace PackedVector;

For shader effects, you'll need a separate shared-source library named Effects11, which is Chuck Walbourn's baby now. Here's a table that shows the original utility functions and the equivalent Effects11 functions:

The version of Effects11.lib that ships with Frank Luna's book won't link when you're building with DX11.1, so you'll need to recompile. Fortunately, Chuck gives us solution files for VS2010, VS2012, and VS2013, so it's a snap. Drop the new Effects11.lib and the d3dx11effect.h header into Frank's Common folder, and you're ready to build.

Hello, I'm also converting the code to work with DX11.1, but I have a problem with D3D10CreateEffectFromMemory, as it takes a DX10 device and not a DX11 one.

Actually, I think I got it reversed, but in that case I'm getting an unresolved external symbol error.
https://blogs.msdn.microsoft.com/jgalasyn/2013/08/06/notes-on-porting-to-directx-11-1-from-earlier-versions/
Announcing a unified .NET reference experience on docs.microsoft.com

This post was written by Jeff Sandquist, General Manager in the Azure Growth and Ecosystem team.

All .NET documentation - in one place

Previously, if you wanted to find a .NET-based SDK shipped by Microsoft, you had to spend some time with your favorite search engine, trying to find both the place where you can download it and the relevant API documentation. Going forward, we plan to have all .NET-compatible SDKs unified and searchable in one place.

There, you'll find reference documentation for .NET Framework, .NET Core, .NET Standard and Xamarin, as well as documentation for our Azure NuGet packages. In the months to come, we'll add more SDKs to this experience.

Introducing the API Browser

Our main goal is to bring an IntelliSense-like experience for searching all .NET APIs from a web browser. You can search for a namespace, class, method, or interface by typing its full or partial name directly in the API Browser page.

If you're not sure which SDK a specific type, member or namespace belongs to, you can simply select All APIs in the API scope dropdown and search across all available reference docs. Alternatively, if you want to limit your search, you can select a specific framework or SDK as well as its version - say, .NET Framework 4.7 - and search only within that set of APIs.

The API Browser experience is also integrated at the top of the table of contents for .NET-based APIs, allowing you to quickly find any API no matter where you are within the reference documentation.

Once you are in a specific namespace, the API Browser is scoped only to the family of APIs that are connected together, so your search always returns the best possible results based on your context.
Versioning Support

You no longer have to wonder whether a type has members available in a specific version of .NET Framework or the Azure Storage NuGet package - all you need to do is change the version from the API Browser control, and the content will adjust accordingly.

Built with Open Source in mind

To build the API Browser, we used open standards and tools. At its core, we leveraged DocFX - the open documentation generation toolchain - along with Xamarin's mdoc application. All our managed reference documentation is now auto-generated from binaries that ship on NuGet or are part of the main framework distributions, such as .NET Framework or .NET Core. Our continuous integration infrastructure enables us to have accurate documentation for the latest APIs that can now be public within hours of release, open for contributions.

We have also standardized all .NET API documentation on the ECMAXML format, which creates a consistent and comprehensive API representation regardless of the SDK being documented. Moreover, you don't need to know the intricacies of the file format, as you can contribute content in Markdown, embedded in auto-generated docs. Community contributions for reference documentation will be enabled within the next month.

Focus on content

In addition to the new experiences, we have also optimized the reference content to be more discoverable and readable. We've updated the table of contents to always be namespace-focused. Whether you're browsing information on a namespace, type or member, we will always show you just the parent namespace with all its child types and their respective grouped members.
You will also see examples that are relevant to you right from the start, filtered to your programming language of choice - you no longer have to scroll to the very bottom of the page to find those. Feedback-driven This is just the start of us revamping the reference documentation experience. We want to hear your feedback on how we can make our documentation more engaging, useful and get you on your way as fast as possible. Go to our UserVoice site and let us know how we can improve our API Browser experience. You can also always reach out to us on Twitter, @docsmsft, for quick updates.
https://docs.microsoft.com/en-us/teamblog/announcing-unified-dotnet-experience-on-docs
# Monadic parser combinator library. Written using [TOOT] techniques, and # based on the paper "Monadic Parsing in Haskell" Graham Hutton and Erik # Meijer, Journal of Functional Programming, 8(4):437--444, July 1998. # See # # Copyright (c) 2005 Neil Madden (nem@cs.nott.ac.uk) package require Tcl 8.5 # A helper method. This is a lexically-scoped lambda construct. Variables # to be captured from the lexically-enclosing scope can either be specified # explicitly by using the form [lambda params statics body], or if the # statics argument is omitted then all local vars from the current scope # are captured (actually, snap-shotted) to become part of the (immutable) # lexical closure of the lambda. If this is gibberish to you, don't panic! # All it means is that code such as: # set a 12 # set foo [lambda {} { puts "a = $a" }]; $foo # will do the right thing (i.e. print "a = 12"), instead of complaining # that a is not a variable. # This version doesn't use TOOT, but instead is a leaky version that # creates uniquely named procs, which are never garbage collected. Less # neat, but improves the performance immensely. set lambda_id 0 proc lambda {params args} { global lambda_id if {[llength $args] == 1} { set body [lindex $args 0] # Get 'em all! set statics [uplevel 1 info locals] } else { foreach {statics body} $args { break } } set scope {} foreach vname $statics { # Ignore if it will be shadowed by a param # Could use {$vname ni $params} here, but not sure how widespread it # is (fairly recent addition to 8.5). 
if {[lsearch -exact $params $vname] == -1} { upvar 1 $vname var dict set scope $vname $var } } set name lambda[incr lambda_id] proc $name $params " set __scope [list $scope] dict with __scope [list $body] " return $name } # TOOT's auto-expand magic: if {[llength [info commands ::__toot_unknown]] == 0} { rename ::unknown ::__toot_unknown proc ::unknown {cmd args} { if {[llength $cmd] > 1} { #puts "TOOT invoke: $cmd" uplevel 1 $cmd $args } else { uplevel 1 [linsert $args 0 ::__toot_unknown $cmd] } } } # Another little helper -- creates a unified var/command thing. proc def {name = parser} { upvar #0 $name var set var $parser # Avoid needing to auto-{*} by using a curried alias interp alias {} $name {} ::Parser: [lindex $parser 1] } # newtype Parser a = Parser (String -> [(a,String)]) # A Parser is a function from strings to a list of token,string pairs which # represent a parse sequence. Each pair consists of a typed item which is # the parsed representation, and the remaining unparsed string suffix. namespace eval Parser { namespace export {[a-z]*} namespace ensemble create # Simple constructor proc create {args} { list Parser: [uplevel 1 [linsert $args 0 lambda]] } # Implement the monad interface, which allows us to sequence parsers # together allowing for backtracking (actually, due to eager evaluation, # all possible parses are explored, but for the given examples this # makes little difference). # ret :: a -> Parser a # Injects a value into the Parser monad. Returns a parser which when # given a string, simply returns the given token and leaves the string # untouched. This is called simply "return" in Haskell, but that name # is already taken in Tcl, so we use "ret" instead. proc ret a { create cs { list $a $cs } } # >>= :: Parser a -> (a -> Parser b) -> Parser b # Creates a parser which is a combination of two other parsers. 
The # resulting parser feeds the input string into the first parser and then # tries each possible parse by feeding the resulting suffix strings into # the second parser. This is the fundamental operation of monadic # programming (the bind/sequencing op). proc >>= {p f} { create cs { set ret [list] foreach {a ccs} [{*}$p parse $cs] { lappend ret [{*}[$f $a] parse $ccs] # Insert a "break" here to only explore first parse result } # Flatten the resulting list join $ret } } # MonadZero instance # No-op parser, which simply fails to parse anything. variable zero [create cs { list } ] proc zero {} { variable zero; return $zero } # MonadPlus instance. This is used to combine the results of two parsers # (effectively creating a choice between them). This is done by simply # concatenating the result lists of the parsers. For instance, if you # had a grammar with a production: # Foo ::= Bar | Jim | Bob # Then you could code that up as: # def Foo ::= [$Bar | $Jim | $Bob] # We use "|", but the paper uses ++ proc | {args} { create cs { set ret [list] foreach p $args { if {$p eq "|" || $p eq "||"} { continue } lappend ret {*}[{*}$p parse $cs] } return $ret } } # Deterministic version of | -- returns only first result # Called +++ in the Haskell paper proc || {args} { create cs { foreach p $args { if {$p eq "||" || $p eq "|"} { continue } set ret [{*}$p parse $cs] if {[llength $ret]} { return [lrange $ret 0 1] } } return [list] } } # Just unpack the parser function and apply it to the given input # string. proc parse {p cs} { [lindex $p 1] $cs } # Type dispatch function -- part of [TOOT]s magic. proc ::Parser: {p args} { if {[llength $args]} { set method [lindex $args 0] uplevel 1 [lreplace $args 0 0 ::Parser::$method [list Parser: $p]] } else { return [list Parser: $p] } } } # A little syntactic sugar. Does a simple version of Haskell's do # notation. 
Converts a script separated by semi-colons into monadic # sequenced form, e.g.: # do { a <- p1; p2; b <- p3; Parser ret [list $a $b] } # becomes: # p1 >>= [lambda a { p2 >>= [lambda _ { p3 >>= [lambda b { # Parser ret [list $a $b] # }]}]}] # This version is a bit more robust than the version on [Monadic TOOT], # but still cannot handle nested do-scripts. Also, the use of # semi-colons as a separator char may be a bit subtle given that they # are usually optional in Tcl. proc do {script} { set eval "" set num 0 foreach line [lrange [split $script \;] 0 end-1] { set line [string trim $line] if {[string length $line]} { if {[regexp {(.*)<-(.*)} $line -> var comp]} { append eval "\n \[$comp\] >>= \[lambda $var \{" } else { append eval "\n \[$line\] >>= \[lambda _ \{" } incr num } } append eval \n[lindex [split $script \;] end] append eval [string repeat "\n\}\]" $num] uplevel 1 $eval }Now we have some basic infrastructure in place, let's start writing some actual parsers. # Simple parser -- consumes first character, if there is one, or fails # otherwise. # item :: Parser Char # item = Parser (\cs -> case cs of # "" -> [] # (c:ccs) -> [(c,ccs)]) def item ::= [Parser create cs { if {[string length $cs]} { list [string index $cs 0] [string range $cs 1 end] } else { list } }] # p :: Parser (Char,Char) # Takes the 1st and 3rd characters from a string def p ::= [item >>= [lambda c { item >>= [lambda _ { item >>= [lambda d { Parser ret [list $c $d] }]}]}]] # Same, but using do notation. We will use do notation pretty much # exclusively from here on, for obvious reasons! def p2 ::= [do { c <- item; item; d <- item; Parser ret [list $c $d] }] proc const {a} { lambda b { return $a } } # sat :: (Char -> Bool) -> Parser Char # A combinator which takes a predicate and yields a parser that consumes # characters only if they satisfy the predicate. 
proc sat p { do { c <- item; if {[$p $c]} { Parser ret $c } else { Parser zero } } } # char :: Char -> Parser Char # Returns a parser which matches a single character proc char c { sat [lambda x { string equal $x $c }] } # String :: String -> Parser String # Match a specified string - this is an optimised version compared to the # char by char original version. proc String s { set len [string length $s] Parser create cs { set r [string range $cs 0 [expr {$len-1}]] if {$s eq $r} { list $s [string range $cs $len end] } else { list } } } # Case-insensitive string match proc StringNC s { set len [string length $s] Parser create cs { set r [string range $cs 0 [expr {$len-1}]] if {[string equal -nocase $s $r]} { list $s [string range $cs $len end] } else { list } } } # many :: Parser a -> Parser [a] # Kleene-star operator. Applies the given parser 0 or more times. # Equivalent to * regexp modifier. proc many p { [many1 $p] || [Parser ret [list]] } # 1 or more version of above (equivalent to + regexp modifier). proc many1 p { do { a <- {*}$p; as <- many $p; Parser ret [linsert $as 0 $a] } } # Sugared versions interp alias {} ::Parser::* {} ::many interp alias {} ::Parser::+ {} ::many1 # Version which joins the results back into a string: proc Parser::*s p { [[many $p] >>= [lambda xs { Parser ret [join $xs] }]] } proc Parser::+s p { [[many1 $p] >>= [lambda xs { Parser ret [join $xs] }]] } # Repeated applications of parser p, separated by applications of parser sep # whose result values are thrown away. e.g. sepby [char a] [char ,] will # match a sequence of 0 or more "a"s separated by commas, such as "a,a,a". 
# sepby :: Parser a -> Parser b -> Parser [a] proc sepby {p sep} { [sepby1 $p $sep] || [Parser zero] } proc sepby1 {p sep} { # Simple do notation doesn't handle nesting, so we resort to explicit # sequencing for the inner "many" loop in here: do { a <- {*}$p; as <- many [{*}$sep >>= [lambda _ { return $p }]]; Parser ret [linsert $as 0 $a] } } # chainl :: Parser a -> Parser (a -> a -> a) -> a -> Parser a # Parses a sequences of values separated by applications of an operator # parser which yields an operation which is used to combine values being # parsed. Like a cross between sepby and foldl. proc chainl {p op a} { [chainl1 $p $op] || [Parser ret $a] } proc chainl1 {p op} { do { a <- {*}$p; rest $a $p $op } } # Helper for chainl1 proc rest {a p op} { [do { f <- {*}$op; b <- {*}$p; rest [{*}$f $a $b] $p $op }] || [Parser ret $a] }All seems to be working ok so far. We'll leave out the chainr/chainr1 parser combinators as done in the paper. Shouldn't be too difficult to work out. Now we move on to the section on Lexical combinators which shows how we can avoid the lexing/parsing distinction by defining combinators to do the lexing. # Whitespace proc isSpace {char} { regexp {\s} $char } def space ::= [[sat isSpace] *] # Parse a token and discard trailing space proc token p { do { a <- {*}$p; space; Parser ret $a } } # Parse a symbolic (string) token: proc symb cs { token [String $cs] } # Apply a parser, p, discarding any leading space: # apply :: Parser a -> String -> [(a,String)] proc apply {p cs} { {*}[do { space; {*}$p }] parse $cs }The final example of the paper is to implement a simple expression evaluator, which uses the following grammar: expr ::= expr addop term | term term ::= term mulop factor | factor factor ::= digit | ( Expr ) digit ::= 0 | 1 | ... | 9 number ::= number digit | digit addop ::= + | - mulop ::= * | /We have to define in reverse order to in the paper, so that the correct definitions are set up in the correct order. 
We also extend the grammar and evaluator to handle multi-digit numbers. # We can be a bit more concise than Haskell here, as we don't have to # distinguish between "+" the string and "+" the operator, as Everything Is # A String! def addop ::= [[symb +] | [symb -]] def mulop ::= [[symb *] | [symb /]] def digit ::= [do { x <- token [sat isDigit]; Parser ret $x }] def number ::= [do { ds <- [digit +]; Parser ret [join $ds ""] }] def factor ::= [number || [do { symb "("; n <- Expr; symb ")"; Parser ret $n }]] def term ::= [chainl1 factor mulop] def Expr ::= [chainl1 term addop] # Some helpers: foreach op {+ - * /} { proc $op {a b} [format {expr {$a %s $b}} $op] } proc isDigit d { string match {[0-9]} $d } # And now a little test: puts " 1 - 2 * 3 + 4 = [apply Expr { 1 - 2 * 3 + 4 }]" puts "12 * 52 / 64 = [apply Expr {12 * 52 / 64 }]" puts "time = [time { apply Expr {12 * 52 / 64 } } 20]"To me, this is what parsing should be like: elegant and straight-forward (once the infrastructure is in place). It'll take quite a bit of work to get it up to "industrial-strength" (like the Parsec library [3] for Haskell). For instance, it takes 450592 microseconds per iteration for that last test on my iBook 800MHz G4! (Update: new code cuts that down to about 70ms). Most of that is due to the overhead of TOOT which involves lots of extra function calls and unknown-command trickery. It'd be an interesting project to see how far this could be taken from fun demo to a useful level of efficiency.As a further test of the power of these parsers, I thought I'd have a go at recreating a simple BibTeX parser using them. First, for convenience I'll define a parser which matches an arbitrary regular expression (which simplifies the scanning a bit). The actual parser is based on a simplified grammar for BibTeX and will fail on quite a lot of valid input. Still, it shows how simply a parser can be constructed using this technique. 
# Parse an arbitrary regular expression proc Regexp {pattern} { Parser create cs { if {[regexp "^($pattern)(.*)" $cs -> match rest]} { list $match $rest } else { list } } } # Rough grammar: # BibTex ::= Record* # Record ::= @ Type { Key , Fields } # Fields ::= Field*, # Field ::= Key = BibStr1 # BibStr1 ::= Key | { BibStr+ } # BibStr ::= [^{}] | { BibStr+ } # Key ::= [^\s,=\{\}]+ # Type ::= [^\{]+ def Type ::= [Regexp {[^\{]+}] def Key ::= [token [Regexp {[^\s,=\{\}]+}]] def BibStr ::= [[Regexp {[^\{\}]+}] | [do { symb "\{"; s <- [BibStr +s]; symb "\}"; Parser ret $s }]] def BibStr1 ::= [[Key] || [do { symb "\{"; s <- [BibStr +s]; symb "\}"; Parser ret $s }]] def Field ::= [do { k <- Key; symb "="; s <- BibStr1; Parser ret [list $k $s] }] def Fields ::= [sepby Field [symb ","]] def Record ::= [do { symb "@"; t <- Type; symb "\{"; k <- Key; symb ","; f <- Fields; symb "\}"; Parser ret [list $t $k $f] }] # Apply a parser and invoke callback at end proc Callback {p c} { do { res <- $p; $c $res Parser ret $res } } # The whole thing def BibTeX ::= [[Callback Record PrintRecord] *] proc PrintRecord {record} { foreach {type key fields} $record { break } puts "${type}: $key" foreach field $fields { puts " [join $field { = }]" } } # A couple of records picked at random from my BibTeX database: set bibtex { @Article{Dennett/Kinsbourne:95a, author = {Daniel C. Dennett and Marcel Kinsbourne}, title = {Multiple Drafts: An eternal golden braid?}, journal = {Behavioral and Brain Sciences}, volume = 18, number = 4, year = 1995, pages = {810--811} } @Book{Mitchell:93a, author = {Melanie Mitchell}, title = {Analogy-Making as Perception: a computer model}, year = {1993}, publisher = {{MIT} Press}, address = {Cambridge, {MA}} } } set t [time { apply BibTeX $bibtex }] puts "Parsed in $t"It takes just under a second to parse and display those two records on my laptop -- still some work to do... 
jima 2010-12-16:

Just wanted to put here a link to some recent developments in the parsing world: [4]. In that post, a technique based on the derivative of a language is applied to the rules of a CFG and to the parser combinators that handle it. Some implementations (Scala, Racket, ...) are also provided.
http://wiki.tcl.tk/14295
Last Updated on August 28, 2020

Adding noise to an underconstrained neural network model with a small training dataset can have a regularizing effect and reduce overfitting. Keras supports the addition of Gaussian noise via a separate layer called the GaussianNoise layer. This layer can be used to add noise to an existing model.

In this tutorial, you will discover how to add noise to deep learning models in Keras in order to reduce overfitting and improve model generalization.

Improve Deep Learning Model Robustness by Adding Noise
Photo by Michael Mueller, some rights reserved.

Tutorial Overview

This tutorial is divided into three parts; they are:

- Noise Regularization in Keras
- Noise Regularization in Models
- Noise Regularization Case Study

Noise Regularization in Keras

Keras supports the addition of noise to models via the GaussianNoise layer. This is a layer that will add noise to inputs of a given shape. The noise has a mean of zero and requires that the standard deviation of the noise be specified as a parameter, for example GaussianNoise(0.1). The output of the layer has the same shape as the input, with the only modification being the addition of noise to the values.

Noise Regularization in Models

The GaussianNoise layer can be used in a few different ways with a neural network model.

Firstly, it can be used as an input layer to add noise to input variables directly. This is the traditional use of noise as a regularization method in neural networks. For instance, a GaussianNoise layer can be defined as the input layer for a model that takes 2 input variables.

Noise can also be added between hidden layers in the model. Given the flexibility of Keras, the noise can be added before or after the use of the activation function. It may make more sense to add it before the activation; nevertheless, both options are possible.
A GaussianNoise layer can also be placed so that it adds noise to the linear output of a Dense layer before a rectified linear activation function (ReLU), perhaps a more appropriate use of noise between hidden layers. Noise can also be added after the activation function, much like using a noisy activation function. One downside of this usage is that the resulting values may be out-of-range from what the activation function would normally provide. For example, a value with added noise may be less than zero, whereas the relu activation function will only ever output values of 0 or larger.

Let's take a look at how noise regularization can be used with some common network types.

MLP Noise Regularization

Noise can be added between two Dense fully connected layers.

CNN Noise Regularization

Noise can be added after a pooling layer in a convolutional network.

RNN Noise Regularization

Noise can be added between an LSTM recurrent layer and a Dense fully connected layer.

Now that we have seen how to add noise to neural network models, let's look at a case study of adding noise to an overfit model to reduce generalization error.

Noise Regularization Case Study

In this section, we will demonstrate how to use noise regularization to reduce overfitting of an MLP on a simple binary classification problem. This example provides a template for applying noise regularization to your own neural network for classification and regression problems.

Binary Classification Problem

We will use a standard binary classification problem that defines two two-dimensional concentric circles of observations, one circle for each class. To encourage overfitting, the model is fit with many more nodes in the hidden layer than the problem requires. We can see that the model has better performance on the training dataset than on the test dataset, one possible sign of overfitting.
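The baseline setup might look like the sketch below. The exact sample counts, layer width, and epoch count from the original listing are lost, so the values here are assumptions chosen to reproduce the overfitting behaviour described:

```python
from sklearn.datasets import make_circles
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# small, noisy two-circle dataset; a tiny training split invites overfitting
X, y = make_circles(n_samples=100, noise=0.1, random_state=1)
n_train = 30
trainX, testX = X[:n_train], X[n_train:]
trainy, testy = y[:n_train], y[n_train:]

# a hidden layer far larger than this problem needs
model = Sequential()
model.add(Dense(500, input_dim=2, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

history = model.fit(trainX, trainy, validation_data=(testX, testy),
                    epochs=300, verbose=0)
train_acc = model.evaluate(trainX, trainy, verbose=0)[1]
test_acc = model.evaluate(testX, testy, verbose=0)[1]
print('Train: %.3f, Test: %.3f' % (train_acc, test_acc))
```

Plotting history.history['accuracy'] against history.history['val_accuracy'] then gives the train/test curves discussed in the text.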
A figure is created showing line plots of the model accuracy on the train and test sets. We can see the expected shape of an overfit model, where test accuracy increases to a point and then begins to decrease again.

Line Plots of Accuracy on Train and Test Datasets While Training Showing an Overfit MLP

MLP With Input Layer Noise

The dataset is defined by points that have a controlled amount of statistical noise. Nevertheless, because the dataset is small, we can add further noise to the input values. This will have the effect of creating more samples or resampling the domain, making the structure of the input space artificially smoother. This may make the problem easier to learn and improve generalization performance.

We can add a GaussianNoise layer as the input layer. The amount of noise must be small. Given that the input values are within the range [0, 1], we will add Gaussian noise with a mean of 0.0 and a standard deviation of 0.01, chosen arbitrarily.

Running the complete example with this change, we may see a small lift in performance of the model on the test dataset, with no negative impact on the training dataset. We clearly see the impact of the added noise on the evaluation of the model during training as graphed on the line plot. The noise causes the accuracy of the model to jump around during training, possibly because the noise introduces points that conflict with true points from the training dataset. Perhaps a lower input noise standard deviation would be more appropriate. The model still shows a pattern of being overfit, with a rise and then fall in test accuracy over training epochs.

Line Plot of Train and Test Accuracy With Input Layer Noise

MLP With Hidden Layer Noise

An alternative approach to adding noise to the input values is to add noise between the hidden layers. This can be done by adding noise to the linear output of the layer (the weighted sum) before the activation function is applied, in this case a rectified linear activation function.
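Concretely, the two noise placements in this case study amount to small edits of the baseline model definition. A sketch (the 0.01 and 0.1 standard deviations follow the text; the 500-node layer width is an assumption carried over from the overfit baseline):

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, GaussianNoise, Activation

# variant 1: noise added to the input values
model_input_noise = Sequential()
model_input_noise.add(GaussianNoise(0.01, input_shape=(2,)))
model_input_noise.add(Dense(500, activation='relu'))
model_input_noise.add(Dense(1, activation='sigmoid'))

# variant 2: noise added to the hidden layer's weighted sum, before the ReLU
model_hidden_noise = Sequential()
model_hidden_noise.add(Dense(500, input_dim=2))
model_hidden_noise.add(GaussianNoise(0.1))
model_hidden_noise.add(Activation('relu'))
model_hidden_noise.add(Dense(1, activation='sigmoid'))
```

Both variants compile and train exactly as the baseline does; only the layer stack changes.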
We can also use a larger standard deviation for the noise, as the model is less sensitive to noise at this level given the presumably larger weights from being overfit. We will use a standard deviation of 0.1, again chosen arbitrarily. With Gaussian noise added between the hidden layers, we can see a marked increase in the performance of the model on the hold-out test set. We can also see from the line plot of accuracy over training epochs that the model no longer appears to show the properties of being overfit.

Line Plot of Train and Test Accuracy With Hidden Layer Noise

We can also experiment and add the noise after the outputs of the first hidden layer pass through the activation function. Surprisingly, we see little difference in the performance of the model. Again, we can see from the line plot of accuracy over training epochs that the model no longer shows signs of overfitting.

Line Plot of Train and Test Accuracy With Hidden Layer Noise (alternate)

Extensions

This section lists some ideas for extending the tutorial that you may wish to explore.

- Repeated Evaluation. Update the example to use repeated evaluation of the model with and without noise and report performance as the mean and standard deviation over repeats.
- Grid Search Standard Deviation. Develop a grid search in order to discover the amount of noise that reliably results in the best performing model.
- Input and Hidden Noise. Update the example to introduce noise at both the input and hidden layers of the model.

If you explore any of these extensions, I'd love to know.

Further Reading

This section provides more resources on the topic if you are looking to go deeper.

- Keras Regularizers API
- Keras Core Layers API
- Keras Convolutional Layers API
- Keras Recurrent Layers API
- Keras Noise API
- sklearn.datasets.make_circles API

Summary

In this tutorial, you discovered how to add noise to deep learning models in Keras in order to reduce overfitting and improve model generalization.
Specifically, you learned:

- That adding noise to a neural network during training can act as a regularizer and reduce overfitting.
- How to add noise to the inputs of a model in Keras via the GaussianNoise layer.
- How to add noise between the hidden layers of a model, before or after the activation function.

Do you have any questions? Ask your questions in the comments below and I will do my best to answer.

Thanks Jason, nicely explained. Really enjoyed it.

Thanks.

Thanks, but I think we only have the GaussianNoise layer. If I want to apply some attacks like cropping, do we have any layer in Keras for this? Do you have any suggestion for this? I look forward to hearing from you.

Good question, generally no; you can use a custom data generator and perform random crops on images before they are fed into the model.

Hi Jason, what do you think about the backward pass when you add noise to either weights or activations? For example, when adding noise to activations (which serve as layer inputs), to calculate weight gradients for that layer, you multiply the incoming gradient by these activations. Would you use the original activations, or the distorted ones? Or when backpropagating errors we multiply them by transposed weight matrices in each layer; again, would you use the original weights or the distorted ones?

Hmmm. I have not seen it often, except with models like GANs and stochastic label smoothing – required only because training GANs is so unstable. If you have an idea, try it. It has never been easier with such amazing tools!

It actually does not seem easy to me. For example, say we want to add noise to activations (inputs to the second layer), and then update the weights of that second layer. Standard autodiff in either TF or PyTorch would pass upstream gradients right through the noise addition op, to be multiplied by the original second-layer inputs. But how can I change this so that they get multiplied by the distorted inputs? I don't think the distorted inputs are being preserved for the backward pass. In this case, I think the tools actually make it harder to experiment. Or, if it is the distorted inputs that are being preserved by autodiff, then how do I skip them and pass the gradients to the original ones?
The model does not see distorted inputs; it sees inputs/outputs/activations. It just so happens that you've distorted them with noise. Updates happen per normal. Perhaps I don't follow the nuance of what you're trying to implement.

Hello Jason, I was wondering, if a layer of noise is added to the model architecture, would it then apply that noise to every test input as well? How would you go about training a model with noise, and then training with clean inputs?

It depends. Input or output noise is usually turned off, though sometimes it is left on at test time. Noise within the model is sometimes left on. Perhaps evaluate with/without at test time and compare. If you wanted, you could reformulate the final model without the noise layer.

Hi Jason, Great article. I have a question regarding the use of Gaussian noise over some input that has been previously padded (with 0's, for example). Do you think the loss in training could get worse in this case? An example could be padding different-length inputs like speech spectrograms in order for them to have the same shape.

Hmm, good question. Yes, noise over padding sounds like a bad idea. There are many ways to get noise into the system; get creative and test a suite of approaches.

Hi! Is there also a simple way to tinker with/augment the contrast? Something like model.add(Contrast(0.1))?

For images, yes, you can use data augmentation.

I want to add some noise to the neural network I am using for the classification of jpg images. So, the inputs for my neural network are arrays of the pixels, which I have already normalized to be in the range 0 to 1. I wanted to do as in your suggestion:

…
model.add(MaxPooling2D())
model.add(GaussianNoise(x))
…

But I am concerned that the GaussianNoise might make my data go outside the range 0 to 1 and spoil the training. Is this a valid concern or am I safe? Does it depend on the value x to be used in model.add(GaussianNoise(x)), and what x value would you use?
Thanks

It should be fine; perhaps test it and evaluate the effects? Alternately, you could create your own custom layer to achieve exactly what you want.

Hi Jason, Thank you for this tutorial! I have been playing with this tutorial, adding other options to the script in order to experiment with them in a kind of “grid search”. Here is my report.

– I define my models with the Keras Model class API instead of Sequential. But I do not expect any impact on results (!).

– I set up the model (as you) but also I used other “high level” model constructors such as ‘KerasClassifier’ and ‘cross_val_score’ (for k-fold statistical analysis) from the sklearn library, taken from another tutorial from you. In general the ‘cross_val_score’ got less average accuracy (69% mean accuracy) in front of 85.7% accuracy for the model on test input. I understand it. But curiously I got in general better results when I use the KerasClassifier (e.g. 84.3%). And I do not understand why I got better results on KerasClassifier than in my “manual” API class model if I am using the same “validation_split” in both cases (70% for test, 30% for input training).

– I got the same validation training results of some kind of “sinusoidal loss curve” (going down and up but with the long trend going up, even when I re-train up to 8000 epochs). And the same effect on validation accuracy (but with a little downward trend). All these cases apply when not adding Gaussian noise.

– I observed that the X input data coming from “make_circles” of sklearn are between -1.06… and +1.06…, so I decided to normalize or standardize the input data (with MinMaxScaler and StandardScaler from sklearn and from your tutorials). In general I got a little better performance on ‘cross_val_score’ (it is increased up to 72% mean accuracy), but better for my KerasClassifier (up to 88.6% accuracy) and a little worse for my “manual model”, around 77% accuracy on test.
– The biggest sensitivity in results came when I decided to permute the 70% test and 30% training input for 30% test and 70% training (a more natural exploitation of the data). In this case I got 83% mean accuracy on cross_val_score with a sigma of 10.7%, and 96.7% accuracy from KerasClassifier and 90% accuracy for my manual model. The reason is clear in this scenario.

– Also I performed Dropout layers and weight constraint regularization (taken from your tutorials) but the results are not so much different.

– I applied of course the Gaussian noise layer (after input or before output layer), and clearly I obtain the right trend in terms of the validation loss training curve (the increase in validation loss disappearing as training epochs increase), but I do get similar accuracy for my manual model and a little better for the cross_val_score constructor. I observed that results are very sensitive to the sigma (standard deviation) applied to the Gaussian noise layer.

– Even when I apply everything for regularization altogether in a kind of ‘totum revolutum’ (dropout layer + Gaussian noise + weight constraint regularization) plus the input data scaler … I get accuracy around 50% (not learning at all), so it is clear that I need more control for every one of these tools… :-)

– As a summary, I do not get so much impact on accuracy results when applying the Gaussian noise layer (but of course better behavior on loss and accuracy training curves when using the Gaussian noise layer, even when using both of them, a layer after input and before output, at the same time)… probably because the sigma noise (standard deviation) has to be better tuned …

Thank you for your tutorial Jason

Wonderful experimentation, thanks for sharing. This would be valuable stuff if you wrote it up and shared it – valuable as in it shows how systematic and curious one must be to really dive into these techniques. Adding noise was really popular in the 90s, less so now that we have dropout.
Yet, I see it pop up in big modern GAN models, so it's still around and useful.

Hey Jason, great article. How would you add input noise to a pre-trained model such as: from tensorflow.keras.applications.resnet50 import ResNet50 I just want to do input noise, but I'm struggling with how to insert it.

Add a noise layer as the first layer of the model – e.g. with the functional API. Perhaps I don't understand the problem?

Hi Jason, Thanks for your great explanations. Do you have any suggestion for any document that studies the robustness of LSTM to training noise? Thanks

Not off hand, perhaps run a sensitivity analysis on your dataset?
https://machinelearningmastery.com/how-to-improve-deep-learning-model-robustness-by-adding-noise/
CC-MAIN-2021-04
refinedweb
2,801
62.17
In this C++ tutorial, let us see C++ interfaces with an example program.

Introduction to C++ Interfaces

An interface describes the behaviour or capabilities of a C++ class without committing to a particular implementation of that class. C++ interfaces are implemented using abstract classes. A class is made abstract by declaring at least one of its functions as a pure virtual function. A pure virtual function is specified by placing "= 0" in its declaration.

Declaration Syntax

It can be declared as follows:

class Box {
   public:
      // pure virtual function
      virtual double getVolume() = 0;
   private:
      double length;   // Length of a box
      double breadth;  // Breadth of a box
      double height;   // Height of a box
};

The purpose of an abstract class is to provide an appropriate base class from which other classes can inherit. Abstract classes cannot be used to instantiate objects and serve only as interfaces. Attempting to instantiate an object of an abstract class causes a compilation error.

C++ Program for Interfaces

#include <iostream>
using namespace std;

// Base class
class Shape {
   public:
      // pure virtual function providing interface framework.
      virtual int getArea() = 0;
      void setWidth(int w) {
         width = w;
      }
      void setHeight(int h) {
         height = h;
      }
   protected:
      int width;
      int height;
};

// Derived classes
class Rectangle : public Shape {
   public:
      int getArea() {
         return (width * height);
      }
};

class Triangle : public Shape {
   public:
      int getArea() {
         return (width * height) / 2;
      }
};

int main(void) {
   Rectangle Rect;
   Triangle Tri;

   Rect.setWidth(5);
   Rect.setHeight(7);
   // Print the area of the object.
   cout << "Total Rectangle area: " << Rect.getArea() << endl;

   Tri.setWidth(5);
   Tri.setHeight(7);
   // Print the area of the object.
   cout << "Total Triangle area: " << Tri.getArea() << endl;

   return 0;
}

Output

Total Rectangle area: 35
Total Triangle area: 17
https://www.codeatglance.com/cpp-interfaces/
CC-MAIN-2020-40
refinedweb
279
54.63
table of contents
- bullseye 7.74.0-1.3+deb11u1
- bullseye-backports 7.84.0-2~bpo11+1
- testing 7.84.0-2
- unstable 7.84.0-2

NAME
curl_getdate - Convert a date string to number of seconds

SYNOPSIS
#include <curl/curl.h>

time_t curl_getdate(char *datestring, time_t *now);

DESCRIPTION
curl_getdate(3) returns the number of seconds since the Epoch, January 1st 1970 00:00:00 in the UTC time zone, for the date and time that the datestring parameter specifies. The now parameter is not used; pass a NULL there. This function works with valid dates and does not always detect and reject wrong dates, such as February 30.

PARSING DATES AND TIMES

EXAMPLE
time_t t;
t = curl_getdate("Sun, 06 Nov 1994 08:49:37 GMT", NULL);
t = curl_getdate("Sunday, 06-Nov-94 08:49:37 GMT", NULL);
t = curl_getdate("Sun Nov 6 08:49:37 1994", NULL);
t = curl_getdate("06 Nov 1994 08:49:37 GMT", NULL);
t = curl_getdate("06-Nov-94 08:49:37 GMT", NULL);
t = curl_getdate("Nov 6 08:49:37 1994", NULL);
t = curl_getdate("06 Nov 1994 08:49:37", NULL);
t = curl_getdate("06-Nov-94 08:49:37", NULL);
t = curl_getdate("1994 Nov 6 08:49:37", NULL);
t = curl_getdate("GMT 08:49:37 06-Nov-94 Sunday", NULL);
t = curl_getdate("94 6 Nov 08:49:37", NULL);
t = curl_getdate("1994 Nov 6", NULL);
t = curl_getdate("06-Nov-94", NULL);
t = curl_getdate("Sun Nov 6 94", NULL);
t = curl_getdate("1994.Nov.6", NULL);
t = curl_getdate("Sun/Nov/6/94/GMT", NULL);
t = curl_getdate("Sun, 06 Nov 1994 08:49:37 CET", NULL);
t = curl_getdate("06 Nov 1994 08:49:37 EST", NULL);
t = curl_getdate("Sun, 12 Sep 2004 15:05:58 -0700", NULL);
t = curl_getdate("Sat, 11 Sep 2004 21:32:11 +0200", NULL);
t = curl_getdate("20040912 15:05:58 -0700", NULL);
t = curl_getdate("20040911 +0200", NULL);

STANDARDS
This parser handles date formats specified in RFC 822 (including the update in RFC 1123) using time zone name or time zone delta, RFC 850 (obsoleted by RFC 1036), and ANSI C's asctime() format.

AVAILABILITY
Always

RETURN VALUE
This function returns -1 when it fails to parse the date string. Otherwise it returns the number of seconds as described.

SEE ALSO
curl_easy_escape(3), curl_easy_unescape(3), CURLOPT_TIMECONDITION(3), CURLOPT_TIMEVALUE(3)
https://manpages.debian.org/unstable/libcurl4-doc/curl_getdate.3.en.html
CC-MAIN-2022-33
refinedweb
330
68.7
Essentially, Fabric is a tool that allows the developer to execute arbitrary Python functions via the command line, and it also provides a set of functions for executing shell commands on remote servers via SSH. Combining these two things offers developers a powerful way to administer the application workflow without having to remember the series of commands that need to be executed on the command line. The library documentation can be found on the project's website.

Installing the library in PTVS is straightforward. Like all other libraries, to insert this library into a Django project, right-click on the Python 2.7 node in Python Environments of the Solution Explorer window. Then, select the Install Python Package entry.

The Python environment contextual menu

Clicking on it brings up the Install Python Package modal window, as shown in the following screenshot:

It's important to use easy_install to download from the Python Package Index. This will bring the precompiled versions of the library into the system, instead of the plain Python C libraries that would have to be compiled on the system.

Once the package is installed in the system, you can start creating tasks that can be executed outside your application from the command line. First, create a configuration file, fabfile.py, for Fabric. This file contains the tasks that Fabric will execute.

The previous screenshot shows a really simple task: it prints out the string hello world once it's executed. You can execute it from the command prompt by using the Fabric command fab, as shown in the following screenshot:

Now that you know that the system is working fine, you can move on to the juicy part, where you can make some tasks that interact with a remote server through SSH. Create a task that connects to a remote machine and finds out the type of OS that runs on it.
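The tasks shown in the article's screenshots are not reproduced here. Below is a self-contained sketch of the kind of fabfile described; since Fabric may not be installed where you read this, run() and env are stubbed out locally (in a real fabfile you would instead write from fabric.api import env, run), and the host address and password are placeholders:

```python
# Sketch of a fabfile.py in the spirit of the article.
# In a real fabfile:  from fabric.api import env, run
# Here env and run() are local stand-ins so the sketch runs anywhere.

class Env(object):
    """Stand-in for fabric.api.env: holds connection settings."""
    hosts = []
    password = None

env = Env()
env.hosts = ['192.168.1.42']          # hypothetical remote machine
env.password = 'not-a-real-password'  # better kept out of source code

def run(cmd):
    # Fabric's run() executes cmd on the remote host over SSH;
    # this stub only records what would be executed there.
    return '[%s] run: %s' % (env.hosts[0], cmd)

def hello():
    return 'hello world'

def host_type():
    # Ask the remote machine which operating system it runs.
    return run('uname -s')

def notify(message):
    # A task taking a parameter from the command line, e.g.:
    #   fab notify:"hello world"
    return run('echo %s' % message)

print(host_type())  # -> [192.168.1.42] run: uname -s
```

As the article notes, hard-coding a password like this is only for illustration; SSH keys are the safer choice.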
The env object provides a way to add credentials to Fabric in a programmatic way

We have defined a Python function, host_type, that runs a POSIX command, uname -s, on the remote. We also set up a couple of variables to tell Fabric which remote machine we are connecting to (env.hosts) and the password that has to be used to access that machine (env.password). It's never a good idea to put plain passwords into the source code, as is shown in the preceding screenshot example.

Now, we can execute the host_type task in the command line as follows:

The Fabric library connects to the remote machine with the information provided and executes the command on the server. Then, it brings back the result of the command itself in the output part of the response.

We can also create tasks that accept parameters from the command line. Create a task that echoes a message on the remote machine, starting with a parameter, as shown in the following screenshot:

The following are two examples of how the task can be executed:

We can also create a helper function that executes an arbitrary command on the remote machine as follows:

def execute(cmd):
    run(cmd)

We are also able to upload a file into the remote server by using put:

The first argument of put is the local file you want to upload and the second one is the destination folder's filename. Let's see what happens:

Deploying process with Fabric

The possibilities of using Fabric are really endless, since the tasks can be written in plain Python. This provides the opportunity to automate many operations and focus more on development instead of on how to deploy your code to servers and maintain them.

Summary

This article provided you with an in-depth look at remote task management and schema migrations using the third-party Python library Fabric.
Resources for Article: Further resources on this subject: - Through the Web Theming using Python [Article] - Web Scraping with Python [Article] - Python Data Persistence using MySQL [Article]
https://www.packtpub.com/books/content/fabric-library-%E2%80%93-deployment-and-development-task-manager
CC-MAIN-2017-39
refinedweb
673
57
I gave up on functions before because I could never seem to get them to work so I figured it's about time I learned. I'm making a calculator to solve all of the equations we are using in physics. Really messed up.

Code:
#include <cstdlib>
#include <iostream>

using namespace std;

// Distance fallen: d = (1/2) * g * t^2.
// Note: parameters with default values must come last, so
// appgrav is placed after time.
float df(float time, float appgrav = 10)
{
    return (.5f * appgrav) * (time * time);
}

int main(int argc, char *argv[])
{
    // Declared for the other equations still to be added.
    float acceleration;
    float time;
    float velocity;
    float distance;
    float speed;
    float gravity = 9.8f;
    float appgrav = 10;
    char begin;

    cout << "What equation would you like to use? [d]istance fallen, [v]elocity,\n[h]orizontal distance, [a]verage velocity.";
    cin >> begin;

    if (begin == 'd')
    {
        cout << "Enter time:";
        cin >> time;
        cout << "Distance fallen: " << df(time) << endl;
    }

    system("PAUSE");
    return EXIT_SUCCESS;
}
http://cboard.cprogramming.com/cplusplus-programming/62907-ggggrrrr-functions.html
CC-MAIN-2016-07
refinedweb
128
51.65