As a researcher who writes publications regularly, I'm frequently faced with the issue of producing neat graphs. This wasn't always easy for me: I used the available tools as best I could, but most of the time I wasn't satisfied with the graphs I produced, and I always used to wonder how other researchers made their graphs look so clean! This issue started to diminish after I came across Python's matplotlib library, which produces exactly such neat graphs. As the library's website puts it:

matplotlib is a Python 2D plotting library which produces publication quality figures in a variety of hardcopy formats and interactive environments across platforms. matplotlib can be used in Python scripts, the Python and IPython shell (a la MATLAB® or Mathematica®), web application servers, and six graphical user interface toolkits. matplotlib tries to make easy things easy and hard things possible. You can generate plots, histograms, power spectra, bar charts, error charts, scatter plots, etc., with just a few lines of code.

In this tutorial, I'm going to show you how to install matplotlib, and then I'll walk you through some examples. If you're interested in digging deeper into Python and learning how to use its power to handle data, why not check out these two courses:

Installing matplotlib

Installing matplotlib is very simple. I'm currently working on a Mac OS X machine, so I will show you how to install the library on that operating system. Please see the matplotlib installation page for more information on installing matplotlib on other operating systems. matplotlib can be installed by running the following commands in your Terminal (I'm going to use pip, but you can use other tools):

curl -O
python get-pip.py
pip install matplotlib

That's it. You now have matplotlib up and running. Just as simple as that!

Drawing Basic Plots

Let's now look at some examples of using matplotlib. The first set of examples will be on drawing some basic plots.
Line Plot

Let's consider a simple example of drawing a line plot using matplotlib. In this case, we are going to use matplotlib.pyplot, which provides a MATLAB-like plotting framework; in other words, it provides a collection of command-style functions that enable matplotlib to work like MATLAB.

Let's say we wanted to plot a line for the following set of points:

x = (4, 8, 13, 17, 20)
y = (54, 67, 98, 78, 45)

This can be done using the following script:

import matplotlib.pyplot as plt
plt.plot([4, 8, 13, 17, 20], [54, 67, 98, 78, 45])
plt.show()

Notice that we represented the x and y points as lists. In this case, the result will be as follows:

The line in the figure above is the default line that gets drawn for us, in terms of shape and color. We can customize that by changing the shape and color of the line using some symbols (specifiers) from the MATLAB plot documentation. So let's say we wanted to draw a green dashed line with diamond markers. The specifiers we need in this case are 'g--d'. In our script above, we place the specifiers as follows:

plt.plot([4, 8, 13, 17, 20], [54, 67, 98, 78, 45], 'g--d')

In which case, the line plot will look as follows:

Scatter Plot

A scatter plot is a graph that shows the relationship between two sets of data, such as the relationship between age and height. In this section, I'm going to show you how we can draw a scatter plot using matplotlib.
Let's take two sets of data, x and y, whose relationship we want to examine with a scatter plot:

x = [2,4,6,7,9,13,19,26,29,31,36,40,48,51,57,67,69,71,78,88]
y = [54,72,43,2,8,98,109,5,35,28,48,83,94,84,73,11,464,75,200,54]

The scatter plot can be drawn using the following script:

import matplotlib.pyplot as plt
x = [2,4,6,7,9,13,19,26,29,31,36,40,48,51,57,67,69,71,78,88]
y = [54,72,43,2,8,98,109,5,35,28,48,83,94,84,73,11,464,75,200,54]
plt.scatter(x, y)
plt.show()

The output of this script is:

Of course, you can change the color of the markers, in addition to other settings, as shown in the documentation.

Histograms

A histogram is a graph that displays the frequency of data using bars, where numbers are grouped in ranges. In other words, the frequency of each data element in the list is shown using the histogram. The grouped numbers, in the form of ranges, are called bins. Let's look at an example to understand this more. Let's say that the list of data we want to find the histogram for is as follows:

The Python script we can use to display the histogram for the above data is:

import matplotlib.pyplot as plt
num_bins = 6
n, bins, patches = plt.hist(x, num_bins, facecolor='green')
plt.show()

When you run the script, you should get something similar to the following graph (histogram):

There are of course more parameters for the hist() function, as shown in the documentation.

Further Reading

This tutorial only scratched the surface of working with graphs in Python. There is more to matplotlib, and you can do many interesting things with this library. If you want to learn more about matplotlib and see other types of figures you can create with it, one place to start is the examples section of the matplotlib website. There are also some interesting books on the topic, such as Mastering matplotlib and Matplotlib Plotting Cookbook.

Conclusion

As we saw in this tutorial, Python can be extended to perform interesting tasks by utilizing third-party libraries.
I have shown an example of such a library, namely matplotlib. As I mentioned in the introduction of this tutorial, producing neat-looking graphs wasn't an easy task for me, especially when you want to present such graphs in scientific publications. matplotlib solved this issue: not only can you produce nice-looking graphs easily, but you also keep control over the graphs (i.e. their parameters), since you are using a programming language, in our case Python, to generate them.
https://code.tutsplus.com/tutorials/introducing-matplotlib--cms-26543
Acme::CPANAuthors - We are CPAN authors

use Acme::CPANAuthors;
my $authors = Acme::CPANAuthors->new('Japanese');

my $number   = $authors->count;
my @ids      = $authors->id;
my @distros  = $authors->distributions('ISHIGAKI');
my $url      = $authors->avatar_url('ISHIGAKI');
my $kwalitee = $authors->kwalitee('ISHIGAKI');
my @info     = $authors->look_for('ishigaki');

If you don't like this interface, just use a specific authors list.

use Acme::CPANAuthors::Japanese;

my %authors = Acme::CPANAuthors::Japanese->authors;

# note that ->author is context sensitive. however, you can't
# write this without dereference for older perls as "keys"
# checks the type (actually, the number) of args.
for my $name (keys %{ Acme::CPANAuthors::Japanese->authors }) {
    print Acme::CPANAuthors::Japanese->authors->{$name}, "\n";
}

Sometimes we just want to know something to confirm we're not alone, or to see if we're doing the right things, or to look for someone we can rely on. This module provides you with some basic information on us.

We've been holding a Kwalitee competition for Japanese CPAN authors since 2006. Though Japanese names are rather easy to distinguish from Western names (as our names have lots of vowels), it's tedious to look for Japanese authors every time we hold the contest. That's why I wrote this module and started maintaining the Japanese authors list, with a script that looks for candidates whose names look Japanese, with the help of Lingua::JA::Romaji::Valid, which I also wrote. Since then, dozens of lists have been uploaded to CPAN.

It may be time to start other games, like offering more useful statistics online. Now we have a website where you can easily see who is the most kwalitative author in your community, or who released or updated the most in the past 365 days. More statistics will come, and suggestions are welcome.

Since 0.14, Acme::CPANAuthors checks the ACME_CPANAUTHORS_HOME environment variable to look for a place where CPAN indices are located.
If you have a local (mini) CPAN mirror, or a source directory for your CPAN clients (~/.cpan/sources etc.), set the variable to point there. If not specified, the indices will be downloaded from the CPAN (to your temporary directory, or to the current directory).

new: creates an object and loads the subclasses you specified. If you don't specify any subclasses, it tries to load all the subclasses found just under the "Acme::CPANAuthors" namespace (except Acme::CPANAuthors::Not).

count: returns how many CPAN authors are registered.

id: returns all the registered ids by default. If called with an id, this returns whether there's a registered author with that id.

name: returns all the registered authors' names by default. If called with an id, this returns the name of the author with that id.

distributions: returns an array of Acme::CPANAuthors::Utils::Packages::Distribution objects for the author with the given id.

avatar_url: returns the gravatar url of the id shown at search.cpan.org (or undef if you don't have Gravatar::URL). See for details.

kwalitee: returns kwalitee information for the author of the id. This information is fetched from a remote API server.

look_for:

my @authors = Acme::CPANAuthors->look_for('SOMEONE');
foreach my $author (@authors) {
    printf "%s (%s) belongs to %s.\n",
        $author->{id}, $author->{name}, $author->{category};
}

takes an id or a name (or a part of them, or even a regexp) and returns an array of hash references, each of which contains an id, a name, and the basename of the class where the person is registered. Note that this will load all the installed Acme::CPANAuthors:: modules but Acme::CPANAuthors::Not and modules with deeper namespaces.

As of this writing, there are quite a number of lists on the CPAN, including: These are not regional ones but for some local groups. These are lists for specific module authors. And other stuff. Thank you all. And I hope more to come.

Kenichi Ishigaki, <ishigaki at cpan.org>

This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
http://search.cpan.org/~ishigaki/Acme-CPANAuthors-0.22/lib/Acme/CPANAuthors.pm
An API for the Thinkst Canary Console

Project description

Thinkst Applied Research

Overview

The Python Canary API Wrapper allows access to the Canary web API.

Installation

The API is supported on Python 2.7. The recommended way to install the API wrapper is via pip:

pip install canarytools

For instructions on installing Python and pip, see "The Hitchhiker's Guide to Python" installation guides.

Quickstart

Assuming you have your API key handy, as well as the domain of your console:

import canarytools
console = canarytools.Console(api_key='API_KEY', domain='CLIENT_DOMAIN')

Note: You can find your API key and domain on your console. Head over to the console's setup page, and under Canary Console API you'll find your API key. Your domain is the tag in front of 'canary.tools' in the console's URL; for example, for a console reached at testconsole.canary.tools, testconsole is the domain.

Alternatively, you can download a configuration file from the Canary Console API tab. Inside the file you'll find instructions on where to place it. If you have this on your system, the api_key and domain parameters are no longer necessary when instantiating a Console object.
With the console instance you can then interact with a Canary Console:

# Get all devices
console.devices.all()

# Acknowledge all incidents for a device older than 3 days
console.incidents.acknowledge(node_id='329921d242c30b5e', older_than='3d')

# Iterate all devices and start the update process
for device in console.devices.all():
    device.update(update_tag='4ae023bdf75f14c8f08548bf5130e861')

# Acknowledge and delete all host port scan incidents
for incident in console.incidents.unacknowledged():
    if isinstance(incident, canarytools.IncidentHostPortScan):
        incident.acknowledge()
        incident.delete()

# Create a web image Canarytoken
console.tokens.create(
    kind=canarytools.CanaryTokenKinds.KIND_WEB_IMAGE,
    memo='Drop this token on DC box',
    web_image='/path/to/test.png',
    mimetype='image/png')

# Print out the description of all incidents and the source IP address
for incident in console.incidents.all():
    print incident.description, incident.src_host

Please see the API documentation for more examples of what you can do with the Canary Console API.

Discussion and Support

Please file bugs and feature requests as issues on GitHub, after first searching to ensure a similar issue was not already filed. If such an issue already exists, please give it a thumbs-up reaction. Comments on issues containing additional information are certainly welcome.

Documentation

The documentation is located at.

License

The Python Canary API Wrapper's source (v1.0.0+) is provided under the Revised BSD License.

Project details

Release history

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/canarytools/1.0.12/
I am not sure which Script Runner function to use, or how to code for a change in the issue.priority field. We want to send an email when a Priority 1 issue is downgraded (so no longer equal to 1-Critical). How do I build it? I have already created a scripted listener which sends an email if a Priority 1 issue is changed to Resolved. However, that rarely happens without the issue being downgraded first.

Hello Kelly, in the Script Runner plugin you can check whether the value of the "Priority" field has changed, based on the changeHistory of the issue. Thus, you can send an email if the value of priority has been downgraded.

def change = event?.getChangeLog()?.getRelated("ChildChangeItem").find { it.field == "Priority" }
if (change) {
    // your code here: check the new priority value and, if it's downgraded, return true
}

This is good. I was going to suggest something similar. Let us know if you need more help @Kelly Besmar.
https://community.atlassian.com/t5/Jira-questions/How-do-I-send-an-email-when-a-Priority-1-issue-is-down-graded-so/qaq-p/675397
System.Console.WriteLine()

This is because Console is a member of the System namespace. However, since System is almost always included as a using directive, you can just write Console.WriteLine(). String is also part of the System namespace, but the ToLower method is not static; in other words, it operates on instantiated members of the String class. This is why you can do things like

string s = "Hello";
s.ToLower();

--BUT NOT--

String.ToLower();

String.ToLower is in System.String, and Console.WriteLine is in System.Console; both are in the System namespace.

I believe if you put:

using System;

at the top of your file, you shouldn't need to specify a namespace, just the object:

Console.WriteLine("..."); // this works fine for me

Can you be more specific about what you are required to do here? -Kelly

System.String.ToLower(); or String.ToLower();

If it's part of System, then how come if I don't include System as a namespace it still works with just .ToLower()? But if I do that with WriteLine(); it errors, and I have to put System.Console.WriteLine().

For instance, without a using System directive you can do:

string str = "this is a string"; // note lowercase "string"
System.Console.WriteLine( str.ToLower() ); // or any other string method

BUT NOT

String s = new String("this is not valid"); // compiler error: "missing a using directive"
https://www.experts-exchange.com/questions/20778540/Easy-Question-I-think.html
Method Overloading in C#

In this article, I'm trying to explain the concept of method overloading. Method overloading is a form of polymorphism, which means one name, multiple forms. It means that methods can share the same name while taking different parameters or signatures, allowing you to have several methods with the same name in one class. In method overloading, the method performs different tasks for different input parameters.

1 - Same method name.
2 - Number of parameters should be different, or
3 - Types of parameters should be different.

To implement method overloading, the methods must have the same name, and the number of parameters or the data types of the parameters must differ; having different return types alone does not make a method overloaded.

using System;

namespace MethodOverLoadingConsoleApplication
{
    class MethodOverloading
    {
        /// <summary>
        /// method area(int side) for calculating the area of a square
        /// </summary>
        /// <param name="side">side of square</param>
        public void area(int side)
        {
            int z = side * side;
            Console.WriteLine("Area Of Square : " + z);
        }

        /// <summary>
        /// method area(int length, int width) for calculating the area of a rectangle
        /// </summary>
        /// <param name="length">length of rectangle</param>
        /// <param name="width">width of rectangle</param>
        public void area(int length, int width)
        {
            int z = length * width;
            Console.WriteLine("Area Of Rectangle : " + z);
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            MethodOverloading mth = new MethodOverloading(); // Create object
            mth.area(4);
            mth.area(7, 5);
        }
    }
}

Output:
Area Of Square : 16
Area Of Rectangle : 35

In this example, I have created two methods with the name area(): one for calculating the area of a square and another for a rectangle. Note that both methods have the same name but a different number of parameters, which makes them overloaded.
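For comparison, here is a rough sketch of the same two computations in Python, which has no signature-based overloading; a default parameter stands in for the second overload (the function and structure here are my own illustration, not from the article):

```python
def area(length, width=None):
    # area(side): with one argument, compute the area of a square
    if width is None:
        return length * length
    # area(length, width): with two arguments, compute the area of a rectangle
    return length * width

print(area(4))     # square of side 4
print(area(7, 5))  # 7 x 5 rectangle
```

The dispatch happens at run time on the argument count, whereas C# resolves the overload at compile time from the call's signature.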
https://www.mindstick.com/blog/503/method-overloading-in-c-sharp
01 November 2011 07:21 [Source: ICIS news]

SHANGHAI (ICIS)--China's manufacturing growth slowed in October. The country's purchasing managers' index (PMI) fell by 0.8 percentage points to 50.4% in October from September, according to data released by the China Federation of Logistics & Purchasing (CFLP) on Tuesday.

From September to October, the new orders index fell by 0.8 percentage points to 50.5%, while the production index dropped by 0.4 percentage points to 52.3%, according to the data.

The declining PMI indicates that economic growth will continue to slow in the near future, said Zhang Liqun, an analyst from CFLP. The country's third-quarter export and investment growth has slowed, and companies are having problems with funding, Zhang added.

The new export orders index dropped by 2.3 percentage points from September to 48.6% in October, the data showed. The decline reflects weaker demand from the international market due to the eurozone debt crisis, an analyst from China Customs said.

In October, both demand and production in This is because orders were made three months in advance, the analyst added.

In October, the import index declined by 3.1 percentage points to 47.0% and the purchasing index declined by 10.4 percentage points to 46.2%, the data showed. "It is the first time the purchasing index has declined to below 50% since the economic crisis in late 2008," the Huachuang analyst added.

The PMI is a measurement of the monthly performance of China's manufacturing sector. A figure above 50% indicates an expansion, while a figure below 50% represents a contraction.
http://www.icis.com/Articles/2011/11/01/9504282/china-oct-pmi-declines-to-50.4-manufacturing-to-slow.html
RepeatPattern

Since: BlackBerry 10.0.0

#include <bb/cascades/RepeatPattern>

Specifies how and if an image should be repeated within a container. The RepeatPattern::Type is used by ImagePaint and ImagePaintDefinition to specify if and how the image should be repeated over the filled area.

Fill: stretches the image to fit the assigned area without preserving the image's aspect ratio
X: the image is repeated along the X-axis and stretched in the Y-axis direction
Y: the image is repeated along the Y-axis and stretched in the X-axis direction
XY: the image is repeated along both the X-axis and the Y-axis

If repeat is set and the area dimensions are not exact multiples of the source image dimensions, the final images in the repeat sequence will be cut off in order to preserve the area. The provided image must also have a width and height that are each a power of two in order to be tileable (for example, 4x4, 8x8, 16x16, 32x32, 64x64, 128x128, 256x256, 128x64, 32x16).

Public Types

The different repeat methods. Since: BlackBerry 10.0.0

Fill = 0x0
X = 0x1
Y = 0x2
XY = 0x3
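The power-of-two constraint on tileable image dimensions can be verified with the usual bit trick; this is a general-purpose sketch (not part of the Cascades API):

```python
def is_power_of_two(n):
    # A positive integer is a power of two exactly when it has a single
    # bit set, so n & (n - 1) clears that bit and yields 0.
    return n > 0 and (n & (n - 1)) == 0

def is_tileable(width, height):
    # Both dimensions must independently be powers of two;
    # they need not be equal (e.g. 128x64 is fine).
    return is_power_of_two(width) and is_power_of_two(height)

print(is_tileable(128, 64))   # True
print(is_tileable(100, 64))   # False, 100 is not a power of two
```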
https://developer.blackberry.com/native/reference/cascades/bb__cascades__repeatpattern.html
I ran the JIT CodeGen bringup tests from CoreCLR and one of them failed (tests/src/JIT/CodeGenBringUpTests/Localloc.cs). Reduced to the following code:

> using System;
>
> public class Program
> {
>     public static void Main()
>     {
>         try {
>             unsafe {
>                 byte* a = stackalloc byte[0];
>                 *a = 0;
>             }
>         }
>         catch (Exception) {
>             Console.WriteLine("Catched Exception");
>         }
>         Console.WriteLine("Hello");
>     }
> }

Running it on MS.NET prints

> Catched Exception
> Hello

Running it on Mono prints

> Hello

Removing the try-catch results in a NullReferenceException on MS.NET and a segfault on Mono. I've now run all of the CoreCLR tests and there are a bunch of other failures apart from this, so I refrained from filing tickets for now.

Supporting this would require adding checks to localloc, slowing it down for little gain. I'm fine with WontFix as a resolution if you think fixing it adds too much overhead :)

Disabled the test in.
https://xamarin.github.io/bugzilla-archives/33/33138/bug.html
Hey everyone, I'm writing a program for a class. It's a relatively simple C program that I'm compiling in Dev C++. I'm trying to make it as complex as possible, but I'm just a college freshman and don't know much beyond the basics of programming. I'd like to add an option for the user to select an input file and output file for the text to be encoded and decoded, as well as an option for which substitution cipher to use (26 in total). I'm a total noob when it comes to file I/O, but here's my code so far. Any tips/recommendations are very much appreciated!

#include <stdio.h>
#include <stdlib.h>
#include <ctype.h> /* needed for isalpha, ispunct, tolower, toupper, isupper */

#define ESC 27 /* standard ascii value for the escape key */

int main()
{
    int i = 0;   // declare a variable named i of type int
    char ch = 0; // declare a variable named ch of type char
    char *alpha = "abcdefghijklmnopqrstuvwxyz"; // char pointer to the beginning of the alphabet
    char *key = "zyxwvutsrqponmlkjihgfedcba";   // char pointer to the beginning of the key
    // FILE *infile, *outfile;
    // infile = fopen("quad.dat", "r+");

    system("cls");
    printf("Cryptology Tool Prototype\n");

    while (ch != ESC) // do the following while the escape character is not present
    {
        printf("Type the message you would like to encode\n");
        ch = fgetc(stdin); // store a character entered from standard input into ch
        if (isalpha(ch))   // do the following if ch is a letter
        {
            for (i = 0; i < 26; i++) // for(INIT ; LOOP WHILE THIS IS TRUE ; MANIPULATE)
            {
                if (alpha[i] == tolower(ch)) // if the character residing at alpha + offset i = ch
                {
                    ch = (isupper(ch)) ? (toupper(key[i])) : key[i];
                    printf("%d\n", i); // was printf("%i\n") with a missing argument; print the index
                    break;
                } // end of if
            } // end of for
        } // end of if
        fputc(ch, stdout);
    } // end of while
    return 0;
} // end of main
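The lookup loop above implements an Atbash-style substitution (the key is just the reversed alphabet). For reference, here is the same mapping sketched in Python using str.translate; unlike the C version, non-letters simply pass through unchanged:

```python
import string

alpha = string.ascii_lowercase
key = alpha[::-1]  # "zyxwvutsrqponmlkjihgfedcba", the same key as the C code

# Build one translation table covering both lowercase and uppercase letters.
table = str.maketrans(alpha + alpha.upper(), key + key.upper())

def encode(text):
    # Characters missing from the table (spaces, punctuation, digits)
    # are left untouched by translate().
    return text.translate(table)

print(encode("Hello"))          # Svool
print(encode(encode("Hello")))  # Hello  (the reversed-alphabet cipher is its own inverse)
```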
https://www.daniweb.com/programming/software-development/threads/36281/cryptology
The BufferedWriter class of the java.io package can be used with other writers to write data (in characters) more efficiently. It extends the abstract class Writer. Working of BufferedWriter The BufferedWriter maintains an internal buffer of 8192 characters. During the write operation, the characters are written to the internal buffer instead of the disk. Once the buffer is filled or the writer is closed, the whole characters in the buffer are written to the disk. Hence, the number of communication to the disk is reduced. This is why writing characters is faster using BufferedWriter. Create a BufferedWriter In order to create a BufferedWriter, we must import the java.io.BufferedWriter package first. Once we import the package here is how we can create the buffered writer. // Creates a FileWriter FileWriter file = new FileWriter(String name); // Creates a BufferedWriter BufferedWriter buffer = new BufferedWriter(file); In the above example, we have created a BufferedWriter named buffer with the FileWriter named file. Here, the internal buffer of the BufferedWriter has the default size of 8192 characters. However, we can specify the size of the internal buffer as well. // Creates a BufferedWriter with specified size internal buffer BufferedWriter buffer = new BufferedWriter(file, int size); The buffer will help to write characters to the files more efficiently. Methods of BufferedWriter The BufferedWriter class provides implementations for different methods present in Writer. 
write() Method

write() - writes a single character to the internal buffer of the writer
write(char[] array) - writes the characters from the specified array to the writer
write(String data) - writes the specified string to the writer

Example: BufferedWriter to write data to a File

import java.io.FileWriter;
import java.io.BufferedWriter;

public class Main {
    public static void main(String args[]) {
        String data = "This is the data in the output file";
        try {
            // Creates a FileWriter
            FileWriter file = new FileWriter("output.txt");

            // Creates a BufferedWriter
            BufferedWriter output = new BufferedWriter(file);

            // Writes the string to the file
            output.write(data);

            // Closes the writer
            output.close();
        } catch (Exception e) {
            e.getStackTrace();
        }
    }
}

In the above example, we have created a buffered writer named output along with a FileWriter. The buffered writer is linked with the output.txt file.

FileWriter file = new FileWriter("output.txt");
BufferedWriter output = new BufferedWriter(file);

To write data to the file, we have used the write() method. Here, when we run the program, the output.txt file is filled with the following content.

This is the data in the output file

flush() Method

To clear the internal buffer, we can use the flush() method. This method forces the writer to write all data present in the buffer to the destination file. For example, suppose we have an empty file named flush.txt.

import java.io.FileWriter;
import java.io.BufferedWriter;

public class Main {
    public static void main(String[] args) {
        String data = "This is a demo of the flush method";
        try {
            // Creates a FileWriter
            FileWriter file = new FileWriter("flush.txt");

            // Creates a BufferedWriter
            BufferedWriter output = new BufferedWriter(file);

            // Writes data to the file
            output.write(data);

            // Flushes data to the destination
            output.flush();
            System.out.println("Data is flushed to the file.");

            output.close();
        } catch (Exception e) {
            e.getStackTrace();
        }
    }
}

Output

Data is flushed to the file.
When we run the program, the file flush.txt is filled with the text represented by the string data.

close() Method

To close the buffered writer, we can use the close() method. Once the close() method is called, we cannot use the writer to write data.

Other Methods of BufferedWriter

To learn more, visit Java BufferedWriter (official Java documentation).
https://www.programiz.com/java-programming/bufferedwriter
What I'm trying to do is check if a string is a palindrome or not. I'm supposed to ignore all non-letter characters, but it doesn't appear to be working. Any suggestions? For example, when I type in "He lived as a devil, eh?" it tells me that it's not a palindrome even though it is.

Code:

#include <iostream>
#include <string>
#include <cctype>
using namespace std;

int main()
{
    char ch;
    string s;
    cin.get(ch);
    while (ch != '\n')
    {
        if (ispunct(ch) || isspace(ch) || isdigit(ch))
        {
            ch = '\0';
        }
        else
            ch = tolower(ch);
        s += ch;
        cin.get(ch);
    }
    if (s == string(s.rbegin(), s.rend())) {
        cout << s << "\nis a palindrome.";
    }
    else
        cout << s << "\nis not a palindrome.";
    return 0;
}
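For reference, here is the intended check sketched in Python. The key difference from the code above is that non-letter characters are skipped entirely rather than replaced with '\0' (appending '\0' leaves the null characters at asymmetric positions in the stored string, which breaks the reversed comparison):

```python
def is_palindrome(text):
    # Keep only the letters, lowercased; drop spaces, punctuation, digits.
    letters = [c.lower() for c in text if c.isalpha()]
    # A palindrome reads the same forwards and backwards.
    return letters == letters[::-1]

print(is_palindrome("He lived as a devil, eh?"))  # True
print(is_palindrome("Hello, world"))              # False
```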
https://cboard.cprogramming.com/cplusplus-programming/175245-palindrome-program-not-working-post1273593.html?s=bee9f12521a887647ea58b0f788caa3c
As has come up repeatedly over the past few years, nobody seems to be very happy with the way that NumPy's datetime64 type parses and prints datetimes in local timezones. The tentative consensus from last year's discussion was that we should make datetime64 timezone naive, like the standard library's datetime.datetime.

That makes sense to me, and it's exactly what I'd like to see happen for NumPy 1.11. Here's my PR to make that happen:

As a temporary measure, we will still parse datetimes that include a timezone specification by converting them to UTC, but will issue a DeprecationWarning. This is important for a smooth transition, because at the very least I suspect the "Z" modifier for UTC is widely used. Another option would be to preserve this conversion indefinitely, without any deprecation warning.

There's one (slightly) contentious API decision to make: what should we do with the numpy.datetime_to_string function? As far as I can tell, it was never documented as part of the NumPy API and has not been used very much, or at all, outside of NumPy's own test suite, but it is exposed in the main numpy namespace. If we can remove it, then we can delete and simplify a lot more code related to timezone parsing and display. If not, we'll need to do a bit of work so that we can distinguish between the string representations of timezone-naive and UTC datetimes.

Best,
Stephan
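For readers less familiar with the terminology: "timezone naive" here means what it means for the standard library's datetime — the value carries no UTC offset, and its interpretation is left to the caller. A minimal Python 3 illustration (stdlib only, not NumPy-specific):

```python
from datetime import datetime, timezone

naive = datetime(2015, 10, 1, 12, 30)                        # no tzinfo attached
aware = datetime(2015, 10, 1, 12, 30, tzinfo=timezone.utc)   # pinned to UTC

print(naive.tzinfo)  # None: the naive value is just wall-clock time
print(aware.tzinfo)  # UTC

# The two kinds cannot even be compared directly:
try:
    naive < aware
except TypeError as e:
    print("TypeError:", e)
```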
https://mail.python.org/pipermail/numpy-discussion/2015-October/073932.html
I'm sticking with Link list tho @_@

It's your choice, but remember that the list container from the STL is also a linked list (a doubly linked list).

Since I'm sticking with linked lists, how do I correct the memory leak problem?

Deleting that pointer would do the trick.

// Example Code
int *temp = new int;
delete temp;

char action;
cout << "how many?";
cin >> action;
if (sub_choice(action))
{
    int count = 1;
    while (action != 'x')
    {
        cin >> action;
        switch (action)
        {
        case 'c':
            {
                Customer *CUST = new Customer;
                CUST->lineNumber = count;
                count++;
                CUST = CUST->next;
                delete CUST; // you mean this pointer?
            }
            break;
        default:
            cout << "hello";
        }
    }

This would stop the memory leak, but it would not make CUST accessible. I mean, the new customer profile will get deleted. You should find a way of handling new pointers. For this, just imagine how a linked list works and how you should define it ;)

char action;
cout << "how many?";
cin >> action;
if (sub_choice(action))
{
    int count = 1;
    while (action != 'x')
    {
        cin >> action;
        switch (action)
        {
        case 'c':
            {
                Customer *CUST = new Customer;
                CUST->lineNumber = count;
                count++;
                CUST = CUST->next;
                delete CUST; // you mean this pointer?
            }
            break;
        default:
            cout << "hello";
        }
    }

That's probably the reason you are getting 1 after two tries: you are taking in data twice. Removing the red part would solve it. Secondly:

Customer *CUST;

I think Customer must be defined outside the while loop. And then figure out a way to handle the linked list. If you think the linked list is getting a little complex, there is always a vector :)

What does sub_choice(d) do??? I am really unable to analyse this. Firstly, how can you remove a customer without creating the customer? I think you should add a case such as 'r' in the while loop to remove a customer. And how do you plan on removing a customer from the line?
Make sure that all the functions such as remove customer, add customer, talk to customer come within the while loop, and be sure on how you will be handling customers with the link-list. If you've hit a key once, you haven't hit it twice :P Can you give me that code?

#include <iostream>
#include <cstdlib>
using namespace std;

struct Node {
    Node *next;
    int value;
};

bool menu(const char& m)
{
    return m == 'x' || m == 'z';
}

int main()
{
    int data = 0;
    cout << "How many" << endl;
    cin >> data;
    Node *head = NULL;
    Node *tail = NULL;
    int counter = 1;
    while (counter <= data)
    {
        Node *list = new Node;
        list->value = counter;
        list->next = NULL;
        if (counter == 1)
        {
            head = list;
            tail = head;
        }
        else
        {
            tail->next = list;
            tail = tail->next;
        }
        counter++;
    }
    char n;
    cout << "choose from menu: " << endl;
    cin >> n;
    if (menu(n))
    {
        int data2 = 0;
        cout << "which one do you want?" << endl;
        cin >> data2;
        while (n != 't')
        {
            cin >> n;
            switch (n)
            {
                case 'x':
                    data2++;
                    cout << data2;
                    break;
                default:
                    cout << "hey" << endl;
            }
        }
        Node *cur = head;
        for (int i = 0; i < data2; i++)
        {
            if (data2 == cur->value)
            {
                cout << "Your choice is: " << cur->value << endl;
            }
            else
            {
                cout << "try again" << endl;
            }
            cur = cur->next;
        }
    }
    system("PAUSE");
    return 0;
}

Type x twice, and it'll go into an indefinite loop. I'm wondering something: can input be something like CCCCCCCC (this creates that many customers), or should it be C C C C C C etc.? With the above code you will be able to enter CCCCCCC and create many C's. After you enter 'C' you should press ENTER or RETURN for the command to execute. I've never seen an alternative to that, so I really can't help out with entering 'C' and executing the command without it. Did you read my remark on system("pause"); in my signature? BTW, is there any specific reason to write this: Node *head = NULL; Node *tail = NULL; ? Remark(s): if you want new to return a NULL-pointer when allocation has failed, then you should use new(nothrow) :) Check out your code and look at what creates a new customer.
char action;
cout << "how many?";
cin >> action;
Customer *cust = new Customer;
Customer *head = NULL;
Customer *tail = NULL;
head = Customer;
tail = head;
if (sub_choice(action))
{
    int count = 1;
    while (action == 'c')
    {
        switch (action)
        {
            case 'c':
            {
                tail->next = new Customer;
                tail = tail->next;
                CUST->lineNumber = count;
                count++;
            }
            break;
            default:
                cout << "hello";
        }
    }
}

Is this better? A switch with one case is allowed, but I would use an if-statement instead :)

> actually, now every time I hit enter, it's like I'm pressing c

You could also use cin.get() with 'c' as a delimiter (though I'm not sure whether this is the best method to do this). The problem with your code is that once you enter 'C' it goes into an infinite loop making and deleting pointers. I guess you will need a cin >> action; inside the while loop. If you want the loop to run forever, each time taking in C, you can do this:

while (true)
{
    cin >> action;
    switch (action)
    {
        case 'c':
            // DO something.
    }
}
...
https://www.daniweb.com/programming/software-development/threads/191481/pressing-a-button-makes-a-new-thing/2
import "github.com/cube2222/octosql/parser/sqlparser/dependency/hack"

Package hack gives you some efficient functionality at the cost of breaking some Go rules.

String force casts a []byte to a string. USE AT YOUR OWN RISK

StringPointer returns &s[0], which is not allowed in Go.

StringArena lets you consolidate allocations for a group of strings that have similar life length.

func NewStringArena(size int) *StringArena
    NewStringArena creates an arena of the specified size.

func (sa *StringArena) NewString(b []byte) string
    NewString copies a byte slice into the arena and returns it as a string. If the arena is full, it returns a traditional Go string.

func (sa *StringArena) SpaceLeft() int
    SpaceLeft returns the amount of space left in the arena.

Package hack imports 2 packages (graph) and is imported by 1 package. Updated 2019-08-11.
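To illustrate what an arena like this buys you, here is a minimal self-contained sketch of the same idea. This is not the package's actual implementation (which uses unsafe tricks to avoid copies); it is only an illustration of the documented API shape.

```go
package main

import "fmt"

// StringArena consolidates many small string allocations into one
// backing buffer (a simplified illustration of the hack package idea).
type StringArena struct {
	buf []byte
}

// NewStringArena creates an arena with the given capacity.
func NewStringArena(size int) *StringArena {
	return &StringArena{buf: make([]byte, 0, size)}
}

// NewString copies b into the arena; if the arena is full it falls
// back to an ordinary Go string, mirroring the documented behaviour.
func (sa *StringArena) NewString(b []byte) string {
	if len(sa.buf)+len(b) > cap(sa.buf) {
		return string(b)
	}
	start := len(sa.buf)
	sa.buf = append(sa.buf, b...)
	return string(sa.buf[start:]) // note: the real package avoids this copy
}

// SpaceLeft reports how many bytes remain in the arena.
func (sa *StringArena) SpaceLeft() int {
	return cap(sa.buf) - len(sa.buf)
}

func main() {
	sa := NewStringArena(8)
	fmt.Println(sa.NewString([]byte("hi")), sa.SpaceLeft()) // hi 6
}
```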
https://godoc.org/github.com/cube2222/octosql/parser/sqlparser/dependency/hack
I want to darken the current screen a bit before popping up a new surface. I know there is no function in pygame to do this and that I will have to process the image to darken it. But as far as I know, the only way to get the currently displayed surface in pygame is by saving it to disk as a file, which slows down the game. Is there any other way to do this with pygame? Like keeping the image in memory so that I can process it without saving it somewhere. Thanks in advance, Stam

You don't need to save anything to a file. When you read an image from a file, it is a Surface object. You then blit this object to the screen. But these Surface objects have the same methods and properties as the object working as the screen (which is also a Surface): you can draw primitives, and blit other images to them, all in memory. So, once you read your image, just make a copy of it, blit a black overlay surface with partial alpha onto it to darken it, and then blit it to the screen. Repeat the process, increasing the opacity of the overlay and pasting the result on the screen again, if you want a gradual darkening effect. (A plain pygame.draw.rect ignores the alpha component of the colour on an ordinary surface, so a separate overlay surface with set_alpha is used instead.)

import pygame

screen = pygame.display.set_mode((640, 480))
img = pygame.image.load("MYIMAGE.PNG").convert()

# A black overlay whose per-surface alpha controls how dark the image gets
overlay = pygame.Surface((640, 480))
overlay.fill((0, 0, 0))

for opacity in range(0, 255, 15):
    work_img = img.copy()
    overlay.set_alpha(opacity)
    work_img.blit(overlay, (0, 0))   # darken the copy in memory
    screen.blit(work_img, (0, 0))
    pygame.display.flip()
    pygame.time.delay(100)
https://codedump.io/share/xZljaIrOSS6w/1/pygame-how-to-darken-screen
Similar Content

- By Jemboy Recently I was working on a script with icons using GUICtrlCreateIcon. I decided to change the sub folder name of the icons to a more meaningful name, however made a typo. I tested the .exe on my test computer and it worked flawlessly (because both icon folders were on my test computer) 😁 But after I installed the script on the intended computers, I got chaos!😵 Zooming into the problem, I discovered that because the icons could not be found, the ControlIDs were returned with a value of 0 and thus played havoc within the GUIGetMsg() switch/case statement. I have been able to reproduce this (see example):

#include <GUIConstantsEx.au3>
;============================================================================================================
; PLEASE, do not save this example in the example folder: C:\Program Files (x86)\AutoIt3\Examples\Helpfile
;============================================================================================================
Example()

Func Example()
    GUICreate(" My GUI Icons", 250, 250)
    $Icon1 = GUICtrlCreateIcon("shell32.dll", 10, 20, 20)
    $Icon2 = GUICtrlCreateIcon(@ScriptDir & '\Extras\horse.ani', -1, 20, 40, 32, 32)
    $Icon3 = GUICtrlCreateIcon("shell32.dll", 7, 20, 75, 32, 32)
    GUISetState(@SW_SHOW)

    ;$Icon2 = -1 ; ==> When this line is uncommented the script "works", so -1 could be a potential fix.

    ; Loop until the user exits.
    While 1
        Switch GUIGetMsg()
            Case $GUI_EVENT_CLOSE
                ExitLoop
            Case $Icon2
                Beep(500, 500)
        EndSwitch
    WEnd

    GUIDelete()
EndFunc   ;==>Example

If you save the above script outside the AutoIt example folder and run it, it will keep beeping because GUICtrlCreateIcon did not find horse.ani and returned $Icon2 = 0. At the moment GUICtrlCreateIcon() only returns the ControlID on success and 0 on failure. I would like to propose a return of -1 on failure, so an existing and working script won't go awry when the icon cannot be found.
- By matwachich AutoIt3 Lua Wrapper. This is an AutoIt3 wrapper for the Lua scripting language. Consider it beta software, but since I will be using it in a commercial product, expect it to evolve. It has been developed with Lua 5.3.5. Updates will come for new Lua versions. Everything works just fine, except one (big) limitation: anything that throws a Lua error (using C setjmp/longjmp functionality) will crash your AutoIt program. That means that it is impossible to throw errors from an AutoIt function called by Lua (luaL_check*, lua_error...). It is hosted in Github: Simple example

#include <lua.au3>
#include <lua_dlls.au3>

; Initialize library
_lua_Startup(_lua_ExtractDll())
OnAutoItExitRegister(_lua_Shutdown)

; create new execution state
$pState = _luaL_newState()
_luaopen_base($pState) ; needed for the lua's print function

$iRet = _luaL_doString($pState, 'print("Hello, world!")')
If $iRet <> $LUA_OK Then
    ; read the error description on top of the stack
    ConsoleWrite("!> Error: " & _lua_toString($pState, -1) & @CRLF)
    Exit
EndIf

; close the state to free memory (you MUST call this function, this is not AutoIt's automatic memory management, it's a C library)
_lua_close($pState)

- By tbwalker I'm trying to build a script that will eventually create a log with time stamps of the active windows used on a workstation throughout the day, but I'm having a problem figuring out how to actually get this information. For example, if someone has Microsoft Word open, I'd like to be able to pop-up/log "word.exe" along with the full path to that file if at all possible (sort of like seeing the application DETAILS name in Windows Task Manager and being able to right-click on the name and choose "Open File Location" to get the full path to the file). Is what I'm asking even possible within the realm of AutoIt?
I have the below script as a test that gets me the current active window handle and title in a message box every 6 seconds, but for the life of me, I don't know what code I need to use to get the actual .EXE name/path of the active window.

#include <MsgBoxConstants.au3>

Local $i = 0
Do
    Global $handle = WinGetHandle("[ACTIVE]")
    Global $title = WinGetTitle("[ACTIVE]")
    MsgBox(0, "Active Handle & Title", $handle & " - " & $title, 3)
    $i = $i + 1
    Sleep(3000)
Until $i = 100

Any help or suggestions would be greatly appreciated. I don't mind figuring out the code myself, if someone could just point me in the right direction. Thanks, TBWalker
https://www.autoitscript.com/forum/topic/179077-how-to-get-icon-associated-with-an-extension/?tab=comments#comment-1285277
Hi all!

*OS*: Mint Linux 17, 64bit Linux (3.13.0-24-generic #46-Ubuntu SMP Thu Apr 10 19:11:08 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux)
*Python*: Python 2.7.7 |Continuum Analytics, Inc.| (default, Jun 2 2014, 12:34:02) [GCC 4.1.2 20080704 (Red Hat 4.1.2-54)] on linux2
*PyRosetta*: PyRosetta.Ubuntu-12.04LTS.64Bit.devel-r56795

While trying to import PyRosetta I get the following error: "ImportError: rosetta/libmini.so: undefined symbol: __sinh_finite"

Here are the full steps. Inside of a bash shell I do:

. SetPyRosettaEnvironment.sh
python
import rosetta

Here is the stack trace:

Traceback (most recent call last):
  File "", line 1, in
  File "rosetta/__init__.py", line 30, in
    import utility
  File "rosetta/utility/__init__.py", line 1, in
    from _utility_ import *
ImportError: rosetta/libmini.so: undefined symbol: __sinh_finite

Any help is very welcome. Thank you, Ajasja Ljubetič

This seems to be connected to Anaconda python. If I use the system python (Python 2.7.6 (default, Mar 22 2014, 22:59:56) [GCC 4.8.2] on linux2) then I get a different error:

  File "rosetta/__init__.py", line 45, in <module>
    import numeric
ImportError: No module named numeric

And indeed the whole numeric subdirectory is missing under the rosetta module. Will try to re-download the tar.bz2 file.

ajasja, I am having this missing numeric module problem when I tried to install on Ubuntu. What fixed this problem for you? Did you use a different file than the regular tar.bz2 file? Thanks for your help

Hi, Yes, I downloaded a different release. (What each release means is described here). Best regards, Ajasja

Happy to report the PyRosetta.Ubuntu-12.04LTS.64Bit.namespace.mode=release.branch=release-r21 works correctly with the system python version! I did notice that the database directory is no longer named "rosetta_database", but just "database".
In "SetPyRosettaEnvironment.sh" the line for the database is: export PYROSETTA_DATABASE=$PYROSETTA/rosetta_database (so the wrong name, but somehow the correct database directory is found nevertheless).
https://rosettacommons.org/node/3719
C++ Tutorial: Day 5!

Day 5! Bugs/Typos: Forgot semicolon. Thanks to @SpaceFire Please upvote to help dominate the tutorial section. REMEMBER: ALL STATEMENTS MUST END WITH A SEMICOLON

Day 5!........... For Loops!

What in the world are For Loops? Definition: For Loops are designed to iterate a number of times. Basically a template for a for loop looks like this...

for(initialization; condition; increase/decrease) statement;

bla bla bla So this would be a cool for loop for a SpaceX countdown. Don't copy the code, try to write it yourself.

#include <iostream>
using namespace std;

int main() {
  for (int n = 10; n > 0; n--) {
    cout << n << ", ";
  }
  cout << "Liftoff!\n";
}

Initialization: is executed first, and sets the loop variable to some initial value.
Condition: is checked, and if true, the loop runs; if false, it ends.
Statement: basically the statements enclosed in the curly brackets.
Increase: is executed, and the loop goes around again until the condition is false.

While Loops!

What are While Loops? In a while loop, the condition is evaluated first and if true, the statement inside the while loop will be executed. This will continue until the condition becomes false. When the condition is false, it comes out of the loop and goes to the next statement.

Syntax of a while loop:

while(condition) {
  statement(s);
}

An Example:

#include <iostream>
using namespace std;

int main() {
  int i = 1;
  while(i <= 6){
    cout << "Value of variable i is: " << i << endl;
    i++;
  }
}

This will print out the numbers 1,2,3,4,5,6. Ummmm that's it!

Challenges:
Build a launch sequence again, but instead of it launching, make it abort. Like this: 10,9,8,7,6,5,4,3, abort!
Using a while loop, make an infinite sequence.
Go study pointers, because the next one is going to be HARD

Hey everyone, all these competitors in the tutorials section must be wiped out. Everyone upvote! (To dominate the tutorial section.) Ummm featured shoutout..... @TheForArkLD Always think the name is THeFORk @Jakman A hardcore Rust programmer.
Better than C++: [ link redacted by moderators for advertising ] Also a lot more in depth than this tutorial. :D @Muffinlavania yeah, I'm looking at stupid traders at r/wallstreetbets it is like speedrunning for the stock market @CodeLongAndPros Hey man, sorry to say this, but you guys are really getting agressive. I'm trying to be nice, but I'm starting to get threats from wuru @CodeLongAndPros Hey CodeLong, to reply to your statement, I wasn't really talking to you in general. I was talking more about wuru, and his kinda agressive statements. (again just look at the comments) but sorry for the agressive stance in your opinion. @Muffinlavania Dang muffin, a 100% increase in cycles? Whatttt, (also 1/5 of your cycles come from a dead account but whatever) @HahaYes @johnstev111 ya know you are commenting on my comment, can you like duke it out somewhere else @Muffinlavania Oh dear... Ok.Sorry for late reply, this comment was posted at 3am my time @johnstev111 oh its 8:50 Pm for me, so its 7 hours? idk btw i have HAHAYES's time @SpaceFire yeah thanks, share this with anyone that you can. I am competing with CodeLongAndPros, and of course I must win. @CodeLongAndPros Yes, good content is awesome. Competition is awesome. Someone always loses @AmazingMech2418 C is good for buiding a OS, C++ is better for pretty much anything else @AmazingMech2418 Cpp programmers would be better off if they followed Rust memory guidelines @HahaYes They both compile to machine code. Neither is actually faster. Actually, I'd say that Rust is faster since it has immutability by default (and, therefore, can inline most variables). @CubeyTheCube Rust is a relatively new programming language. C holds your hand too. Rust is made for men and so is Cpp. Are you not a real man @johnstev111 ? How is a simple language the best? Scratch is not anywhere other than its own website. @johnstev111 lol. I bet you hate static typing and had multi-brain failure when you saw C the first time. 
@HahaYes THE MESSGE THEAT 404ED YOU: THE LINK TO IT: @johnstev111 bro the messages are public. how am i going to hide. One thing i dont do is fake things. @lightningrock. @lightningrock feel free to upvote(I think you already have) and share this with everyone! @awesome10 @lightningrock @mkw @somersgreg @adityaru @Muskan786 @DamienKilduff @AshishSarkar @mkhoi @TheForArkLD @DigitCommander @Jakman @Muffinlavania @DannyIsCoding @SpaceFire @OrangeJooce123 @AmazingMech2418 Day 6 is out! @HahaYes YEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE @SpaceFire SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS @HahaYes AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAY[email protected]HahaYes i used my alt acount @HahaYes @HahaYes nice. @HahaYes EEEEEEEEEEEEEE @TheForArkLD FFFFFF Upvote All my projects @HahaYes Ok later ( ͡° ͜ʖ ͡°) @HahaYes haha yes @HahaYes NO SPAM PING ALLOWED OR I WILL HAHANO YOU AGAIN @sean098 feel free to upvote! @lightningrock oh okay @HahaYes NO SPAM PING @johnstev111 what do you mean. You literally just 404 ed us @HahaYes Wait... did I? @HahaYes HAHAYES @Warhawk947 YESssSSs Upvote @HahaYes Please try not to mass-ping users @theangryepicbanana I've stopped mass-pinging. Someone told me to stop before this. But thanks for the warning anyways. :) @HahaYes, could you challenge me to make a program, it’s been a few weeks, or months of learning python, and I understand a bit of OOP, classes and objects, exception handling (somewhat), scopes, functions, and beginner stuuff. @dillonjoshua68 hmmm go make a calculator. What kind of Calc? Basic or with complex functions? @HahaYes @dillonjoshua68 first do basic then complex. Sorry for the late reply. Np, and I already did the basic as previous school work, so imma improvise, and do complex, btw which functions should I add? 
@HahaYes I might have to use the math module for some, such as sin cos and tan, cuz those require unit circles to understand without a calculator @HahaYes @dillonjoshua68 ummm look at that iphone calculator. Do like sqrt, cubic, log, sine, cos, tan that stuff @dillonjoshua68 yep
https://replit.com/talk/learn/C-Tutorial-Day-5/42757
What is Big O notation? It is the language we use to talk about how long an algorithm takes to run, which allows us to compare the efficiency of different approaches to a problem. When we talk about Big O we are talking about the concept of asymptotic analysis, which focuses on how the runtime of an algorithm increases as the input grows and approaches infinity. What this means is that as datasets get larger, it is the growth function which will be the dominating factor of runtime.

What do you mean by a growth function? This is how we express how quickly the runtime grows. Since we aren't measuring the runtime directly, we can't express it in something like seconds. Instead we use Big O notation and the size of the input, which is denoted by "n", and we can express the growth using that input with expressions such as O(n), O(n^2), O(lg(n)), etc.

Here are some algorithmic examples:

O(1) : "Constant time"

def print_first_item(items)
  puts items[0]
end

This method runs in constant time because regardless of the size of the array of items, only one step is happening.

O(n) : "Linear time"

def print_all_items(items)
  items.each do |item|
    puts item
  end
end

This method is considered O(n) where n is the number of items in the array. It is linear because we perform a step for each item in that array. If there are 10 items, we print 10 times; if there are 100, we print 100 times.

O(n^2) : "Quadratic Time"

def print_all_possible_ordered_pairs(items)
  items.each do |first_item|
    items.each do |second_item|
      puts first_item, second_item
    end
  end
end

This method is considered quadratic because if the array has 10 items, we print 100 times. This is because we have a loop nested within a loop. For n items in the array, the outer loop runs n times, and the inner loop runs n times for each iteration of the outer loop. This brings us to printing n^2 times.

Note: if you have a loop within a loop, it is often (but not always) an indication that you will have a O(n^2) runtime.
Why do we care about the dataset as its size approaches infinity? It is helpful to see this visually. Below we can see that with a smaller dataset, the O(n^2) function runs in less time than the linear or logarithmic function, so we might be tempted to use the n^2 method. However, most of the time as programmers we work with very large datasets, and sometimes we can't know how big they will become as more users are created or more information is added. We can see that as the input gets increasingly larger, O(n) will have a faster runtime than O(n^2), and O(lg(n)) will be even faster as the dataset gets even larger.

We also care about the input as it gets arbitrarily large because when talking about our runtimes, we can drop constants, coefficients, and lower-order terms associated with the function. This is because all of these have relatively minor significance compared to the highest order component. So if an algorithm is O(2n), we can ignore the 2 and just call it O(n). Similarly, for a function that is O(n + n^2) we can refer to it as O(n^2), because the lower-order term becomes insignificant compared to the cost of the n^2 term as the input increases.

Worst Case Analysis

When programmers talk about Big O notation, it is typically implied that we are discussing the worst-case situation. There are a few reasons for this. It is usually easier to calculate the worst-case runtime than the average runtime because there are fewer factors that need to be accounted for. A worst-case analysis is a guarantee that the runtime will never take any longer. And it is actually not that uncommon that the worst-case scenario happens frequently in practice. Honesty is important! You may have written an algorithm that in the best-case scenario has O(lg(n)), and if that is how you base your determination of its runtime, you may end up with a very unpleasant situation when all of a sudden the worst happens, your runtime is actually O(n^2), and suddenly an app becomes unusable.
In summary, Big O is a way that we can communicate with others about the runtime of an algorithm. When we know the runtime, we can make improvements to our code to make it more efficient. If we have an algorithm that we know runs in O(n^2), we can then try to refactor it to make it more efficient, with O(n * lg(n)) or a function that is even faster. Happy coding!

Resources
- Intro to Asymptotic Runtime Analysis by Micah Shute
- Big O Notation from Interview Cake
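The post mentions O(lg(n)) several times without showing one. As an added illustration (not from the original article), binary search over a sorted array halves the remaining range on every comparison, so its number of steps grows logarithmically:

```ruby
# Binary search: each comparison halves the remaining range,
# so the runtime grows as O(lg(n)).
def binary_search(sorted_items, target)
  low = 0
  high = sorted_items.length - 1
  while low <= high
    mid = (low + high) / 2
    return mid if sorted_items[mid] == target
    if sorted_items[mid] < target
      low = mid + 1
    else
      high = mid - 1
    end
  end
  nil
end

puts binary_search([1, 3, 5, 7, 9], 7) # => 3
```

Doubling the array from 1,000 to 2,000 items adds only one more comparison, which is exactly the flat-looking curve of the logarithmic function described above.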
https://dev.to/mmcclure11/introduction-to-big-o-notation-50di
.NET Programmers In Demand, Despite MS Moves To Metro 319

mikejuk writes "Are you a newbie programmer looking for a job? It seems your best bet is to opt for .NET. According to technical jobs website Dice.com, companies in the U.S. have posted more than 10,000 positions requesting .NET experience, a 25 percent increase compared to last year's .NET job count. So Microsoft may want us to move on to Metro but the rest of the world seems to want to stay with .NET."

Confused

Re:Confused (Score:4, Informative)

Except that's not what they're doing, they're un-deprecating C++, not deprecating C#. Microsoft seems finally to have come to the conclusion that anyone with a lick of sense could have told them a decade ago. Some things work really well in managed code, some things don't. A large part of Microsoft's product suite has been migrated to .NET, but for reasons anyone with half a brain could tell you, not everything should be written that way. At the absolute least, you need to write enough code to run .NET (or any managed code) in a non-managed language, because unlike C++, managed code cannot run itself. In addition to that sort of stuff there's plenty of things which could be written in .NET, but for which doing so wouldn't make any real sense. At present there are certain things in the Windows OS which are a huge pain to do in C++ since Microsoft has essentially replaced MFC with .NET, so you end up mixing in C# code where it really doesn't make any sense. Microsoft are rectifying this and allowing C++ to be an equal player, they're not getting rid of .NET, they are continuing to build their own software in .NET (where it makes sense), they're just allowing C++ programmers to play too.

Re:Confused (Score:5, Informative)

As I understand it, WinRT replaces COM. WinRT consists of several parts. The first such part is a replacement for COM, heavily inspired by managed code.
Indeed the restrictions on the exported interface are explicitly designed such that objects remain easy to call from managed code. It also lifts some idiotic restrictions that COM had. The interfaces are now described using CLI metadata in the form of a WinRT file. Despite being heavily inspired by managed code, this is still all native code, and does not require a garbage collector. The second part of WinRT is a set of APIs that replace many Win32 APIs, exported using this new COM replacement. These APIs are also inspired by managed code, especially the naming and namespacing conventions. The APIs are not particularly low level, but are actually rather similar to many of the APIs in the .Net Framework. For example, consider the 'Windows.Data.Json' namespace. That hardly seems low level. Or how about 'Windows.Data.Xml.Dom', which is very roughly a ported version of the .NET 'System.Xml' namespace? When writing .NET metro style apps in C#, the full .Net framework is actually available, although only a portion of it is exposed by default, because the app store will reject Metro-style apps that use APIs not exposed by default, because those APIs can be used to escape the Metro Sandbox. (This is much like how metro-style C++ code could call any win32 API, even those not exposed by default, but that will cause the application to be rejected from the app store.)

Re:Confused (Score:4, Informative)

AFAIK, it's not a replacement of COM, as such. More like a set of enhancements to COM:
- it's still based on IUnknown [microsoft.com]
- instead of CoCreateInstance [microsoft.com], HKCR [microsoft.com] & CLSID [microsoft.com]s, there's RoActivateInstance [microsoft.com] and its string-based registry entries.
- instead of VB's IDispatch [microsoft.com]/ITypeInfo [microsoft.com], there's IInspectable [microsoft.com]/IMetaDataImport2 [microsoft.com] for getting type information
- instead of BSTR [microsoft.com], there's the immutable HSTRING [microsoft.com]

The confusing thing is that there's a whole bunch of work done in the language environments (C++ compiler, .NET runtime) to make all this invisible.

Mod Parent Down (Score:5, Informative)

This isn't insightful. It's plain wrong. As someone who attended the Build conference and spoke directly with several Microsoft program managers, I can attest that Metro/WinRT is not a replacement for .NET. I asked several times something like "But can I do Q in the sandbox?" and they would say "No, in that case use regular .NET to do Q and distribute your apps through traditional channels (or link to the installer in the app store)." I never got the impression that Metro was always the preferred approach, just the preferred approach for slate devices. I don't know what Microsoft wants to do in the future past Windows 8. Maybe you're right, and Microsoft wants to give up their stronghold on enterprise applications that have certain hardware or interoperability requirements not allowed by Metro, so that they get control over tablet apps. But I'm not betting the bank on that.

Re:Confused (Score:5, Informative)

Sigh ... please tell me you don't tell people you know the .NET framework. What makes WinRT a royal pain is that it is a low-level C++ API. Thus C# becomes a second rate citizen and C++ a first rate citizen, and it uses COM technologies.
.NET Libraries care not what language they are being used by, sure the API may not feel as natural in one language over the other, but thats not anything new and will always be there. Its rather retarded to think that any API other than the one at the lowest level is going to be the one that is most natural. Its all implemented in C at the low level, regardless of what lazy language you throw on top of it. Never thought that would happen in that COM is brought back to life.. It is what it is and personally I think WinRT will fail overall because it means you are completely beholden to the Windows platform! You mean like Windows.Forms is beholden to the Windows platform? So I guess you're saying it would be absolutely impossible for someone to write a clean implementation of it or a wrapper around Qt or GTK to do the same? Thats odd, why do you seem to think what can be done for the Windows.Forms namespace can't be done again, why do you think thats the case? I'm fairly certain you have absolutely no idea how the .NET framework works. You have heard of Mono haven't you? Re:Confused (Score:4, Informative) The Windowing and other GUI apis have ALWAYS been low level C (not C++) APIs, and likely always will be. The whole point of WinRT is to change that, actually. It's no longer low-level C. It's an object-oriented API from ground up using a framework that's deliberately designed to be consumable from different languages (GC or no GC, static or dynamic typed... there are a lot of considerations there).. You are absolutely wrong here. .NET assemblies are not extensions to ActiveX. It doesn't even make sense, because an "ActiveX object" is a COM component with a visual UI. Nor are they COM objects. You can take a .NET class and make it visible to COM as a COM object, but it has to be explicitly done, limits what you can do with such a class, and is implemented via a separate bridge (COM Interop). 
Heck, .NET classes don't implement IUnknown (the most basic requirement for a COM object).

Re:Quite crappy headline (Score:5, Informative)

Metro is a UI on top of Windows 8. WinRT is the new Windows 8 runtime, which will be accessible by C++, C# and any .Net language. The .Net standard libraries will be available for Windows 8 Desktop applications but not for Metro applications, which will be written targeting WinRT. So, the summary is wrong because: a) Metro is not a development framework, and b) .Net-related skills remain central in Windows 8 even when targeting Metro.

Re: (Score:2)

You mean we can write C# applications using Metro that run on tablets, but with a leaner C# library? Or if you want to target a tablet, you have to write C++? (I consider the latter less likely.)

Re: (Score:2)

Re:Quite crappy headline (Score:5, Informative)

WinRT is the new Windows 8 runtime, which will be accessible by C++, C# and any .Net language. WinRT demystified [tirania.org] [Miguel de Icaza]. WinRT wraps both the new UI system as well as old Win32 APIs and it happens that this implementation is based on top of.

Re:Quite crappy headline (Score:4, Informative)

That's not quite right. The .NET standard libraries exist in several profiles -- "Core", "Client", "Full". People today write their libraries under the "Core" profile so that they work equally well on any platform -- Silverlight, desktop, phone. Core contains the common standard libraries -- e.g. things like StringBuilder, LINQ, generic collections, and the other day-to-day programming side. "Client Profile" also contains UI stuff, and "Full Profile" also contains server stuff. For Metro, you will use APIs from both .NET Core Profile and from WinRT. WinRT will provide things like local storage APIs and UI. Core Profile will provide all the other stuff. NB. I'm on the C#/VB language design team at Microsoft.
Re: (Score:2)

Re:Quite crappy headline (Score:4, Informative)

The .NET standard libraries WILL be accessible from Metro applications. You'll write your C#/VB Metro applications targeting both WinRT APIs and standard .NET APIs at the same time. I suspect that very nearly all C#/VB Metro apps will be using many .NET APIs. (You had said that the .NET standard libraries wouldn't be available for Metro apps.) For example:

IAsyncInfo ai = MessageBox.ShowAsync("hello world"); // using a WinRT API
Task t = ai.StartAsTask();                           // here we're bridging from WinRT to .NET
await Task.WhenAll(t, Task.Delay(100));              // here we're using standard .NET APIs

(disclaimer: I work for Microsoft on the VB/C# language team)

Re:Metro or .NET, why use any? (Score:4, Informative)

2) Windows phone apps. You definitely need .NET here.

3) Streaming video apps for desktop. HTML5 can't do it, and Silverlight's video streaming beats Flash every day of the week. Neither is excellent, and SL has terrible Linux support. But still, SL is the least bad one.

Re:

Re:People stay with what they know (Score:4, Informative)

None of Microsoft's own products are written in .NET.

Visual Studio is.

Re:People stay with what they know (Score:4, Informative)

None of Microsoft's own products are written in .NET.

Most Microsoft development tools are written partially or wholly in .NET (guess what I do for a living...). FWIW, Windows itself has bits and pieces written in .NET as well. Nothing major, but it's there.
http://developers.slashdot.org/story/11/10/09/163218/net-programmers-in-demand-despite-ms-moves-to-metro?sdsrc=prevbtmprev
State

There are two types of data that control a component: props and state. props are set by the parent and are fixed throughout the lifetime of a component. For data that is going to change, we have to use state.

In general, you should initialize state in the constructor, and then call setState when you want to change it.

For example, let's say we want to make text that blinks all the time. The text itself gets set once when the blinking component gets created, so the text itself is a prop. Whether the text is currently on or off changes over time, so that should be kept in state.

import React, { Component } from 'react';
import { AppRegistry, Text, View } from 'react-native';

class Blink extends Component {
  constructor(props) {
    super(props);
    this.state = { isShowingText: true };

    // Toggle the state every second
    setInterval(() => (
      this.setState(previousState => (
        { isShowingText: !previousState.isShowingText }
      ))
    ), 1000);
  }
}

In a real application, you probably won't be setting state with a timer. You might set state when you have new data from the server, or from user input. You can also use a state container like Redux or MobX to control your data flow. In that case you would use Redux or MobX to modify your state rather than calling setState directly.

When setState is called, BlinkApp will re-render its component. By calling setState within the timer, the component will re-render every time the timer ticks.

State works the same way as it does in React, so for more details on handling state, you can look at the React.Component API.

At this point, you might be annoyed that most of our examples so far use boring default black text. To make things more beautiful, you will have to learn about Style.
http://facebook.github.io/react-native/docs/0.21/state
In addition to sending emails, you can also receive email with Amazon Simple Email Service (SES). Receipt rules enable you to specify what SES does with email it receives for the email addresses or domains you own. A rule can send email to other AWS services, including but not limited to Amazon S3, Amazon SNS, or AWS Lambda. For more information, see Managing Receipt Rule Sets for Amazon SES Email Receiving and Managing Receipt Rules for Amazon SES Email Receiving.

The following examples show how to:

To set up and run this example, you must first complete these tasks:

A receipt rule set contains a collection of receipt rules. You must have at least one receipt rule set associated with your account before you can create a receipt rule. To create a receipt rule set, provide a unique RuleSetName and use the CreateReceiptRuleSet operation.

Control your incoming email by adding a receipt rule to an existing receipt rule set. This example shows you how to create a receipt rule that sends incoming messages to an Amazon S3 bucket, but you can also send messages to Amazon SNS and AWS Lambda. To create a receipt rule, provide a rule and the RuleSetName to the CreateReceiptRule operation.

import boto3

# Create SES client
ses = boto3.client('ses')

response = ses.create_receipt_rule(
    RuleSetName='RULE_SET_NAME',
    Rule={
        'Name': 'RULE_NAME',
        'Enabled': True,
        'TlsPolicy': 'Optional',
        'Recipients': [
            'EMAIL_ADDRESS',
        ],
        'Actions': [
            {
                'S3Action': {
                    'BucketName': 'S3_BUCKET_NAME',
                    'ObjectKeyPrefix': 'SES_email'
                }
            }
        ],
    }
)

print(response)

Remove a specified receipt rule set that isn't currently active. This also deletes all of the receipt rules it contains. To delete a receipt rule set, provide the RuleSetName to the DeleteReceiptRuleSet operation. To delete a specified receipt rule, provide the RuleName and RuleSetName to the DeleteReceiptRule operation.
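The create and delete operations described above can be sketched with boto3 as follows. The thin function wrappers and placeholder names ('RULE_SET_NAME', 'RULE_NAME') are my own illustration; only create_receipt_rule_set, delete_receipt_rule, and delete_receipt_rule_set are real SES client operations, and actually running the main block requires AWS credentials.

```python
def create_rule_set(ses, rule_set_name):
    # CreateReceiptRuleSet needs only a unique RuleSetName
    return ses.create_receipt_rule_set(RuleSetName=rule_set_name)

def delete_rule(ses, rule_set_name, rule_name):
    # DeleteReceiptRule removes a single rule from a rule set
    return ses.delete_receipt_rule(RuleSetName=rule_set_name, RuleName=rule_name)

def delete_rule_set(ses, rule_set_name):
    # DeleteReceiptRuleSet also deletes every rule the set contains;
    # note the currently active rule set cannot be deleted
    return ses.delete_receipt_rule_set(RuleSetName=rule_set_name)

if __name__ == "__main__":
    import boto3  # requires AWS credentials to actually run
    ses = boto3.client("ses")
    create_rule_set(ses, "RULE_SET_NAME")
    delete_rule(ses, "RULE_SET_NAME", "RULE_NAME")
    delete_rule_set(ses, "RULE_SET_NAME")
```

Passing the client in as a parameter also makes the wrappers easy to exercise against a stub in tests.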
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/ses-rules.html
Points in Python

This guide provides an overview of the RhinoScriptSyntax point geometry in Python.

Points

In Python, a Rhino 3D point is represented as a Point3d structure. Conceptually, a Point3d exists in memory as a zero-based list containing three numbers. These three numbers represent the X, Y and Z coordinate values of the point.

point3D contains [1.0, 2.0, 3.0]

Creating Points

A Point3d structure can be constructed in a number of different ways. Two common ways are:

import rhinoscriptsyntax as rs

pnt = rs.CreatePoint(1.0, 2.0, 3.0)
pnt = rs.CreatePoint(1.0, 2.0)  # This creates a point with the Z coordinate set to 0

A point list can also be constructed one element at a time:

point2 = []
point2.append(1.0)
point2.append(2.0)

The 'CreatePoint()' function is very flexible. It can take a list or tuple of two or three numbers and return a Point3d. The function can also extract the coordinates from a Rhino GUID to return a Point3d.

It is not always necessary to construct a point before passing it to a function that requires a point. It is possible to construct points directly as an argument to a function. A point is a list-like structure; wrap coordinates in brackets [] when passing them directly to a function. For instance, the rs.AddLine(point, point) function requires two points. Use the following syntax to construct the points on the fly:

rs.AddLine([45,56,32], [56,47,89])

Like 3-D points, Python represents a single 2-D point as a zero-based list of numbers, the difference being that 2-D points contain only X and Y coordinate values. Passing coordinates in [] to a function is very common with RhinoScriptSyntax.
Accessing Point Coordinates

A Point3d can be accessed like a simple Python list, one element at a time:

import rhinoscriptsyntax as rs

pnt = rs.CreatePoint(1.0, 2.0, 3.0)

print(pnt[0])  # Prints the X coordinate of the Point3d
print(pnt[1])  # Prints the Y coordinate of the Point3d
print(pnt[2])  # Prints the Z coordinate of the Point3d

The coordinates of a Point3d may also be accessed through its .X, .Y and .Z properties.

import rhinoscriptsyntax as rs

pnt = rs.CreatePoint(1.0, 2.0, 3.0)

print(pnt.X)  # Prints the X coordinate of the Point3d
print(pnt.Y)  # Prints the Y coordinate of the Point3d
print(pnt.Z)  # Prints the Z coordinate of the Point3d

Using Python's ability to assign values to multiple variables at once, here is a way to create x, y, and z variables in one step:

x, y, z = rs.CreatePoint(1.0, 2.0, 3.0)
# or
# x, y, z = rs.PointCoordinates(point)

Editing Points

To change an individual coordinate of a Point3d, simply assign a new value to the correct coordinate through the index location or coordinate property:

import rhinoscriptsyntax as rs

pnt = rs.CreatePoint(1.0, 2.0, 3.0)

pnt[0] = 5.0   # Sets the X coordinate to 5.0
pnt.Y = 45.0   # Sets the Y coordinate to 45.0

print(pnt)  # Prints the new coordinates

Using a Python for loop, it is quite easy to walk through each coordinate in succession:

for c in pnt:
    print(c)  # This will loop through each coordinate in the Point3d

RhinoScriptSyntax contains a number of methods to manipulate points. See RhinoScript Points and Vectors Methods for details.

For those familiar with RhinoScript, which represents points as a pure list, the Python representation is a little different and offers a few more options.

Adding a point to display in Rhino

It is important to understand the difference between a Point3d and a point object added to Rhino's document. Any geometry object that exists in Rhino's database is assigned an object identifier. This is represented as a Guid.
A Guid refers to an object that can be drawn, belongs to a layer, is saved to a Rhino file and is counted as a Rhino object. A Point3d is simply a structure of 3 numbers that exists in memory. It can be manipulated in memory, but will not be seen in Rhino or saved. A Point3d is added to the Rhino document through the rs.AddPoint() command. To create a Point3d from a Guid, use the rs.PointCoordinates(guid) or the rs.CreatePoint(guid) function.
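To make the dual list/property access described above concrete outside of Rhino, here is a minimal pure-Python stand-in. This is an illustration only, not the real Point3d type from rhinoscriptsyntax, but it reproduces the behaviors the guide demonstrates: zero-based indexing, .X/.Y/.Z properties, iteration, and multiple assignment.

```python
class Point3d:
    """Toy stand-in for Rhino's Point3d: behaves like a zero-based
    list of three numbers while also exposing .X/.Y/.Z properties."""

    def __init__(self, x, y, z=0.0):
        self._coords = [float(x), float(y), float(z)]

    def __getitem__(self, index):
        return self._coords[index]

    def __setitem__(self, index, value):
        self._coords[index] = float(value)

    def __iter__(self):
        return iter(self._coords)

    @property
    def X(self):
        return self._coords[0]

    @X.setter
    def X(self, value):
        self._coords[0] = float(value)

    @property
    def Y(self):
        return self._coords[1]

    @Y.setter
    def Y(self, value):
        self._coords[1] = float(value)

    @property
    def Z(self):
        return self._coords[2]

    @Z.setter
    def Z(self, value):
        self._coords[2] = float(value)

    def __repr__(self):
        return "Point3d(%g, %g, %g)" % tuple(self._coords)

pnt = Point3d(1.0, 2.0, 3.0)
print(pnt[0])      # index access, like a list
pnt.Y = 45.0       # property access, like Rhino's Point3d
x, y, z = pnt      # multiple assignment works because it is iterable
print(pnt)
```

The real Point3d is implemented in .NET, but this sketch shows why both pnt[1] and pnt.Y refer to the same underlying value.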
http://developer.rhino3d.com/guides/rhinopython/python-rhinoscriptsyntax-points/
We’re building a hybrid mobile app out of an existing web app. The front end will run out of a webview, and we’re adapting the back end to run on-device in React Native. One of the challenges we encountered was how to share code between our existing monorepo built with webpack and our new React Native app (which bundles with Metro). Though it’s all TypeScript, this wasn’t super straightforward. In this post, I’ll describe how we handled it.

Plan A

Let’s call the existing web app web-monorepo and the new React Native app offline-rn-app. We initially attempted to reference sources directly, so that from offline-rn-app, you could import { foo } from "web-monorepo/bar";. We encountered a lot of friction with this approach. For example:

- Though React Native now has TypeScript support out of the box (😃), it uses Babel’s built-in TypeScript handling, which has some known limitations, and it couldn’t handle some patterns in our existing code (😩).
- Our application code uses a handful of webpack-specific features, like requiring .graphql or .yml files with special loaders, and importing whole directories of files with require.context.

With enough time and research, it would probably be possible to work around issues like these, but it felt like we were swimming upstream. As we struggled with the particulars of our webpack configuration, we thought, “Why not let webpack do its thing, and then consume its output?”

Plan B

We already had two webpack entry points in the web-monorepo project: one for the React front end, and one for the Express backend. Our idea was to add a third, producing a library that can be consumed by the React Native app. This way, we get the real TypeScript compiler plus all of the webpack loaders, and a bunch of dead code can get shaken out.

Spoiler: It worked.

Details

Here are the important parts of the configuration we’ve settled on.
package.json

The first step toward building an installable library out of our existing Node project was to drop a few additions in its package.json:

// package.json
"main": "lib/index.js",
"types": "lib/entry/mobile-server.d.ts",
"files": [
  "lib/*"
],
"scripts": {
  "build:library": "webpack --config ./webpack/library.config.js"
}

The “main” and “types” fields are used by the consuming app upon installing our package. The “files” globs specify what we want consuming apps to get when they install the library. Lastly, the “build:library” convenience script leads us to our new webpack config.

Webpack and tsconfig

Once we declared the files we intend to distribute, it was time to build them. Here’s the new webpack config:

// webpack/library.config.js
const path = require("path");

module.exports = {
  mode: "development",
  entry: { "mobile-server": "./entry/mobile-server.ts" },
  devtool: "source-map",
  output: {
    path: path.resolve(__dirname, "../lib"),
    filename: "index.js",
    library: "my-lib",
    libraryTarget: "umd"
  },
  module: {
    rules: [
      {
        test: /\.tsx?$/,
        use: [
          {
            loader: "ts-loader",
            options: { configFile: "tsconfig.library.json" }
          }
        ]
      }
    ]
  },
  resolve: {
    extensions: [".ts", ".tsx", ".js"],
    modules: [path.resolve(__dirname, "../modules"), "node_modules"]
  },
  externals: ["cheerio", "config"]
};

From an input of entry/mobile-server.ts, this produces the lib/index.js that package.json’s main is expecting and a source map to go with it. While most third-party code is bundled in, we can add externals for packages that we want the consuming app to provide. (More on this later.)
We’ve also supplied a custom tsconfig:

// tsconfig.library.json
{
  "extends": "./tsconfig",
  "compilerOptions": {
    "module": "es6",
    "target": "es5",
    "allowJs": false,
    "noEmit": false,
    "declaration": true,
    "declarationMap": true,
    "lib": [],
    "outDir": "lib",
    "resolveJsonModule": true
  },
  "include": ["entry/mobile-server.ts"],
  "exclude": ["node_modules", "dist", "lib"]
}

There’s nothing too interesting here except for that empty lib list, which prevents the library build from inadvertently using APIs that aren’t available in React Native, like browser DOM or Node.js filesystem access. With these in place, we can now yarn build:library to produce lib/.

Packaging

We don’t intend to publish our new library to a package repository, so we’ll need to reference it via one of the other patterns you can yarn add, like a file path or a GitHub repo. After a little experimentation, we settled on producing a tarball with yarn pack. This makes for a nice single artifact to share between jobs in our CircleCI workflow.

Consuming the library

From offline-rn-app, we reference the tarball like this:

// package.json
"my-lib": "../web-monorepo/my-lib-1.0.0.tgz"

On the React Native side, using this feels about like using any other third-party library.

Recall the “externals” specified in the webpack config above? Some of the code we’re sharing depends on libraries that aren’t quite compatible with React Native. We may eventually migrate away from them, but for now, we have a decent workaround. On the library side, we externalize the problematic modules to make the consuming app deal with them. In the React Native app, we deal with them by swapping in alternate implementations.
To do this, we added babel-plugin-module-resolver, which allows you to alias modules arbitrarily:

// babel.config.js
module.exports = {
  presets: [
    "module:metro-react-native-babel-preset",
    "@babel/preset-typescript"
  ],
  plugins: [
    [
      require.resolve("babel-plugin-module-resolver"),
      {
        alias: {
          cheerio: "react-native-cheerio",
          config: "./src/mobile-config.ts"
        }
      }
    ]
  ]
};

...and voila! We have code from our Express server running in React Native.

Future Improvement: Editor Experience

One rough patch I hope to smooth out in the future is the editor experience when working in the monorepo. VS Code only knows about one of our tsconfigs. So when I’m editing foo.ts, I’ll get squiggles according to one of my build targets, but I may introduce errors that I won’t see until the next time I compile the other target from the command line.

Another tradeoff we made with the move to Plan B is that we can no longer F12 from offline-rn-app’s sources into web-monorepo’s; instead, when you go to definition across the boundary, you land on the library’s type definitions. Could source maps improve on this?

Conclusion

Our solution involves a couple of compromises and a fair amount of complexity, but overall, this approach is working well. Have you shared code between browser, server, and mobile? How’d it go?
https://spin.atomicobject.com/2019/09/24/typescript-web-react-native/
MirageJS to increase developer productivity

October 10, 2019

A few months ago I wrote about how mocked APIs can help in the real world, where we frequently build frontends for APIs that are not ready yet. In Ember, ember-cli-mirage was the solution, but outside the Ember world there was no go-to solution for developing frontends without a finished API. The funny thing was that the ember-cli-mirage team was also thinking about something similar:

Mirage currently has 0️⃣ bugs which will allow us to focus on new features.— Sam Selikoff (@samselikoff) May 22, 2019

On our shortlist is getting Mirage's core able to work in non-Ember environments like node. This paves the way for

- Speed improvements
- Persistence
- Real HTTP responses
- FastBoot testing

and more 😍

They were starting to extract the core of ember-cli-mirage to @miragejs/server! At the time I replied to this tweet showing my excitement. I ended up having a few chats with Sam because he wanted to understand people's pain points and how Mirage could help solve them.

I ended up helping them with the extraction to @miragejs/server, learned a lot, and had a very nice opportunity to work with Sam and Ryan, and they are awesome 🙏. They were always very keen to help and discuss whatever topics I needed, even reviewing this post that you're reading!

A few months later, @miragejs/server is in beta! v0.1.25 is out, as well as the new website!

Back to the problem

Developing frontends without a finished API... That's a pain, right? And why? The MirageJS website explains it better than I ever could:

Have you ever worked on a React or Vue app that needed to talk to a backend API before it was ready? If so, how'd you handle it?
Maybe you created some mock data directly in your app just to keep you moving:

export function App() {
  let [users, setUsers] = useState([])

  useEffect(() => {
    // API not ready
    // fetch('/api/users')
    //   .then(response => response.json())
    //   .then(json => setUsers(json.data));

    // Use dummy data for now
    setUsers([
      { id: "1", name: "Luke" },
      { id: "2", name: "Leah" },
      { id: "3", name: "Anakin" },
    ])
  }, [])

  return (
    <ul>
      {users.map(user => (
        <li key={user.id}>{user.name}</li>
      ))}
    </ul>
  )
}

Seems harmless enough. Weeks later, the server's ready and you wire up your app, but nothing works like it did during development. Some screens flash with missing data, others are broken entirely, and worst of all, you have no idea how much of your code needs to be rewritten. What went wrong?

You ignored the network for too long. Networking code isn't the kind of thing you can just tack onto the end of a sprint. Think about everything that comes along with making network requests: loading and error states, fetching partial data, caching... not to mention the fact that asynchronous APIs like network requests add tons of new states to each one of your app's existing user flows. If you don't grapple with this crucial part of your application up front, you'll have to rewrite a ton of code when it comes time to deploy your app. You're not doing yourself any favors by writing code that pretends the network doesn't exist. You're just poking holes in reality. And code that ignores reality isn't ready for production.

(source:)

Mocked APIs v2 - A much better version

I'll be showing how to do the same thing (and a little more) as in my last blog post about mocked APIs, but now with @miragejs/server. If you prefer to look at the code while following along, here you have it.

Imagine you want to build a page that lists the posts in your blog. You don't have an API, but you know what the contract will look like, and you write your fetching code.
const [posts, setPosts] = useState([])

useEffect(() => {
  fetch("")
    .then(response => response.json())
    .then(json => setPosts(json))
}, [])

Then, on the Mirage side, to mock that specific URL:

import { Server } from "@miragejs/server"

// Create a new server instance - intercepts the requests
const server = new Server({
  urlPrefix: "",
  namespace: "api",
  routes() {
    this.get("/posts", () => [
      // ...the mocked posts to respond with...
    ])
  },
})

With just this code, Mirage will intercept your requests and start answering with the defined response.

What about the other endpoints?

If you're not starting an app from scratch and you want to use @miragejs/server, you will most likely not be able to write all the Mirage routes at once. To help with that, and to cover the case where you have routes that you don't want to mock, Mirage has passthrough.

// mirage route definitions
this.passthrough()

With this, all the requests for the current urlPrefix and namespace that don't have a Mirage route will be passed through. A common use case is to pass through calls to external services, or authentication.

this.passthrough("")

What if I want to do more?

I've just demonstrated the simplest use case possible. Let's take a little more advantage of what Mirage provides us.

import { Server } from "@miragejs/server"

const server = new Server({
  urlPrefix: "",
  namespace: "api",
  seeds({ db }) {
    db.loadData({
      posts: [
        // ...the posts to seed the database with...
      ],
    })
  },
  routes() {
    this.get("/posts", schema => schema.db.posts)
  },
})

By doing this, Mirage stores posts in a database that you can then access and modify later. Talking about modifying stuff... Now that we have posts persisted, let's add the endpoint that enables editing them:

this.put("/posts/:id", (schema, request) => {
  schema.db.posts.update(request.params.id, {
    title: request.requestBody.title,
  })
})

After we do a PUT /posts/1 with the body { "title": "test-edit" }, our post will be edited. Now if we do our GET /posts, here's how the post with id: 1 is going to look:
{
  id: 1,
  title: "test-edit",
  author: "asantos00",
  createdAt: 1557937282,
  body: "Lorem ipsum dolor sit amet, consectetur.",
}
// ... rest of the posts

By having the posts stored in a database, we can now manipulate them in the route handlers, for instance to create a delete and a creation route.

Useful features

Mirage offers lots of features, from serializers to models (you can check the docs). Besides those complex ones, there are a couple of simple features that end up being very useful daily:

- Custom responses - Useful for things like developing error scenarios or returning the right code after creation/edition.

import { Response } from "@miragejs/server"

// ...
this.get("/posts", () => {
  return new Response(
    400,
    { "Content-Type": "application/json" },
    { message: "Title not valid" }
  )
})

- API latency - Useful to test how your app deals with loading

const server = new Server({
  timing: 2000, // applies to all routes
})

this.get("/posts", handlePosts, { timing: 3000 }) // only applies to this route

Another great use of @miragejs/server is testing. You can start the server before the tests with the provided data and then assert that the endpoints were called and that the right data was mutated (more on this in a future blog post).

Conclusion

Now that Mirage is out, there is no more reason to be mocking data locally in your application or to spin up your whole infrastructure just to develop a single page. Mirage enables you to develop your frontend with the same exact concerns you would have if you were developing against a server, but it makes it easier to simulate states. More important than that, you're not ignoring the network.

Have you tried @miragejs/server? Are you interested? I would love to hear what you have to say and answer any questions that may arise, either about this blog post or Mirage. Feel free to reach out to me!
https://alexandrempsantos.com/mirage-to-increase-developer-productivity/
package Chemistry::Bond;

our $VERSION = '0.38'; # VERSION
# $Id$

=head1 NAME

Chemistry::Bond - Chemical bonds as objects in molecules

=head1 DESCRIPTION

This module includes objects to describe chemical bonds. A bond is defined
as a list of atoms (typically two), with some associated properties.

=head2 Bond Attributes

In addition to common attributes such as id, name, and type, bonds have the
order attribute. The bond order is a number, typically the integer 1, 2, 3,
or 4.

=cut

use 5.006;
use strict;
use Scalar::Util 'weaken';
use base qw(Chemistry::Obj);

my $N = 0;

=head1 METHODS

=over 4

=item Chemistry::Bond->new(name => value, ...)

Create a new Bond object with the specified attributes. Sensible defaults
are used when possible.

=cut

sub new {
    my $class = shift;
    my %args = @_;

    my $self = bless {
        id    => $class->nextID(),
        type  => '',
        atoms => [],
        order => 1,
    }, $class;
    $self->$_($args{$_}) for (keys %args);
    $self;
}

sub nextID { "b" . ++$N; }
sub reset_id { $N = 0; }

=item $bond->order()

Sets or gets the bond order.

=cut

Chemistry::Obj::accessor('order');

=item $bond->length

Returns the length of the bond, i.e., the distance between the two atom
objects in the bond. Returns zero if the bond does not have exactly two
atoms.

=cut

sub length {
    my $self = shift;

    if (@{$self->{atoms}} == 2) {
        my $v = $self->{atoms}[1]{coords} - $self->{atoms}[0]{coords};
        return $v->length;
    } else {
        return 0;
    }
}

=item $bond->aromatic($bool)

Set or get whether the bond is considered to be aromatic.

=cut

sub aromatic {
    my $self = shift;
    if (@_) {
        ($self->{aromatic}) = @_;
        return $self;
    } else {
        return $self->{aromatic};
    }
}

=item $bond->print

Convert the bond to a string representation.

=cut

sub print {
    my $self = shift;
    my ($indent) = @_;
    $indent ||= 0;
    my $l = sprintf "%.4g", $self->length;
    my $atoms = join " ", map { $_->id } $self->atoms;
    my $ret = <<EOF;
$self->{id}:
    type: $self->{type}
    order: $self->{order}
    atoms: "$atoms"
    length: $l
EOF
    $ret .= " attr:\n";
    $ret .= $self->print_attr($indent);
    $ret =~ s/^/" "x$indent/gem;
    $ret;
}

=item $bond->atoms()

Sets or gets the atoms that form the bond.

=cut

sub atoms {
    my $self = shift;
    if (@_) {
        $self->{atoms} = ref $_[0] ? $_[0] : [@_];
        for my $a (@{$self->{atoms}}) {
            weaken($a);
            $a->add_bond($self);
        }
    } else {
        return (@{$self->{atoms}});
    }
}

sub _weaken {
    my $self = shift;
    for my $a (@{$self->{atoms}}) {
        weaken($a);
    }
    weaken($self->{parent});
}

# This method is private and should only be called from $mol->delete_bond
sub delete_atoms {
    my $self = shift;
    for my $a (@{$self->{atoms}}) {
        # delete bond from each atom
        $a->delete_bond($self);
    }
}

=item $bond->delete

Calls $mol->delete_bond($bond) on the bond's parent molecule. Note that a
bond should belong to only one molecule or strange things may happen.

=cut

sub delete {
    my ($self) = @_;
    $self->parent->_delete_bond($self);
    $self->{deleted} = 1;
}

sub parent {
    my $self = shift;
    if (@_) {
        ($self->{parent}) = @_;
        weaken($self->{parent});
        $self;
    } else {
        $self->{parent};
    }
}

1;

=back

=head1 SOURCE CODE REPOSITORY

L<>

=head1 SEE ALSO

L<Chemistry::Mol>, L<Chemistry::Atom>, L<Chemistry::Tutorial>

=head1 AUTHOR

Ivan Tubert-Brohman E<lt>itub@cpan.orgE<gt>

=head1 COPYRIGHT

Copyright (c) 2005 Ivan Tubert-Brohman. All rights reserved. This program is
free software; you can redistribute it and/or modify it under the same terms
as Perl itself.

=cut
https://web-stage.metacpan.org/dist/Chemistry-Mol/source/lib/Chemistry/Bond.pm
Django models outside of models.py

By default, Django models are placed in models.py files inside apps. However, this single file can outgrow the needs of large apps that require storing dozens or hundreds of models. There are three techniques you can use to relocate Django models out of the models.py files inside apps.

Django models inside apps in the models folder

The first technique to store Django models outside of models.py files is to create a folder named models -- inside the same app -- declare model classes in standalone files in this folder, and import the classes through this new folder's __init__ file. Listing 7-42 shows an app folder layout for this type of model deployment.

Listing 7-42. Django apps with models stored under models directory

+---+
    |
    +-stores(app)-+
                  +-__init__.py
                  +-models.py
                  +-tests.py
                  +-views.py
                  +-apps.py
                  +-models-+
                           |
                           +-__init__.py
                           +-menus.py
                           +-equipment.py
                           +-personnel.py

Notice in listing 7-42 that alongside the standard models.py file is a models folder. Next, inside the models folder are multiple .py files with Django models declared as they would typically be done inside models.py. You can have as many models as needed in each .py file and as many .py files as you deem necessary (e.g. one model per file). However, the __init__ file inside this new models folder does require additional attention. While __init__ files are typically left empty, in this case the __init__ file must make a relative import for each of the models -- inside .py files -- to make them visible to the app. For example, if the menus.py file contains the Breakfast model class, the __init__ file must declare the line from .menus import Breakfast. This one-line syntax -- from .<file> import <model_class> -- must be used in __init__.py for every model declared in .py files inside the models folder.
With this layout you're able to place Django models outside of a single models.py file, but the following points apply to this first technique to relocate Django models:

- The sub-folder must be named models. Because the models.py file is inspected by default (as the Python path <app>.models), Django requires an identically named Python path <app>.models to detect models -- with the __init__ file doing the rest of the import work. So beware: any folder name other than models won't work with this configuration -- the next technique to configure models outside models.py offers a solution for using a different folder name.

- Declaring an app as part of INSTALLED_APPS is sufficient, so long as the __init__ file performs the correct relative imports. So long as an app is declared as part of INSTALLED_APPS in settings.py, it's sufficient for Django to detect any models declared inside the models folder as described in listing 7-42. Just take care of relatively importing all models in the __init__.py file.

- The app name for every model is determined automatically. Since the models folder is nested inside an app, all the models inside this folder receive the app name configured in the apps.py file for the app. Although you can use the Meta class app_label option to explicitly assign an app to a model -- as described earlier in the Meta class options section -- it's redundant in this case because the models receive the same app name they're in, including the placement of their migration files.

- The models are accessible as if they were in models.py. Although the models are placed in different files, the Python access path remains <app>.models, so the models are accessible as if they were in models.py (e.g. to access the Breakfast model class inside the menus.py file, you would still use from <app>.models import Breakfast from other parts of an application).
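The import mechanics behind this technique are plain Python packaging, so they can be demonstrated without Django at all. The sketch below builds a simplified version of the listing 7-42 layout in a temporary directory (the stores app and Breakfast model names come from the text; the temporary-directory scaffolding and the name attribute are my own additions for the demo) and shows that the re-export in models/__init__.py keeps the usual from <app>.models import ... path working:

```python
import os
import sys
import tempfile

# Build the layout on disk: stores/models/ is a package whose
# __init__.py re-exports classes from sibling modules.
root = tempfile.mkdtemp()
models_pkg = os.path.join(root, "stores", "models")
os.makedirs(models_pkg)

open(os.path.join(root, "stores", "__init__.py"), "w").close()
with open(os.path.join(models_pkg, "menus.py"), "w") as f:
    f.write("class Breakfast:\n    name = 'pancakes'\n")
with open(os.path.join(models_pkg, "__init__.py"), "w") as f:
    # The one-line relative import syntax described in the text
    f.write("from .menus import Breakfast\n")

sys.path.insert(0, root)
from stores.models import Breakfast  # same path as if it lived in models.py

print(Breakfast.__module__)  # stores.models.menus
```

Django's model discovery adds more on top (app registry, migrations), but the <app>.models access path itself is nothing more than this package re-export.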
Django models inside apps in custom folders

A second technique to declare models outside models.py files is to use custom folders inside an app. This requires using the main models.py file as the import mechanism and also requires using longer access paths for models. Listing 7-43 shows an app folder layout with custom folders for models.

Listing 7-43. Django apps with models stored under custom directories

+---+
    |
    +-stores(app)-+
                  +-__init__.py
                  +-models.py
                  +-tests.py
                  +-views.py
                  +-apps.py
                  +-menus+
                  |      |
                  |      +-__init__.py
                  |      +-breakfast.py
                  |
                  +-equipment+
                             +-__init__.py
                             +-kitchen.py

As you can see in listing 7-43, the app now has multiple sub-folders, where each folder contains multiple .py files with Django models declared as they would typically be done inside models.py (i.e. breakfast.py and kitchen.py in listing 7-43 contain model classes). Since Django only looks for models under the Python path <app>.models, you must declare a relative import in the main models.py file -- for each of the models inside sub-folders -- to make them visible to the app. For example, if the Breakfast model class is inside the menus sub-folder and breakfast.py file, the main models.py file must declare the line from .menus.breakfast import Breakfast. This one-line syntax -- from .<sub_folder>.<file> import <model_class> -- must be used in models.py for every model declared in sub-folders inside an app.

Because the models.py file uses a relative import path to the models themselves, this alters the standard Django Python path to access models -- from <app>.models... -- and requires a longer path: from <app>.models.<sub_folder>.<file>.... Other than this change in import paths, the remaining configuration options for models have no change in behavior (i.e. INSTALLED_APPS, model app name).
Django models outside apps & model assignment to other apps

A third technique available for Django models is to declare them completely outside of apps, or even to assign Django models to a different app than the one in which they're declared. So what does it mean to 'assign models to an app'? It means just that: you can declare models outside of apps or in a given app, but change the app a model belongs to. Although I don't recommend this technique because it can lead to confusion, it does provide a different solution that I'll describe for the sake of completeness. This technique requires you to provide models with an explicit app name through the Meta class app_label option so they're assigned to an app. When Django detects that a model declares the Meta app_label option, this takes the highest precedence in assigning a model its app name. So even if a model is declared inside a random folder named common or an app named items, if a model's Meta app_label value is set to stores, the model is assigned to the stores app, irrespective of its location.

The confusing aspect of using the app_label option is due to the influence an app name has on Django models. For example, an app name is used as a model's database table name prefix, and it's also used to determine the location of a model's migration files. So if you define a model inside a folder named common with the Meta class app_label='stores', the model will end up belonging to the stores app -- along with its migration files and a stores table name prefix -- even though it's declared in the common folder. This last technique, although flexible, can as I've just explained also lead to unintuitive outcomes in the naming and placement of Django model constructs.
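As a sketch of the option just described, a model declared in a common/ folder but carrying app_label = 'stores' is treated as part of the stores app. The Coupon model and the common folder are illustrative assumptions, not listings from the chapter; this fragment assumes a configured Django project.

```python
# common/coupon.py -- illustrative sketch, not from the chapter's listings
from django.db import models

class Coupon(models.Model):
    code = models.CharField(max_length=30)

    class Meta:
        # Highest precedence for app assignment: this model belongs
        # to the 'stores' app no matter where the file lives, so its
        # table is prefixed stores_ and its migrations go to stores.
        app_label = 'stores'
```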
https://www.webforefront.com/django/modelsoutsidemodels.html
Hey all, I recently started learning C++ off the tutorials on this site and they are great. I've started making my own simple program which takes in information and tries to display it in a structured format. I first used int name and int alias. When I ran the program it would work fine, but when it came to cin>>name; and cin>>alias; it would close the program because the user was inputting letters instead of numbers. So I changed some stuff around and this is what I have so far.

#include <iostream.h>
#include <string.h>

int main()
{
    char name[50], alias[50], filename[50];
    int age;
    int yearsexp;
    int yesorno;

    cout<<"Please enter your real name: ";
    cin.getline(name, 50, '\n'); //Gets Name
    cout<<"Please enter your age: ";
    cin>>age; //Gets Age
    cout<<"How long have you been playing in years: ";
    cin>>yearsexp; //Gets years
    cout<<"Whats your in-game name: ";
    cin.getline(alias, 50, '\n'); //Gets In-Game Name
    cout<<"This is all the info you have entered"<<endl;
    cout<<"NAME:"<<name<<endl;
    cout<<"AGE:"<<age<<endl;
    cout<<"Years Playing:"<<yearsexp<<endl;
    cout<<"ALIAS:"<<alias<<endl; //Displays information inputted from user
    cout<<"Is this information correct? Press 1(Yes) or 2(No) then enter: ";
    cin>>yesorno;
    if(yesorno==1)
    {
        cout<<"Please enter a 1-word name for the info to be saved in: ";
        cin.getline(filename, 50, '\n');
        cout<<"Cannot save file "<<filename<<endl; //Added the "Cannot save file" msg to inform user it cant do anything yet
    }
    else if(yesorno==2)
    {
        cout<<"Please restart program"; //Loop function not implemented yet
    }
    return 0;
}

Excuse the untidiness. I compile and build the program just fine. But when I run it (via MS-DOS) it takes in the name part, then the age, years playing... and then "skips" the ALIAS part and goes right into the "do you want to save" part; if I pick yes it "skips" the filename part and closes the program. Could someone tell me why it is doing this? And if possible explain it in a way a beginner would understand.
Thanks for taking the time to read this!
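For what it's worth, the skipping described above is the classic symptom of mixing cin >> with getline: operator>> stops at the newline and leaves it in the input buffer, so the next getline immediately reads an empty line. A minimal sketch of the mechanism, using a string stream to stand in for cin, with cin.ignore as one common fix:

```cpp
#include <cassert>
#include <sstream>
#include <string>

// After `in >> age` the trailing '\n' is still in the buffer,
// so getline immediately reads an empty line.
std::string read_alias(std::istream& in) {
    int age;
    in >> age;
    std::string alias;
    std::getline(in, alias);
    return alias;          // "" -- this is the "skipped" prompt
}

// Discarding the leftover newline first makes getline behave.
std::string read_alias_fixed(std::istream& in) {
    int age;
    in >> age;
    in.ignore(256, '\n');  // throw away everything up to and including '\n'
    std::string alias;
    std::getline(in, alias);
    return alias;          // "Shadow" for the input below
}
```

Feeding both functions the same input "21\nShadow\n" shows the difference: the first returns an empty string, the second returns "Shadow".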
http://cboard.cprogramming.com/cplusplus-programming/56185-program-skipping-some-input.html
We are excited to announce the availability of a new storage version 2013-08-15 that provides various new functionalities across Windows Azure Blobs, Tables and Queues. With this version, we are adding the following major features: 1. CORS (Cross Origin Resource Sharing): Windows Azure Blobs, Tables and Queues now support CORS to enable users to access/manipulate resources from within the browser serving a web page in a different domain than the resource being accessed. CORS is an opt-in model which users can turn on using Set/Get Service Properties. Windows Azure Storage supports both CORS preflight OPTIONS request and actual CORS requests. Please see for more information. 2. JSON (JavaScript Object Notation): Windows Azure Tables now supports OData 3.0’s JSON format. The JSON format enables efficient wire transfer as it eliminates transferring predictable parts of the payload which are mandatory in AtomPub. JSON is supported in 3 forms: More information about JSON for Windows Azure Tables can be found at 3. Minute Metrics in Windows Azure Storage Analytics: Up till now, Windows Azure Storage supported hourly aggregates of metrics, which is very useful in monitoring service availability, errors, ingress, egress, API usage, access patterns and to improve client applications and we had blogged about it here. In this new 2013-08-15 version, we are introducing Minute Metrics where data is aggregated at a minute level and typically available within five minutes. Minute level aggregates allow users to monitor client applications in a more real time manner as compared to hourly aggregates and allows users to recognize trends like spikes in request/second. With the introduction of minute level metrics, we now have the following tables in your storage account where Hour and Minute Metrics are stored: Please note the change in table names for hourly aggregated metrics. Though the names have changed, your old data will still be available via the new table name too. 
To configure minute metrics, please use the Set Service Properties REST API for Windows Azure Blob, Table and Queue with the 2013-08-15 version. The Windows Azure Portal at this time does not allow configuring minute metrics, but it will be available in the future. In addition to the major features listed above, we have the following additions to our service with this release. A more detailed list of changes in the 2013-08-15 version can be found at: We are also releasing an updated Windows Azure Storage Client Library here that supports the features listed above and can be used to exercise the new features. In the next couple of months, we will also release an update to the Windows Azure Storage Emulator for Windows Azure SDK 2.2. This update will support the "2013-08-15" version and the new features. In addition to the above changes, please also read the following two blog posts that discuss known issues and breaking changes for this release. Please let us know if you have any further questions either via the forum or comments on this post. Jai Haridas and Brad Calder

Hi, I was trying to set the content-disposition property of an existing blob that I have in an existing storage account, but the content-disposition property does not appear in the portal when editing a blob. Is it currently not supported in the portal? Or is it not showing because this is an old blob/storage?

@Ido, the portal does not support it yet. If you do a GET on the blob - do you see it? You can use Fiddler or, better yet, just download from the browser and you should see the content disposition in play if it has been set. Thanks, Jai

You mentioned a dependency on WCF Data Services for Azure Tables, but the NuGet package does not have a package dependency. What I found was that if I upgrade the storage library to 3.0, my table storage code breaks complaining about a missing assembly (Microsoft.Data.Services.Client.dll). If I add the WCF Data Services Client package (aka Microsoft.Data.Services.Client) all is fixed.
If you have this dependency why isn't it part of the NuGet package? It also seems like I need the OData, Spatial & EDM packages when I just want to use the blob client library. Why is that?

@Paul: You are correct that the WCF Data Services dependency is not explicitly referenced by the NuGet package; installing this package should fix your issue. I will look into adding this going forward. We are also recommending clients utilize the service layer provided in the Storage.Table namespace as it provides many significant performance and extensibility improvements over the legacy implementation. (You can read more about the various table features here: blogs.msdn.com/.../announcing-storage-client-library-2-1-rtm.aspx). If you do that then there is no dependency your code would have on WCF Data Services, and you would not need that package. To your second question, the Spatial, OData, EDM, and JSON dependencies are there to support the core table functionality. If you are only leveraging blobs or queues you can remove these to keep deployment size smaller. Historically the storage client has been a single package exposing all three storage abstractions (Blobs, Tables, and Queues). From your question it seems some clients may appreciate a more segregated design allowing them to utilize only the specific storage services they require. This is good feedback to receive, and we will look into various options to address it going forward.

Hi, can somebody tell me how I can make it work? E.g. Table.CreateIfNotExistsAsync returns "400 Bad Request", code: "InvalidInput" (in the storage emulator). Any ideas? Regards, Yann

Great updates. But are the new client libraries not compatible at all with the current emulator? Or is it just the new features? Because I'm getting a 400 Bad Request error directly after updating the libraries and without changing any code. It seems you put the release on the stable feed on NuGet but it won't work with the current emulator there?
The local emulator is vital when developing a solution based on Azure Storage. So if I only want the release of the client library that works with the emulator, then how do I seamlessly do that via NuGet (especially if I'm a new user)?

@Robert - the problem is that the emulator released a few months back did not know about the new version and hence does not support the 2013-08-15 version. The new client lib only supports one version - 2013-08-15 - and hence is unsupported by the emulator until we release an update to the emulator.

@Yann - are you trying the new lib with the existing emulator? It is not supported. We are working on updating the emulator and will try to have a release in the next month or so. However, this will not be an SDK release but a release of just the required DLLs.

Do you have any examples of setting Content-Disposition via the SAS? I poked around at the API but I couldn't figure it out. I did get it working at the blob level, however, and that is brilliant.

I get 'Could not load file or assembly 'Microsoft.Data.Services.Client, Version=5.6.0.0'' after upgrading. I am only uploading to blob storage and not using table storage or queues in any way. It worked perfectly before the update. Do I need the WCF package as well? Seems rather odd.

@Simon: Here is sample code using Windows Azure Storage Client Library 3.0.

SharedAccessBlobHeaders sasHeaders = new SharedAccessBlobHeaders()
{
    ContentDisposition = "Attachment; filename=anotherName.txt"
};
SharedAccessBlobPolicy sasBlobPolicy = new SharedAccessBlobPolicy()
{
    Permissions = SharedAccessBlobPermissions.Read,
    SharedAccessExpiryTime = DateTime.UtcNow.AddDays(1)
};
string sasQueryParam = cloudBlockBlob.GetSharedAccessSignature(sasBlobPolicy, sasHeaders);
Uri fullSasUri = new Uri(cloudBlockBlob.Uri, sasQueryParam);

@Manny: Thanks for reporting this issue. For the 3.0.0 release we had to move from the System.Data.Services.Client in the GAC to the Microsoft.Data.Services.Client on NuGet ().
Currently there is a piece of shared code during exception translation that checks for WCF exceptions, which is why you may hit this while doing blob or queue traffic. We will be updating the package to add the NuGet dependency, and working on decoupling this logic to allow blob and queue users to run without delay-loading the WCF or OData dependencies. In the interim please add a reference to the NuGet package mentioned above.

Hi, great news. Is the JSON format supported by the OData table client? How can I switch to using JSON transfer in the existing SDK client?

@George: You need to use the latest Windows Azure Client Library 3.0 mentioned in the blog. For more details regarding JSON and client SDK support, please refer to the following blog post: blogs.msdn.com/.../windows-azure-tables-introducing-json.aspx

We're now at mid-Jan 2014; any news about a new storage emulator? One of my projects could really use the new features that have been released (Content-Disposition and CORS) but since we don't have a new emulator we're a bit stuck with this atm. (The size of the files and our internet bandwidth prevent us from developing against the cloud's storage services.)

@Jeroen: We will provide a Storage Emulator preview release with support for the new version and features by the end of the month. Michael
http://blogs.msdn.com/b/windowsazurestorage/archive/2013/11/27/windows-azure-storage-release-introducing-cors-json-minute-metrics-and-more.aspx
SUSI WebChat now has a search feature. Users now have an option to filter or find messages. The user can enter a keyword or phrase in the search field and all the matched messages are highlighted with the given keyword, and the user can then navigate through the results. Let's visit SUSI WebChat and try it out.

- Clicking on the search icon on the top right corner of the chat app screen, we'll see a search field expand to the left from the search icon.
- Type any word or phrase and you see that all the matches are highlighted in yellow and the currently focused message is highlighted in orange.
- We can use the up and down arrows to navigate between previous and recent messages containing the search string.
- We can also choose to search case-sensitively using the drop-down provided by clicking on the vertical dots icon to the right of the search component.
- Click on the `X` icon or the search icon to exit from the search mode. We again see that the search field contracts to the right, back to its initial state as a search icon.

How does the search feature work?

We first make our search component with a search field, navigation arrow icon buttons and an exit icon button. We then listen to input changes in our search field using the onChange function, and on input change, we collect the search string and iterate through all the existing messages, checking if the message contains the search string or not; if present, we mark that message before passing it to MessageListItem to render the message.

let match = msgText.indexOf(matchString);
if (match !== -1) {
  msgCopy.mark = {
    matchText: matchString,
    isCaseSensitive: isCaseSensitive
  };
}

We also need to pass the message ID of the currently focused message to MessageListItem, as we need to identify that message to highlight it in orange instead of yellow, differentiating between all matches and the current match.
function getMessageListItem(messages, markID) {
  if (markID) {
    return messages.map((message) => {
      return (
        <MessageListItem
          key={message.id}
          message={message}
          markID={markID}
        />
      );
    });
  }
}

We also store the indices of the marked messages in the MessageSection component state, which are later used to iterate through the highlighted results.

searchTextChanged = (event) => {
  let matchString = event.target.value;
  let messages = this.state.messages;
  let markingData = searchMsgs(messages, matchString,
                               this.state.searchState.caseSensitive);
  if (matchString) {
    let searchState = {
      markedMsgs: markingData.allmsgs,
      markedIDs: markingData.markedIDs,
      markedIndices: markingData.markedIndices,
      scrollLimit: markingData.markedIDs.length,
      scrollIndex: 0,
      scrollID: markingData.markedIDs[0],
      caseSensitive: this.state.searchState.caseSensitive,
      open: false,
      searchText: matchString
    };
    this.setState({
      searchState: searchState
    });
  }
}

After marking the matched messages with the search string, we pass the messages array into the MessageListItem component, where the messages are processed and rendered. Here, we check if the message being received from MessageSection is marked or not and, if marked, we then highlight the message. To highlight all occurrences of the search string in the message text, I used a module called react-text-highlight.

import TextHighlight from 'react-text-highlight';

if (this.props.message.id === markMsgID) {
  markedText.push(
    <TextHighlight
      key={key}
      highlight={matchString}
      text={part}
      markTag='em'
      caseSensitive={isCaseSensitive}
    />
  );
} else {
  markedText.push(
    <TextHighlight
      key={key}
      highlight={matchString}
      text={part}
      caseSensitive={isCaseSensitive}
    />
  );
}

Here, we are using the message ID of the currently focused message, sent as props to MessageListItem, to identify the currently focused message and highlight it specifically in orange instead of the default yellow color for all other matches.
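The searchMsgs helper called in searchTextChanged above isn't shown in the post. Here is a plausible plain-JavaScript sketch of what it does, inferred from the marking snippet and from the fields (allmsgs, markedIDs, markedIndices) that searchTextChanged consumes; treat the implementation details as assumptions rather than the project's actual code.

```javascript
// Hypothetical sketch of searchMsgs: copy each message, mark the ones
// containing the search string, and collect their ids and indices.
function searchMsgs(messages, matchString, isCaseSensitive) {
  const markedIDs = [];
  const markedIndices = [];
  const allmsgs = messages.map((message, index) => {
    const msgCopy = Object.assign({}, message);
    let msgText = msgCopy.text;
    let query = matchString;
    if (!isCaseSensitive) {
      msgText = msgText.toLowerCase();
      query = query.toLowerCase();
    }
    if (query && msgText.indexOf(query) !== -1) {
      msgCopy.mark = { matchText: matchString, isCaseSensitive: isCaseSensitive };
      markedIDs.push(message.id);
      markedIndices.push(index);
    }
    return msgCopy;
  });
  return { allmsgs: allmsgs, markedIDs: markedIDs, markedIndices: markedIndices };
}
```

Keeping this a pure function of (messages, matchString, isCaseSensitive) means it can be unit-tested without any React rendering.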
I used the 'em' tag to emphasise the currently highlighted message and colored it orange using CSS attributes.

em {
  background-color: orange;
}

We next need to add functionality to navigate through the matched results. The arrow buttons are used to navigate. We stored all the marked messages in the MessageSection state as `markedIDs` and their corresponding indices as `markedIndices`. Using the length of this array, we get the `scrollLimit`, i.e. we know the bounds to apply while navigating through the search results.

On clicking the up or down arrows, we update the currently highlighted message through `scrollID` and `scrollIndex`, and also check for bounds using `scrollLimit` in the searchState. Once these are updated, the chat app must automatically scroll to the new currently highlighted message. Since findDOMNode is being deprecated, I used the custom scrollbar to find the node of the currently highlighted message without using findDOMNode. The custom scrollbar was implemented using the module react-custom-scrollbars. Once the node is found, we use the built-in HTML DOM method scrollIntoView() to automatically scroll to that message.

if (this.state.search) {
  if (this.state.searchState.scrollIndex === -1 ||
      this.state.searchState.scrollIndex === null) {
    this._scrollToBottom();
  } else {
    let markedIDs = this.state.searchState.markedIDs;
    let markedIndices = this.state.searchState.markedIndices;
    let limit = this.state.searchState.scrollLimit;
    let ul = this.messageList;
    if (markedIDs && ul && limit > 0) {
      let currentID = markedIndices[this.state.searchState.scrollIndex];
      this.scrollarea.view.childNodes[currentID].scrollIntoView();
    }
  }
}

Let us now see how the search field was animated. I used a CSS transition on width to get the search field animation to work. This gives the animation when there is a change of width for the search field. I fixed the width to be zero when the search mode is not activated, so only the search icon is displayed.
When the search mode is activated, i.e. the user clicks on the search field, I fixed the width as 125px. Since the width has changed, the increase in width is displayed as an expanding animation due to the CSS transition property.

const animationStyle = {
  transition: 'width 0.75s cubic-bezier(0.000, 0.795, 0.000, 1.000)'
};

const baseStyles = {
  open: { width: 125 },
  closed: { width: 0 },
};

We also have a case-sensitive option which is displayed on clicking the rightmost button, i.e. the three vertical dots button. We can toggle the case-sensitive option, whose value is stored in the MessageSection searchState and is passed along with the messages to MessageListItem, where it is used by react-text-highlight to highlight text accordingly and render the highlighted messages.

This is how the search feature was implemented in SUSI WebChat. You can find the complete code at SUSI WebChat.
https://blog.fossasia.org/implementing-search-feature-in-susi-web-chat/
Equivalent Features On Windows And Unix

This page is meant to capture the current state of agreement on UnixFunctionalityVsWindowsFunctionalityDiscussion. It is simply a list of features on each operating system and its associated tools, and then its suggested equivalent on the other system. The hope is that we might learn from each other how to do our jobs on each other's systems, not hear arguments on how much the other sucks. In a way this is supposed to be like a pretrial hearing, to reduce the length of the ensuing court battle. Please don't have your argument here - if you take umbrage, take your umbrage elsewhere. If you have a question about an equivalent feature though, add it to the list and hopefully someone will take the time to put in an answer.

Features on Windows (and their Unix equivalents)
- WTS: X + ssh
- COM: Corba / plain-text protocols
- IIS: Apache
- Threads: processes sharing address space and OS resources; a proprietary thread API; POSIX threads (usually implemented with one of the first two)
- Named Pipes (don't have the same functionality, but a common subset): you can make a special file that acts as a pipe

Features on Unix or LinuxLikeOperatingSystems (and their Windows equivalents)
- CLI with pipes & job control: Cygwin (for Unix commands) plus a DOS shell emulator (for native commands from cmd.exe/command.com)
- Network-transparent GUI: WTS
- Rooted file-system hierarchy: you can refer to anything on the local machine using the UNC syntax, e.g. \\mypc\C$\whatever or \\mypc\foo, for shares
- Symbolic links: see SymbolicLinkOnWindows. The explorer has "link" files but these are not fully understood by other parts of the operating system, such as command line tools or the open/save dialogs. Shared folders can act like a flat namespace of symlinks. Few know it, but NTFS supports true symbolic links. Try the ln command on Cygwin and you might be surprised by the results
- Asynchronous signals: threads waiting on binary semaphores signaled by the operating system

This 'real windows' would be what?
The one released less than a year ago? The one we used to call NT? The one with the POSIX shell CLI?

Original author replies: and the 'real unix' is... IRIX? AIX? Solaris? HPUX? To answer would be to miss the point. This page's title should more accurately be 'roughly equivalent features and tools in the set of operating systems and tools which people would generally see as belonging to the windows world or the unix world'. However, brevity is the soul of wit. And the intention of this page is that people can say how they would get roughly the same effect in the 'windows world' or the 'unix world', so limiting the page to one version of Windows or Unix, or excluding tools that get layered on top of the OS, would prevent people from describing what they would actually do.

Actually, the version of Windows is often important information, since really big features can change in major releases (e.g. the multiple times that further Unix shell-like functionality was announced for one version of Windows or another). The corresponding nitpick about different versions of Unix is largely irrelevant, because for the most part the major features are identical across vendors and even on completely different codebases such as Linux; mostly it is system administration issues (like the location of configuration files) that vary, not system calls or shell availability. So let's just say: mention the version of either Windows or of Unix whenever there may be a question as to whether the feature varies with release/vendor, and if someone asks, just tell them.

[The versions matter less and less as the operating systems stabilize. In the early 1980s, there were important differences between System V and BSD. System V didn't support asynchronous I/O. Shell commands had completely different syntaxes. As Windows ages, the differences between versions seem to decrease, just as they did with Unix. And as they both age, they grow more and more alike.]

The Unix issue you mentioned was twenty years ago.
Windows continues to add major features on every major release. So what was the point of your comment???

[My point is that Unix is older than Windows, hence the differences between releases are smaller. For instance, the difference between Win2000 and WinXP was significantly less than the difference between Win3.1 and WinNT.]

I should hope so. Win3.1 and WinNT were completely different operating systems, not just different releases of the same operating system. But the context of this is someone arguing "and the 'real unix' is... IRIX? AIX? Solaris? HPUX?", and I'm just saying what you just said: it doesn't really matter with Unix, but it often does matter with Windows, so for crying out loud, don't argue, just speak up about the Windows version, and that should be the end of it. But I phrased it better, above.

See the article by DonBox about the five stages of Microsoft development: ( ) Porting tools from "the old country" to Windows comes in at stage 3 ("anger"). I've actually used vi for Windows (ha!) so maybe he's right.

SmugSelloutWeenies ? , eh? You know, not everyone gets past the anger stage, and maybe that's not a bad thing. There's always CygWin

Differences
- Capitalization recognition
- Windows has no (simple) directory links

Book: Unix for the MS-DOS User

See UnixFunctionalityVsWindowsFunctionalityDiscussion

CategoryOperatingSystem CategoryComparison

(last edited April 10, 2012)
http://c2.com/cgi-bin/wiki?EquivalentFeaturesOnWindowsAndUnix
a similar way as the DIV. For example you can set a form style: form = SQL)}}

This is a regular HTML form that asks for the user's name. When you fill the form and click the submit button, the form self-submits, and the variable request.vars.name along with its provided value is displayed at the bottom.

You can generate the same form using helpers. This can be done in the view or in the action. Since web2py processes the form in the action, it is better to define the form in the action itself. Here is the new controller:

indicates where to redirect the user after the form is accepted. onsuccess and onfailure can be functions like lambda form: do_something(form). form.validate(...) is a shortcut for form.process(..., dbio=False).accepted

Hidden fields

When the above form object is serialized by {{=form}}, and because of the previous call to the accepts method, it now looks like this:

This function has multiple purposes: for example, to perform additional checks on the form and eventually add errors to the form, or to compute the values of some fields based on the values of other fields, or to trigger some action (like sending an email) before a record is created/updated. Here is an example:

Adding buttons to FORMs

Usually a form provides a single submit button. It is common to want to add a "back" button that, instead of submitting the form, directs the visitor to a different page. This can be done with the add_button method:

form.add_button('Back', URL('other_page'))

You can add more than one button to a form. The arguments of add_button are the value of the button (its text) and the url to redirect to. (See also the buttons argument for SQLFORM, which provides a more powerful approach.)

More about manipulation of FORMs

As discussed in the Views chapter, a FORM is an HTML helper. Helpers can be manipulated as Python lists and as dictionaries, which enables run-time creation and modification.
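As a sketch of the accept/redirect flow described above, here is a minimal web2py controller. This code runs inside web2py's execution environment, where FORM, INPUT, IS_NOT_EMPTY, request, response, redirect and URL are predefined; the field name and the target page are assumptions for illustration.

```python
def display_form():
    form = FORM('Your name:',
                INPUT(_name='name', requires=IS_NOT_EMPTY()),
                INPUT(_type='submit'))
    if form.process().accepted:
        # onsuccess-style behavior: redirect once the form is accepted
        redirect(URL('thank_you'))
    elif form.errors:
        response.flash = 'form has errors'
    return dict(form=form)
```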
SQLFORM

We now move to the next level by providing the application with a model file: drop-downs, and "upload" fields with links that allow users to download the uploaded files. It hides "blob" fields, since they are supposed to be handled differently, as discussed later. For example, consider the following model:

- fields = ['name']
- labels = {'name': 'Your Full Name:'}
- col3 is a dictionary of values for the third column. For example:

col3 = {'name': A('what is this?', _href='')}

- buttons is a list of INPUTs or TAG.buttons (though technically it could be any combination of helpers) that will be added to a DIV where the submit button would go. For example, adding a URL-based back-button (for a multi-page form) and a renamed submit button:

buttons = [TAG.button('Back', _type="button",
                      _onClick="parent.location='%s' " % URL(...)),
           TAG.button('Next', _type="submit")]

always show all fields in readonly mode, and they cannot be accepted. Marking a field with writable=False prevents the field from being part of the form, and causes the form processing to disregard the value of request.vars.field when processing the form. However, if you assign a value to form.vars.field, this value will be part of the insert or update when the form is processed. This enables you to change the value of fields that for some reason you do not wish to include in a form.

accessing:

Other types of Forms

Changing the table_name is necessary if you need to place two factory generated forms in the same table and want to avoid CSS conflicts.

Uploading files with SQLFORM.factory

One form for multiple tables

It often happens that you have two tables (for example 'client' and 'address') which are linked together by a reference, and you want to create a single form that allows you to insert info about one client and its default address.
Here is how:

model:

db.define_table('client',
    Field('name'))

db.define_table('address',
    Field('client', 'reference client',
          writable=False, readable=False),
    Field('street'), Field('city'))

controller:

def register():
    form = SQLFORM.factory(db.client, db.address)
    if form.process().accepted:

Confirmation Forms

Often you need a form with a confirmation choice. The form should be accepted if the choice is accepted and not otherwise. The form may have additional options that link other web pages. web2py provides a simple way to do this:

The form will display one INPUT field for each item in the dictionary. It will use dictionary keys as INPUT names and labels, and current values to infer types (string, int, double, date, datetime, boolean). This works great but leaves to you the logic of making the config dictionary persistent. For example you may want to store the config in a session.

' sets the log message on successful record deletion. Notice that crud.messages belongs to the class gluon.storage.Message, which is similar to gluon.storage.Storage but automatically translates its values, without need for the T operator. Log messages are used if and only if CRUD is connected to Auth as discussed in Chapter 9. The events are logged in the Auth table "auth_events".

Methods

The behavior of CRUD methods can also be customized on a per call basis. Here are their signatures:

{{=form.custom.begin}}
Image name: <div>{{=form.custom.widget.name}}</div>
Image file: <div>{{=form.custom.widget.source}}</div>
Click here to upload: {{=form.custom.submit}}
{{=form.custom.end}}

where form.custom.widget[fieldname] gets serialized into the proper widget for the field. If the form is submitted and it contains errors, they are appended below the widgets, as usual. The above sample form is shown in the image below.

A similar result could have been obtained without using a custom form:

SQLFORM(..., formstyle='table2cols')

or in the case of CRUD forms with the following parameter:

crud.settings.formstyle = 'table2cols'
If your form has deletable=True you should also insert {{=form.custom.delete}} to display the delete checkbox.

{{=form}}

The errors will be displayed as in the image shown below. This mechanism also works for custom forms.

Validators

Validators are classes used to validate input fields (including forms generated from database tables). Here is an example of using a validator with a FORM:

For the full description of the % directives look under the IS_DATETIME validator.

IS_DECIMAL_IN_RANGE

INPUT(_type='text', _name='name', requires=IS_DECIMAL_IN_RANGE(0, 10, dot="."))

IS_EMAIL

requires = IS_EMAIL(error_message='invalid email!')

IS_EQUAL_TO

Checks whether the validated value is equal to a given value (which can be a variable):

requires = IS_EQUAL_TO(request.vars.password,
                       error_message='passwords do not match')

IS_FLOAT_IN_RANGE

Checks that the field value is a floating point number within a definite range, 0 <= value <= 100 in the following example:

IS_IN_SET

Checks that the field values are in a set:

requires = IS_IN_SET(['a', 'b', 'c'], zero=T('choose one'),
                     error_message='must be a or b or c')

requires = [IS_INT_IN_RANGE(0, 8),
            IS_IN_SET([2, 3, 5, 7], error_message='must be prime and less than 10')]

For a form checkbox, use this:

requires = IS_IN_SET(['on'])

You may also use a dictionary or a list of tuples to make the drop-down list more descriptive:

#### Dictionary example:
requires = IS_IN_SET({'A': 'Apple', 'B': 'Banana', 'C': 'Cherry'}, zero=None)

#### List of tuples example:
requires = IS_IN_SET([('A', 'Apple'), ('B', 'Banana'), ('C', 'Cherry')])
IS_NOT_EMPTY

This validator checks that the content of the field value is not an empty string.

requires = IS_NOT_EMPTY(error_message='cannot be empty!')

IS_TIME

This validator checks that a field value contains a valid time in the specified format.

requires = IS_TIME(error_message='must be HH:MM:SS!')

IS_STRONG

requires = IS_STRONG(min=10, special=2, upper=2)

IS_UPPER

This validator never returns an error. It converts the value to upper case.

requires = IS_UPPER()

IS_EMPTY_OR

requires = IS_EMPTY_OR(IS_DATE())

CLEANUP

This is a filter. It never fails. It just removes all characters whose decimal ASCII codes are not in the list [10, 13, 32-127].

requires = CLEANUP()

CRYPT

This is also a filter. It performs a secure hash on the input, and it is used to prevent passwords from being passed in the clear to the database.

requires = CRYPT()

By default, CRYPT uses 1000 iterations of the pbkdf2 algorithm combined with SHA512 to produce a 20-byte-long hash. Older versions of web2py used "md5" or HMAC+SHA512 depending on whether a key was specified or not. If a key is specified, CRYPT uses the HMAC algorithm. The key may contain a prefix that determines the algorithm to use with HMAC, for example SHA512:

requires = CRYPT(key='sha512:thisisthekey')

This is the recommended syntax. The key must be a unique string associated with the database used. The key can never be changed. If you lose the key, the previously hashed values become useless.

By default, CRYPT uses random salt, such that each result is different. To use a constant salt value, specify its value:

requires = CRYPT(salt='mysaltvalue')

Or, to use no salt:

requires = CRYPT(salt=False)

The CRYPT validator hashes its input, and this makes it somewhat special. If you need to validate a password field before it is hashed, you can use CRYPT in a list of validators, but you must make sure it is the last in the list, so that it is called last.
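The default CRYPT behaviour described above (1000 iterations of PBKDF2 with SHA-512, a 20-byte hash, optional constant salt) can be reproduced with Python's standard hashlib. The alg$salt$hash layout below follows the self-identifying format mentioned in the text, but the exact string details are a simplified assumption, not necessarily what web2py stores:

```python
import hashlib
import os

def sketch_crypt(password, salt=None, iterations=1000, dklen=20):
    # 1000 iterations of PBKDF2-HMAC-SHA512 producing a 20-byte hash,
    # as the text describes for the default CRYPT().
    if salt is None:
        salt = os.urandom(16).hex()   # random salt: each result differs
    digest = hashlib.pbkdf2_hmac('sha512', password.encode('utf8'),
                                 salt.encode('utf8'), iterations, dklen)
    # alg$salt$hash: the stored value identifies its own algorithm.
    return 'pbkdf2(%d,%d,sha512)$%s$%s' % (iterations, dklen, salt, digest.hex())

# A constant salt gives a repeatable hash; a random salt does not.
print(sketch_crypt('secret', salt='mysaltvalue'))
```

Because the algorithm and salt are stored next to the hash, a verifier can re-run the same derivation on a candidate password and compare, even after the default algorithm changes.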
For example:

requires = [IS_STRONG(), CRYPT(key='sha512:thisisthekey')]

CRYPT also takes a min_length argument, which defaults to zero.

The resulting hash takes the form alg$salt$hash, where alg is the hash algorithm used, salt is the salt string (which can be empty), and hash is the algorithm's output. Consequently, the hash is self-identifying, allowing, for example, the algorithm to be changed without invalidating previous hashes. The key, however, must remain the same.

Database validators

IS_NOT_IN_DB

Consider the following example:

db.define_table('person', Field('name'))
db.person.name.requires = IS_NOT_IN_DB(db, 'person.name')

db.define_table('person', Field('name', unique=True))
db.person.name.requires = IS_NOT_IN_DB(db, 'person.name')

import datetime
now = datetime.datetime.today()
db.define_table('person',
    Field('name'),
    Field('registration_stamp', 'datetime', default=now))
recent = db(db.person.registration_stamp > now - datetime.timedelta(10))
db.person.name.requires = IS_NOT_IN_DB(recent, 'person.name')

IS_IN_DB

Consider the following tables and requirement:

db.define_table('person', Field('name', unique=True))
db.define_table('dog', Field('name'), Field('owner', db.person))
db.dog.owner.requires = IS_IN_DB(db, 'person.id', '%(name)s', zero=T('choose one'))

It is enforced at the level of dog INSERT/UPDATE/DELETE forms. It requires that a dog.owner be a valid id in the field person.id in the database db. Because of this validator, the dog.owner field is represented as a drop-down list. The third argument of the validator is a string that describes the elements in the drop-down list. In this example, we use IS_IN_DB in a controller to limit the records dynamically each time the controller is called:

Occasionally you want the drop-down:

subset = db(db.person.id > 100)
db.dog.owner.requires = IS_IN_DB(db, 'person.id', '%(name)s', _and=IS_NOT_IN_DB(subset, 'person.id'))

IS_IN_DB has a boolean distinct argument which defaults to False.
When set to True it prevents repeated values in the drop-down.

Attention: grid and smartgrid were experimental prior to web2py version 2.0 and were vulnerable to information leakage. The grid and smartgrid are no longer experimental, but we are still not promising backward compatibility of the presentation layer of the grid, only of its APIs.

These are two high-level objects that create complex CRUD controls. They provide pagination and the ability to browse, search, sort, create, update and delete records from a single object.

Because web2py's HTML objects build on the underlying, simpler objects, the grids create SQLFORMs for viewing, editing and creating their rows. Many of the arguments to the grids are passed through to this SQLFORM. This means the documentation for SQLFORM (and FORM) is relevant. For example, the grid takes an onvalidation callback. The processing logic of the grid ultimately passes this through to the underlying process() method of FORM, which means you should consult the documentation of onvalidation for FORMs.

As the grid passes through different states, such as editing a row, a new request is generated. request.args has information about which state the grid is in.

A SQLFORM.grid object will provide access to records matching the query. Before we dive into the long list of arguments of the grid object, we need to understand how it works. The object looks at request.args in order to decide what to do (browse, search, create, update, delete, etc.). Each button created by the object links to the same function (manage_users in the above case) but passes different request.args. Login is required by default for data updates.

Multiple grids per controller function

Because of the way grid works, one can only have one grid per controller function, unless they are embedded as components via LOAD. To make the default search grid work in more than one LOADed grid, please use a different formname for each one.
Using request.args safely

Because the controller function that contains the grid may itself manipulate the URL arguments (known in web2py as request.args and request.vars), the grid needs to know which args should be handled by the grid and which not. In our case request.args[:1] is the name of the table we want to manage, and it is handled by the manage function itself, not by the grid object.

SQLFORM.grid signature

The complete signature for the grid is the following:

fields is a list of fields to be fetched from the database. It is also used to determine which fields to show in the grid view. However, it doesn't control what is displayed in the separate form used to edit rows. For that, use the readable and writable attributes of the database fields. For example, in an editable grid, suppress updating of a field like this: before creating the SQLFORM.grid, set

db.my_table.a_field.writable = False
db.my_table.a_field.readable = False

field_id must be the field of the table to be used as ID, for example db.mytable.id.

left is an optional left join expression used to build ...select(left=...).

headers is a dictionary that maps 'tablename.fieldname' into the corresponding header label, e.g. {'auth_user.email': 'Email Address'}

orderby is used as the default ordering for the rows.

groupby is used to group the set. Use the same syntax as if you were passing it to a simple select(groupby=...).

searchable, sortable, deletable, editable, details, create determine whether one can search, sort, delete, edit, view details, and create new records, respectively.

selectable can be used to call a custom function on multiple records (a checkbox will be inserted for every row), e.g.

selectable = lambda ids: redirect(URL('default', 'mapping_multiple', vars=dict(id=ids)))

paginate sets the max number of rows per page.

csv, if set to True, allows downloading the grid in various formats (more on that later).
links_in_grid, if set to False, means links will only be displayed in the "details" and "edit" pages (so, not on the main grid).

upload: same as SQLFORM's. web2py uses the action at that URL to download the file.

maxtextlength sets the maximum length of text to be displayed for each field value in the grid view. This value can be overwritten for each field using maxtextlengths, a dictionary of 'tablename.fieldname': length, e.g. {'auth_user.email': 50}

onvalidation, oncreate, onupdate and ondelete are callback functions. All but ondelete take a form object as input; ondelete takes the table and the record id. Because the edit/create form is an SQLFORM, which extends FORM, these callbacks are essentially used in the same way as documented in the sections for FORM and SQLFORM. Here is skeleton code: onupdate and oncreate are the same callbacks available to SQLFORM.process().

ExporterCSV, ExporterXML, ExporterHTML and ExporterTSV are all defined in gluon/sqlhtml.py. Take a look at those for creating your own exporter. If you pass a dict like dict(xml=False, html=False), you will disable the xml and html export formats.

formargs is passed to all SQLFORM objects used by the grid, while createargs, editargs and viewargs are passed only to the specific create, edit and details SQLFORMs.

formname, ignore_rw and formstyle are passed to the SQLFORM objects used by the grid for create/update forms.

buttons_placement and links_placement both take a parameter ('right', 'left', 'both') that will affect where on the row the buttons (or the links) will be placed.

deletable, editable and details are usually boolean values, but they can be functions which take the row object and decide whether to display the corresponding button or not.

Virtual fields in SQLFORM.grid and smartgrid

In versions of web2py after 2.6, virtual fields are shown in grids like normal fields: either shown alongside all other fields by default, or by including them in the fields argument. However, virtual fields are not sortable.
In older web2py versions, showing virtual fields in a grid requires the use of the links argument. This is still supported in more recent versions. If table db.t1 has a field called t1.vfield which is based on the values of t1.field1 and t1.field2, do this:

grid = SQLFORM.grid(db.t1, ...,
    fields = [t1.field1, t1.field2, ...],
    links = [dict(header='Virtual Field 1', body=lambda row: row.vfield), ...])

In all cases, because t1.vfield depends on t1.field1 and t1.field2, these fields must be present in the row. In the example above, this is guaranteed by including t1.field1 and t1.field2 in the fields argument. Alternatively, showing all fields will also work. You can suppress a field from displaying by setting the readable attribute to False. Note that when defining the virtual field, the lambda function must qualify fields with the database name, but in the links argument this is not necessary. So for the example above, the virtual field may be defined like:

db.define_table('t1',
    Field('field1', 'string'),
    Field('field2', 'string'),
    Field.Virtual('virtual1', lambda row: row.t1.field1 + row.t1.field2),
    ...)

SQLFORM.smartgrid

A SQLFORM.smartgrid looks a lot like a grid; in fact it contains a grid, but it is designed to take as input not a query but a single table, and to browse said table and selected referencing tables. For example, consider the following table structure:

which looks like this:

Notice the extra "children" links. One could create the extra links using a regular grid, but they would point to a different action. With a smartgrid they are created automatically and handled by the same object. The value of this field can be overwritten. We can prevent this by making it readonly:

smartgrid signature

divider allows you to specify a character to use in the breadcrumb navigator; breadcrumbs_class will apply the class to the breadcrumb element.

grid and smartgrid access control

smartgrid plurals

The smartgrid is the only object:
http://www.web2py.com/books/default/chapter/29/07/forms-and-validators
Question: Why do managers focus on the effect that an investment will have on reported earnings rather than on the investment's cash flow consequences?
http://www.solutioninn.com/why-do-managers-focus-on-the-effect-that-an-investment
LIBPFM(3)              Linux Programmer's Manual              LIBPFM(3)

NAME
       pfm_get_event_next - iterate over events

SYNOPSIS
       #include <perfmon/pfmlib.h>

       int pfm_get_event_next(int idx);

DESCRIPTION
       Events are uniquely identified with opaque integer identifiers.
       There is no guaranteed order within identifiers. Thus, to list all
       the events, it is necessary to use iterators.

       Events are grouped in tables within the library. A table usually
       corresponds to a PMU model or family. The library contains support
       for multiple PMU models, thus it has multiple tables. Based on the
       host hardware and software environments, tables get activated when
       the library is initialized via pfm_initialize(). Events from
       activated tables are called active events. Events from
       non-activated tables are called supported events.

       Event identifiers are usually retrieved via pfm_find_event() or
       when encoding events. To iterate over a list of events for a given
       PMU model, all that is needed is an initial identifier for the
       PMU. The first event identifier is usually obtained via
       pfm_get_pmu_info().

       The pfm_get_event_next() function returns the identifier of the
       next supported event after the one passed in idx. This iterator
       stops when the last event for the PMU is passed as argument, in
       which case the function returns -1.

       void list_pmu_events(pfm_pmu_t pmu)
       {
          struct pfm_event_info info;
          struct pfm_pmu_info pinfo;
          int i, ret;

          memset(&info, 0, sizeof(info));
          memset(&pinfo, 0, sizeof(pinfo));

          info.size = sizeof(info);
          pinfo.size = sizeof(pinfo);

          ret = pfm_get_pmu_info(pmu, &pinfo);
          if (ret != PFM_SUCCESS)
             errx(1, "cannot get pmu info");

          for (i = pinfo.first_event; i != -1; i = pfm_get_event_next(i)) {
             ret = pfm_get_event_info(i, &info);
             if (ret != PFM_SUCCESS)
                errx(1, "cannot get event info");

             printf("%s Event: %s::%s\n",
                    pinfo.present ? "Active" : "Supported",
                    pinfo.name, info.name);
          }
       }

RETURN
       The function returns the identifier of the next supported event.
       It returns -1 when the argument is already the last event for the
       PMU.

ERRORS
       No error code, besides -1, is returned by this function.

SEE ALSO
       pfm_find_event(3)

September, 2009                                               LIBPFM(3)
http://man7.org/linux/man-pages/man3/pfm_get_event_next.3.html
Project: WPF using Entity Framework & MVVM Architecture with MEF.

This article is about how we could implement length-check (MaxLength) validation in WPF, based on the actual allowed length of the column in the database (through Entity Framework). We will be using a style and the Tag property to achieve this effect in a generic way. The implementation is a little tricky, but not impossible.

When I say MVVM, I mean no code behind your XAML file. I have seen lots of MVVM examples use two classes just named "ViewModel" and "View" in the same project. Since MVVM is much more than naming two files, I decided to use the MVVM sample kit from CodePlex. If you need a complete MVVM with MEF framework, you can download it from here. The download provides a working sample of MVVM using MEF, and thus gives a base for demonstrating the solution.

As compared to WinForms, the implementation of MaxLength in WPF is not generic enough. I have looked around and read many articles, but could not find a generic way to implement MaxLength checking for text boxes. Since this looks challenging, I gave a try to implementing MaxLength in WPF in a generic way so that we don't need to go to each and every form and implement MaxLength validation.

Before starting, we need to understand... The solution, in a nutshell, is to store the entity name somewhere during the binding, and refer to this entity name to fetch information about the field's properties, like MaxLength. This could be achieved in many ways, e.g. by keeping a global list of items in a dictionary (key/value) format, but for the sake of optimization I've tried to reuse the existing property "Tag" for the purpose. This will store the broken piece of the puzzle, i.e. the entity name, using the View Model, so that our incomplete pair of entity name / field name would be completed.

Tag is an age-old property in .NET that is available for most controls. This property is rarely used by developers.
At least I never had to use/manipulate this property in any of the projects I've done so far in my career. In the current development project too, this property stays untouched throughout the modules. When I researched within our .NET community, I found similar conclusions about the Tag property and its limited use. Since my project has not used this property, I decided to use it for our TextBox max-length implementation so that I can fill the gap of the EntityName.

I have created one string property named "EntityTableKey" in my view model base class. This property will be set as the Tag in the TextBox style. One important point to note here: if needed, you can declare this property as an "object" type as well (to pass an array, list, or some collection), because the Tag property is of type Object.

public string EntityTableKey { get; set; }

Now you must be getting an idea of what we are going to do further. This EntityTableKey property needs to be set with the entity name in the constructor of our view model. In the code below, my entity table name is Book, so the "EntityTableKey" property is set to "Book". This is an indication that the code will search for the TextBox-bound property in the entity named Book in the EDMX file.

[ImportingConstructor] // Ignore this attribute if you are not using MEF.
internal BookViewModel(IBookView view) : base(view)
{
    EntityTableKey = "Book";
}

Now, we will move to our control's resource file where we have defined all our styles. Here, we are going to bind this EntityTableKey property to Tag in our TextBox base style. So, up to here, we have created our mapping of entity name to the TextBox control. But how do we use this and utilize this EntityTableKey property for our max length?

Pretty simple…
Pretty simple… <Style TargetType="TextBox"> <Setter Property="Width" Value="Auto"/> <Setter Property="MinHeight" Value="20"/> <Setter Property="Margin" Value="2"/> <Setter Property="Tag" Value="{Binding EntityTableKey}"/> </Style> Do not give x:Key for your textbox style, this way all these styling changes would be defaulted to textbox control. x:Key textbox Now the extraction part, we need to use some kind of converter where we can pass the whole control and then extract the tag value and bind property. Since simple Binding property do not pass control. bind Binding So here is the main implementation of our whole article, we need to use multibinding to pass our control (textbox) to our code class. For this, we will use a converter which would be called and process the current control to extract the bound field and tag property. Once we have bound field name and the entity name, then it is very easy to search in our entity and get the max length. textbox So let’s see how this can be achieved. <Setter Property="MaxLength"> <Setter.Value> <MultiBinding Converter="{StaticResource maxlen}"> <Binding RelativeSource="{RelativeSource Mode=Self}" /> <Binding Path="Tag" RelativeSource="{RelativeSource Mode=FindAncestor, AncestorType={x:Type TextBox}}"/> </MultiBinding> </Setter.Value> </Setter> public class MaxLenSetter : IMultiValueConverter { public object Convert(object[] value, Type targetType, object parameter, CultureInfo culture) { int maxLen = 0; var currentControl = (value[0] as TextBox); if (currentControl != null && currentControl.Tag != null) { try { var propertyName = currentControl.GetBindingExpression (TextBox.TextProperty).ParentBinding.Path.Path; var boundproperty = propertyName.Split('.'); if (boundproperty.Length > 0) propertyName = boundproperty[boundproperty.Length - 1]; else propertyName = boundproperty[0]; maxLen = Flite.Domain.Entities.GetMaxLengths (currentControl.Tag.ToString(), propertyName); } catch { //nothing to do. 
} } return maxLen; } public static BookLibraryEntities MaxLenEntity; public static int GetMaxLengths(String entitySetName, string propertyName) { int maxLength = 0; //sometime control rendering is earlier than entity //initializing specially in Release mode due to code optimization. while (MaxLenEntity == null) System.Threading.Thread.Sleep(1000); MetadataWorkspace workspace = MaxLenEntity.MetadataWorkspace; System.Collections.ObjectModel.ReadOnlyCollection<EntityType> entities = workspace.GetItems<EntityType>(DataSpace.CSpace); foreach (EntityType entity in entities) { if (entity.Name.Equals(entitySetName)) { foreach (EdmProperty property in entity.Properties) { if (property.TypeUsage.EdmType.Name.Equals("String")) { if (property.Name == propertyName && property.TypeUsage.Facets["MaxLength"].Value != null) { int res = 0; if (int.TryParse(property.TypeUsage.Facets ["MaxLength"].Value.ToString(), out res)) { maxLength = res; break; } } } } break; } } return maxLength; } Now that we have established our credibility by defining a problem and a solution, we can start talking benefits. Making a generic property, we can set its value in constructor of each ViewModel class. This class exposes the properties which are used in binding the text controls. Since MVVM with MEF works on one to one mapping mechanism, this means every view is having one view model. We just need to set this “EntityTableKey” property with our entity table name and the rest will be done automatically. ViewModel Now what if your ViewModel is exposing properties from two different tables? Let’s say a complex view is having 10 textbox controls and two textboxes are using some field which belongs to other table? It is very simple, just explicitly set the textbox tag property with your entity name. XAML always overrides all style properties with the properties used explicitly on control. 
<TextBox Text="{Binding InputPath}"/>

(using the Tag property by default)

<TextBox Text="{Binding InputPath}" Tag="MyTable"/>

(explicitly setting the entity name for searching the max length when more than two tables are used in the View Model)

Though MaxLength validation is not straightforward, the above solution brings us a step closer to implementing it using the TextBox Tag property. There is a way to use a dependency property, but I wanted to keep this implementation as simple as possible. Some developers may not agree with using the Tag property for this purpose, and I am open to hearing from them.

At last, before finishing my article, I would like to thank the CodePlex community for their great work on the MVVM & MEF solution. After using their MEF solution, I am now a die-hard fan of MVVM & MEF. Thanks for reading.
http://www.codeproject.com/Tips/716350/How-to-Implement-Max-Length-in-WPF-Entity-Framewor
I'm using webbrowser so I can open an HTML page for a performance test I'm currently doing. This small piece of code is the beginning of the automation. The goal of the function perf_measure is to time how long the page at url takes to load:

import webbrowser

def perf_measure(url=""):
    try:
        webbrowser.open(url)
    except webbrowser.Error, e:
        print "It couldn't open the url: ", url

url = ""
perf_measure(url)

Total time to load page in (secs): 2.641

Do you need to use the web browser? As in, do you need to view the result? Otherwise you could do this:

import urllib2
from time import time

stream = urllib2.urlopen('')
start_time = time()
output = stream.read()
end_time = time()
stream.close()
print(end_time - start_time)

If you want a more human-readable result, you can use round:

print(round(end_time - start_time, 3))

Output:

0.865000009537 # Without Round
0.865 # With Round
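For reference, the same measurement can be written for Python 3, where urllib2 became urllib.request and time.perf_counter is the preferred timer. Note that this sketch starts the clock before urlopen, so connection setup is included in the total (the answer above only times the body read):

```python
import time
import urllib.request

def time_fetch(url):
    # Time the whole download: the clock starts before urlopen(),
    # so connection setup is included, not just stream.read().
    start = time.perf_counter()
    with urllib.request.urlopen(url) as stream:
        body = stream.read()
    return time.perf_counter() - start, len(body)

def time_call(fn, *args):
    # Generic variant: time any callable the same way.
    start = time.perf_counter()
    result = fn(*args)
    return time.perf_counter() - start, result

# Example with a local callable (no network needed):
elapsed, total = time_call(sum, [1, 2, 3])
print(round(elapsed, 3), total)
```

time.perf_counter is monotonic and high-resolution, which makes it a better choice than time.time for measuring short durations.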
https://codedump.io/share/pKjGotidfeE5/1/measuring-the-time-to-load-a-page---python
You have to understand the basics of DirectShow. Please do these things first: You should download the code before reading this article.

Imports System
Imports System.Diagnostics
Imports System.Drawing
Imports System.Runtime.InteropServices
Imports System.Windows.Forms
Imports DirectShowLib
Imports System.Runtime.InteropServices.ComTypes

Nothing special here. Imports does what it says: it imports namespaces. Mostly it is used so you don't have to type so much. If you import System.Diagnostics, you don't have to type System.Diagnostics.blabla every time.

Namespace Capture_The_Webcam
    Public Class Form1
        Inherits System.Windows.Forms.Form
    End Class
End Namespace

Now we create our own namespace (maybe we want to import our VB into a different project or something later). Inside it, we create our program in the public class Form1. Public means that outside this namespace, this class is still reachable. Inherits means, as MSDN puts it, that it causes the current class or interface to inherit the attributes, variables, properties, procedures, and events from another class or set of interfaces. So, go and let this class feel like System.Windows.Forms.Form.

Inside the class, there are four parts:

Enum PlayState
    Stopped = 0
    Paused = 1
    Running = 2
    Init = 3
End Enum

Dim currentState As PlayState = PlayState.Stopped

So what does this do? Let's give an example:

If Me.CurrentState = PlayState.Paused Then Me.CurrentState = PlayState.Running

This looks a lot nicer than:

If Me.CurrentState = 0 Then Me.CurrentState = 3

Dim D As Integer = Convert.ToInt32("0X8000", 16)
Public WM_GRAPHNOTIFY As Integer = D + 1

What I think happens here is that we want to let WM_GRAPHNOTIFY create an ordinary Windows message holder in the format of a string. And it has to start at place 0x8000, because that place is where the filtergraph events start.
Filtergraph events are events that DirectShow gives you when you use inputs, filters and outputs.

Dim videoWindow As IVideoWindow = Nothing

What you do here is create an object with the format of a video window that starts out empty (with nothing in it). Generally, this is a video renderer that draws the video onto a window on the display.

Dim mediaControl As IMediaControl = Nothing

This creates an empty object/interface that functions like tape-deck buttons. The filter graph exposes the IMediaControl interface to allow applications to control the streaming of media through the filters in the graph. The interface provides methods for running, pausing, and stopping the streaming of data.

Dim mediaEventEx As IMediaEventEx = Nothing

This creates an empty object for event messages. This interface derives from IMediaEvent and adds a method that allows registration of a window to receive messages when events occur. This interface is used by applications to receive notification that an event has occurred. Applications can then avoid using a separate thread that waits until an event is set. This means that you can receive events without the rest of your program stopping to wait for a new message.

Dim graphBuilder As IGraphBuilder = Nothing

This is our main object that will create filters. It will put the input filter (webcam) through a conversion filter, so that it can show it on an IVideoWindow. This interface provides methods that enable an application to build a filter graph. The Filter Graph Manager implements this interface.

Dim captureGraphBuilder As ICaptureGraphBuilder2 = Nothing

This is an object that we create to help us out with building stuff that has to do with capturing video and/or audio. (That's called a helper object.)

Dim rot As DsROTEntry = Nothing

This makes a helper object for the program GraphEdit. When you start up GraphEdit, you can make a connection with your program!
What you can do to see this is the following: What would happen is that GraphEdit makes a graphical representation of how it creates your filters to show "the webcam capture device" on the "render video window".

This is a short one:

<STAThread()> _
Shared Sub Main()
    Application.Run(New Form1)
End Sub

What STAThread does is protect our program against using objects at the same time in our program. That's rather handy, because streaming can get rather complicated if you can't think about this step by step. Think about what could happen between the window and the capture preview if they both lived their own life. Moving the main window could leave your capture preview behind, because the capture preview is still busy with its own "message solving".

Sub Main is the main/head subroutine that starts when the program runs. Shared Sub Main means that if our class is used multiple times in a new program/object, we still only use one Sub Main that will be shared by all. There are no extra copies of Sub Main in memory for each instance; there is only one.

Application.Run() starts an application, a "New Form1" (Form1 is our class code, see paragraph 2).

Private Sub Form1_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
    InitializeComponent()
    CaptureVideo()
End Sub

When we do Application.Run, it automatically tries to find the "load" subroutine and starts it (Handles Me.Load is pointing in this direction). Private Sub means that this subroutine is not publicly available outside this class. InitializeComponent() and CaptureVideo() are routines that do what they say.
Private Sub InitializeComponent() CaptureVideo Private Sub InitializeComponent() Dim resources As System.Resources.ResourceManager = New System.Resources.ResourceManager(GetType(Form1)) Me.AutoScaleBaseSize = New System.Drawing.Size(5, 13) Me.ClientSize = New System.Drawing.Size(320, 320) Me.Icon = CType((resources.GetObject("$this.Icon")), System.Drawing.Icon) Me.Name = "Form1" Me.Text = "Video Capture Previewer (PlayCap)" Debug.WriteLine("I started Sub InitializeComponent") End Sub This Sub Routine is also private to this class (private sub). private sub First we create an object called resources that will have the feel of a resourceManager. This use of the resourceManager gives access to the resource management of the form.Next interesting line is the Debug.Writeline. It writes a line of text to your debug output. If you don't know where that is , go to the Visual Basic menu, go to debug, and select output. When you test run your program, you will see these messages appearing in this output window. resourceManager resourceManager Debug.Writeline Public Sub CaptureVideo() Dim sourceFilter As IBaseFilter = Nothing Try Big chunk of code Catch ex As Exception MessageBox.Show("An unrecoverable error has occurred.With error : " & ex.ToString) End Try End Sub Dim hr As Integer = 0 A try / catch thing is really handy. We will try to implement the code "you write" , but when there is an "exception error", we will show a "messageBox" which will translate the exception code (ex) to normal readable text.A exception error could be all kinds of errors we create. When an error occurs, either the system or the currently executing application reports it by throwing an exception containing information about the error. "hr" is a placeholder for (mainly error) messages. It will keep them as numbers. So you have to use a error number translator. I have shown how to do this later in the code.Next we create an object that works like a "DirectShow filter object" . 
These objects have an input pin and an output pin, and do filtering in between. DirectShow graphs are, in the simplest terms, a chain of filter objects, each doing its own kind of conversion or filtering. So a chain of filters always has a source and a target. Here we use filter-object creation to create the source (our webcam). Filter objects do more, but I won't tell you now, to keep it simple. So, what is in the "Big chunk of code"? Let's check it out.

    GetInterfaces()

This means we start a subroutine that creates the building blocks of our interface. Here is what GetInterfaces() contains. "hr" is for keeping the error code. "Me" is used to point out that we are talking about our own created objects, not things made by something else. CType is a function that converts one object type to another.

    Me.graphBuilder = CType(New FilterGraph, IGraphBuilder)

Let's fill the graphBuilder object with a new FilterGraph object (of type IGraphBuilder). The FilterGraph is the chain of filters that we will build.

    Me.captureGraphBuilder = CType(New CaptureGraphBuilder2, ICaptureGraphBuilder2)

Let's fill the captureGraphBuilder object with a CaptureGraphBuilder2 object (of type ICaptureGraphBuilder2). We use this helper object to help us build the FilterGraph.

    Me.mediaControl = CType(Me.graphBuilder, IMediaControl)

Let's fill the mediaControl object with the IMediaControl interface of the graphBuilder object we just created.

    Me.videoWindow = CType(Me.graphBuilder, IVideoWindow)

Let's fill the videoWindow object with the IVideoWindow interface of our graphBuilder object.
    Me.mediaEventEx = CType(Me.graphBuilder, IMediaEventEx)

Let's fill the mediaEventEx (message) object with messaging capabilities, again using our graphBuilder object.

    hr = Me.mediaEventEx.SetNotifyWindow(Me.Handle, WM_GRAPHNOTIFY, IntPtr.Zero)
    DsError.ThrowExceptionForHR(hr)

ThrowExceptionForHR is a wrapper for Marshal.ThrowExceptionForHR, but additionally provides descriptions for DirectShow-specific error messages. If the hr value is not a fatal error, no exception is thrown.

    Debug.WriteLine("I started Sub Get interfaces , the result is : " & DsError.GetErrorText(hr))

I already explained what this does before. Now that GetInterfaces() has completed, let us go back to our Try block. Remember that when there is an error, DsError throws an exception that jumps straight to the Catch block to tell us we had an error.

    hr = Me.CaptureGraphBuilder.SetFiltergraph(Me.GraphBuilder)
    Debug.WriteLine("Attach the filter graph to the capture graph : " & DsError.GetErrorText(hr))
    DsError.ThrowExceptionForHR(hr)

We use the CaptureGraphBuilder (helper) object to set up the GraphBuilder filter graph. When there is an error, we throw an exception.

    sourceFilter = FindCaptureDevice()

This code uses the system device enumerator and class enumerator to find a video capture/preview device, such as a desktop USB video camera. Let's look at the FindCaptureDevice() code:

    Debug.WriteLine("Start the Sub FindCaptureDevice")
    Dim hr As Integer = 0

Easy to understand by now.

    Dim classEnum As IEnumMoniker = Nothing
    Dim moniker As IMoniker() = New IMoniker(0) {}

This is what I think it means: classEnum will hold the device enumerator, and moniker is an array with room for one IMoniker reference.

    Dim source As Object = Nothing

Create an empty object called source.
    Dim devEnum As ICreateDevEnum = CType(New CreateDevEnum, ICreateDevEnum)

The ICreateDevEnum interface creates an enumerator for a category of filters, such as video capture devices or audio capture devices. It puts them in devEnum.

    hr = devEnum.CreateClassEnumerator(FilterCategory.VideoInputDevice, classEnum, 0)
    Debug.WriteLine("Create an enumerator for the video capture devices : " & DsError.GetErrorText(hr))
    DsError.ThrowExceptionForHR(hr)

Create an enumeration of the video capture devices. The CreateClassEnumerator method creates an enumerator for a specified device category, meaning that we fill classEnum with video input devices. The zero at the end means that we check every sort of filter (to find the video input devices).

    Marshal.ReleaseComObject(devEnum)

The device enumerator is no longer needed, so we dump it. Marshal provides a collection of methods for allocating unmanaged memory, copying unmanaged memory blocks, and converting managed to unmanaged types, as well as other miscellaneous methods used when interacting with unmanaged code.

    If classEnum Is Nothing Then
        Throw New ApplicationException("No video capture device was detected." & vbCrLf & vbCrLf & _
            "This sample requires a video capture device, such as a USB WebCam," & vbCrLf & _
            "to be installed and working properly. The sample will now close.")
    End If

Even when the enumeration call succeeds, it doesn't mean we actually have a device in the enumeration list. So let's check whether we have one; otherwise we don't have a webcam.

    If classEnum.Next(moniker.Length, moniker, IntPtr.Zero) = 0 Then
        Dim iid As Guid = GetType(IBaseFilter).GUID
        moniker(0).BindToObject(Nothing, Nothing, iid, source)
    Else
        Throw New ApplicationException("Unable to access video capture device!")
    End If

Let's go to the first line. An If/Then/Else construct is easy to understand.
IMoniker.Next(a, b, c) retrieves a specified number of items in the enumeration sequence. So classEnum.Next fills moniker with just one element of classEnum (that will be the first video input device). By the way, if it returns 0 (S_OK), there is a moniker; if it returns 1 (S_FALSE), there is no moniker. So, if there is a video capture device:

    Dim iid As Guid = GetType(IBaseFilter).GUID

Let iid be a globally unique identifier, based on IBaseFilter.

    moniker(0).BindToObject(Nothing, Nothing, iid, source)

Use the reference to the capture device as the source, and give it that globally unique identifier.

    Marshal.ReleaseComObject(moniker(0))
    Marshal.ReleaseComObject(classEnum)

Dump the objects.

    Return CType(source, IBaseFilter)

Return source to the caller of this FindCaptureDevice function, but typed as an IBaseFilter.

OK, so everything went fine in finding a capture device, and now that we have a source filter inside sourceFilter, we go back to the CaptureVideo() code.
    hr = Me.GraphBuilder.AddFilter(sourceFilter, "Video Capture")
    Debug.WriteLine("Add capture filter to our graph : " & DsError.GetErrorText(hr))
    DsError.ThrowExceptionForHR(hr)

This adds the source filter (the video capture device) to our filter graph, letting the filter graph know it's a "Video Capture" filter.

    'Render the preview pin on the video capture filter; use this instead of
    'Me.graphBuilder.RenderFile
    hr = Me.CaptureGraphBuilder.RenderStream(PinCategory.Preview, MediaType.Video, sourceFilter, Nothing, Nothing)
    Debug.WriteLine("Render the preview pin on the video capture filter : " & DsError.GetErrorText(hr))
    DsError.ThrowExceptionForHR(hr)

Here we use the helper object CaptureGraphBuilder to finish the rest of our filter graph: we let it build the rest of the graph for us. There are five parameters, as you see.

    Marshal.ReleaseComObject(sourceFilter)

The sourceFilter has been added to the filter graph and has been used to build the rest of the graph automatically, so we release our reference to it.

    SetupVideoWindow()

Let's check that sub routine out:

    Public Sub SetupVideoWindow()
        Dim hr As Integer = 0
        hr = Me.VideoWindow.put_Owner(Me.Handle)
        DsError.ThrowExceptionForHR(hr)
        hr = Me.VideoWindow.put_WindowStyle(WindowStyle.Child Or WindowStyle.ClipChildren)
        DsError.ThrowExceptionForHR(hr)
        ResizeVideoWindow()
        'Make the video window visible, now that it is properly positioned.
        'put_Visible : This method changes the visibility of the video window.
        hr = Me.VideoWindow.put_Visible(OABool.True)
        DsError.ThrowExceptionForHR(hr)
    End Sub

In short: put_Owner makes our form the parent of the DirectShow video window, put_WindowStyle makes it behave like a clipped child window, ResizeVideoWindow sizes it to the form, and put_Visible(OABool.True) makes it visible. Let's go back to the CaptureVideo() code.
    rot = New DsROTEntry(Me.GraphBuilder)

This adds our graph to the Running Object Table, which allows the GraphEdit application to "spy" on our graph. Said differently, we fill the helper object rot with our filter graph, so the external GraphEdit application can read it.

    hr = Me.MediaControl.Run()
    Debug.WriteLine("Start previewing video data : " & DsError.GetErrorText(hr))
    DsError.ThrowExceptionForHR(hr)

Start previewing video data! We use the mediaControl helper object to start our filter graph.

    Me.CurrentState = PlayState.Running
    Debug.WriteLine("The currentstate : " & Me.CurrentState.ToString)

We record that the current state is now the running play state.

Let's start out with:

    Public Sub ResizeVideoWindow()
        If Not (Me.VideoWindow Is Nothing) Then 'if the video preview is not nothing
            Me.VideoWindow.SetWindowPosition(0, 0, Me.Width, Me.ClientSize.Height)
        End If
    End Sub

If the video window is not Nothing, resize the video preview window to match the owner window size: left, top, width, height.

But what do we do when the form resizes?

    Private Sub Form1_Resize1(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Resize
        If Me.WindowState = FormWindowState.Minimized Then
            ChangePreviewState(False)
        End If
        If Me.WindowState = FormWindowState.Normal Then
            ChangePreviewState(True)
        End If
        ResizeVideoWindow()
    End Sub

When the form resizes, there are different situations. What matters to us is what to do when the form is minimized or normal, because the state of the DirectShow preview should change in those situations. That's why we call a function named ChangePreviewState(). The code of ResizeVideoWindow was already explained above. I will not explain ChangePreviewState() because it is really simple.
    Protected Overloads Sub WndProc(ByRef m As Message)
        Select Case m.Msg
            Case WM_GRAPHNOTIFY
                HandleGraphEvent()
        End Select
        If Not (Me.VideoWindow Is Nothing) Then
            Me.VideoWindow.NotifyOwnerMessage(m.HWnd, m.Msg, m.WParam.ToInt32, m.LParam.ToInt32)
        End If
        MyBase.WndProc(m)
    End Sub

"Protected Overloads Sub" means that we want to take control over something. That something is WndProc, the message interface, so that we can see the messages our program needs to know about. The parameter is ByRef because we get a reference to the message. In case the message is a WM_GRAPHNOTIFY one (the carrier of DirectShow messages), we handle it. If it is not, we pass the message on to where it belongs; that's what the rest of the routine does.

    Public Sub HandleGraphEvent()
        Dim hr As Integer = 0
        Dim evCode As EventCode
        Dim evParam1 As Integer
        Dim evParam2 As Integer
        If Me.MediaEventEx Is Nothing Then
            Return
        End If
        While Me.MediaEventEx.GetEvent(evCode, evParam1, evParam2, 0) = 0
            '// Free event parameters to prevent memory leaks associated with
            '// event parameter data. While this application is not interested
            '// in the received events, applications should always process them.
            hr = Me.MediaEventEx.FreeEventParams(evCode, evParam1, evParam2)
            DsError.ThrowExceptionForHR(hr)
            '// Insert event processing code here, if desired
        End While
    End Sub

First we check whether the MediaEventEx message queue exists at all; otherwise there is no use starting the rest:

    If Me.MediaEventEx Is Nothing Then
        Return
    End If

Otherwise we use a While loop: as long as the loop condition is still true, we do another iteration. The GetEvent method retrieves the next event notification from the MediaEventEx event queue and fills evCode, evParam1, and evParam2 with information about it.
The 0 is the timeout in milliseconds: we do not wait for a message at all, so GetEvent returns immediately when the queue is empty. As said before, if the result is 0 (S_OK), there is a message. So, as long as there are messages in the queue, we execute this While loop.

    hr = Me.MediaEventEx.FreeEventParams(evCode, evParam1, evParam2)
    DsError.ThrowExceptionForHR(hr)

We fill hr with the result and immediately free the event parameters from the queue. If hr holds a bad value, we throw an exception.

    Protected Overloads Overrides Sub Dispose(ByVal disposing As Boolean)
        If disposing Then
            '// Stop capturing and release interfaces
            CloseInterfaces()
        End If
        MyBase.Dispose(disposing)
    End Sub

Like the routine above, we also want some control when ending our program, because we hold references to unmanaged resources that cannot be released automatically; you could leave garbage in memory. For example, if I don't stop the program correctly (pressing the debugger's stop button instead of closing the form), I cannot use the webcam the second time. So, if disposing is True, we call the subroutine CloseInterfaces(), and after that, the closing work that is normally done by the program itself happens.

    Public Sub CloseInterfaces()

By now, CloseInterfaces() is easy to understand as well.

So that's it. I hope you liked this. If you see errors or have any questions, hints, or corrections, please post them here. Note: if I stop the program using the debugger instead of closing the form itself, the filter graph doesn't close properly and the webcam is no longer usable.
In this module of the Python tutorial, we will learn about Python JSON: the built-in json module, converting Python objects into JSON data and vice versa, and how to format the resulting JSON after conversion.

JSON is an acronym for JavaScript Object Notation. Python has a built-in package named json to support JSON. JSON is basically used for encoding and decoding data. The process of encoding data as JSON is referred to as serialization, as it involves converting the data into a series of bytes that can be stored and transmitted between servers and web applications. Since serialization is the encoding of the data, we can guess the term used for decoding: yes, it is deserialization.

So, without further delay, let's get started.

Reading JSON data from a file is very easy. The json.load() method reads the string from a file and parses the JSON data. It then populates a Python dictionary with the parsed data and returns it back to us.

Example:

    # import json in Python
    import json

    with open('Intellipaat.txt') as json_file:
        # load a JSON file in Python
        data = json.load(json_file)
        for p in data['Course']:
            print('Name: ' + p['name'])
            print('Website: ' + p['website'])
            print('From: ' + p['from'])
            print('')

If we have a JSON string or JSON data, we can easily parse it using the json.loads() method found in the json package. To make use of this method, we have to import the json package offered by Python. As discussed above, this method performs deserialization, as we are converting the JSON-encoded data into Python objects. Deserialization converts each JSON type into its Python equivalent:

- object -> dict
- array -> list
- string -> str
- number (integer) -> int
- number (real) -> float
- true / false -> True / False
- null -> None

For example, JSON data of the object type is converted into a Python dictionary.
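As a quick illustration of this mapping (the sample string below is my own, not from the tutorial), decoding a small document shows the resulting Python types:

```python
import json

# One value of each JSON type, packed into a single document
decoded = json.loads('{"items": [1, 2.5, true, null, "text"], "nested": {}}')

print(type(decoded))              # the top-level JSON object becomes a dict
for value in decoded["items"]:
    print(type(value))            # int, float, bool, NoneType, str in turn
print(type(decoded["nested"]))    # a nested object also becomes a dict
```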
Example of parsing JSON in Python:

    import json

    intellipaat = '{"course":"python", "topic":"Python JSON"}'

    # parse a JSON string using json.loads()
    intellipaat_dict = json.loads(intellipaat)
    print(intellipaat_dict)
    print(type(intellipaat))
    print(type(intellipaat_dict))

Output:

    {'course': 'python', 'topic': 'Python JSON'}
    <class 'str'>
    <class 'dict'>

In the above example, intellipaat is a JSON string and intellipaat_dict is a Python dictionary.

We can also convert Python data types into JSON format using the json.dumps() method. Let us take a look at an example and understand how to convert a Python object to JSON:

    import json

    intellipaat = {"course":"python", "topic":"Python JSON"}

    intellipaat_json = json.dumps(intellipaat)
    print(intellipaat_json)
    print(type(intellipaat))
    print(type(intellipaat_json))

Output:

    {"course": "python", "topic": "Python JSON"}
    <class 'dict'>
    <class 'str'>

While converting Python objects into JSON, each object is converted into its equivalent JSON type:

- dict -> object
- list, tuple -> array
- str -> string
- int, float -> number
- True -> true
- False -> false
- None -> null

Even though we have learned how to convert a Python object into JSON data, the converted JSON can still be very hard to read, with no indentation and no line breaks. To make it more readable, there are various parameters of the json.dumps() method that we can use.

    # using the indent parameter to provide indentation
    json.dumps(b, indent = 3)

We can also set the separators, such as the comma and colon:

    json.dumps(b, indent = 3, separators = (".", "="))

In certain cases where we might want to sort the resulting JSON data after converting it from Python to JSON, we can simply use another parameter of the json.dumps() method: sort_keys. With this boolean parameter, we can define whether we want the result to be sorted or not.
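To make the effect concrete, here is a small sketch (the variable name course is mine, not the tutorial's) comparing compact output with indented output and custom separators:

```python
import json

course = {"course": "python", "topic": "Python JSON"}

print(json.dumps(course))            # compact, single line
print(json.dumps(course, indent=3))  # one key-value pair per line, indented 3 spaces
# item separator "." and key separator "=" instead of the default ", " and ": "
print(json.dumps(course, indent=3, separators=(".", "=")))
```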
    json.dumps(b, indent = 3, sort_keys = True)

With this, we come to the end of this module in the Python tutorial. You can also go through a blog on Python for Data Science if you want to know why Python is one of the preferred languages for data science, and refer to trending Python interview questions prepared by industry experts.
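A runnable sketch (the dictionary here is my own example) showing the keys coming back in alphabetical order:

```python
import json

data = {"zebra": 1, "apple": 2, "mango": 3}

# with sort_keys=True the keys are emitted alphabetically: apple, mango, zebra
print(json.dumps(data, sort_keys=True))
```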
- 12 Jul, 2013 1 commit - 22 May, 2013 1 commit - 21 May, 2013 1 commit - 11 Mar, 2013 1 commit They save only little bit of work to the CPU and are suspicious to casual reader, even when they should be safe in theory. - 08 Mar, 2013 1 commit - Jeremy C. Reed authored Some slight grammar changes. Mostly in comments, but some for api docs. Changes a isc_throw output too. Reviewed via jabber. - 22 Feb, 2013 1 commit Add tests for the fact and fix it. Also, add log from the rpcCall itself, so it can be seen in logs. - 20 Feb, 2013 1 commit They currently fail, as the method is empty. Provide at least an empty implementation, to make it compile. - 31 Jan, 2013 1 commit - Curtis Blackburn authored - 28 Jun, 2012 1 commit as suggested in #2084. not sure why 'using namespace bind' didn't work (and the signature is so special that there wouldn't be ambiguity) but in any case this namespace is too large so it's better to not try to specify the entire space. with this change we don't need to declare using the entire space, so I removed it, too. okayed on jabber, directly committing. - 18 May, 2012 1 commit - 17 May, 2012 8 commits We check that the groupRecvMsgAsync can handle different situations, like when there are multiple messages and multiple callbacks registered and when the messages come later than the callbacks. The fake session needs to include the reply element in the response envelope. And the is_reply flag was set to the wrong value for replies (it was false as well). Check that we can wildcard match commands and replies. Check that we don't match what we shouldn't. The queued messages still didn't work well. Removed a problem with logging error triggered in older parts of code. The fake session didn't want to support hasQueuedMessages properly. Also, we need to be subscribed to the correct group to actually get the messages through the session. * Needed to extend the FakeSession::addMessage to hold the sequence number for representing answers. 
* Clarified the recipient parameter - it is the group, instance is ignored, since it is not used anywhere in the code anyway. * The tests simply register a callback and check it is called. More complex tests to come later. There was an anonymous namespace inside another anonymous namespace. This brings no further advantage and is just confusing, so it got removed. - 03 Feb, 2012 2 commits - 01 Feb, 2012 1 commit When closing a ModuleCCSession, a 'i am stopping' message is now sent to the ConfigManager, which in turn informs Cmdctl (which then causes bindctl to update its list). This causes stopped modules to disappear from bindctl until such time that they are enabled or started again, so that config updates and most importantly, commands, do not cause weird timeouts (bindctl will immediately inform you of modules not running) - 15 Aug, 2011 1 commit - Naoki Kambe authored and modify message string to be compared with in EXPECT_EQ - 22 Jul, 2011 1 commit - 15 Jul, 2011 1 commit Also needed to initialize logging for zonemgr, else the logging calls within ccsession.cc caused it to fall over with a "logging not initialized" exception. - 13 Jul, 2011 1 commit so that *not* using it must be specified explicitely changed most test cases to not set it (as they are testing other things) - 26 Jun, 2011 2 commits also do a few fixes and address a couple of comments; - made getRelatedLoggers public (so we can test it) - use set instead of vector - don't allow '*foo' etc. - added tests - added tests for the cfgmgr b10logging plugin - 06 Jun, 2011 2 commits - 03 Jun, 2011 1 commit (so that we can check spec and get default values) - 25 May, 2011 4 commits Also added a note to the addRemoteConfig() document that when it's called for a module the module cc session must not have been "started". 
Small refactoring, one more test Revert "[trac931_2] make sure the workaround -Wno-unused-parameter follows -Wextra" Revert "[trac931] Disable warning on one needed file" Revert "[trac931] Missing includes" Revert "[trac931] Test reproducing the double read bug" This reverts commit ec644604. This reverts commit 990a0cff. This reverts commit 54f86d77. This reverts commit 37ded0b3. - 24 May, 2011 2 commits added some comments about how the remote config value is retrieved in a test case. - 23 May, 2011 1 commit - 20 May, 2011 1 commit - 19 May, 2011 1 commit - 16 May, 2011 1 commit
Today we're going to take a closer look at collections in Python, and two special features (iterators and generators) that let you create your own collection-like objects with special properties.

1. The Collection Protocol (again)

You probably recall from an earlier lab exercise that there are some special methods you can implement to emulate the behaviour of sequences. Here they are once more:

- len(x) can be implemented by x.__len__()
- x[i] can be implemented by x.__getitem__(i)
- x[i] = y can be implemented by x.__setitem__(i, y)
- del x[i] can be implemented by x.__delitem__(i)
- y in x can be implemented by x.__contains__(y)

This time, I'll actually ask you to give this a whirl. Have a look at the following, which is just the beginning of a class whose instances represent a collection of files in a directory, using dictionary-like behaviour.

    import os

    class DictDir:
        def __init__(self, path):
            self.path = path
            if not os.path.exists(path):
                os.makedirs(path)  # see help(os.makedirs)

        def keys(self):
            return os.listdir(self.path)
        ...

A DictDir instance should be keyed on the names of files in its directory; getting or setting the value associated with a key should get or set the contents of the file. For example, if the directory /tmp/spam already contains a text file named manuel, and the file contains the string "Barcelona", we would want the following behaviour:

    >>> dd = DictDir('/tmp/spam')
    >>> dd['manuel']
    'Barcelona'
    >>> 'basil' in dd
    0
    >>> dd['basil'] = 'Fawlty'
    >>> 'basil' in dd
    1
    >>> dd['basil']
    'Fawlty'
    >>> del dd['manuel']
    >>> dd.keys()
    ['basil']
    >>>

(And at this point, /tmp/spam/manuel would be gone, and there would be a new text file at /tmp/spam/basil containing the string "Fawlty".)

Q1. Fill in the DictDir class by implementing the methods of the collection protocol.

Tip: Instead of manipulating strings yourself to assemble the directory name and filename into a path, use the handy os.path.join() function, which works on a variety of platforms.

2.
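Before reading on, you may want to attempt Q1 yourself. For reference, here is one possible sketch of the collection-protocol methods, written in modern Python 3 syntax rather than the lab's Python 2; the choice to raise KeyError for missing files (mirroring dict behaviour) is mine, not the lab's:

```python
import os

class DictDir:
    """Dictionary-like view of a directory: keys are filenames, values are file contents."""

    def __init__(self, path):
        self.path = path
        if not os.path.exists(path):
            os.makedirs(path)

    def keys(self):
        return os.listdir(self.path)

    def __len__(self):
        return len(self.keys())

    def __getitem__(self, name):
        # read the file's contents; KeyError mirrors dict behaviour for missing keys
        path = os.path.join(self.path, name)
        if not os.path.isfile(path):
            raise KeyError(name)
        with open(path) as f:
            return f.read()

    def __setitem__(self, name, value):
        # write (or overwrite) the file's contents
        with open(os.path.join(self.path, name), 'w') as f:
            f.write(value)

    def __delitem__(self, name):
        os.remove(os.path.join(self.path, name))

    def __contains__(self, name):
        return os.path.isfile(os.path.join(self.path, name))
```

Note that this sketch deliberately implements only the collection protocol, not iteration, which is exactly what the next section is about.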
The Default Iteration Protocol

What happens when you try to loop over a DictDir? If your implementation is anything like mine, you'll probably get an error.

    >>> dd = DictDir('/tmp/spam')
    >>> for filename in dd:
    ...     print filename
    ...
    Traceback (most recent call last):
      File "<stdin>", line 2, in ?
      File "dictdir.py", line 13, in __getitem__
        path = os.path.join(self.path, name)
      File "/usr/lib/python2.2/posixpath.py", line 48, in join
        if b[:1] == '/':
    TypeError: unsubscriptable object
    >>>

What's happening here?

Q2. What is causing this error? Tip: Try adding a debugging print to your __getitem__ method, and check back at the bottom of Explore 6 for a description of what the for loop is doing here.

3. Iterators

For situations like this, Python provides a special iterator protocol to give you more control over how for loops operate on your objects. The iterator protocol also affects iteration performed by other built-in functions, including list(), map(), filter(), and reduce().

In Python, an iterator is an object whose job is to obtain the next item from a collection of items. Iterators pretty much have to be mutable, because every time an iterator returns an item, it has to move forward to the next item. And iterators are usually a separate object from the collection being iterated over, since you usually want to be able to iterate over a collection without destroying it.

When Python starts a loop over an object, it looks for a method named __iter__() on the object. It calls this method to obtain an iterator. Then, each time through the loop, it calls the next() method on the iterator to get the next item. Iteration stops when the iterator raises a StopIteration exception.

In order to be a proper iterator, an object must implement just two methods: __iter__(), which should just return self (since an iterator is its own iterator), and next(), which should return the next item, raising StopIteration when the items run out.

Here's an example of an iterator that would work for the above DictDir object.
    class ListIterator:
        def __init__(self, items):
            self.items = items[:]  # make a copy of the list
            self.index = -1

        def __iter__(self):
            return self

        def next(self):
            self.index += 1
            if self.index < len(self.items):
                return self.items[self.index]
            else:
                raise StopIteration

You could then use this iterator in the DictDir object like this:

    class DictDir:
        ...
        def __iter__(self):
            return ListIterator(self.keys())
        ...

This lets you write for filename in dd, where dd is a DictDir object, and thereby loop over the filenames in the directory.

I showed you ListIterator so you would see an example of how iterators are implemented. But note that in real life, ListIterator is actually unnecessary, because it implements exactly the normal iteration behaviour of a list. Python has a built-in function iter() that already implements the default iteration behaviour. So it would also work to add just this to DictDir:

    class DictDir:
        ...
        def __iter__(self):
            return iter(self.keys()[:])
        ...

In general, there are two ways to use any iterator.

- Loop over it: for item in iterator: ....
- Call its next() method repeatedly: while blah: ... iterator.next() ...

The first way is simpler; the second way gives you more control. You might want to use the second way if you have to go off and do something else before coming back to deal with the next item.

Okay. Now here's a little iterator exercise. Recall that the built-in range() function produces a list of numbers, which you can then iterate over. But this can be wasteful of time and space; if you write

    for i in range(1000000):
        print i,

Python will actually construct a list with a million numbers in it before starting the loop. Iterators are one way to improve the situation. If you have an object that stores only the necessary information about the range (range() takes three arguments: start, stop, and step), it can easily compute the numbers one by one as they are needed, instead of wasting a lot of space on the entire list. Here's a start.
    class Range:
        def __init__(self, start, stop, step):
            self.start, self.stop, self.step = start, stop, step

        def __iter__(self):
            ...

    class RangeIterator:
        def __init__(self, ...):
            ...

        def __iter__(self):
            return self

        def next(self):
            ...

Q3. Fill in the parts marked with ... in the above example, so that for i in Range(x, y, z) has the same effect as for i in range(x, y, z), only without ever constructing the entire list of numbers.

Q4. Write an iterator that takes any sequence or iterator as an argument, and produces every other item of the given sequence (starting with the first item, then the third, and so on).

4. Generators

Sometimes when you're trying to write an iterator, it isn't practical to design a next() method that can return after each item. There may be too much going on to store and remember in the iterator object. So Python recently gained the ability to do something truly remarkable: it can suspend functions in the middle of execution, and resume them later. This really changes what you can do with an iterator. It also means you can have multiple suspended functions waiting around, and you can pick which one you want to resume.

This ability is triggered by the yield keyword. It is aptly named, for it both yields a value to its caller, and yields control back to its caller. A function which generates values in this way is called a generator. Generators are defined using def just like normal functions, but if the keyword yield appears anywhere in the function, you get a generator object instead of a normal function. And generator objects can be used as iterators.

For example, here is a generator that produces the sequence 1, -1, 2, -2, 3, -3, 4, -4, 5, -5.
```python
def alternate():
    for i in range(1, 6):
        yield i
        yield -i
```

Writing the same thing without generators would be much more tedious, in part because the class now has to remember whether the next number should be positive or negative:

```python
class alternate:
    def __init__(self):
        self.i = 1
        self.negative = 0

    def __iter__(self):
        return self

    def next(self):
        if self.negative:
            item = -self.i
            self.i += 1
        else:
            if self.i >= 6:
                raise StopIteration
            item = self.i
        self.negative = not self.negative
        return item
```

The generator gains a lot of simplicity because the interpreter remembers where it was in the generator each time the generator pauses; it remembers that it was inside a loop, and it remembers which `yield` statement it was on. In the non-generator version, both of these things have to be explicitly stored as variables.

Note that generators are such a new feature that they are enabled by default only in Python 2.3. To use generators in Python 2.2, you need to say

```python
from __future__ import generators
```

Q5. Write a generator that takes any sequence or iterator as an argument, and produces every other item of the given sequence (starting with the first item, then the third, and so on).

Generators are particularly powerful when you want to do iteration over a hierarchy instead of a list, since generators can call other generators recursively.

Q6. Write a generator that returns the list of all files in a subdirectory tree. Here's a start:

```python
import os

def allfiles(directory):
    for filename in os.listdir(directory):
        path = os.path.join(directory, filename)
        if os.path.isdir(path):
            ...
        if os.path.isfile(path):
            ...
```

Fill in the parts marked with `...`.

There's no assignment this week. Please spend the time working out your project's design and completing the test suite, which will be due on April 7. On April 7, the following are due:

- Outline of the modules and classes in your project.
- Documentation explaining what each function and method is supposed to do. (It doesn't have to be a lot.
But write enough so that I can see how all the pieces work and fit together.)
- A test suite (in `unittest` style) containing tests for any functions and methods that can be tested independently of the whole project. (You don't have to write tests for things that can only be tested by a user. But try to separate as much functionality as possible out of the user interface and into modules.) There should be one test file corresponding to each module (if the module is called `spam.py`, put the tests in `test_spam.py`).
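For reference, here is a minimal sketch of what such a test file can look like, written in modern Python 3 `unittest` style. The module and function names are made-up stand-ins for your own project:

```python
import unittest

# Stand-in for "from spam import double": a tiny function under test.
def double(x):
    return 2 * x

class DoubleTest(unittest.TestCase):
    def test_double(self):
        self.assertEqual(double(3), 6)
        self.assertEqual(double(-1), -2)

# unittest.main() is the usual entry point when running test_spam.py directly;
# here the suite is run programmatically instead.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DoubleTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

One such file per module keeps the tests easy to run and easy to find.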
Reimplement a QGraphicsView for Stellarium. More...

#include <StelMainView.hpp>

It is the class creating the singleton GL Widget, the main StelApp instance as well as the main GUI. This is useful for e.g. Chinese. This depends on the time the last user event happened. The format of the file, and hence the filename extension, depends on the architecture and build type. doScreenshot() does the actual work (it has to do it in the main thread, whereas saveScreenShot() might get called from another one). Usually this minimum will be switched to after there are no user events for some seconds, to save power. However, it can be useful to set this to a high value to improve playing smoothness in scripts.
```c
#include <fcntl.h>    /* For O_* constants */
#include <sys/stat.h> /* For mode constants */
#include <semaphore.h>
```

On success, sem_open() returns the address of the new semaphore; this address is used when calling other semaphore-related functions. On error, sem_open() returns SEM_FAILED, with errno set to indicate the error.

- EACCES: The semaphore exists, but the caller does not have permission to open it.
- EEXIST: Both O_CREAT and O_EXCL were specified in oflag, but a semaphore with this name already exists.
- EINVAL: value was greater than SEM_VALUE_MAX; or, name consists of just "/", followed by no other characters.
- EMFILE: The per-process limit on the number of open file descriptors has been reached.
- ENAMETOOLONG: name was too long.
- ENFILE: The system-wide limit on the total number of open files has been reached.
- ENOENT: The O_CREAT flag was not specified in oflag and no semaphore with this name exists; or, O_CREAT was specified, but name wasn't well formed.
- ENOMEM: Insufficient memory.

For an explanation of the terms used in this section, see attributes(7).
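The O_CREAT / O_EXCL behaviour described here is the same flag protocol used by open(2). Python's standard library has no named-semaphore wrapper, so the following sketch demonstrates those flag semantics on a file instead, using a throwaway temporary path:

```python
import os, errno, tempfile

# A fresh path in a new temporary directory, guaranteed not to exist yet.
path = os.path.join(tempfile.mkdtemp(), 'demo')

fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_RDWR)  # creates it: succeeds
os.close(fd)

try:
    os.open(path, os.O_CREAT | os.O_EXCL | os.O_RDWR)  # name already exists
    err = None
except OSError as exc:
    err = exc.errno  # EEXIST, the same error sem_open reports in this case
```

With O_CREAT alone (no O_EXCL), the second call would simply open the existing object instead of failing.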
After spending quite a bit of time typing out the code to assign values returned from a DataReader to the properties of my business object, I began to think that there must be a better way. Then I started to play with DotNetNuke. I began reading some of the documentation for DNN and ran into a rather interesting helper class that is explained in the DotNetNuke Data Access document. The Custom Business Object Helper class that they used to populate their business objects was a code saver.

You write the code to retrieve a DataReader with the information you need to set the properties on your object. Then you call FillObject or FillCollection on the helper class, pass in your DataReader and the type of your business object, and it gives you an instance of that object fully populated with the data. It uses reflection and the names of your business object's properties to match the DataReader field to your object's property. For instance, if you have a table in your database named Customers and you execute a query to return one of its rows in a DataReader, you can populate your Customer object's properties based on the names of the fields in the DataReader. If your business object has a property called FirstName and your table has a field called FirstName, then the value of the FirstName field in the DataReader is assigned to the FirstName property on your business object.

This can save quite a bit of development time when you have an object with many properties. It also helps when you add fields to your table, because all you need to do is add a property with the same name to your business object and the helper class takes care of populating it. Along with the ability to populate one instance of your object with values, it also gives you the ability to populate a collection of objects if the DataReader returns more than one row.

Despite the benefits of the DNN helper class, there were several issues that I felt needed to be solved.
The first one was that the properties on the business object and the fields in the database had to have the same name. Sometimes the database and queries have already been constructed, and the field names wouldn't make very good property names for your business object (imagine having to prefix all your business object properties with 'fld', which was a fairly common way to prefix field names in a database).

The second issue was that the FillCollection method returned an ArrayList. Many developers prefer to create their own collection classes for their objects. This way they can stick any methods that operate on the group of business objects inside that collection class. Using the DNN helper class, you could only get back an ArrayList.

The last issue was that if the database field value was null, the value would be set by the Null helper class. This class took a string representation of the object's type and assigned the default value for the type (e.g. 0 for System.Int32, false for System.Boolean, etc.). If you wanted to change the default value, you had to edit this class and then recompile it.

Thanks to Generics in .Net 2.0, Custom Attributes, and Generic Constraints, these issues can be resolved. The problem with the property names having to be the same as the database field names, and the assignment of a default value when the field contained a DBNull value, can be solved using custom attributes.

Creating a custom attribute is easy: just create a class that derives from System.Attribute. There are a few rules to follow when creating a custom attribute. Its name must end with 'Attribute', and it should be marked with the AttributeUsage attribute (yep, you need an attribute for your attribute). The AttributeUsage attribute tells the compiler to which program entities the attribute can be applied: classes, modules, methods, properties, etc.
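As a point of comparison, attaching metadata to class members like this is done in dynamic languages with decorators rather than declarative attributes. The sketch below is a hypothetical Python illustration of the same idea, not part of the article's library:

```python
# A decorator that attaches mapping metadata to a method, playing roughly the
# role that applying [DataMapping(...)] to a property plays in C#.
def data_mapping(field_name, null_value):
    def decorate(func):
        func._data_mapping = (field_name, null_value)  # store the metadata
        return func
    return decorate

class MyData:
    @data_mapping('FirstName', 'Unknown')
    def first_name(self):
        return self._first_name

# Reading the metadata back is the moral equivalent of Attribute.GetCustomAttribute.
meta = MyData.first_name._data_mapping
```

The key similarity is that the metadata lives alongside the member it describes, and a generic routine can discover it later by inspection.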
A custom attribute class can only have fields, properties and methods that accept and return values of the following types: bool, byte, short, int, long, char, float, double, string, object, System.Type, and public Enum. It can also receive and return one-dimensional arrays of the preceding types. A custom attribute type must expose one or more public constructors, and typically the constructors will accept arguments that are mandatory for the attribute. In the code below the mandatory attribute field is the NullValue field, which is the value the property will take if the datareader contains a null value for the field. There is also an overloaded version of this constructor that takes the field name as well.

The custom attribute class that BusinessObjectHelper exposes is DataMappingAttribute (in the Attributes.cs file). Its code is shown below.

```csharp
namespace BusinessObjectHelper
{
    [AttributeUsage(AttributeTargets.Property)]
    public sealed class DataMappingAttribute : System.Attribute
    {
        #region Private Variables
        private string _dataFieldName;
        private object _nullValue;
        #endregion

        #region Constructors
        public DataMappingAttribute(string dataFieldName, object nullValue) : base()
        {
            _dataFieldName = dataFieldName;
            _nullValue = nullValue;
        }

        public DataMappingAttribute(object nullValue) : this(string.Empty, nullValue) {}
        #endregion

        #region Public Properties
        public string DataFieldName
        {
            get { return _dataFieldName; }
        }

        public object NullValue
        {
            get { return _nullValue; }
        }
        #endregion
    }
}
```

This class is a very simple implementation of a custom attribute. As you can see, it inherits from System.Attribute and has the AttributeUsage attribute applied to it. The AttributeTargets.Property value passed in to the AttributeUsage attribute tells the compiler that this attribute can only be used on properties. If you attempt to apply this attribute to a method you will get the following error: Attribute 'DataMapping' is not valid on this declaration type.
It is valid on 'property, indexer' declarations only.

This is good, because this attribute is used to set the field name of the value to be read from the datareader, and the default value if the value read from the datareader is equal to DBNull. This attribute would not do us any good if it was applied to a method. You apply the DataMapping attribute to a property on your business object as shown below.

```csharp
public class MyData
{
    private string _firstName;

    [DataMapping("FirstName", "Unknown")]
    public string FirstName
    {
        get { return _firstName; }
        set { _firstName = value; }
    }
}
```

The first argument passed to the DataMapping attribute is the field name in the database. The second argument is the default value for the property if the datareader contains a DBNull value for the field. If the FirstName field in the database is null, then the FirstName property of the business object will be set to 'Unknown'.

So, now that we know how to create a custom attribute, let's move on and see how to use them in our code.

```csharp
private static List<PropertyMappingInfo> LoadPropertyMappingInfo(Type objType)
{
    List<PropertyMappingInfo> mapInfoList = new List<PropertyMappingInfo>();

    foreach (PropertyInfo info in objType.GetProperties())
    {
        DataMappingAttribute mapAttr = (DataMappingAttribute)
            Attribute.GetCustomAttribute(info, typeof(DataMappingAttribute));

        if (mapAttr != null)
        {
            PropertyMappingInfo mapInfo =
                new PropertyMappingInfo(mapAttr.DataFieldName, mapAttr.NullValue, info);
            mapInfoList.Add(mapInfo);
        }
    }

    return mapInfoList;
}
```

The PropertyMappingInfo type is another simple class that exposes properties for saving the field name of the matching database field, the default value of the property, and a PropertyInfo object that contains properties and methods that allow us to work with the business object's type. Inside the foreach loop we iterate through each one of the properties in the business object type (objType).
GetProperties returns a collection of PropertyInfo objects that we can use to find out all we need to know about the business object's properties. The Attribute.GetCustomAttribute method returns a reference to the instance of DataMappingAttribute that was applied to the property we are currently working with. If the attribute was not applied to the property, the method returns null and we simply ignore it.

There are several ways to get a reference to an attribute; this is called reflecting on an attribute. If you just need to check whether an attribute is associated with an element, use the Attribute.IsDefined static method or the IsDefined instance method associated with the Assembly, Module, Type, ParameterInfo, or MemberInfo classes. This technique doesn't instantiate the attribute object in memory and is the fastest. If you need to check whether a single-instance attribute is associated with an element and you also need to read the attribute's fields and properties (which is the case in the method above), use the Attribute.GetCustomAttribute static method (don't use this technique with attributes that can appear multiple times on an element, because you might get an AmbiguousMatchException). If you want to check whether a multiple-instance attribute is associated with an element and you need to read the fields and properties of the attribute, use the Attribute.GetCustomAttributes static method or the GetCustomAttributes instance method exposed by the Assembly, Module, Type, ParameterInfo, and MemberInfo classes. You must use this method when reading all the attributes associated with an element, regardless of the attribute type.

Now that we have a collection of PropertyMappingInfo objects for the type's properties, we will store this collection in a cache, because reflection is expensive, and there is no reason the PropertyMappingInfo should change while the application is running.
The cache can be cleared in code, however, if you want to refresh the PropertyMappingInfo collections. The class that implements the cache wraps a dictionary object that actually holds the cached data. Its code is shown below.

```csharp
namespace BusinessObjectHelper
{
    internal static class MappingInfoCache
    {
        private static Dictionary<string, List<PropertyMappingInfo>> cache =
            new Dictionary<string, List<PropertyMappingInfo>>();

        internal static List<PropertyMappingInfo> GetCache(string typeName)
        {
            List<PropertyMappingInfo> info = null;
            try
            {
                info = (List<PropertyMappingInfo>)cache[typeName];
            }
            catch (KeyNotFoundException) {}
            return info;
        }

        internal static void SetCache(string typeName, List<PropertyMappingInfo> mappingInfoList)
        {
            cache[typeName] = mappingInfoList;
        }

        public static void ClearCache()
        {
            cache.Clear();
        }
    }
}
```

Below is the code that retrieves the PropertyMappingInfo collection for the type passed in to the method. First, we check to see if we have a cached version of the PropertyMappingInfo object collection, using the type's name as the key. If one cannot be found, we call the method described above to create the PropertyMappingInfo collection and then add it to the cache. Finally we return a PropertyMappingInfo collection.

```csharp
private static List<PropertyMappingInfo> GetProperties(Type objType)
{
    List<PropertyMappingInfo> info = MappingInfoCache.GetCache(objType.Name);
    if (info == null)
    {
        info = LoadPropertyMappingInfo(objType);
        MappingInfoCache.SetCache(objType.Name, info);
    }
    return info;
}
```

There is one more method that we need to look at quickly: the GetOrdinals method. This method is used to get the ordinal position of the field in the datareader using the field's name. Having an array of indexes for the fields that corresponds with the PropertyMappingInfo collection avoids having to search through the datareader's fields, which is what we would need to do if we used the GetValue("fieldName") method from the datareader instead of the GetValue(index) method.
```csharp
private static int[] GetOrdinals(List<PropertyMappingInfo> propMapList, IDataReader dr)
{
    int[] ordinals = new int[propMapList.Count];

    if (dr != null)
    {
        for (int i = 0; i <= propMapList.Count - 1; i++)
        {
            ordinals[i] = -1;
            try
            {
                ordinals[i] = dr.GetOrdinal(propMapList[i].DataFieldName);
            }
            catch (IndexOutOfRangeException)
            {
                // FieldName does not exist in the datareader.
            }
        }
    }

    return ordinals;
}
```

Now that we know how to create a custom attribute and reflect on that attribute, we are ready to move on to the good stuff. Generics enable developers to define a class that takes a type as an argument, and depending on the type of argument, the generic definition will return a different concrete class. Generics are similar to templates in C++. However, generics have some benefits that templates do not, mainly, constraints.

In the CBO static class located in CBO.cs we have the FillObject generic method.

```csharp
public static T FillObject<T>(Type objType, IDataReader dr) where T : class, new()
```

This method takes a Type object (the type of your business object) and a DataReader (the datareader that contains the values for your business object), and returns T. Huh? T's value depends on how you call the method. The T is simply a placeholder for the real type that will be specified when the method is called. So to call this method to populate a custom object of type MyData you would write the following.

```csharp
MyData data = CBO.FillObject<MyData>(typeof(MyData), dr);
```

The value between the < and > is the type that T will become. So, everywhere we specify an object of type T in the method, it will now be an object of type MyData. The return value would be an instance of the MyData type populated with the data from the datareader. By making this a generic method we do away with the need to cast the object to type MyData from type System.Object. In the DNN implementation, the method would return an object.
This was the only way to return any type of object from the method prior to generics, since all classes in .Net inherit from System.Object. Now, by using a generic method, instead of getting back an object that needs to be cast to MyData, we get back an object of type MyData, and no longer need to cast it.

You may be wondering what the `where T : class, new()` stuff is all about at the end of the FillObject method. These are generic constraints, and I'll get to those shortly. Let's look at the workhorse of the CBO class, the CreateObject method. This method is called by FillObject and FillCollection, and it is responsible for actually assigning the values to the corresponding properties in the business object. It follows the DNN implementation almost exactly, except that it has been translated to C#, uses a generic return type instead of Object, and works with the PropertyMappingInfo class instead of directly with the object's type.

```csharp
private static T CreateObject<T>(IDataReader dr, List<PropertyMappingInfo> propInfoList, int[] ordinals)
    where T : class, new()
{
    T obj = new T();

    // iterate through the PropertyMappingInfo objects for this type.
    for (int i = 0; i <= propInfoList.Count - 1; i++)
    {
        if (propInfoList[i].PropertyInfo.CanWrite)
        {
            Type type = propInfoList[i].PropertyInfo.PropertyType;
            object value = propInfoList[i].DefaultValue;

            if (ordinals[i] != -1 && dr.IsDBNull(ordinals[i]) == false)
                value = dr.GetValue(ordinals[i]);

            try
            {
                // try implicit conversion first
                propInfoList[i].PropertyInfo.SetValue(obj, value, null);
            }
            catch
            {
                // data types do not match
                try
                {
                    // need to handle enumeration types differently than other base types.
                    if (type.BaseType.Equals(typeof(System.Enum)))
                    {
                        propInfoList[i].PropertyInfo.SetValue(
                            obj, System.Enum.ToObject(type, value), null);
                    }
                    else
                    {
                        // try explicit conversion
                        propInfoList[i].PropertyInfo.SetValue(
                            obj, Convert.ChangeType(value, type), null);
                    }
                }
                catch
                {
                    // error assigning the datareader value to a property
                }
            }
        }
    }

    return obj;
}
```

The method really isn't that complicated. All we are doing is looping through all the PropertyMappingInfo objects we created using the attributes we added to the business object's properties. For each one of these objects we check that the property can be written to. If it can be written to, we check to see if there is a matching field in the ordinals array and whether there is a value in the datareader. If there is, we set value to the value from the datareader; otherwise we leave the value set to the default. Then we first try to set the property using an implicit conversion. If the value cannot be implicitly converted to the property's type, then we try explicitly converting it. If the property is an enumeration, then we need to use the Enum.ToObject method to convert the value. Otherwise, we use the ChangeType static method on the Convert object. If that fails, then we give up and move on to the next property.

You may notice that the first line in this method creates an instance of an object of type T using the normal instantiation method instead of reflection. How can we know that the type passed in can be instantiated and has a public constructor with no parameters? Well, this is where constraints come in. You'll notice after the method's parameter list we have `where T : class, new()`. This is a generic constraint that says the type passed in that T represents has to be a reference type and must declare a public, parameterless constructor. C# supports five different constraints:

- a base-class constraint (`where T : SomeBaseClass`)
- an interface constraint (`where T : ISomeInterface`)
- the reference-type constraint (`where T : class`)
- the value-type constraint (`where T : struct`)
- the constructor constraint (`where T : new()`)

To add generic constraints you use the syntax `where T : [constraint]`.
You can enforce more than one constraint on the same or on different generic parameters using the same syntax: `where T : [constraint 1], [constraint 2] where V : [constraint 1], [constraint 2]`. The FillCollection method signature shown below demonstrates multiple constraints being applied. By specifying the class and new() constraints, we know that the type argument is a reference type and it exposes a default constructor. So, we can safely use new on this object to create an instance of the specified type. This is what allows us to return a specific type of object instead of just object, and it assures us that we won't try to create an instance of a value type or a type that does not expose a default (parameterless) constructor.

Using constraints gives us the ability to write generic methods and classes that can use specific behavior depending on the constraint. For instance, if we wrote a generic method that returns the maximum of several values, we would need to be sure that the type specified for T implements the IComparable interface, assuring us that we can safely compare the values.

Let's look at the FillCollection method for an example of multiple generic types and constraints.

```csharp
public static C FillCollection<T, C>(Type objType, IDataReader dr)
    where T : class, new()
    where C : ICollection<T>, new()
{
    C coll = new C();

    try
    {
        List<PropertyMappingInfo> mapInfo = GetProperties(objType);
        int[] ordinals = GetOrdinals(mapInfo, dr);

        while (dr.Read())
        {
            T obj = CreateObject<T>(dr, mapInfo, ordinals);
            coll.Add(obj);
        }
    }
    finally
    {
        if (dr.IsClosed == false)
            dr.Close();
    }

    return coll;
}
```

Here we are declaring a generic method that takes two generic types. Both of these generic types have constraints applied to them. If you look to the right of the method name you see `<T, C>`. This means that we expect this method to be called with two types specified, in this case, the type of your business object (T) and the type of the collection that will hold the business objects (C).
Also notice that this method returns an object of type C (our business object collection type). We use the same constraints for T here that we used in the FillObject and CreateObject methods. The constraints on the collection type (C) are that it must implement the `ICollection<T>` interface, which is the base interface for classes in the System.Collections.Generic namespace. This means that any business object collection class that inherits from one of the generic collection classes, or implements this interface from scratch, can be used as the collection object. We also need to be sure that it exposes a public default constructor so that we can create an instance of the type in our method. A reference to this object will be returned from the method.

Let's walk through what's happening here. First, we create an instance of the collection type specified. We know we can call new on this because of our new() constraint. After we have an instance of the collection class, we get the PropertyMappingInfo collection and our ordinal array. Then all we need to do is loop through the rows in the datareader and call CreateObject (which will instantiate and populate the object). Once we have a reference to our populated business object, we add it to the collection. We know that we can call Add on this collection because the interface constraint (`ICollection<T>`) was specified, so the object must implement this interface, and that includes the Add method. Finally, we close the datareader and return the collection.

Here is some code that shows how to call the FillCollection method:

```csharp
IDataReader dr = cmd.ExecuteReader();
MyCustomList dataList = CBO.FillCollection<MyData, MyCustomList>(typeof(MyData), dr);
```

I've been a member of the CodeProject for several years now; many of the articles on this site have been a great help to me at work and at play. So, now I hope I can add to that and make a contribution of my own.
I really wanted to write this article not only to contribute to the many useful pieces of software on the CodeProject, but also to try and explain a bit about Generics, Generic Constraints, Reflection and Custom Attributes, as well as give an example of how these technologies can be used together to create what I hope is a useful helper class that can be reused in many different projects. I hope that I have managed to help those who have helped me so much.

Using the code is fairly straightforward. Add a reference to the BusinessObjectHelper.dll assembly to your project and add the DataMapping attribute to each property of your business object that you want to assign from the datareader. Below is an example of a business object class marked with DataMapping attributes.

```csharp
public class MyData
{
    private string _firstName;

    [DataMapping("Unknown")]
    public string FirstName
    {
        get { return _firstName; }
        set { _firstName = value; }
    }

    private MyEnum _enumType;

    [DataMapping("TheEnum", MyEnum.NotSet)]
    public MyEnum EnumType
    {
        get { return _enumType; }
        set { _enumType = value; }
    }

    private Guid _myGuid;

    [DataMapping("MyGuid", null)]
    public Guid MyGuid
    {
        get { return _myGuid; }
        set { _myGuid = value; }
    }

    private double _cost;

    [DataMapping("MyDecimal", 0.0)]
    public double Cost
    {
        get { return _cost; }
        set { _cost = value; }
    }

    private bool _isOK;

    [DataMapping("MyBool", false)]
    public bool IsOK
    {
        get { return _isOK; }
        set { _isOK = value; }
    }
}
```

The first argument passed in to the DataMapping attribute is the name of the field in the database that corresponds to the property. The second argument is the default value for the property if the value is null in the database. If you do not specify the field name, then the property name will be used instead. So, if you have fields in the database that do have the same name as the property, then you do not need to include the field name in the attribute. You must specify a default value, however.
After you have set up your business object, to populate it all you need to do is call the FillObject or FillCollection static methods on the CBO class, and pass in the type of your business object and the datareader. You also need to specify the type for the generic method. In this case I would call FillObject as follows:

```csharp
IDataReader dr = cmd.ExecuteReader();
MyData data = CBO.FillObject<MyData>(typeof(MyData), dr);
```

If you need to populate a collection of objects from a datareader, call the FillCollection static method on the CBO class. I have a custom collection type called MyCustomList and a business object of type MyData. To fill MyCustomList with MyData objects and get a reference to the populated collection, I would call FillCollection as follows:

```csharp
MyCustomList dataList = CBO.FillCollection<MyData, MyCustomList>(typeof(MyData), dr);
```

If you are interested in learning more about C#, including generics, custom attributes, and reflection, I would recommend the following books:

Update: modified the CreateObject method because it was not working well with default values for DateTime fields or Enums.
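The overall pattern built here (per-property mapping metadata plus a generic fill routine) is language-agnostic. As a closing illustration, here is a rough Python sketch of the same idea; the names are hypothetical and plain dicts stand in for a DataReader row:

```python
# property name -> (database field name, default used when the value is NULL)
FIELD_MAP = {
    'first_name': ('FirstName', 'Unknown'),
    'cost': ('MyDecimal', 0.0),
}

class MyData:
    pass

def fill_object(row, field_map, cls):
    """Populate an instance of cls from a row dict, applying per-field defaults."""
    obj = cls()
    for prop, (field, default) in field_map.items():
        value = row.get(field)
        # Fall back to the declared default when the field is missing or NULL.
        setattr(obj, prop, default if value is None else value)
    return obj

record = fill_object({'FirstName': 'Ada', 'MyDecimal': None}, FIELD_MAP, MyData)
```

The C# version does the same thing with attributes supplying the field map and reflection supplying setattr, plus type conversion and caching on top.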
I need some help in trying to find out how to cast a method that is within an object which is inside a LinkedList. Let me try to explain: I already know that I have to use a wrapper to convert an object back into another state. The thing I am confused about is how to use a method from the object.

What I am doing is storing objects called Item into a LinkedList. I have a method from Item that gets a stored variable. Here's the code for my Item class:

```java
public class Item implements Comparable {
    private int myId;
    private int myInv;

    public Item(int id, int inv) {
        myId = id;
        myInv = inv;
    }

    public int getId() {
        return myId;
    }

    public int getInv() {
        return myInv;
    }

    public int compareTo(Object otherObject) {
        Item right = (Item) otherObject;
        return myId - right.myId;
    }

    public boolean equals(Object otherObject) {
        return compareTo(otherObject) == 0;
    }

    public String toString() {
        String string = (getId() + " " + getInv());
        return string;
    }
}
```

From my experience so far, arrays with stored objects only need a method call after referencing an element:

```java
int id = myArray[index].getId();
```

What I tried was:

```java
int id = myList.get(index).getId();
```

I knew that this approach probably wouldn't work anyway. So right now I need someone to help me understand how to call a method from an object within a LinkedList.

Thanks in advance,
Lord Felix
Python Programming/Modules and how to use them
From Wikibooks, the open-content textbooks collection

Modules are libraries that can be called from other scripts. For example, a popular module is the time module. You can call it using `import time`. Then, open up time.py. You will see a bunch of various functions in the file. An example is below:

```python
import time

def main():
    # define the variable 'current_time' as a tuple of time.localtime()
    current_time = time.localtime()
    print current_time  # print the tuple

    # if the year is 2008 (first value in the current_time tuple)
    if current_time[0] == 2008:
        print 'The year is 2008'  # print the year

if __name__ == '__main__':  # if this module is the main program...
    main()                  # ...call main()
```

Modules can be called in a various number of ways. For example, we could import the time module as t:

```python
import time as t  # import the time module and call it 't'

def main():
    current_time = t.localtime()
    print current_time
    if current_time[0] == 2008:
        print 'The year is 2008'

if __name__ == '__main__':
    main()
```
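The examples above use Python 2 print statements. The same aliasing idea, rewritten in modern Python 3 style (where print is a function and it is usually clearer to return values), looks like this:

```python
import time as t  # alias the module to a shorter name

def current_year():
    # time.localtime() returns a struct_time; index 0 is the year.
    return t.localtime()[0]

year = current_year()
print(year)
```

The alias only renames the binding in your namespace; the module itself is unchanged, and `import time` elsewhere still refers to the same loaded module object.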
Working with images is boring. If you were to include an image on a website, you have to ensure:

- That the images are properly resized for each screen
- That we serve the right image based on the device pixel density
- That we serve modern image formats when possible
- That the images are compressed

But also...

- That we don't load the whole page at once, consuming bandwidth, when the visitor might not even scroll to see most of the page
- That we don't cause layout shifts as the images abruptly load

Thankfully, Gatsby offers some nice utilities, bundling all the steps required into a single tool-chain, letting us focus on other things.

To use GraphQL or not?

Up to recently, if you didn't want to go the GraphQL route, you were all out of luck. Thankfully, now it's possible with the help of the StaticImage component. But let's take a step back for a moment.

Gatsby is a fantastic framework if you want to consume data from various remote sources. Fetch the data (instagram, wordpress, etc), push them into the GraphQL layer, and populate your pages based on that data. To help with the images that these sources include, the Gatsby team & collaborators added a set of utilities to optimize them. But for everything outside the GraphQL layer, there wasn't anything. That resulted in unnecessary boilerplate. If you had a simple local image but wanted a cool blur effect, you had to go through GraphQL.

That said, with the latest gatsby-plugin-image we have the solution, and we can safely go one way or the other depending on the use case.

- If the image comes from a remote source, you're already using GraphQL
- If the image is part of a collection, like the cover of a blog post, you will have an easier time with GraphQL
- If the image is the 404 illustration, the image of a section, or something ephemeral, inlining is the best way to go.
Inlining images

If your images don't go through the Gatsby GraphQL layer, you can use StaticImage:

    import { StaticImage } from 'gatsby-plugin-image';

    // my actual 404
    return (
      <StaticImage
        src="../images/404.jpg" // path illustrative; StaticImage needs a literal src
        alt="Man looking on the map"
        className="border-b-4 border-gray-200"
        layout="constrained"
        width={400}
      />
    );

The same configuration object that we declare in the GraphQL schema can be passed as props. Rejoice.

GraphQL Configuration

Dependencies

First of all, we need to ensure we have all the dependencies in place. We need 4 plugins before we start:

- gatsby-source-filesystem, to make the images known to the GraphQL data layer. You probably already have this installed.
- gatsby-plugin-image, which exposes the StaticImage & GatsbyImage components
- gatsby-plugin-sharp, to bridge the gap between Sharp and the rest of the plugins
- gatsby-transformer-sharp, to manipulate the images using GraphQL queries

    // src/gatsby-config.js
    plugins: [
      {
        resolve: `gatsby-source-filesystem`,
        options: {
          path: `${__dirname}/src/images`,
          name: 'images',
        },
      },
      `gatsby-plugin-sharp`,
      `gatsby-transformer-sharp`,
      `gatsby-plugin-image`,
    ],

Types of images

Now, this is where it gets interesting - we can have three types of responsive images.

- Images with a fixed width, for when we know exactly how big the images should be (FIXED)
- Images that stretch across their fluid parent container, completely dependent on their parent, which can take many shapes and forms between screen sizes (FULL_WIDTH)
- Images that stretch across their container but are limited to a maximum width (CONSTRAINED)

Ok, this might be confusing. The difference between FULL_WIDTH & CONSTRAINED can be seen in the following table. The FULL_WIDTH image will expand to fill its container, even if it looks blurred. So assuming we want a CONSTRAINED image and we don't want image copies bigger than 200px, here's our query.
    query {
      image: file(relativePath: { eq: "image.jpg" }) {
        childImageSharp {
          gatsbyImageData(
            quality: 90
            width: 200
            layout: CONSTRAINED
          )
        }
      }
    }

This setting will make sure to include copies for both jpg and webp, even if we don't specifically request the latter.

Placeholders

Everything is in order, but we probably want a smooth fallback. Here are our options:

- BLURRED: (default) a blurred, low-resolution image, encoded as a base64 data URI
- TRACED_SVG: a low-resolution traced SVG of the image
- DOMINANT_COLOR: a solid color, calculated from the dominant color of the image
- NONE: no placeholder. Looks better with the background prop set.

Here are the first three options side by side. I tend to prefer the first two.

Transforms

Gatsby allows us to do some transforms too:

- grayscale
- duotone
- rotate
- trim
- cropFocus
- fit

Frankly, Gatsby is a bit notorious for its build times, so I would advise doing some pre-processing to avoid transforming the same images again and again. If that's not possible, this option is for you.

Consuming the images

Contrary to the StaticImage example, GatsbyImage accepts an image prop.

    import { GatsbyImage } from 'gatsby-plugin-image';

    return (
      <GatsbyImage
        alt={album.title}
        image={album.cover.childImageSharp.gatsbyImageData}
      />
    );

Now writing album.cover.childImageSharp.gatsbyImageData is a bit tedious, so we can import getImage from the very same package, and refactor it as follows:

    import { GatsbyImage, getImage } from 'gatsby-plugin-image';

    return (
      <GatsbyImage
        alt={album.title}
        image={getImage(album.cover)} // much better
      />
    );

Referencing images

You probably want to fetch the images dynamically, as part of another entity. The best way to do that is to link the images with the rest of the metadata. Here's an example from my blog.
    {
      "artist": "Joy Division",
      "title": "Unknown Pleasures",
      "releasedDate": 1979,
      "rating": 5,
      "cover": "./images/unknown-pleasures.jpg",
      "spotify": ""
    },
    {
      "artist": "King Crimson",
      "title": "In the Court of the Crimson King",
      "releasedDate": 1969,
      "rating": 5,
      "cover": "./images/in-the-court.jpg",
      "spotify": ""
    },
    {
      "artist": "Radiohead",
      "title": "OK Computer",
      "releasedDate": 1997,
      "rating": 5,
      "cover": "./images/ok-computer.jpg",
      "spotify": ""
    },

And the call with the rest of the metadata:

    export const query = graphql`
      {
        albums: allAlbumsJson(
          sort: {
            fields: [rating, badge, title, artist, releasedDate]
            order: DESC
          }
        ) {
          edges {
            node {
              id
              releasedDate
              title
              artist
              rating
              badge
              spotify
              cover {
                childImageSharp {
                  gatsbyImageData(
                    height: 200
                    width: 200
                    quality: 100
                    layout: CONSTRAINED
                    placeholder: DOMINANT_COLOR
                  )
                }
              }
            }
          }
        }
      }
    `;

Images in markdown files

Unfortunately, gatsby-plugin-image doesn't help with markdown files. We can optimize any images we query along with our data, but the images which are referenced inside the markdown file won't be touched. In order to do this, we have to include a separate plugin, gatsby-remark-images. Now, to keep things tidy, I like to keep my blog images near the markdown file, and copy them over with gatsby-remark-copy-linked-files. Here's how it looks, and here's how it's written (screenshots omitted). Having installed the plugins, we add some basic options and we're ready to go.

    {
      resolve: `gatsby-transformer-remark`,
      options: {
        plugins: [
          // .. rest of the plugins
          'gatsby-remark-copy-linked-files',
          {
            resolve: `gatsby-remark-images`,
            options: {
              maxWidth: 900,
              quality: 90,
              withWebp: true,
            },
          },
        ],
      },
    },

Not as polished as gatsby-plugin-image, but it does the trick.

Fin

At the time of writing gatsby-plugin-image is still in beta. That said, I'm using it because life is too short to not live on the edge. If you want to follow the official documentation, you can find it here 👋
https://dnlytras.com/blog/gatsby-images/
There are several potentially helpful answers here, but I think there are two important points that haven't been made:

1. No, it is not possible to programmatically determine that Windows and all startup programs have finished booting. This is essentially the Halting Problem, and no program out there will be able to answer the question "For this arbitrary program, at what point should we say it has been loaded?".

2. What is the actual problem you are trying to solve? All of the answers here attempt to find a solution to your question, but the question itself feels like it might be missing some important information. We want to solve your problem, not just answer the question.

Reading your question again and going just by what you've said, my response would be one of:

Or:

This might not directly answer your current question, but hopefully it is helpful.

Why not use Windows Task Scheduler and Event ID 100 to play a custom sound when Windows is really finished?

Under Triggers select "On an event", and under Actions select "Start a program" with:

    "%ProgramFiles(x86)%\Windows Media Player\wmplayer.exe"
    Add arguments: "%windir%\Media\Windows Logon Sound.wav"

Event ID: 100
Description: Windows has started up
Source of Event ID 100: Windows Diagnostics-Performance

I have a free program installed that I have used for a long time, Soluto. I am just a user, not connected. It works for me. It does a countdown and allows you to select just what you want to load on boot. It also allows you to delay starts.

Windows will treat boot as finished if it was 80% idle (excluding low-priority CPU and disk activity) for 10 s after reaching the desktop UI. To see the exact boot time, use xbootmgr to trace why Windows boots slowly.

This may not be very effective, but it's cheap.
I usually look at the hard disk activity LED (you can identify it on the case by something like a database icon) until the LED stabilizes and flashes infrequently; then I know I can use the PC without much lag. The hard disk is usually the PC's bottleneck, and if it is not being used heavily, then you have room for work. Hope that helps.

You can add a sound to startup. You can delay the startup processes and put a sound effect to be executed first. The tool Startup Delayer does this.

Run this Python script on startup. It will play the startup sound once CPU usage has been below 20 percent for 5 consecutive seconds:

    import subprocess
    import time

    # set these to whatever works for you
    # sound will play when cpu load has been < IDLE_PERCENT for IDLE_TIME consecutive seconds
    IDLE_TIME = 5
    IDLE_PERCENT = 20

    # you can execute any program you want by changing the alert function below
    def get_load():
        output = subprocess.check_output('wmic cpu get loadpercentage', shell=True)
        load = output.split()[1]
        return int(load)

    def alert():
        subprocess.call([
            r"c:\Program Files (x86)\Windows Media Player\wmplayer.exe",
            r"c:\Windows\Media\Windows Logon Sound.wav"])

    idleSeconds = 0
    while idleSeconds < IDLE_TIME:
        load = get_load()
        if load < IDLE_PERCENT:
            idleSeconds += 1
        else:
            idleSeconds = 0
        time.sleep(1)

    alert()

I've never used it myself, but Windows Task Scheduler will allow you to create a task that is triggered to run "At Startup" or "At Logon". You may be able to schedule a task to run a PowerShell script that accesses the Console.Beep() method of the .NET Framework. You may even be able to turn it into a decent tune. From Hey Scripting Guy!
Blog:

    [console]::beep(500,300)

Change the value of the first number to alter the pitch (anything lower than 190 or higher than 8500 can't be heard), and change the value of the second number to alter the duration.

Edit: As it turns out, the Console.Beep() method may not work on 64-bit systems according to the MSDN .NET documentation. Tested beeping in a PowerShell script and it works fine.

You may also wish to check into your startup processes. You can do this by running msconfig and clicking on Services. You can also use the Sysinternals tool Autoruns.

By Windows documentation, last in the load order is the registry key currentuser/runOnce (after the start-up folder)... but whatever the load order is, some programs can take a long start-up time. So maybe watching CPU activity in a batch file is a good solution:

    @echo off
    setlocal ENABLEDELAYEDEXPANSION
    set cpuLimit=15
    set /a lowCount=0
    for /l %%i in (1,1,30) do (
        for /F %%c in ('wmic cpu get loadpercentage ^| findstr "[0-9]"') do (
            echo %%i cpu-Load: %%c ...lowCnt: !lowCount!
            if %%c gtr %cpuLimit% (
                set lowCount=0
            ) else (
                set /a lowCount+=1
                if !lowCount! gtr 10 (
                    ECHO BEEEEEEP ... mplay32 beep.wav... or something
                    exit
                )
            )
        )
        ping -n1 127.0.0.1 >NUL
    )

...and beeping after spotting that in the last 10 CPU-load checks the CPU load was below 15%. This batch file can be started by a link in the start-up folder, or better, from the registry key HKLM\Software\Microsoft\Windows\CurrentVersion\Run.

You wrote:

«My machine (Windows 7 64-bit) takes about 3-4 mins to completely boot...»

A "normal" boot takes around 30 seconds including the access to the user account. There's something wrong and this must be fixed. In order to fix it, you have to get the data.
I suggest two utilities to troubleshoot this problem:

1) Autoruns. With this one you're able to check every process, program, and driver loaded during the boot process. Take care: this is a very powerful tool, and DO NOT delete items unless you know what you're doing. But you may uncheck (disable) some startup logon programs...

2) System Explorer. With this tool you're able to see all processes started and their history up until System Explorer itself starts. It's also possible to scan all these processes for security purposes (compared against a process database).

Check this first, then give us some feedback. You may also check the services at startup with services.msc, and the status of devices with devmgmt.msc (yellow triangle for defective devices, hard disk in PIO mode instead of DMA, and so on...) in the Windows administrative tools.

Hope this helps. Let us know.

I had been searching for something like this, and I found the answer. I will probably post the actual code later, but here it goes: just save the contents of the last part as a .vbs file and run it. You can poll each system event and extract the message being displayed — simple. What one has to do to find the boot sequence completion is to have this script run only until the event ID is 100, which corresponds to the boot completion process in the system events.

So, when you have this script in startup, it will poll all events, wait for the event ID to be 100, and alert you. The same can be done in any scripting tool like AutoIt. Just wanted all to know it, because when I searched for this, all I found was the Windows Task Scheduler idea, or Soluto at best. Actually, Soluto stopped working on my x64 PC a while ago for some strange reason, so I wanted to do a script myself, which led me to this final solution...
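Several answers above (the Python script and the batch file) share the same core idea: declare the machine "booted" once the CPU load has stayed low for N consecutive samples. That counter logic can be pulled out and checked on its own with simulated load readings; the function name here is illustrative, not from any of the answers:

```python
IDLE_TIME = 5      # consecutive low-load samples required
IDLE_PERCENT = 20  # a sample below this counts as "idle"

def samples_until_idle(loads, idle_time=IDLE_TIME, idle_percent=IDLE_PERCENT):
    """Return how many samples were consumed before the machine was
    considered idle, or None if it never settled."""
    idle_seconds = 0
    for n, load in enumerate(loads, start=1):
        if load < idle_percent:
            idle_seconds += 1
        else:
            idle_seconds = 0  # any busy sample resets the counter
        if idle_seconds >= idle_time:
            return n
    return None

# busy boot, then quiet: idle is declared on the 5th consecutive low sample
print(samples_until_idle([90, 80, 40, 10, 10, 10, 10, 10]))  # → 8
# a busy blip in the middle resets the count
print(samples_until_idle([10, 10, 10, 50, 10, 10, 10, 10, 10]))  # → 9
```

The reset-on-busy step is what keeps a momentary lull during a heavy boot from triggering the alert too early.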
http://superuser.com/questions/729738/how-to-tell-when-windows-and-all-startup-programs-have-completed-booting/729746
The previous post looked at random Fibonacci sequences. These are defined by f₁ = f₂ = 1, and

    fₙ = fₙ₋₁ ± fₙ₋₂

for n > 2, where the sign is chosen randomly to be +1 or -1.

Conjecture: Every integer can appear in a random Fibonacci sequence.

Here's why I believe this might be true. The values in a random Fibonacci sequence of length n are bounded between -Fₙ₋₃ and Fₙ.[1] This range grows like O(φⁿ) where φ is the golden ratio. But the number of ways to pick + and - signs in a random Fibonacci sequence equals 2ⁿ. By the pigeonhole principle, some choices of signs must lead to the same numbers: if you put 2ⁿ balls in φⁿ boxes, some boxes get more than one ball since φ < 2. That's not quite rigorous, since the range is O(φⁿ) rather than exactly φⁿ, but that's the idea.

The graph included in the previous post shows multiple examples where different random Fibonacci sequences overlap. Now the pigeonhole principle doesn't show that the conjecture is true, but it suggests that there could be enough different sequences that it might be true. The fact that the ratio of balls to boxes grows exponentially doesn't hurt either.

Empirically, it appears that as you look at longer and longer random Fibonacci sequences, gaps in the range are filled in. The following graphs consider all random Fibonacci sequences of length n, plotting the smallest positive integer and the largest negative integer not in the range. For the negative integers, we take the absolute value. Both plots are on a log scale.

First positive number missing:

Absolute value of first negative number missing:

The span between the largest and smallest possible random Fibonacci sequence value is growing exponentially with n, and the range of consecutive numbers in the range is apparently also growing exponentially with n. The following Python code was used to explore the gaps.
    import numpy as np
    from itertools import product

    def random_fib_range(N):
        r = set()
        x = np.ones(N, dtype=int)
        for signs in product((-1, 1), repeat=(N - 2)):
            for i in range(2, N):
                b = signs[i - 2]
                x[i] = x[i - 1] + b * x[i - 2]
                r.add(x[i])
        return sorted(list(r))

    def stats(r):
        zero_location = r.index(0)
        # r is sorted, so these are the min and max values
        neg_gap = r[0]   # minimum
        pos_gap = r[-1]  # maximum
        for i in range(zero_location - 1, -1, -1):
            if r[i] != r[i + 1] - 1:
                neg_gap = r[i + 1] - 1
                break
        for i in range(zero_location + 1, len(r)):
            if r[i] != r[i - 1] + 1:
                pos_gap = r[i - 1] + 1
                break
        return (neg_gap, pos_gap)

    for N in range(5, 25):
        r = random_fib_range(N)
        print(N, stats(r))

Proof

Update: Nathan Hannon gives a simple proof of the conjecture by induction in the comments. You can create the series (1, 2) and (1, 3). Now assume you can create (1, n). Then you can create (1, n+2) via (1, n, n+1, 1, n+2). So you can create any positive even number starting from (1, 2) and any odd positive number from (1, 3). You can do something analogous for negative numbers via (1, n, n-1, -1, n-2, n-3, -1, 2-n, 3-n, 1, n-2).

This proof can be used to create an upper bound on the time required to hit a given integer, and a lower bound on the probability of hitting a given integer during a random Fibonacci sequence. Nathan's construction requires more steps to produce new negative numbers, but that is consistent with the range of random Fibonacci sequences being wider on the positive side, [-Fₙ₋₃, Fₙ].

***

[1] To minimize the random Fibonacci sequence, you can choose the signs so that the values are 1, 1, 0, -1, -1, -2, -3, -5, ... Note that the absolute value of this sequence is the ordinary Fibonacci sequence with 3 extra terms spliced in. That's why the lower bound is -Fₙ₋₃.

3 thoughts on "Is every number a random Fibonacci number?"

If you can form any sequence including (1, n), then you can form (1, n, n+1, -1, n+2, n+1, 1, n+2) or (1, n, n+1, -1, n, n-1, 1, n-2).
Since you can obviously form (1, 1) and (1, 2), you can get any integer.

A stronger conjecture might be: "The shortest time for an integer n to appear in a random Fibonacci sequence is O(log n)."

@Nathan: You cannot form (1, n, n+1, -1) using this rule. -1 is n - (n+1), but this rule only allows that position to equal (n+1) ± n.

Sorry, I misread the rule. You can still do the same thing with

    (1, n, n+1, 1, n+2)
    (1, n, n-1, -1, n-2, n-3, -1, 2-n, 3-n, 1, n-2)
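The corrected constructions in the comment thread are easy to verify mechanically: each term after the first two must be the sum or difference of its two predecessors. A quick sketch (not from the original post; the checker name is made up):

```python
def is_random_fib(seq):
    """True if every term after the first two is the sum or
    difference of its two predecessors."""
    return all(
        seq[i] in (seq[i - 1] + seq[i - 2], seq[i - 1] - seq[i - 2])
        for i in range(2, len(seq))
    )

n = 7
# extending (1, n) to (1, n+2), as in the induction step
print(is_random_fib([1, n, n + 1, 1, n + 2]))  # → True
# the longer construction that reaches negative numbers
print(is_random_fib([1, n, n - 1, -1, n - 2, n - 3, -1,
                     2 - n, 3 - n, 1, n - 2]))  # → True
```

Running the same check for every n up to a few thousand gives the same result, which is exactly what the induction argument predicts.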
https://www.johndcook.com/blog/2020/10/31/random-fibonacci-conjecture/
Asked by:

How to disable Windows 7 speech recognition COMMANDS

I'm developing an application that utilizes MSSR. The program has its own unique grammar system, which works just fine. My problem is this... occasionally while using my application, MSSR misinterprets something said and identifies it as one of the many preprogrammed commands associated with using the Windows 7 OS — things like 'start', 'file', 'open', etc. This is not a desired outcome when a user is just trying to use my application, and some other window flies open! LOL

How in the world do I disable the preprogrammed speech recognition commands in Windows 7? There is another thread associated with this topic with no resolution here:

As a side story to all of this... I was on the phone for HOURS with MS tech support, moving through various ranks... and they basically said, "we cannot help you". However, they did give me the general phone line to MS, which connects you to some very nice switchboard ladies. However, the switchboard is only organized by individual, not department, so you cannot get hold of anyone in general. Finally, by virtue of the 'interweb', I found a website for the MS research group, which has a special group for speech recognition. I finally got hold of someone in the SR research group, and this is what they told me, word for word... "you're going to have to figure this out on your own". Seriously. Then, when I asked if he had a number I could call to talk with someone about this problem, dude says... "NO"... then hangs up! Wow huh?

Thanks, Mike

Question

All replies

Hi all,

My team and I stumbled upon this article yesterday, along with another 1000x sites that can't solve the issue. So we found the solution ourselves. You need to use an InProc version of speech recognition, which gives you a private recognizer.

Bad:

    hr = pRecognizerEngine.CoCreateInstance( CLSID_SpSharedRecognizer ); // Shared, gives Windows the chance to recognise commands.
Good:

    hr = pRecognizerEngine.CoCreateInstance( CLSID_SpInprocRecognizer ); // Private, does NOT allow Windows to process commands.

You'll have to do a lot more calls to set up the audio input, and manually activate the recognizer, but it will allow you to use your grammar without having to put up with Windows trying to 'help' you. Two important lines you'll need to use before anything will happen:

    hr = pNavGrammar->SetGrammarState(SPGS_ENABLED);
    hr = pNavGrammar->SetDictationState(SPRS_ACTIVE);

Good luck!

Heya Mike,

Not sure if you'll still be interested, but there is a solution to your question. If you've moved on, it'll still be relevant to the rest of us, so I'll post away...

There are two ways to instance a speech recogniser using SAPI. The most common, and the simplest, is through simply using the shared recogniser:

    hr = pRecognizerEngine.CoCreateInstance( CLSID_SpSharedRecognizer );

The more complicated way is to create your own recogniser:

    hr = pRecognizerEngine.CoCreateInstance( CLSID_SpInprocRecognizer );

The shared recogniser requires that you enable Windows Speech Recognition, which turns on the Windows command recognition grammar. You can't avoid it; this is disappointing and frustrating. The solution is the second option, creating your own recogniser. You do not need to turn on Windows Speech Recognition; instead you end up with your own recognition engine, completely independent of the Windows command recogniser.
This MSDN guide is for using WAV -> SR, but most of it applies:

Replace the Stream object with an ISpAudio object and you're almost there. Here's the audio object code (shortened, no HRESULT checks):

    CComPtr<ISpObjectToken> pAudioToken;
    CComPtr<ISpAudio> pAudio;

    // Try the default
    hr = SpGetDefaultTokenFromCategoryId(SPCAT_AUDIOIN, &pAudioToken);

    // Connect the device
    hr = pRecognizerEngine->SetInput(pAudioToken, TRUE);
    hr = SpCreateDefaultObjectFromCategoryId(SPCAT_AUDIOIN, &pAudio);
    hr = pRecognizerEngine->SetInput(pAudio, TRUE);

After that you can continue using the MSDN link above, and will be almost finished, except you need to manually activate the audio stream from a microphone / line-in:

    hr = pGrammar->SetGrammarState(SPGS_ENABLED);
    hr = pGrammar->SetDictationState(SPRS_ACTIVE);

Assuming you are careful in checking the errors, you'll have your own voice recognition, and NO Windows commands going off!

Have funz0r!
-Dr Black Adder

Hi. I too ran into this problem today. If you are using the .NET framework (System.Speech.Recognition), then I suspect that you are using the SpeechRecognizer class. This class apparently uses the shared engine that Dr Black Adder mentions. The solution, as I found elsewhere after thorough searching, is to use the SpeechRecognitionEngine class instead (which allegedly uses the inproc engine)! After reading that solution, I am certain it will work, as I recall using that very same class about two years ago and not having this problem. It is a little more complicated to set up and use, but well worth it to get rid of the default Windows commands.

The original solution is posted at the bottom of this page in post #3 from Jan 15, 2007:

Also here is what looks like a C# version of Dr Black Adder's solution:

Brian

Here is the C# code for my simple voice command library.
I warn that accurately filtering by confidence % is critical to success, as the SpeechRecognitionEngine will recognize the word "pause" as being "color red" and still give it 85% confidence. Just ignore all recognitions with a confidence % less than 0.90, 0.93, or 0.95 or so. Or, if you can live with Windows default commands being included alongside your own, SpeechRecognizer seemed to be considerably more accurate. I hope this code helps someone!

    public class VoiceHandler
    {
        #region SpeechEvent
        public delegate void SpeechEventHandler(object sender, string recognized);
        public event SpeechEventHandler SpeechEvent;
        private void RaiseSpeechEvent(string recognized)
        {
            if (SpeechEvent != null)
                SpeechEvent(null, recognized);
        }
        #endregion

        public float RequiredConfidence = 0.9f;
        public string[] commands = new string[] {
            "Color Default", "Screen 1", "Screen 2", "Screen 3", "Screen 4",
            "Screen 5", "Color Red", "Color Purple", "Color Blue",
            "Color Orange", "Color White" };
        public int FormNumber = 5;

        SpeechRecognitionEngine rec;

        public VoiceHandler()
        {
            rec = new SpeechRecognitionEngine();
            rec.SetInputToDefaultAudioDevice();
            Choices c = new Choices();
            for (int i = 0; i < commands.Length; i++)
                c.Add(commands[i]);
            GrammarBuilder gb = new GrammarBuilder(c);
            Grammar g = new Grammar(gb);
            rec.LoadGrammar(g);
        }

        public void StartListening()
        {
            rec.SpeechRecognized += rec_SpeechRecognized;
            rec.RecognizeAsync(RecognizeMode.Multiple);
        }

        public void StopListening()
        {
            rec.RecognizeAsyncStop();
            rec.SpeechRecognized -= rec_SpeechRecognized;
        }

        void rec_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
        {
            //System.Windows.Forms.MessageBox.Show(e.Result.Text + "\r\nConfidence: " + e.Result.Confidence);
            if (e.Result.Confidence >= RequiredConfidence)
                RaiseSpeechEvent(e.Result.Text);
        }
    }

- Proposed as answer by BorgSmilie Thursday, October 21, 2010 1:07 AM

Hi Dr Black Adder,

I have the same problem in C#, and since I'm not familiar with C++ I don't understand the code you've written.
Can you explain what exactly I should do for the audio object, or tell me where I can read more about it? I couldn't find anything related to it in C# on the web.

Thanks, S.B
https://social.msdn.microsoft.com/Forums/windowsdesktop/en-US/f2f213d6-d801-49a1-b7dd-4139a357c517/how-to-disable-windows-7-speech-recognition-commands?forum=windowsgeneraldevelopmentissues
On Thu, 22 Jun 2000, Raul Miller wrote:

> The changes between 2.005-2 and 2.005-4 are trivial, and 2.005-4 was
> intended for both potato and woody.
>
> Here's the changes:
>
> [1] in perldl.conf, get rid of -lMesaGL and -lMesaGLU

Yes, this was part of my patch to get it to compile.

> That's it.
>
> So, really, the only problem with 2.005-2 is that autobuilders won't
> deal with it. And, I suppose, if someone can manually build a copy of
> pdl for m68k, I'll be content releasing the slightly more warty 2.005-2
> for potato. [Personally, I think -4 is more suitable for potato, but
> I very much understand paranoia about accepting changes at this point.]

Actually, I'm surprised if that was it. I also had to change some #includes, replacing #include <float.h> and <nan.h> with #include <math.h> instead (which is more proper on alpha). Neither float.h nor nan.h exist in /usr/include, fyi, under glibc 2.1.x; they only really exist on Digital UNIX/Tru64 systems. Frankly, I'm a bit stunned that it built at all without that correction (unless it was generated by a part of that perl-generated Makefile).

Here's that portion of my patch (and it only affects alpha, fyi):

    diff -ruN pdl-2.005/Basic/Math/mconf.h pdl-patched/Basic/Math/mconf.h
    --- pdl-2.005/Basic/Math/mconf.h   Sat Jun 10 19:23:18 2000
    +++ pdl-patched/Basic/Math/mconf.h Sat Jun 10 19:21:26 2000
    @@ -79,9 +79,13 @@
     #include <float.h>
     #define NANARG 1L
     #endif
    -#if defined __alpha && ! defined DEBIAN
    -#include <float.h>
    -#include <nan.h>
    +#if defined(__alpha)
    +# if !defined(__GLIBC__)
    +# include <float.h>
    +# include <nan.h>
    +# else
    +# include <math.h>
    +# endif
     #endif
     #ifndef NANARG
     #define NANARG

Aside from the removal of the Mesa libs, that's my entire patch for pdl to work on Alpha. I'd really like to see this in potato, but as it stands, it just isn't proper to include a binary package on Alpha that won't compile with the supplied source that we have in potato.
If that's the case, though, I'd rather see pdl not make it into potato for Alpha at all... C
https://lists.debian.org/debian-devel/2000/06/msg01785.html
In this section, you will learn to convert a decimal number into binary.

In this section we will discuss the conversion of decimal to binary in Java. You know that a number having base 10 is a decimal number; for example 10 is a decimal number which can be converted to binary and written as 1010.

Example: Convert 12 into a binary number. The division below shows how it is done:

    12 / 2 = 6, remainder 0
     6 / 2 = 3, remainder 0
     3 / 2 = 1, remainder 1
     1 / 2 = 0, remainder 1

Now, note down the remainders from bottom to top, i.e. 1100 (the binary equivalent of 12).

Example: In this example we pass a decimal number as a command line argument, repeatedly divide that number by 2, and note down the remainders. The remainders we get form the binary number. Here is the code for the conversion:

    class Conversion {
        public static void main(String args[]) {
            int n;
            int i = 0;
            int b[] = new int[10];
            n = Integer.parseInt(args[0]);
            while (n != 0) {
                i++;
                b[i] = n % 2;
                n = n / 2;
            }
            for (int j = i; j > 0; j--) {
                System.out.print(b[j]);
            }
        }
    }

Output from the program:

The java.lang package provides a function to convert to binary: the toBinaryString() method.

Syntax: toBinaryString()

This method returns a string representation of the integer argument as an integer in base 2.

Example: Code to convert decimal to binary using the toBinaryString() method:

    import java.lang.*;

    public class ConversionDemo {
        public static void main(String[] args) {
            int i = 12;
            System.out.println("Binary is " + Integer.toBinaryString(i));
        }
    }

Output from the program.

Posted on: July 25, 2013
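The tutorial's code is Java, but the repeated-division method is language-independent. As a cross-check outside Java, here is the same algorithm sketched in Python (not part of the original tutorial), compared against Python's built-in bin():

```python
def to_binary(n):
    """Convert a non-negative decimal integer to its binary string
    by repeated division by 2, reading remainders bottom-to-top."""
    bits = []
    while n != 0:
        bits.append(str(n % 2))  # record the remainder
        n //= 2                  # integer-divide by 2
    # remainders are collected low bit first, so reverse them
    return ''.join(reversed(bits)) or '0'

print(to_binary(12))                 # → 1100
print(to_binary(12) == bin(12)[2:])  # → True, matches the built-in
```

The reversal at the end is the "read the remainders from bottom to top" step from the worked example above.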
http://roseindia.net/java/java-conversion/conversion-DecimalToBinary.shtml
Next: Local Socket Example, Previous: Local Namespace Concepts, Up: Local Namespace [Contents][Index]

int PF_LOCAL

    This is the macro used by POSIX.1g.

int PF_UNIX

    This is a synonym for PF_LOCAL, for compatibility's sake.

int PF_FILE

    This is a synonym for PF_LOCAL, for compatibility's sake.

struct sockaddr_un

    This structure describes socket addresses in the local namespace. It has the following members:

    short int sun_family
        This identifies the address family or format of the socket address. You should store the value AF_LOCAL to designate the local namespace. See Socket Addresses.

    char sun_path[108]
        This is the file name to use.

        Incomplete: Why is 108 a magic number? RMS suggests making this a zero-length array and tweaking the following example to use alloca to allocate an appropriate amount of storage based on the length of the filename.

int SUN_LEN (struct sockaddr_un * ptr)

    Preliminary: | MT-Safe | AS-Safe | AC-Safe | See POSIX Safety Concepts.

    This macro computes the length of the socket address in the local namespace.
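As an illustration of the structure described above, Python's socket module exposes the same local-namespace family (under the name AF_UNIX), and binding shows the sun_path address turning into a real filesystem object. This sketch is not part of the manual; the path is made up:

```python
import os
import socket
import tempfile

# keep the path short: sun_path is limited (108 bytes on Linux)
path = os.path.join(tempfile.mkdtemp(), "demo.sock")

server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(path)   # binding creates the socket file at `path`
server.listen(1)

print(os.path.exists(path))  # → True: the address is a filesystem object

server.close()
os.unlink(path)     # local-namespace sockets must be removed explicitly
```

Note the explicit unlink at the end: closing the socket does not delete the file the address named.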
https://www.gnu.org/software/libc/manual/html_node/Local-Namespace-Details.html
Custom Forms and the Form Monad

October 20, 2010 Michael Snoyman

I think it's fair to say that forms are one of the most complicated parts of web development. A huge part of this complication simply comes from the tedious, boilerplate-style code we need to write. A library such as formlets can do wonders to clean this up. Yesod 0.5 follows the philosophy behind formlets. formlets is great because it lets you forget about so many of the boring details of your form: names, ids, layout. This can all be handled in an automated manner. For example (using Yesod):

    data Person = Person { name :: String, age :: Int }

    personForm = fieldsToTable $ Person
        <$> stringField "Name" Nothing
        <*> intField "Age" Nothing

    myHandler = do
        (res, form, enctype) <- runFormPost personForm
        ...
        defaultLayout [$hamlet|^form^|]

Sometimes, however, you do want to deal with some of those pesky details. Yesod already allows overriding the automatically generated ids and names, e.g.:

    <$> stringField "Name" { ffsId = Just "name-field" } Nothing

But Yesod 0.5 doesn't really have a simple way to produce arbitrarily laid out forms. For example, we might want to get fancy with our Person form and produce:

    Hi, my name is <input type="text" name="name"> and I am <input type="number" name="age"> years old.

You see, the forms library keeps all of the information on the view of the form locked up, and only returns the whole thing once you run the form. So in theory, with Yesod 0.5, we could do something like:

    personForm = Person -- notice no fieldsToTable
        <$> stringField "Name" Nothing
        <*> intField "Age" Nothing

    myHandler = do
        (res, [nameFieldInfo, ageFieldInfo], enctype) <- runFormPost personForm
        ...
        defaultLayout [$hamlet|
    Hi, my name is ^fiInput.nameFieldInfo^ and I am ^fiInput.ageFieldInfo^ years old.
    |]

But this is an absolute recipe for disaster: we've completely lost many of our type-safety benefits by forcing a pattern match on a specific size of list; if you change the order of the fields, the fields will be backwards; we need to remember to update multiple places when the Person datatype changes; and so on.

What we'd like is to have the stringField and intField functions give us the HTML directly, something like:

    personForm = do
        (name, nameHtml) <- stringField "Name" Nothing
        (age, ageHtml) <- intField "Age" Nothing
        return (Person name age, [$hamlet|
    Hi, my name is ^nameHtml^ and I am ^ageHtml^ years old.
    |])

This doesn't work. The GForm datatype doesn't have a Monad instance. It's easy enough to add one, but that would result in one of two things:

- We would have an inconsistent definition of our Monad and Applicative instances. You see, Applicatives cannot use the result from a previous action in determining the next course of action, which allows them to collect multiple error values. (I know this is vague; please forgive me, a fully fleshed out explanation would not fit here well.)
- We would need to cut out the true power of the forms library by defining a weaker Applicative instance which can't collect multiple failures. Imagine a form validation that only tells you the first validation error.

Additionally, the code above will only really return HTML if the stringField and intField functions succeed. It turns out that we need a different approach to handle this problem.

GFormMonad

Yesod 0.6 will be adding a new datatype, GFormMonad, to complement the GForm datatype. As a high-level overview: GForm automatically handles keeping track of validation failures and HTML for you; GFormMonad gives you direct access to these.
As an example:

personForm = do
    (name, nameField) <- stringField "Name" Nothing
    (age, ageField) <- intField "Age" Nothing
    return (Person <$> name <*> age, [$hamlet|
Hi, my name is ^fiInput.nameField^ and I am ^fiInput.ageField^ years old.
|])

This is almost identical to what we wrote as our ideal code above, but not quite. The name variable above has a datatype of FormResult String. In other words, when using GFormMonad, you have to deal with the possibility of validation errors directly. Fortunately, you are still free to use the Applicative instance of FormResult, as we did here. Also notice how I'm still using stringField and intField; field functions are all now polymorphic, using the IsForm typeclass.

In order to unwrap a GFormMonad, we use runFormMonadGet and runFormMonadPost, like so:

myHandler = do
    ((personResult, form), enctype) <- runFormMonadPost personForm
    case personResult of
        FormSuccess person -> ...
    defaultLayout [$hamlet|^form^|]

Making the run functions non-polymorphic helps the type system figure out what you're trying to do.

Other changes

As I had anticipated, there are not going to be many other changes in Yesod 0.6. Haskellers.com discovered a bug in MonadCatchIO which screwed up the database connection pool, so I had to yank all of the MonadCatchIO code out and replace it with something else. I've moved the Yesod.Mail code to a separate package, as well as Yesod.Helpers.Auth.

I'm going to spend a few more days going through the Haddocks to make sure names are consistent and the docs are comprehensible, and will probably make the Yesod 0.6 and Persistent 0.3 release some time early next week. It's already powering Haskellers.com and the Hackage dependency monitor, so I think it's solid enough. This is a last call for change requests! Given the API stability between 0.5 and 0.6, I feel pretty confident that the next release of Yesod after this will be 1.0.
My goal for that release is all about documentation: I want to get as much content as possible into the book, and have it polished. If you see something in the book you don't understand, please ask, and if you see a topic I've skipped, bring it up. We're really building a great community for Yesod, and I need everyone's help to make it the best it can be.
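The Applicative-versus-Monad trade-off described above (collecting every validation failure versus stopping at the first one) can be sketched outside Haskell. Here is a rough Python illustration; all function names are my own invention and this is not Yesod's API:

```python
# Rough sketch of the distinction the post describes: an applicative-style
# validation runs every field check independently and can report all
# failures, while a monadic (sequential) style must stop at the first
# failure because later steps may depend on earlier results.

def check_name(s):
    return ("ok", s) if s else ("err", "name is required")

def check_age(s):
    try:
        return ("ok", int(s))
    except ValueError:
        return ("err", "age must be a number")

def applicative_validate(name_raw, age_raw):
    """Run all checks, collecting every error (like GForm's Applicative)."""
    results = [check_name(name_raw), check_age(age_raw)]
    errors = [msg for tag, msg in results if tag == "err"]
    if errors:
        return ("failure", errors)
    return ("success", tuple(value for _, value in results))

def monadic_validate(name_raw, age_raw):
    """Stop at the first error (what a naive Monad instance would give)."""
    tag, value = check_name(name_raw)
    if tag == "err":
        return ("failure", [value])
    name = value
    tag, value = check_age(age_raw)
    if tag == "err":
        return ("failure", [value])
    return ("success", (name, value))
```

With both fields invalid, the applicative version reports two errors while the monadic version reports only the first, which is exactly the power the post is reluctant to give up.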
http://www.yesodweb.com/blog/2010/10/custom-forms
Law of Demeter

The Law of Demeter tells us that an object should only talk to its close collaborators, not reach through them into their internals. Consider a class that exposes its internal map of attributes:

public class Item {
    private final Map<String, Set<String>> attributes;

    public Item(Map<String, Set<String>> attributes) {
        this.attributes = attributes;
    }

    public Map<String, Set<String>> getAttributes() {
        return attributes;
    }
}

High Coupling

Returning the raw map couples every caller to Item's internal representation: client code has to dig through the Map and its Sets to get anything done.

The Next Step Improvement

The solution above will sometimes (usually?) be enough. As pragmatic programmers, we need to know when to stop. However, let's see how we can even improve the first solution. Create a class Attributes.

Published at DZone with permission of Eyal Golan, DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own.
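The article's Attributes class is elided in this extraction, so here is a rough sketch of the idea in Python (the names and methods are my own, not the original Java code): hide the nested map behind an object, so callers ask Item directly instead of reaching through getAttributes():

```python
# Hypothetical sketch of the Demeter-friendly refactoring: instead of
# handing callers the raw nested map, Item exposes intention-revealing
# methods, so client code talks only to Item ("don't talk to strangers").

class Attributes:
    def __init__(self):
        self._attributes = {}  # name -> set of values

    def add(self, name, value):
        self._attributes.setdefault(name, set()).add(value)

    def values_of(self, name):
        # Return a copy so callers cannot mutate internal state.
        return set(self._attributes.get(name, set()))

class Item:
    def __init__(self):
        self._attributes = Attributes()

    def add_attribute(self, name, value):
        self._attributes.add(name, value)

    def attribute_values(self, name):
        return self._attributes.values_of(name)
```

Now a caller writes item.attribute_values("color") instead of item.getAttributes().get("color"), and the internal representation can change without touching any client code.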
https://dzone.com/articles/law-demeter
Dutch National Flag Problem

Given an array with 0s, 1s and 2s, sort the array in increasing order. Another way to phrase this problem is: sort balls of three different colors (red, blue and white) so that balls of each color are placed together. This is commonly known as the Dutch national flag problem, and the algorithm that solves it is called the Dutch national flag algorithm.

Example:
A = [0,1,0,1,1,2,0,2,0,1]
Output = [0,0,0,0,1,1,1,1,2,2]

A = [R,B,B,W,B,R,W,B,B]
Output = [R,R,W,W,B,B,B,B,B]

This problem can also be asked as a design question. Let's say you have to design a robot. All this robot does is: it sees three empty buckets and a bucket full of colored balls (red, blue and white). Design an instruction set for this robot so that it fills each empty bucket with one color. It's the same problem as the Dutch national flag problem.

Count to sort an array of 0s, 1s and 2s

We have already seen a similar problem, Segregate 0s and 1s in an array, where we explored how to count elements and re-write them back onto the array. Let's apply the same method to this problem. Take an array of three integers, where each index stores the count for that number; e.g., count[0] stores the count of 0s and count[1] stores the count of 1s. Scan through the array and count each element. At the end, re-write those numbers back onto the array starting from index 0, according to their counts. For example, if there are 4 zeros, then starting from index 0, write 4 zeros, followed by the 1s and then the 2s.

The complexity of the counting method is O(n); notice that we scan the array twice, the first time to count and the second time to write back onto the array.

Dutch national flag problem: algorithm

- Start with three pointers: reader, low and high. reader and low are initialized to 0, and high is initialized to the last index of the array, size-1.
- reader will be used to scan the array, while low and high will be used to swap elements to their desired positions.
- Starting from the current position of reader, follow the steps below (keep in mind that we need the 0s at the start of the array):
- If the element at index reader is 0, swap the element at reader with the element at low and increment low and reader by 1.
- If the element at reader is 1, do not swap; just increment reader by 1.
- If the element at reader is 2, swap the element at reader with the element at high and decrease high by 1.

The three pointers divide the array into four parts: red, white, unknown and blue. Every element is taken from the unknown part and put into its right place, so the three other parts expand while the unknown part shrinks.

Let's take an example and see how the Dutch national flag algorithm works.

First step: initialize reader, low and high.

The element at reader is 0, hence swap the elements at reader and low, and increment both reader and low.

Following the same step, check the element at reader again; it's 1, so just move reader by one.

The element at reader is now 2; swap the element at reader with the element at high and decrease high by 1.

The element at reader is 1; just increment reader.

The element at reader is now 2; swap the element at reader with the element at high and decrease high by 1.

The element at reader is 1; just increment reader.

The element at reader is 1; just increment reader.

The element at reader is 0, hence swap the elements at reader and low, and increment both reader and low.

The element at reader is 0, hence swap the elements at reader and low, and increment both reader and low.

The element at reader is now 2; swap the element at reader with the element at high and decrease high by 1.

Here, high becomes less than reader, so we can stop: the array is already sorted.

Dutch national flag problem implementation

package com.company;

/**
 * Created by sangar on 5.1.18.
 */
public class DutchNationalFlag {

    public static void swap(int[] input, int i, int j){
        int temp = input[i];
        input[i] = input[j];
        input[j] = temp;
    }

    public static void dutchNationalFlagAlgorithm(int[] input){
        //initialize all variables
        int reader = 0;
        int low = 0;
        int high = input.length - 1;

        while(reader <= high){
            /* input always holds a permutation of the original data with
               input(0..(low-1)) = 0, input(low..(reader-1)) = 1,
               input(reader..high) is untouched, and
               input((high+1)..(input.length-1)) = 2 */
            if(input[reader] == 0){
                /* When element at reader is 0, swap element at reader
                   with element at index low and increment reader and low */
                swap(input, reader, low);
                reader++;
                low++;
            }
            else if(input[reader] == 1){
                /* If element at reader is 1, just increment reader by 1 */
                reader++;
            }
            else if(input[reader] == 2){
                /* If element at reader is 2, swap element at reader with
                   element at high and decrease high by 1 */
                swap(input, reader, high);
                high--;
            }
            else{
                System.out.println("Bad input");
                break;
            }
        }
    }

    public static void main(String[] args) {
        int[] input = {2,2,1,1,0,0,0,1,1,2};
        dutchNationalFlagAlgorithm(input);
        for(int i = 0; i < input.length; i++){
            System.out.print(" " + input[i]);
        }
    }
}

The complexity of the Dutch national flag algorithm is also O(n); however, here we scan the array only once.

Please share if you have some suggestions or comments. Sharing is caring.
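The counting method described earlier (count the 0s, 1s and 2s in one pass, then overwrite the array in a second pass) can be sketched as follows. This is a Python rendering for illustration, not the article's Java code:

```python
# Counting approach to sorting an array of 0s, 1s and 2s: one pass to
# count each value, a second pass to overwrite the array in order.
# O(n) time and O(1) extra space, but it scans the array twice, unlike
# the single-pass Dutch national flag algorithm above.

def count_sort_012(arr):
    counts = [0, 0, 0]          # counts[v] = occurrences of v
    for v in arr:
        counts[v] += 1
    i = 0
    for value in (0, 1, 2):     # rewrite: all 0s, then 1s, then 2s
        for _ in range(counts[value]):
            arr[i] = value
            i += 1
    return arr
```

Running it on the article's example input [0,1,0,1,1,2,0,2,0,1] produces [0,0,0,0,1,1,1,1,2,2], matching the expected output.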
https://algorithmsandme.com/tag/dutch-flag-problem/
Storage for 3 collinear points to serve as 1-D projective basis. More...

#include <vgl_1d_basis.h>

Storage for 3 collinear points to serve as 1-D projective basis. This class stores three collinear points: the first is mapped to the origin (0,1), the second (the unit point) to (1,1), and the third (the point at infinity) to (1,0).

Definition at line 92 of file vgl_1d_basis.h.

Definition at line 100 of file vgl_1d_basis.h.

Construct from three collinear points (projective basis). They will serve as origin (0,1), unity (1,1) and point at infinity (1,0). The points must be collinear, and different from each other. Note that there is no valid default constructor, since any sensible default heavily depends on the structure of the point class T, the template type. Note that there is no way to overwrite an existing vgl_basis_1d; just create a new one if you need a different one. Hence it is not possible to read a vgl_basis_1d from stream with >>.

Definition at line 10 of file vgl_1d_basis.txx.

Construct from two points (affine basis). They will serve as origin (0,1) and unity point (1,1). The points must be different from each other, and not at infinity. This creates an affine basis, i.e., the point at infinity of the basis will be the point at infinity of the line o-u in the source space.

Definition at line 17 of file vgl_1d_basis.txx.

Definition at line 106 of file vgl_1d_basis.h.

Definition at line 105 of file vgl_1d_basis.h.

Definition at line 103 of file vgl_1d_basis.h.

Projection from a point in the source space to a 1-D homogeneous point.

Definition at line 24 of file vgl_1d_basis.txx.

Definition at line 107 of file vgl_1d_basis.h.

Definition at line 104 of file vgl_1d_basis.h.

Write "<vgl_1d_basis o u i> " to stream.

normally false; if true, inf_pt_ is not used: affine basis

Definition at line 98 of file vgl_1d_basis.h.

The point to be mapped to homogeneous (1,0)

Definition at line 97 of file vgl_1d_basis.h.

The point to be mapped to homogeneous (0,1)

Definition at line 95 of file vgl_1d_basis.h.
The point to be mapped to homogeneous (1,1) Definition at line 96 of file vgl_1d_basis.h.
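The documentation above describes mapping source-space points to 1-D homogeneous coordinates via an origin and a unit point. For the affine case this is easy to work out numerically: with origin o and unit point u on a line, a point p maps to ((p - o) / (u - o), 1), so o itself gives (0,1) and u gives (1,1). A rough Python sketch of that behavior (my own construction for points on the real line, not the VXL implementation):

```python
# Hypothetical 1-D affine-basis projection, mirroring the behavior
# documented for vgl_1d_basis's affine constructor: the origin maps to
# homogeneous (0,1) and the unit point to (1,1). NOT the VXL source.

def make_affine_basis_1d(origin, unity):
    assert origin != unity, "basis points must differ"

    def project(p):
        # homogeneous coordinates (x, w); finite (affine) points have w = 1
        return ((p - origin) / (unity - origin), 1.0)

    return project

project = make_affine_basis_1d(2.0, 5.0)
```

Here project(2.0) returns (0.0, 1.0) and project(5.0) returns (1.0, 1.0), matching the member documentation for the origin and unit point.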
http://public.kitware.com/vxl/doc/release/core/vgl/html/classvgl__1d__basis.html
Red Hat Bugzilla – Bug 443257

firstboot does not run on KDE installation

Last modified: 2008-08-02 19:40:37 EDT

On a fresh KDE installation of 20080418 rawhide, firstboot did not run. Unfortunately, I just blew away that VM for some preupgrade testing, but the steps to reproduce are essentially install with KDE and not Gnome, and watch firstboot not run. Let me know if there's more data I need to gather.

Do you have a /etc/sysconfig/firstboot file? If so, what does it contain? What does chkconfig --list firstboot say? Are there any /tmp/firstboot-* files? If so, please attach them to this bug report.

No /etc/sysconfig/firstboot, checked that. I'll have to do another install to figure out the rest, the VM is gone now.

Gack, of course I install again and firstboot works. This was 20080421 rawhide, but I don't think there were any relevant changes between the two. I'll experiment some more this evening. Removing from the release blocker for now.

I'll try to reproduce it this evening as well. I should still have the 421 rawhide sitting on my laptop from an unused preupgrade.

Fresh KDE-only install from today's (20080422) rawhide did not run firstboot. chkconfig --list firstboot shows it being on for runlevels 3 and 5.
There's no /etc/sysconfig/firstboot, but there is a /tmp/firstboot-XXXXXX, containing the following traceback:

Traceback (most recent call last):
  File "/usr/sbin/firstboot", line 170, in <module>
    moduleList = loadModules(config.moduleDir, mode)
  File "/usr/lib64/python2.5/site-packages/firstboot/loader.py", line 93, in loadModules
    displayException(module=module)
  File "/usr/lib64/python2.5/site-packages/firstboot/exceptionWindow.py", line 95, in displayException
    ExceptionWindow(text, module=module)
  File "/usr/lib64/python2.5/site-packages/firstboot/exceptionWindow.py", line 32, in __init__
    import gtk
  File "/usr/lib64/python2.5/site-packages/gtk-2.0/gtk/__init__.py", line 79, in <module>
    _init()
  File "/usr/lib64/python2.5/site-packages/gtk-2.0/gtk/__init__.py", line 67, in _init
    _gtk.init_check()
RuntimeError: could not open display

Oh, excellent. I love when there's errors in the exception handling code so that we get that traceback instead of the real cause of the problem. However, what's really odd here is that there should be a display up and running at this point. Is this running with RHGB or not?

Mine was, not sure about Will's

This is reproducible outside of KDE-only installs. I just hit it.

I'm not seeing this any more. Is anyone else?

I just finished a fresh rawhide (Gnome) installation, and firstboot didn't run at first boot. I restarted the system, and it ran at second boot.

Deji, that sounds like another problem - there's some weird race condition that we've been trying (and failing) to reproduce reliably. As for KDE-only installs - works fine for me.
https://bugzilla.redhat.com/show_bug.cgi?id=443257
Tutorial: Charting with Gnuplot from F#

Applies to: Functional Programming

Authors: Tomas Petricek and Jon Skeet

Summary: This tutorial shows how to create charts using gnuplot from F#. It demonstrates how to call gnuplot directly and how to use a simple open-source F# wrapper.

This topic contains the following sections.

- Using Gnuplot for F# Charting
- Calling Gnuplot Directly
- Introducing a Gnuplot Wrapper for F#
- Creating an Advanced Chart
- Summary
- Additional Resources
- See Also

This article is associated with Real World Functional Programming: With Examples in F# and C# by Tomas Petricek with Jon Skeet from Manning Publications (ISBN 9781933988924, copyright Manning Publications).

Using Gnuplot for F# Charting

Gnuplot is a cross-platform and open source tool for creating charts. It can be downloaded from a website referenced at the end of this article. Calling gnuplot from F# may be interesting when using F# in a cross-platform environment. It can also be attractive for developers with existing gnuplot skills. Gnuplot runs as a separate process that can be controlled using commands. This tutorial demonstrates how to call gnuplot directly, and it briefly looks at wrappers that provide a more natural interface for F#. You will learn how to:

- Send commands directly to the gnuplot process (running in the background)
- Create charts using a cross-platform F# wrapper for gnuplot
- Configure chart styles and show multiple data series in a single chart

The first section starts by looking at the direct way of using gnuplot. It uses standard techniques that .NET provides for starting a process and communicating with it using the standard input.

Calling Gnuplot Directly

The gnuplot application can be controlled using the Process class from the System.Diagnostics namespace. The class can start the process and write to its standard input stream.
It can also monitor the standard and error output of the gnuplot process to receive notifications when an invalid command is used, but this tutorial omits this feature to make the example code simpler. Once the process is started, it is possible to give it all the usual commands that can be written at the gnuplot command prompt. The following listing shows a basic example that draws a graph of two functions:

open System
open System.Diagnostics

let gp =
    new ProcessStartInfo
      (FileName = "gnuplot", UseShellExecute = false,
       CreateNoWindow = true, RedirectStandardInput = true)
    |> Process.Start

// Draw graph of two simple functions
gp.StandardInput.WriteLine "plot sin(x) + sin(3*x), -x"

The snippet first constructs an instance of ProcessStartInfo, which defines the parameters that are used to start the process. Most importantly, the initialization sets the file name that should be run. This can be either a full path to the executable or just a file name when the gnuplot directory is included in the PATH variable. To create a process that can accept standard input from the calling script or program, it is necessary to set the RedirectStandardInput property to true. This also requires setting the UseShellExecute option to false. The remaining option specifies that no window should be created for the process.

The Process.Start method returns a Process object that can be used for controlling gnuplot. Sending commands is done using the StandardInput property. Writing a line to the standard input corresponds to entering a single command. The example above sends the process the "plot" command with two functions as arguments. Figure 1 shows the output as it looks when executed on Mono. The default gnuplot configuration is to use the X11 terminal, which means that it opens the chart as a new window.

Figure 1. A graph showing simple linear and trigonometric functions

Plotting a graph of a function may be useful in some cases.
When using F# for explorative programming, a more common scenario is to create charts from data that were obtained earlier. The gnuplot program can be used in this scenario too. It can load data from an external file or from the standard console input. Unless the data source is extremely large, it is easier to use the second approach. Passing the data directly to the input avoids the need to create a temporary data file.

To specify data through standard input, the script needs to specify '-' as the name of the input when calling the "plot" command. The gnuplot process will start reading the input data immediately after the "plot" command is executed. More information about the gnuplot syntax can be found in the documentation (referenced at the end of the article). The following example shows how to create a simple histogram chart with randomly generated values:

// Change style and start plotting a histogram
gp.StandardInput.WriteLine "set style fs solid"
gp.StandardInput.WriteLine "plot '-' lc 6 with histogram"

// Provide data for the plot
let rnd = new Random()
for v in 1 .. 10 do
    gp.StandardInput.WriteLine(rnd.Next())
gp.StandardInput.WriteLine "e"

The snippet starts by sending a command that changes the visual style of the chart. The command instructs gnuplot to fill rectangles drawn as part of the histogram chart so that it appears as a usual column or a bar chart. The second command tells gnuplot to draw data that will be entered on the standard input using the histogram chart type with a line color set to the predefined color number 6.

After calling the "plot" command, gnuplot expects the data for the plot. The sample dataset consists only of a single series, so the script writes a single numeric value per line. The for loop generates 10 random numbers and writes them to the standard input of gnuplot using the WriteLine method. The method is overloaded, so it converts the numeric value automatically to a string.
Once the data is generated, the input is terminated using a special command "e". After that, the chart displayed in the figure below should appear, and the gnuplot process is ready to handle further commands.

Figure 2. A column chart showing data provided by an F# script

The image can also be saved into a file using the command set term png. This instructs gnuplot to generate a chart in the PNG format. The desired file name can be set using the set output <file-name> command. When the script finishes working with gnuplot, it is also a good idea to end the process so that it doesn't stay loaded in memory. This can be done by sending the quit command or by calling the Kill method of the Process object.

Introducing a Gnuplot Wrapper for F#

Using the gnuplot process directly from F# is quite straightforward, but it doesn't fit with the usual F# programming style. To use gnuplot in a more usual style, it is possible to use an open source wrapper that is available in the F# Cross Platform project. The wrapper is available as an F# file that can be loaded in F# Interactive or included in other projects. Once the file is loaded, the gnuplot process can be started by creating an instance of the GnuPlot class (from the namespace FSharp.GnuPlot):

// Load the F# wrapper for gnuplot
#load "gnuplot.fs"
open FSharp.GnuPlot

// Generate histogram chart with random values
let gp = new GnuPlot()
gp.Plot(Series.Histogram [ for i in 0 .. 10 -> float(rnd.Next()) ])

The most important method provided by the GnuPlot type is the Plot method. The method can be called in several different ways. In the example above, it is given a Series object that represents a histogram. The method can also be called with a string argument, which is used to draw a graph of a function. For example, the sine function can be plotted using gp.Plot("sin(x)"). The Series object represents a single data series that should be drawn on the chart.
The above example creates a Histogram type of series and gives it a single argument, which is a randomly generated list of numeric values. The example plots a single data series with the default settings of gnuplot. However, the F# wrapper provides a comfortable way for setting many of the common properties.

Creating an Advanced Chart

This section looks at a more complicated example of calling gnuplot using the F# wrapper. The example combines three types of data series in a single chart: a series generated as a graph of a function, a column chart, and a line chart. Moreover, the example also specifies various properties of both the entire chart as well as individual data series.

This section explores another overload of the Plot method. It takes a list of data series that can be generated using static members of the Series type (such as Series.Histogram from the previous section). It is also possible to create a data series directly using the chart type name used in gnuplot. All overloads of the Plot method take numerous named parameters that allow specifying global properties of the chart such as range, style, and titles. The methods for creating a series also provide several named parameters to configure the individual series.

The following example first generates three data series and sets their titles, colors, and line weight. Then, it passes the list of data series to the Plot method and also configures the view area of the chart using the named parameter range:

// Data for line and column charts
let ds1 = [1.0; 5.0; 3.0; 2.0]
let ds2 = [2.0; 4.0; 3.0; 3.0]

// Create a list of chart data series
let charts =
  [ Series.Lines
      ( ds1, title="Estimate",
        lineColor=Color.OliveDrab, weight=3)
    Series.Histogram
      ( ds2, title="Data",
        fill=Solid, lineColor=Color.SteelBlue)
    Series.Function
      ( "sin(x*3) + 3", lineColor=Color.Goldenrod,
        title="Sinus", weight=3) ]

// Plot all 3 charts into an area with the specified range
gp.Plot(charts, range = Range.[ -0.5 .. , ..
6.0 ])

When creating a data series using the Lines or Histogram method, the only required argument (passed as the first one) is the source of data. When creating a function, the required parameter is an expression (passed as a string) that should be plotted. Additional arguments that are provided above include title to set the annotation for the data series, lineColor to specify the color of the item, and weight. The resulting chart is shown in Figure 3.

Figure 3. A gnuplot chart combining three data series

One aspect of the previous example that deserves an explanation is the specification of the plot area using the range named parameter. By default, gnuplot calculates the minimum and maximum for both of the axes automatically. It is possible to specify all of the limit values of the range, but sometimes it is useful to specify just some of the values. For example, the snippet above explicitly provides just the minimum value for the x-axis and the maximum value for the y-axis. The syntax that makes this possible in F# is called slices. The gnuplot wrapper uses the Range value (as in the previous example) to specify some or all of the four edge values. When specifying the range for only one of the axes, it is possible to use the values RangeX and RangeY and use an expression like RangeY.[ .. 9.5 ].

Another important method provided by the gnuplot wrapper is the Set method. It provides a way to specify parameters for plotting for an entire gnuplot session. Most of the parameters can also be passed to the Plot method, but, in that case, they are reverted to the default setting after plotting. It is also possible to run a command that is not directly exposed by the wrapper using the SendCommand method. It takes a string and sends it directly to gnuplot.
The following example shows how to set a plot title directly and how to set the output terminal to a PNG file using the Set method:

// Set terminal to a PNG and specify output file
gp.Set(output=Png("C:\temp\plot.png"))

// Add title by calling gnuplot directly
gp.SendCommand("set title \"Sample chart\"");

Summary

This tutorial looked at working with gnuplot from F# and, in particular, from F# Interactive. It demonstrated two ways of using the gnuplot tool. The first option is to call gnuplot directly using standard .NET objects for starting and communicating with other processes. In this case, commands are written in the format used by gnuplot and sent to gnuplot as strings. This is somewhat inconvenient, especially when generating plots automatically. The second option is to use an F# wrapper for gnuplot. The wrapper provides a more natural F# syntax and supports many gnuplot features directly in a type-checked way.

Additional Resources

The most important advantage of using gnuplot is that it can be used in a cross-platform environment. It is also an established tool, especially in the scientific community.

See Also

The following MSDN documents and external resources are related to the topic of this article:

gnuplot homepage contains gnuplot source code and binaries for various platforms as well as additional resources.

Official gnuplot documentation contains a comprehensive reference on using gnuplot. Note that only some functionality is currently exposed by the F# wrapper, but you can add new features by following the existing pattern.

F# cross-platform packages and samples (CodePlex) is an open-source project that includes a wrapper for creating gnuplot charts from F#. You're welcome to contribute your extensions to the F# wrapper.

Process.Start Method (ProcessStartInfo) describes how to start and control processes on .NET. We used this method to start gnuplot, so that we could send commands to it from F#.
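For readers outside .NET, the same drive-gnuplot-over-stdin pattern can be sketched in Python. The command strings mirror the ones used in this tutorial; the subprocess call assumes a gnuplot binary on the PATH, and this is my illustration rather than part of the original article:

```python
# Build the same gnuplot command sequence the tutorial sends from F#:
# set a solid fill style, plot inline data ('-') as a histogram, feed
# the values one per line, and terminate the data block with "e".
import subprocess

def histogram_commands(values):
    commands = ["set style fs solid", "plot '-' lc 6 with histogram"]
    commands += [str(v) for v in values]
    commands.append("e")
    return commands

def run_gnuplot(commands):
    # Equivalent to redirecting standard input on the .NET Process
    # object; assumes gnuplot is installed and on the PATH.
    proc = subprocess.Popen(["gnuplot", "--persist"],
                            stdin=subprocess.PIPE, text=True)
    proc.communicate("\n".join(commands) + "\n")

cmds = histogram_commands([3, 1, 4, 1, 5])
# run_gnuplot(cmds)  # uncomment when gnuplot is available
```

Separating command construction from process control also makes the command sequence easy to inspect or log before anything is sent to gnuplot.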
https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2010/hh297126(v=vs.100)
Blazor is a framework that allows you to write C# fullstack. If you are developing a fullstack web application, you usually have to involve JavaScript at some point. You either add it to improve the interaction of your pages, or you split between having a backend in .NET and a frontend in JavaScript using, for example, a SPA framework like React, Angular or maybe Vue. A Blazor app can be compiled into WebAssembly and can thereby be a first-class web citizen and also really fast. If you are completely new to Blazor, I recommend reading this intro article.

Static Web Apps is an Azure service with which you can deploy fullstack apps within minutes. It can deploy both JavaScript projects as well as Blazor.

.NET developer here, you have my attention. So, it can deploy a Blazor project, what else can it do?

That's a nice feature list. I care about ease of use, what can you tell me about that?

There's not much to fill in, everything revolves around your GitHub repo, and once you have selected a repo, and a few other things, it starts deploying it.

Ok, but how does it work under the hood?

It works by creating and running GitHub actions that carry out things like fetching dependent libraries, building your code, and finally deploying it. You end up getting a so-called workflow file pushed to your repo (it's a YAML file).

Alright, but I'm likely to update my code quite a lot, does it help me with redeploy?

It does, you can define in the workflow file when a redeploy should be triggered, like merging of a PR or a commit to the master/main branch for example.

This all sounds very promising; can you take me through a deploy?

Of course, next thing on my list :)

git clone <name of repo URL>
dotnet build
cd Client
dotnet run

You should get a terminal output similar to the following:

: /path/to/project/blazor-sample/Client

At this point you have a working Blazor app that you can deploy using Azure Static Web Apps. How do you do that?
Now you are met with a set of dropdowns where you need to fill in some info. Click to be taken to the resource once deployed. The resource page should look something like this:

Above you have the resource. You could click the URL from the indicated field, but it would take you to a default page. Why is that? Your app hasn't finished building yet. Instead click the link GitHub action runs. This will take you to the GitHub actions of your repo. Once all the actions have finished, it should look like this:

Now, a Blazor app could contain its own backend. The way the Azure Static Web Apps service is constructed, though, it assumes your backend will be located in an Api directory. So what should be in that directory? Well, a function app. Luckily your repo already has a working function app, almost. Let's review our repo quickly. Your solution should look something like this:

-| Api
-| Data
-| Client

You already know about the Client directory where your Blazor app lives. The other directory of interest is the Api directory that contains a Function app. It's an almost functioning Function app. What do I mean by almost? Well, let's have a look at it. Expanding the Api directory, there are some files of interest:

Client/
Api/
  ProductData.cs
  ProductsDelete.cs
  ProductsPost.cs
  ProductsPut.cs

The first file, ProductData.cs, contains an in-memory data store. The remaining three files are just routes for our API.
For this API to be a full Create Read Update Delete API, it needs another file, ProductsGet.cs. Let's create that file and give it the following content:

using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.AspNetCore.Http;

namespace Api
{
    public class ProductsGet
    {
        private readonly IProductData productData;

        public ProductsGet(IProductData productData)
        {
            this.productData = productData;
        }

        [FunctionName("ProductsGet")]
        public async Task<IActionResult> Run(
            [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "products")] HttpRequest req)
        {
            var products = await productData.GetProducts();
            return new OkObjectResult(products);
        }
    }
}

Now select Run > Start debugging from the top menu in VS Code. At the end of the build output you should have text stating something like this:

ProductsPut: [PUT]
ProductsGet: [GET]
ProductsPost: [POST]
ProductsDelete: [DELETE]{productId:int}

You are almost there. When testing things out locally, you need to instruct the Function to allow cross-domain requests, i.e., from our Blazor app. How do we do that? Locate the local.settings.json file and ensure it has the following content:

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet"
  },
  "Host": {
    "CORS": "*"
  }
}

Above you added the Host property and made CORS allow all requests. This is just something we do locally; don't worry about this making it into production. At this point you can run your client Blazor app and it will look like this:

The Blazor app is now able to talk to your Function app backend. So how do you deploy this so that the API part is there? You need to do the following; the way the workflow file is constructed, it will pick up the changes on push and redeploy the app. Open up the workflow file.
It's a file ending in .yml in your .github sub directory (ensure you have done a git pull before this so you get this file as it's created and added to your repo the first time you deploy). Locate the section called api_location:. Ensure it looks like this api_location: "/Api". This will point out our Api sub directory. Type the following command: git add . git commit -m "adding API" git push The above should push your changes to GitHub and the GitHub actions should be triggered. You should now see the deployed app, this time loading the data correctly Github repo 404 @Mickahappening we are trying to fix it, meanwhile, please clone this repo
https://techcommunity.microsoft.com/t5/apps-on-azure/deploy-your-net-blazor-app-in-minutes-with-azure-static-web-apps/ba-p/1739102?WT.mc_id=DOP-MVP-5003880
CC-MAIN-2021-21
refinedweb
1,007
66.84
Update of /cvsroot/python/python/dist/src/Python In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv24384/Python Modified Files: pystate.c Log Message: Revert rev 2.35. It was based on erroneous reasoning -- the current thread's id can't get duplicated, because (of course!) the current thread is still running. The code should work either way, but reverting the gratuitous change should make backporting easier, and gets the bad reasoning out of 2.35's new comments. Index: pystate.c =================================================================== RCS file: /cvsroot/python/python/dist/src/Python/pystate.c,v retrieving revision 2.36 retrieving revision 2.37 diff -u -d -r2.36 -r2.37 --- pystate.c 10 Oct 2004 02:47:33 -0000 2.36 +++ pystate.c 10 Oct 2004 05:30:40 -0000 2.37 @@ -484,31 +484,24 @@ assert(tcur->gilstate_counter >= 0); /* illegal counter value */ /* If we are about to destroy this thread-state, we must - * clear it while the lock is held, as destructors may run. - * In addition, we have to delete out TLS entry, which is keyed - * by thread id, while the GIL is held: the thread calling us may - * go away, and a new thread may be created with the same thread - * id. If we don't delete our TLS until after the GIL is released, - * that new thread may manage to insert a TLS value with the same - * thread id as ours, and then we'd erroneously delete it. - */ + clear it while the lock is held, as destructors may run + */ if (tcur->gilstate_counter == 0) { /* can't have been locked when we created it */ assert(oldstate == PyGILState_UNLOCKED); PyThreadState_Clear(tcur); - /* Delete this thread from our TLS */ - PyThread_delete_key_value(autoTLSkey); } /* Release the lock if necessary */ if (oldstate == PyGILState_UNLOCKED) PyEval_ReleaseThread(tcur); - /* Now complete destruction of the thread if necessary. This - * couldn't be done before PyEval_ReleaseThread() because - * PyThreadState_Delete doesn't allow deleting the current thread. 
- */ - if (tcur->gilstate_counter == 0) + /* Now complete destruction of the thread if necessary */ + if (tcur->gilstate_counter == 0) { + /* Delete this thread from our TLS */ + PyThread_delete_key_value(autoTLSkey); + /* Delete the thread-state */ PyThreadState_Delete(tcur); + } } #endif /* WITH_THREAD */
https://mail.python.org/pipermail/python-checkins/2004-October/043608.html
CC-MAIN-2016-36
refinedweb
343
64.81
In C#, a string is an object of System.String class in Dot Net framework. Objects of String class are immutable (once created cannot be changed). Basically, the C# string type is a sequence of characters (text). Creating a string variable using the keyword string is a common practice to do any manipulations to a string. But in C#, Strings can also be used as an array of characters. We can say that the string keyword is an alias name for the System.String class. The string is immutable, and it can be created objects in different ways: - By creating a variable to string and assigning literal to it. - By using the concatenation operator +. - Using the constructor of string class. - Calling a method that returns the string. - By calling a Format method to convert a value or an object to its string representation. The syntax of the C# strings is as shown below //String declaration string str; //initializing to null string str = null; //Initializing an empty string string str = “”; string str = System.String.Empty; //Initializing a string literal string path = “C:\\Program Files\\Microsoft SQL SERVER”; //Initializing a string using Verbatim literal to improve readability string str = @“C:\Program Files\Microsoft SQL SERVER”; C# String Example In case if we want to print the string in double-quotes. For example (“Tutorial Gateway”), then directly, we cannot use them because double quotes have a special meaning in C#. Using Escape sequence \ (backslash), we can print a string in double-quotes. using System; class Program { static void Main() { string str = "\"Tutorial Gateway\""; Console.WriteLine("This is {0}", str); } } OUTPUT The following are the various character Escape sequences in C# Programming language to display strings.
https://www.tutorialgateway.org/csharp-string/
CC-MAIN-2021-43
refinedweb
282
56.25
CodePlexProject Hosting for Open Source Software Hello: I new to Orchard, but not to MVC/development. Setting up an Azure instance and it's working pretty good, though finding a lot of modules are limited. Modifying as needed. So far, Orchard seems like the way to go! I have an SEO issue though with Orchard, and it's not clear on how to change it. I'm hoping someone here can help. Per SEO, you are to have a different H1 per page. Ok, no issue with that as I can modify the parts cshtml. The problem comes from when I want to use the same header on the homepage as an H1 tag, but on all other pages the header should be an H2. Let me give an example: Homepage HTML: <div id="header"> <h1>Eric Duncan</h1> </div> <ul> <li><a href="...">Title of Article 1</a></li> <li><a href="...">Title of Article 2</a></li> <li><a href="...">Title of Article 3</a></li> </ul> And an article page: <div id="header"> <h2>Eric Duncan</h2> </div> <h1>Title of Article 1</h1> <p>Body of content ...</p> Notice how the Header is different between the two? The homepage uses an H1, and all other pages will have it set as H2. I tried to override the Layout from within the Parts, but I get a read-only error. @{ Layout = ""; } If there is a different way I can have the homepage use a different Layout or header, please let me know. THanks! Install the designer tools module, enable the url alternates feature and add a layout-url-homepgage.cshtml file to the views folder of your theme. Bertrand, if i have a custom module, what is the naming convention of the layout I should use in my theme? For example, my module has url's like /shoes/435682/shoe-name (similar to stackoverflow's URL format for questions, the shoe-name part is just for SEO, the ID part is the only part used for finding data). If i want all the /shoes/... pages to use an alternate layout, what would I name the layout in my theme? Thanks in advance. There is no convention but the one you create. 
In that case, you should create your own alternates. For example, if those are handled from a controller or from a driver, you can get to the layout shape through the work context, and add your own alternates from there. Thanks, I'll have to read more about shapes and layouts to understand what you mean. I haven't yet seen or worked with code that does something like what you mention. Did a a little more reading and i think i should clarify what i was asking. I want to know how to change the main Layout.cshtml to an alternate one if the request is for an url in my /shoes/... folder. I don't want an alternate layout for my custom content types. Thank you -- that answered my question. I first tried it exactly as your post described, got it working for one of my module's Controllers, and then modified it a little. What I wanted was for every page to use Layout.cshtml (as happens by default), and then if the request is going to one of three specific controllers, to use an alternate layout. But I didn't want to create three copies of my alternate layout (one for each Controller that uses the alternate layout), so I defined LayoutFilter.cs like this: using System; using System.Linq; using System.Web.Mvc; using System.Web.Routing; using Orchard; using Orchard.Mvc.Filters; namespace ShoeDepot.Filters { public class LayoutFilter : FilterProvider, IResultFilter { private readonly IWorkContextAccessor _wca; public LayoutFilter(IWorkContextAccessor wca) { _wca = wca; } public void OnResultExecuting(ResultExecutingContext filterContext) { WorkContext workContext = _wca.GetContext(); RouteValueDictionary routeValues = filterContext.RouteData.Values; string controller = (string) (routeValues["controller"] ?? 
string.Empty); if (controller.Equals("Shoes", StringComparison.OrdinalIgnoreCase) || controller.Equals("AdvancedSearch", StringComparison.OrdinalIgnoreCase) || controller.Equals("Accessories", StringComparison.OrdinalIgnoreCase)) { workContext.Layout.Metadata.Alternates.Add("Layout__LeftRight"); } } public void OnResultExecuted(ResultExecutedContext filterContext) { } } } In my theme I saved my alternate layout under /Views/Layout-LeftRight.cshtml. Now when a request comes in that is routed to the ShoesController, AdvancedSearchController, or AccessoriesController, Orchard uses the /MyTheme/Views/Layout-LeftRight.cshtml, and all other requests use /MyTheme/Views/Layout.cshtml. Are you sure you want to delete this post? You will not be able to recover it later. Are you sure you want to delete this thread? You will not be able to recover it later.
http://orchard.codeplex.com/discussions/284209
CC-MAIN-2017-26
refinedweb
759
58.79
Debugging your source code is a critical part of any application development process and Android is no exception. A lot of people post comments or email me asking me to help them with their application that isn’t working. The first thing I try to ask for are the log files. I do this because 90% of the time, the log files have an error message that tells us exactly what line the error is on. For Android development, we are lucky enough to have a fantastic debugging tool shipped with the SDK. The Android Debug Bridge (ADB) is a command line interface that has the power to tell us exactly what our app is doing inside a simulator or device. It is not restricted to telling just what our app is doing, but what the whole device and what every app on the device is doing. Everything after this point will assume that you have the Android SDK installed and configured correctly. If you’re using Ubuntu Linux, I have a script that will do this all for you. It can be seen in one of my previous articles. If you’re using Mac or Windows and are not set up, I suggest a few Google searches on the topic. We will be using the following command: adb logcat If you want to keep up with the internet memes, Google included the ability to run the logging software with the following command as well: adb lolcat The above commands are very powerful to developers. You can use it to debug native Android applications as well as hybrid Android applications created with frameworks such as Phonegap or Ionic. To keep things diverse, I’m going to demonstrate how to debug both native and hybrid Android applications. Let’s start by creating a fresh Android application from the command line: android create project --target 19 --name TestProject --path ./TestProject --activity TestProjectActivity --package com.nraboy.testproject Now we need to open the freshly created project and add some terrible code. Java is an unforgiving language so we can’t really add randomness and hope it passes the compile phase. 
We need to be able to establish a run time error only and not compile time error. Open your src/com/nraboy/testproject/TestProjectActivity.java file and make it look like the following: package com.nraboy.testproject; import android.app.Activity; import android.os.Bundle; import java.util.ArrayList; public class TestProjectActivity extends Activity { ArrayList<String> testList; /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); testList = new ArrayList<String>(); testList.remove(0); } } I went ahead and created a new ArrayList and then tried to delete from it. Because there are no items in the list prior to trying to delete, it will throw an index out of bounds error. With logcat running, launch the application and you should see something like the following: Notice that the Terminal not only displays the file that the error occurred in, but it also tells us the line number. This information paired with the type of error should be enough to resolve the issue. Let’s start by creating a fresh Ionic Framework application from the command line: ionic start TestProject blank cd TestProject ionic platform add android Now we need to open our project and add some bad code. It is easier to create bad code in a hybrid application because JavaScript is way more forgiving than Java. With that said, crack open app.js and make your file look similar to the following: var example = angular.module('starter', ['ionic']); example.run(function($ionicPlatform) { $ionicPlatform.ready(function() { if(window.cordova && window.cordova.plugins.Keyboard) { cordova.plugins.Keyboard.hideKeyboardAccessoryBar(true); } if(window.StatusBar) { StatusBar.styleDefault(); } alert("It will error " + myVariable.run); console.log("See Above"); }); }); Variable myVariable doesn’t exist so we’d be trying to access the run element on a null object thus throwing an error. 
If this were native Java it probably wouldn’t compile, but since it’s JavaScript, the compile process won’t even check. When trying to launch the application with logcat running you should see the following in your Terminal: Now I’m sure you’re probably going to come up with nastier errors than I did in my example, but at least you’ll have a good idea on what to look for. You’ll notice in the above error it tells us the exact file and line to look at. With that kind of information it should only take seconds to correct. It doesn’t matter if you’re building native Android applications or hybrid Android applications, the ADB utility will help you debug your application quickly and easily. Yes the errors can be tricky to see in logcat, but after you catch a few of them and you know what to look out for, they should become easier to spot. A video version of this article can be seen below.
https://www.thepolyglotdeveloper.com/2014/12/debugging-android-source-code-adb/
CC-MAIN-2022-21
refinedweb
835
54.12
Common code between the AC-3 encoder and decoder. More... #include "avcodec.h" #include "ac3.h" #include "get_bits.h" Go to the source code of this file. Definition at line 76 of file ac3.c. Referenced by calc_lowcomp(), and ff_ac3_bit_alloc_calc_mask(). Definition at line 86 of file ac3.c. Referenced by ff_ac3_bit_alloc_calc_mask(). 97 123 of file ac3.c. Referenced by bit_alloc_masking(), and decode_audio_block(). Initialize some tables. note: This function must remain thread safe because it is called by the AVParser init code. Definition at line 223 of file ac3.c. Referenced by ac3_decode_init(), and ff_ac3_encode_init(). Starting frequency coefficient bin for each critical band.().
http://ffmpeg.org/doxygen/trunk/ac3_8c.html
CC-MAIN-2015-32
refinedweb
102
65.08
Dennis Korbar wrote:I'm not sure if there is a way to check it before, but how about this workaround? public class A { A(int x, int y) { if(this instanceof B && (x<5&&y<10)) throw new IllegalArgumentException("Invalid Values"); } } Rob Prime wrote:If I were your employer and you would hand in that code, I would fire you on the spot. Seriously. Well ok, maybe not fire you, but definitely yell at you. Classes should never need to know anything about which sub classes it has, and your workaround clearly violates that. Mike Simmons wrote:Rob is a harsh boss. Dennis Korbar wrote:eeek, it was the only quick way I could think up for doing what was requested... sorry about that, just wanted to help, didn't say that it was the appropriate way to do things...
http://www.coderanch.com/t/426195/java/java/super-statement-subclass-constructor
CC-MAIN-2015-40
refinedweb
141
69.72
Working in Visual Studio, we often find ourselves needing to rename our project and/or solution and the directories they live in. In particular, when we clone a project from Github, what we end up with. Let's take a quick walk through.: $ git clone You should now find the cloned project in target directory:: In Solution Explorer, we can easily rename the project and the Solution simply by clicking on the current name and editing. This part, most can figure out on their own:): Ok, so now you have changed the default project namespace. However, all of the files included in the project when you cloned it still contain the old namespace, For example, if we open the AccountContoller file, we see the following: AccountContoller. namespace AspNetRoleBasedSecurity.Controllers AspNetRoleBasedSecurity: Up to this point, everything in our project still works fine (or, it should, anyway). The project should build and run properly. However, there is still the issue of the project and solution directory folders, and this is where the trouble usually starts. "AspNetRoleBasedSecurity\MyNewProjectName.csproj" "MyNewProjectName\MyNewProjectName.csproj" This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) Quote:Up to this point, everything in our project still works fine (or, it should, anyway). The project should build and run properly. Quote. General News Suggestion Question Bug Answer Joke Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
http://www.codeproject.com/script/Articles/View.aspx?aid=697108
CC-MAIN-2015-32
refinedweb
251
51.48
SerialException: could not open port 'COM3' WindowsError(5, 'Access is denied') I've been trying to send a trigger through a CEDRUS StimTracker Duo. The information on here about using serial ports seemed fairly straight forward, but implementing the scripts has not worked for me despite many modifications to the script. My current setup looks as follows, This inline script is at the very beginning of the experiment: import serial ser = serial.Serial() ser.baudrate = 115200 ser.port = 'COM3' ser.open() exp.ser = serial.Serial('COM3', baudrate=115200, bytesize=serial.EIGHTBITS) exp.ser.write(chr(1)) exp.cleanup_functions.append(exp.ser.close) The following inline script is put in during the practice_sequence and is meant to send the trigger when the stimulus is presented: exp.ser.write(chr(1)) Running this gives me the error in the title. I have also thought that perhaps I need to implement the sample python code provided by Cedrus, but this gives me an even more complicated error. running the following script: The line that seems to be giving me trouble is dev = devices[0] as I get the error message IndexError: list index out of range. If anyone has any idea as to how I can approach these errors I'd appreciate it as google hasn't given me a great solution yet. Hi Flint, I suspect that both errors (from serial. Do you need to install drivers for it? Is there some hardware issue with the device or the cable? Cheers! Sebastiaan There's much bigger issues in the world, I know. But I first have to take care of the world I know. cogsci.nl/smathot Hello Sebastiaan, Thank you for taking the time to respond. I have checked device manager and made sure that COM3 is the StimTracker Duo so that is not the problem, it is under Ports as USB Serial Port(COM3). However, there is a USB driver that CEDRUS links to in their website and I have also already downloaded that. Perhaps the problem is where I have placed the pyxid folder? 
after downloading the file from github I placed the pyxid-master folder into the same folder that holds OpenSesame. I am not sure if this was the correct protocol. Similarly, I have also downloaded and installed pyserial-master from github and placed it into the same folder containing OpenSesame. At fist the problem was that it could not find pyxid so I copied the folder inside of the pyxid-master folder, called pyxid and brought it up to the main OpenSesame folder, as that did not work I then attempted to rename pyxid-master to simply pyxid, but that too gave me the same error of list index out of range. I have tried running a simpler script (such as the one described here:) for activating a serial port and sending triggers, and while the experiment ran just fine, the triggers were not sent so I do believe the sample python code provided by CEDRUS will be essential for getting the triggers to send. After speaking to someone from NIRx they gave us a sample code that works with psychopy, but I figured it might work for OpenSesame as well so I gave it a try. import pyxid import time devices = pyxid.get_xid_devices() assert len(devices) > 0 d = devices[0] d.init_device() d.set_pulse_duration(1000) Then to actually send the triggers it would be: d.activate_line(lines=[6]) Unfortunately, I get an Assertion Error for assert len(devices) > 0. If I delete assert, then I get and error with d = devices[0] and once again get the list index out of range error. Using a simpler script that doesn't use pyxid, I have tried your suggestion from a different thread where you suggest printing something to the debug window after sending the trigger. exp.trigger.write(chr(1)) print 'I just sent trigger 1! (or did I ... ?)' The problem I run into here is that I get a NameError: name 'exp' is not defined. I am sorry if this is a bit too long I just wanted to give as much information as possible, but I truly appreciate your help. 
Best, FlintMarco _Bob, I uninstalled and reinstalled the driver and deleted and re downloaded the pyxid package. Using the 'init' script you suggested: import serial var.experiment.serial_port = serial.Serial('COM3', baudrate=115200, bytesize=serial.EIGHTBITS) var.experiment.cleanup_functions.append(var.serial_port.close()) 'send trigger script': var.experiment.serial_port.write(chr(1)) Here is where I ran into the error: SerialException: Attempting to use a port that is not open If I remove the line "var.experiment.cleanup_functions.append(var.serial_port.close())" and place it as inline script at the end of the experiment, then the whole experiment will run but will give the error: TypeError: 'NoneType' object is not callable. Triggers were not sent in this situation either even though it ran the whole experiment. Now if I try to implement the pyxid script above import serial I initially got the error: UnicodeDecodeError: 'ascii' codec can't decode byte 0xe8 in position 0: ordinal not in range(128) But, every time after that I receive the same error: list index out of range. I suspect something in my laptop is giving me issues so I have decided to try downloading OpenSesame on a different computer in which I have successfully sent triggers using E-Prime. If I continue having the same issues it will definitely be an error in my code. I will update immediately if I have success using a different computer. I would still appreciate any suggestions you may have regarding my latest error codes. Many thanks, FlintMarco Unfortunately, switching computers did not make a difference as I continued to get the same error codes. I even accounted for the switch in COM ports ('COM2' in this case) in this other computer by using device manager. I tried many iterations of my script, but each had the same errors as before. 
Mainly: UnicodeDecodeError: 'ascii' codec can't decode byte 0xf0 in position 0: ordinal not in range(128) UnicodeDecodeError: 'ascii' codec can't decode byte 0xf0 in position 1: ordinal not in range(128) SerialException: could not open port 'COM2': WindowsError(5, 'Access is denied.') SerialException: Attempting to use a port that is not open I have included a few screenshots if they are any help visualizing my experiment: The UnicodeDecodeError is exclusive to the import pyxid scripts. SerialException is exclusive to import serial scripts. Hi FlintMarco, There are several issues mixed up here. Rather than posting them all here without any further information, let's tackle them one at a time. This: Doesn't work, because you're calling var.serial_port.close(), and appending the return value of that function to the cleanup_functionslist. Do you see how this works? What you want to do instead is append the function itself to the cleanup_functions, without calling ( ()) it: So let's start by fixing this. Then you'll probably run into some issue. Post this issue here, including the full traceback that you get, and the part of the script that triggers the error. Baby steps! Cheers! Sebastiaan There's much bigger issues in the world, I know. But I first have to take care of the world I know. cogsci.nl/smathot Hey Sebastiaan, I see what you're saying. I have now modified the script to append the function itself just like your example. The send_trigger_script inserted into the practice_sequence1 looks as follows: var.experiment.serial_port.write(chr(1)) The experiment runs without any errors, but it does not send any triggers at all. If I unplug the StimTracker and try to run the experiment I get the error: SerialException: could not open port 'COM3': WindowsError(2, 'The system cannot find the file specified.') This would mean that OpenSesame is aware of the StimTracker, but something must be missing in my code because the trigger isn't being sent. 
Attached is an Image with the traceback: Perhaps I have made an error with my "Run if" statements in the practice sequence? I will attach a screenshot of this as well, I have it set to run the send trigger script when the stimulus is heard: Best, FlintMarco Hi FlintMarco, Let's think about how to debug this issue: The most obvious possibility is that the trigger code is simply not executed. So how could you find this out? print("Sending trigger!")line to the code that also sends a trigger. This will print "Sending trigger!" to the debug window, allowing you to see whether this piece of code is executed. Do you see what I mean? Try to systematically pin down where the problem lies, and then you'll figure it out! Cheers, Sebastiaan There's much bigger issues in the world, I know. But I first have to take care of the world I know. cogsci.nl/smathot Hi Sebastiaan, As always, thank you for your help. I added the print("Sending trigger!") line as you suggested and interestingly enough it would seem the trigger code is sent: Unfortunately, that does not appear to be the case as the StimTracker should light up each time a trigger is sent and so should the m-pod. The other computer shows no records of receiving the event markers either. I will continue to try and systematically pin down where the problem lies as you have suggested and will update if I have any more helpful information. Best, FlintMarco Looking at the set up for E-Prime the code seems to have many settings and intricacies to it. I figured perhaps there is something I can extract from it to help me activate the triggers using OpenSesame. The following is E-Basic script for sending triggers: Even though it is not in Python I figured it wouldn't hurt to ask for advice since I am currently at a standstill with my current set up. The experiment functions as normal and indicates triggers are being sent, but there is no recorded evidence that this is the case. I must still be missing something for this to work. 
One thing that comes to mind is that the triggers are only registered when a change occurs. With a parallel port that's certainly the case. With a serial it doesn't need to be, but perhaps they mimic that behavior. So what happens if you do: Does that register? There's much bigger issues in the world, I know. But I first have to take care of the world I know. cogsci.nl/smathot Hey Sebastiaan, I tried your suggestion two different ways. First with the latest set up of: import serial var.experiment.serial_port = serial.Serial('COM3', baudrate=115200, bytesize=serial.EIGHTBITS) var.experiment.cleanup_functions.append(var.serial_port.close) Then I added your script: and received the following error: Attempting to use a port that is not open. I then went back to an earlier version of the set up script that seemed a bit more fitting: And it ran the whole experiment and it printed that it sent the triggers in the debug window, but again did not send any triggers: It seems I am in a bit of a tricky situation and feel I have exhausted my resources. I feel like it is definitely possible to use this piece of hardware to send triggers through OpenSesame I just don't know what I could possibly be missing besides that pyxid script. Thank you for taking the time to help me through this difficult situation even if all we hit are dead ends. I will continue trying different combinations and hope something works. Best, FlintMarco Hi, After taking a closer look at the E-Prime and pyxidcode, I'd say that the stimtracker uses a specific protocol. That is, you cannot send random bytes and expect the device to respond to that, but rather you need to send a specific byte sequence that forms a command that the stimtracker responds to. Does that make sense? This protocol is no doubt documented somewhere by Cedrus. Alternatively, you can try to figure out what they're doing in pyxid: Cheers! Sebastiaan There's much bigger issues in the world, I know. 
But I first have to take care of the world I know. cogsci.nl/smathot Hello, I see what you mean about a specific byte sequence that forms a command. I found an interesting excerpt from the link you provided.: Try to determine if we have a valid packet. Our options are limited; here is what we look for: a. The first byte must be the letter 'k' b. Bits 0-3 of the second byte indicate the port number. Lumina and RB-x30 models use only bits 0 and 1; SV-1 uses only bits 1 and 2. We check that the two remaining bits are zero. c. The remaining four bytes provide the reaction time. Here, we'll assume that the RT will never exceed 4.66 hours :-) and verify that the last byte is set to 0. Refer to: The hardware they are referring to here are two response pads and a voice key; therefore I am not sure if the byte sequence would be similar for the StimTracker. The link they provide is not found, but: seems to be similar. Since we are looking for a specific byte sequence I remembered using RealTerm to test the triggers by sending Event Codes myself and it worked just fine in that setting. The way they set it up: A pulse can be sent using the mh command. 255 different event codes can be produced by sending pulses on up to 8 lines at the same time. Using mh is slightly more complicated than using _d1 because it involves sending a binary value to indicate which of lines the pulse should be sent on. In this context, “binary value” means that the byte of information cannot be easily represented by a text character. As described in the StimTracker Software Commands page, the entire command consists of four bytes: •Letter m, which corresponds to number 109 (its ACSII value) •Letter h, which corresponds to number 104 •A number indicating which lines we want to send the pulse on. For testing purposes, we will send value 255 to send a pulse on all eight lines. This makes it easier to see things on your EEG, eye tracker, or other recording software. 
•The last number is reserved for future use and should always be 0. Continuing where we left off in RealTerm’s Send tab: •Type “109 104 255 0” in the edit field shown. Looking back at the E-Prime code I can see that the numbers 109 and 104 once again had important roles for sending the event codes. The problem is I don't really have any idea how I would incorporate this into the inline scripts on OpenSesame. Would you happen to have any suggestions? Thank you for taking the time to respond and providing helpful advice. Best, FlintMarco Hi, Python's serialmodule expects byte strings: strings of characters that each reflects a single byte. The chr()function will convert a byte value to its corresponding character. So sending 109 104 255 0 would be: Cheers! Sebastiaan There's much bigger issues in the world, I know. But I first have to take care of the world I know. cogsci.nl/smathot Sebastiaan you magnificent genius! I have successfully gotten OpenSesame to send a trigger! At the moment it only sends one that is constant throughout the experiment, but I had a similar issue when I first started working on E-Prime and It's only a matter of making a few changes to the experiment and adding some inline script. The important thing is that we have finally gotten OpenSesame to send a trigger with the StimTracker! If I run in to any roadblocks I'll be sure to ask again, but at this point we are past the toughest challenge and it should be a lot easier moving forward. Once I have it functioning smoothly I'll update this thread with some screenshots so that it can help other users in similar situations. Many thanks, Flint Marco I have now successfully set up my experiment to send triggers when the stimuli is presented; in my case it is audio. I will post some screenshots to hopefully make it easier for anyone who works with a CEDRUS StimTracker Duo to set up. 
First, you should begin the experiment with an inline script at the top that sets up the serial device that will be used to send triggers during the experiment. In my case I renamed it serial_setup, and it looks like this: Next, I will show you what my specific block loop looks like for my experiment. Yours will look different, but it is what will be used when you want to send the trigger. In my case I want it sent each time a Mandarin tone is presented in the practice sequence: Now we will take a look at the send_trigger_script. It is fairly simple and includes the byte sequence that the StimTracker Duo is able to recognize. It should be noted that when this event code is sent, it will show up as the number "15" on the receiving computer. While this is fine for my experiment, since I just want to receive an event marker when one stimulus is presented, other experiments requiring more than one event marker will likely require you to change the number "255" in the script (likely to a number between 1-255). I have not yet tested this, as I am still finishing up my project. One send_trigger_script should be placed in every sequence, for every trigger you wish to send. I use unlinked copies to expedite the process: If we look at the sequence in which the send_trigger_script is placed, you will see I have them synced to send at the same time the audio is heard: In my pretest sequence I have pairs of words grouped for counterbalancing, but I will show you how I use the send_trigger_script within them so that you can have a clearer understanding: Finally, I just want to note that you might have seen an inline script in my experiment called shuffle_script and wondered if it has anything to do with the triggers; it does not. I have used that script to shuffle the blocks so that they are presented in a randomized order for counterbalancing purposes.
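The setup and trigger scripts themselves appear only as screenshots above. As a rough, hypothetical sketch of the same idea, the sending step can be isolated in a small function; here a BytesIO buffer stands in for the serial.Serial object so the bytes going over the wire are visible. The names and port handling are illustrative, not the poster's actual scripts:

```python
import io

def send_trigger(port, lines):
    """Send an 'mh' pulse on the output lines given by the bitmask
    (1-255); 255 pulses all eight lines at once."""
    port.write(bytes([109, 104, lines, 0]))

# Stand-in for the serial.Serial instance opened in serial_setup:
buf = io.BytesIO()
send_trigger(buf, 0b00000011)  # pulse lines 1 and 2
print(buf.getvalue())          # b'mh\x03\x00'
```

Changing the `lines` argument per sequence is how different event codes would be produced, as the post above suggests.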
If you wish to find out more about this, you can do so by clicking on this discussion: There is likely a way to make the trigger-sending process more efficient and cleaner without having so much inline script, but for the time being this serves its purpose and isn't too difficult to set up. Special thanks to Sebastiaan and _Bob for providing insightful comments throughout the process. Best, FlintMarco Hi, as suggested, guys, I need your help. I'm trying to control my Arduino using Python in Proteus. Below is the error that I get:
Python 2.7.15 (v2.7.15:ca079a3ea3, Apr 30 2018, 16:30:26) [MSC v.1500 64 bit (AMD64)] on win32
Type "copyright", "credits" or "license()" for more information.
>>> import serial
>>> p=serial.Serial()
>>> p.port=a
>>> p.timeout=2
>>> p.open()
Traceback (most recent call last):
  File "<pyshell#5>", line 1, in <module>
    p.open()
  File "C:\Python27\lib\site-packages\serial\serialwin32.py", line 62, in open
    raise SerialException("could not open port {!r}: {!r}".format(self.portstr, ctypes.WinError()))
SerialException: could not open port '1': WindowsError(2, 'The system cannot find the file specified.')
>>>
Maybe I am missing something, but this doesn't seem to be an OpenSesame-specific issue, right? You are probably better off posting this on Stack Overflow or some other place. Your chances are higher that you'll encounter someone who can actually help. Good luck, Eduard
http://forum.cogsci.nl/discussion/comment/14190/
I am aware of the namespace problem... will fix it shortly and upload the new code. For now, just remove the 'IB.' prefix. Hi Jev, is it possible to give me an example of how to execute the positions via your API? Many thanks. Jev, at a quick glance this seems excellent, since I last saw Max Dama's stuff some years ago. Now, with Windows 7's 64-bit system, are there a lot of changes needed? Can I just use your code as is? Thanks again. Thanks! Really nice! Some modification could be done to make it more flexible. I, for example, introduced a loop to acquire more data. Anyhow, great program!
http://www.mathworks.com/matlabcentral/fileexchange/authors/113441
Welcome to my State Design Pattern Tutorial! I explain how the state pattern is used by simulating an ATM. I explain how you decide on the different states, and then show you how to design the interface that each state will use. We then think about the methods that are needed by every class that implements the interface. All the steps are looked at from many directions, and the code below will fill in the gaps. Share if you think others may like it.

Code From the Video

ATMSTATE.JAVA

public interface ATMState {
    // Different states expected:
    // HasCard, NoCard, HasPin, NoCash

    void insertCard();
    void ejectCard();
    void insertPin(int pinEntered);
    void requestCash(int cashToWithdraw);
}

ATMMACHINE.JAVA

public class ATMMachine {

    ATMState hasCard;
    ATMState noCard;
    ATMState hasCorrectPin;
    ATMState atmOutOfMoney;

    ATMState atmState;

    int cashInMachine = 2000;
    boolean correctPinEntered = false;

    public ATMMachine() {
        hasCard = new HasCard(this);
        noCard = new NoCard(this);
        hasCorrectPin = new HasPin(this);
        atmOutOfMoney = new NoCash(this);

        atmState = noCard;

        if (cashInMachine < 0) {
            atmState = atmOutOfMoney;
        }
    }

    void setATMState(ATMState newATMState) {
        atmState = newATMState;
    }

    public void setCashInMachine(int newCashInMachine) {
        cashInMachine = newCashInMachine;
    }

    public void insertCard() {
        atmState.insertCard();
    }

    public void ejectCard() {
        atmState.ejectCard();
    }

    public void requestCash(int cashToWithdraw) {
        atmState.requestCash(cashToWithdraw);
    }

    public void insertPin(int pinEntered) {
        atmState.insertPin(pinEntered);
    }

    // State getters, used by the state classes to trigger transitions
    public ATMState getYesCardState() { return hasCard; }
    public ATMState getNoCardState() { return noCard; }
    public ATMState getHasPin() { return hasCorrectPin; }
    public ATMState getNoCashState() { return atmOutOfMoney; }
}

HASCARD.JAVA

public class HasCard implements ATMState {

    ATMMachine atmMachine;

    public HasCard(ATMMachine newATMMachine) {
        atmMachine = newATMMachine;
    }

    public void insertCard() {
        System.out.println("You can only insert one card at a time");
    }

    public void ejectCard() {
        System.out.println("Your card is ejected");
        atmMachine.setATMState(atmMachine.getNoCardState());
    }

    public void requestCash(int cashToWithdraw) {
        System.out.println("You have not entered your PIN");
    }

    public void insertPin(int pinEntered) {
        if (pinEntered == 1234) {
            System.out.println("You entered the correct PIN");
            atmMachine.correctPinEntered = true;
            atmMachine.setATMState(atmMachine.getHasPin());
        } else {
            System.out.println("You entered the wrong PIN");
            atmMachine.correctPinEntered = false;
            System.out.println("Your card is ejected");
            atmMachine.setATMState(atmMachine.getNoCardState());
        }
    }
}

NOCARD.JAVA

public class NoCard implements ATMState {

    ATMMachine atmMachine;

    public NoCard(ATMMachine newATMMachine) {
        atmMachine = newATMMachine;
    }

    public void insertCard() {
        System.out.println("Please enter your pin");
        atmMachine.setATMState(atmMachine.getYesCardState());
    }

    public void ejectCard() {
        System.out.println("You didn't enter a card");
    }

    public void requestCash(int cashToWithdraw) {
        System.out.println("You have not entered your card");
    }

    public void insertPin(int pinEntered) {
        System.out.println("You have not entered your card");
    }
}

HASPIN.JAVA

public class HasPin implements ATMState {

    ATMMachine atmMachine;

    public HasPin(ATMMachine newATMMachine) {
        atmMachine = newATMMachine;
    }

    public void insertCard() {
        System.out.println("You already entered a card");
    }

    public void ejectCard() {
        System.out.println("Your card is ejected");
        atmMachine.setATMState(atmMachine.getNoCardState());
    }

    public void requestCash(int cashToWithdraw) {
        if (cashToWithdraw > atmMachine.cashInMachine) {
            System.out.println("You don't have that much cash available");
            System.out.println("Your card is ejected");
            atmMachine.setATMState(atmMachine.getNoCardState());
        } else {
            System.out.println(cashToWithdraw + " is provided by the machine");
            atmMachine.setCashInMachine(atmMachine.cashInMachine - cashToWithdraw);
            System.out.println("Your card is ejected");
            atmMachine.setATMState(atmMachine.getNoCardState());

            if (atmMachine.cashInMachine <= 0) {
                atmMachine.setATMState(atmMachine.getNoCashState());
            }
        }
    }

    public void insertPin(int pinEntered) {
        System.out.println("You already entered a PIN");
    }
}

NOCASH.JAVA

public class NoCash implements ATMState {

    ATMMachine atmMachine;

    public NoCash(ATMMachine newATMMachine) {
        atmMachine = newATMMachine;
    }

    public void insertCard() {
        System.out.println("We don't have any money");
        System.out.println("Your card is ejected");
    }

    public void ejectCard() {
        System.out.println("We don't have any money");
        System.out.println("There is no card to eject");
    }

    public void requestCash(int cashToWithdraw) {
        System.out.println("We don't have any money");
    }

    public void insertPin(int pinEntered) {
        System.out.println("We don't have any money");
    }
}

Hi, how are you? Could you please tell me how to add a Twitter account for the Twitter stream? Sorry, but I need more information. What exactly are you trying to do? It’s nice to see you back again 🙂 My website isn’t very popular, but I enjoy it with the few in my community 🙂 Great things are rarely appreciated; just the nature of the beast. The site is awesome. The game I’m trying to make uses states, so I wanted to understand them better. So I watched this twice. Good vid! I should have said fully appreciated. There definitely seems to be a lot of people here that do. Just find it hard to believe this site isn’t popular, that’s all. I was wondering if you could touch on hierarchical state machines if you get time. Thank you 🙂 Yes, my site isn’t that popular, but that is ok. I like the people that find me. I’ll be covering design patterns again in my refactoring tutorial. It is coming up after I finish with Object Oriented Design. Thank you 🙂 I enjoyed covering design patterns and I’ll revisit them in my refactoring tutorial, which should be starting very soon. Thanks for stopping by my site. Ok, thanks…. I found what I was doing wrong… just a simple mistake… I thought maybe I needed to use a different approach to state design because the lib I'm using works with them, but really I just needed to look closer.
So basically you're awesome and your code is bulletproof; I just forgot an important statement, heh… thanks as always. Thank you very much for the compliment 🙂 I do my best. Great! I’m glad you got it working. I figured it was just a simple error. Hello, nice tutorial. I watched some of your previous tutorials, and after reading about the state design pattern, it came to my mind that we could include the singleton design pattern in ATMMachine, so that no more than one instance of ATMMachine() is created. This is also true in real-world ATM applications. Is my thinking correct, or am I thinking too much?? 😛 😉 That is a good point. I try to focus on just one subject at a time so I don’t confuse people. Yeah, it is good to keep stuff simple and easy to understand. My thought on this came up after watching your previous tutorial. So this shows that your previous tutorial is helping me in understanding and following different approaches… which is great 🙂 :). Thank you 🙂 Stick with it and it will all make sense very soon. I’m glad you are finding the videos useful. Sir, thanks a lot for the great work you are doing. The way you are teaching is very easy to understand. I don’t have much money to go and learn Java outside; you are saving me lots and lots of money. You are very, very welcome 🙂 I as well didn’t have money to attend college, and I made this site for people like you and me. I’m extremely happy to be able to provide a free education to the best of my ability. Hi, thanks for the great tutorial. I am making an ATM simulation. I am also using a UML diagram. Now that I have a class diagram, I wanted to use the state pattern for it, but my coding is completely different from yours. Can you help me? You’re welcome 🙂 This isn’t by any means the only way to create an ATM simulator. I’m not the type of person that says if you don’t do it my way you are wrong. If your way works, then go with that.
Feel free to ask questions, but if your code works and it is understandable, then it is good in my opinion. Thanks for the reply. Yes, it works, but I can’t figure out a way to use the state pattern in my UML class or my coding. That is my problem, because I have totally different classes, but I will still try to make it work. I love every single video that you have… so clear… thank you. You are my hero! 🙂 Thank you 🙂 Very nice… you are the real hero… I am requesting you: can you upload web services please… thank you…. Thank you 🙂 I am planning a web services tutorial now. I’ll get it up ASAP. You have always been providing in-depth and concise information through very simple demos. I really appreciate your good work on design patterns. Kindly keep this good work up and running, as you are obviously helping less knowledgeable people like me 🙂 Thanks again. Thank you very much 🙂 I do my best to make everything understandable and interesting. Those are my main goals. I also struggled with the same bad textbooks that you probably have seen. I always wanted to do whatever I could to make these topics understandable. Nice clean code, very good example…. If I were to adapt the structure for something else, what do you feel would be the best way to incorporate something like a sub-state in one of those states: say, a Counting state that increments until it hits a target, then changes to an Execute state (which allows you to do something) and then reverts back to Counting until it reaches a target again? Thanks 🙂 Also, the Execute state does not prevent other items from counting, so several items count in parallel but cannot do anything until they reach the target… but the user could choose to do nothing on one of the items, so a later item could reach its target and the user could choose to do something with it before an item which hit its target earlier… if that makes sense 🙂 Great job! Clear and useful 🙂 Thank you 🙂 Hi!!
I’m a C++ programmer, but your video tutorials for patterns are the best I’ve ever seen so far, so I’ve been following them, and I review them whenever I need to implement a pattern (in C++). Your explanations and source code are clean and easy to digest; however, I think an FSM diagram is needed in order to see when and why a transition is needed. Of course, reading the source code gives that information, but you know, an image says more than a thousand words. Thank you and keep going!! Hi Xavier, thank you 🙂 It is nice to know that these tutorials are helping people, especially if you like them enough to use as a reference. That is very cool. Yes, I should go in and see if I can add anything to improve them. I’ll set some time aside soon. Derek Hi, I’m studying CS at a Finnish uni, and took a design patterns course. After banging my head against the wall with the aid of 3-4 books (a bit like this: ) I discovered your tutorials, and I am more than glad for this coincidence. Writing the code alongside your tutorials is the best learning routine I’ve come up with so far. It shortened the learning experience of getting from theory to practice from 2-10 hrs with a book to 15-30 mins with your tutorial. It’s such a good method, I’m undecided on whether I should tell everyone about this or keep it my secret for the time being 😉 To others that might be reading this: GoF’s book, Head First Design Patterns, and maybe Java Design Patterns vol. 1, together with Derek’s tutorials, is a guaranteed way to go about these patterns, in case the tutorials left any questions. Just sharing.. A huge thanks to you! I’ll be checking your UML, refactoring, and Android dev tutorials next. -r Hi, thank you for all of the nice compliments 🙂 I very much appreciate them. I did my best to translate the GoF book into a series of easy-to-understand videos. I struggled with them as well many years ago.
I wish you all the best, Derek. Thank you very much… very nice tutorial on design patterns. You’re very welcome 🙂 Derek, thank you for all of the good work you put into creating these videos. Could you please provide a UML class diagram for this video? I don’t know how I can show ATMState hasCard, noCard, hasCorrectPin, atmOutOfMoney in the ATMMachine class. You’re very welcome 🙂 Sorry, but I don’t have one. I have a class diagram tutorial here that may help.
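For readers who want the mechanics of the pattern distilled outside Java, here is a minimal, hypothetical Python sketch covering just two of the tutorial's four states; the names are illustrative, not part of the tutorial's code. The context (the ATM) delegates every call to its current state object, and the state objects swap themselves out:

```python
class NoCard:
    def insert_card(self, atm):
        print("Please enter your pin")
        atm.state = HasCard()

    def eject_card(self, atm):
        print("You didn't enter a card")

class HasCard:
    def insert_card(self, atm):
        print("You can only insert one card at a time")

    def eject_card(self, atm):
        print("Your card is ejected")
        atm.state = NoCard()

class ATM:
    """Context: delegates to whichever state object is current."""
    def __init__(self):
        self.state = NoCard()

    def insert_card(self):
        self.state.insert_card(self)

    def eject_card(self):
        self.state.eject_card(self)

atm = ATM()
atm.insert_card()  # Please enter your pin
atm.eject_card()   # Your card is ejected
```

The key point is the same as in the Java version: the context never branches on a state flag; behavior changes because the state object itself changes.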
http://www.newthinktank.com/2012/10/state-design-pattern-tutorial/?replytocom=24451
In this chapter, we will cover the following topics:
•Installing the Java Development Kit (JDK)
•Installing Eclipse for Java
•Installing the Google Web Toolkit (GWT) SDK
•Installing all relevant Eclipse plugins, such as Gradle, Android Development Tool (ADT), Google App Engine (GAE), GWT, and RoboVM
•Using the LibGDX Gradle setup app to create our first LibGDX project
•Exploring LibGDX demos and tests
Hope I didn't scare you with all those abbreviations. The majority of what we will do in this chapter only needs to be done once, as it is required to set up your development environment. Once it is done, we can forget about it and focus on creating wonderful games. I will be using a Mac-based development setup throughout this book, but the process is similar for Windows as well. The added benefit that Mac provides is the ability to deploy to iOS as well. In the first part, we will set up a Java development environment, which will be very simple for all our Java developers. This can be accomplished by setting up the Eclipse, IntelliJ, or NetBeans IDEs, but we will be focusing on Eclipse throughout this book. This does not mean that this way is superior to the others; we use it simply because Eclipse is the most widely used Java IDE, and the majority of our potential readers will have worked with Eclipse one way or another. Note If you need to use another IDE, please check the LibGDX wiki page at. Our first step will be to install the Java Development Kit (JDK) if the development PC does not have it. For all of the tools, applications, and SDKs that we will use, there are different versions for Windows and Mac. Your browser will automatically take you to the relevant download in most cases, but be sure to double-check that you are indeed downloading the right version. Also, there are different versions for 32-bit and 64-bit processors; make sure that you download the right one for your development PC. You can download the latest version of the JDK from Oracle's site at.
At the time of writing this book, the latest version is JDK 8u5, and the following screenshot shows how the different versions are listed: Once you download the relevant file, go ahead and install it on your machine. Now, your machine is ready to be used for Java-based application development. On a Windows machine, you may need to set the value of the environment variable JAVA_HOME to the installation path of the JDK after installation. We can check this by running the following command in the command prompt:
C:\Users\admin>java -version
If the system displays the version details as follows, then Java is installed correctly:
java version "1.7.0_25"
Java(TM) SE Runtime Environment (build 1.7.0_25-b17)
Java HotSpot(TM) 64-Bit Server VM (build 23.25-b01, mixed mode)
Otherwise, we need to set up the Java environment variable. To find the correct path, go to C:\Program Files (x86)\Java\. You should see a folder with jdk in its name. The complete path should be similar to the following one:
C:\Program Files (x86)\Java\jdk1.8.0_05
Follow the given steps on a 64-bit Windows machine with 32-bit Java installed to set the environment variable:
•Click on the Windows Start button, right-click on Computer, select Properties, and open the Control Panel system window.
•Click on Advanced system settings on the left-hand side of the Control Panel window to open the System Properties window.
•Next, click on the Environment Variables… button and then click on the New... button at the top that corresponds to User variables for <USERNAME>.
•A window with the title New User Variable will appear. Now, fill in the two text fields. Use JAVA_HOME in the Variable name field and the JDK path you found earlier in the Variable value field.
Next, we will need to install our IDE of choice: Eclipse. The latest version of Eclipse can be found at. We need to select Eclipse for Java developers from the multitude of flavors in which it is available.
At the time of writing, the latest Eclipse build is Luna, as shown in the following screenshot: Once you have downloaded the relevant IDE, go ahead and install it on your machine. Now, we are all set to write and compile the Java code. It's time to set up the Android development environment. To use our Eclipse IDE for Android development, we need to install two things: the Android SDK and the Android ADT plugin. Android SDK can be downloaded as a compressed archive from. It will be listed under GET THE SDK FOR AN EXISTING IDE. The archive can be extracted to a location on your hard drive which needs to be linked from within Eclipse. The following screenshot shows the download page where you can see a link to download Eclipse ADT, which is a complete package for Android development. If you are setting up Eclipse for the first time, then downloading Eclipse ADT is the way to go; however, in this chapter, we are assuming that Eclipse is already installed. The ADT plugin connects with our Android SDK and keeps it up to date. It also has the SDK Manager and Android Virtual Devices that are used to emulate Android devices. We need to fire up Eclipse so that we can install the plugin. For those who are not aware of the process of installing plugins in Eclipse, the following screenshot will help. We need to select Install New Software... from the Help section. In the new window that pops up, enter the URL in the section that says type or select a site and press Enter. We need to select all available items and proceed with the installation. Once the plugin is successfully installed, we need to restart Eclipse. After restarting, Eclipse will ask you for the Android SDK location; you can also set it up by navigating to Window | Preferences | Android. Once the Android SDK location is set, the SDK Manager will check our installation to find missing items. It will prompt you to download the latest Android SDK platform and Android platform tools. 
Check out the selected items in the Download dialog box. We need to have the recommended Android build tools and SDK platform. It is safe to not alter the recommended setting and allow the downloader of Eclipse to download all the required files. The following screenshot shows the update in progress: Now we are all set to develop Android applications. LibGDX uses GWT plugin to publish HTML5/JavaScript, which is the web platform. GWT includes the GWT SDK and Google App Engine. The other support plugins that we need are RoboVM and Gradle. RoboVM is used to compile the LibGDX project on the iOS platform. Gradle is a dependency management and build system that wires our LibGDX game together. From within Eclipse, launch the Install New Software... window, input the following link, and install the GWT plugin: The last part of the link is actually the Eclipse version and the preceding link is for Version 4.4.x. Select the checkboxes to install Google Plugin for Eclipse, Google App Engine SDK, and Google Web Toolkit SDK. Once the plugin is installed, we need to restart Eclipse. iOS development is only possible if we are using a Mac machine for development. We will also need Xcode, the Mac IDE for support. RoboVM is the brainchild of Niklas Therning, a Swedish developer and co-founder of Trillian Mobile AB. RoboVM is used to make Java work on iOS via a Java to Objective-C bridge. RoboVM is open source and stable. Let's all take a few minutes to appreciate this wonderful effort by learning more about RoboVM (). Mac users can go ahead and install the RoboVM plugin from. After restarting, we are now all set to deploy our games on the Apple iOS platform as well. Gradle is a dependency management system and is an easy way to pull in third-party libraries into your project without having to store the libraries in your source tree. 
Instead, the dependency management system relies on a file in your source tree that specifies the names and versions of the libraries you need to include in your application. Adding, removing, and changing the version of a third-party library is as easy as changing a few lines in that configuration file. The dependency management system will pull in the libraries you specified from a central repository and store them in a directory outside of your project. Note If you want to read more about Gradle and get to know how it benefits the LibGDX setup, visit. Gradle also has a build system. More information can be found at. At this point, we are very comfortable adding new plugins to Eclipse. Fire up the new software window to add the Gradle plugin from the link. After installation, restart Eclipse, and that's the end of our setup. I know that you are wondering how we reached the end of our setup without installing anything related to LibGDX itself, while installing everything else out there. The LibGDX installation will be handled by Gradle automatically, and we will be using a helper application to create all the dependencies and the project structure. Such a LibGDX Gradle combo requires an Internet connection while creating the project, as Gradle needs to load all the necessary dependencies, files, and libraries on the fly while setting up the project for the first time. Personally, I am not a fan of this, as I come from a country where a good Internet connection is still a luxury. For those of you who are in a similar situation, the alternative is to download all dependencies and wire them all up as required. This is a complicated task, although we used to do it during the initial days of LibGDX. Let's start creating our first Gradle-based LibGDX project. We will be following the steps explained in the LibGDX wiki page, which can be found at. We need to use a Java application with the Gradle setup to create Gradle-based LibGDX projects. This setup file can be found at.
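The configuration file mentioned above is the build.gradle script in the project root. As a rough illustration of what the setup application generates for the core project, a dependency section looks something like the following; the exact plugin names and the gdxVersion variable vary by LibGDX version, so treat this as a sketch rather than a file to copy:

```groovy
project(":core") {
    apply plugin: "java"

    dependencies {
        // Pulled from the central repository at build time;
        // bumping gdxVersion is a one-line change.
        compile "com.badlogicgames.gdx:gdx:$gdxVersion"
    }
}
```

Changing the version of LibGDX, or adding an extension, amounts to editing these few lines and letting Gradle re-resolve the dependencies.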
Note A direct link to the setup application is. The setup application has to be stored for easy access, as we will need it when we create new LibGDX projects. The purpose of the application is to create all platform-specific Eclipse projects, such as desktop, Android, iOS, and JavaScript applications. It also links with all the necessary dependency libraries and the latest version of LibGDX. The following screenshot shows the Gradle project structure that clearly explains how the different projects are created under the relevant folders: Let's launch the gdx-setup.jar Gradle setup application. The following screenshot shows how I have populated the options for our Hello World project: We need to specify a name for our project, which is Hello World in this case. Then, we need to specify a package name for our project. In my case, I am using my company's package name but you can use any unique package name. It's time to specify our main class name; take care that there are no spaces in between. While setting the destination project, make a new folder within your Eclipse Workspace folder so that there are no possible conflicts between Gradle files and Eclipse workspace metadata files. In this case, I have specified FirstGradleProject. Link the Android SDK and select the version of LibGDX that you will use. In the Sub Projects section, we need to select the platforms that we want the project to target. Note that we will need a Mac to compile an iOS build. The Extensions section will link any of the standard LibGDX libraries or packages available. It is always safe to select the options in this stage rather than hacking into the project later on. For our Hello World app, we do not need any of these extensions. Hit the Generate button and your LibGDX Gradle project structure will be created. Alternatively, we can create an Eclipse-specific project structure by clicking on the Advanced button, enabling the Eclipse checkbox, and clicking on Save. 
If the project is created this way, then we need to import it as a normal Eclipse project by navigating to File | Import | General | Existing Projects to Workspace. This process will need an active Internet connection to load the files needed by the setup application. It's time to import our projects into Eclipse by navigating to File | Import | Gradle | Gradle Project. Browse the root folder, FirstGradleProject, and hit the Build Model button. This step is very important, as you may not see your projects without this. Gradle will download a few necessary files based on our project, and it will take a while before you see your projects listed as shown in the following screenshot: Go ahead and select all the projects and click on Finish to load them to Eclipse. At this point Gradle will load all the dependencies as per our selection in the setup application. This is where Gradle will actually load the LibGDX packages and extensions. The following screenshot shows the process: Everything should be wired properly and all projects are good to go at this point. One issue that may pop up is that the Android project may have a red cross indicating that the proper Android SDK is missing. This happens all the time, but the fix is straightforward. Right-click on the Android Project folder to select its properties. Then, select the proper Android target version under the Android section in the window that pops up. This will remove the error. Congratulations! You can now run your projects. The following screenshot shows how to remove the Android error: In order to run the desktop project, right-click on the Hello World desktop project, select Run As, and click on Java Application. A popup may ask you to select the application class and you should select DesktopLauncher to run the app. The following window will pop up, which means we have successfully created and run our first Gradle-based LibGDX application: Running the application on your Android device is also very easy. 
Connect your Android device to the development PC via USB and make sure USB debugging is enabled in the device's settings. In order to see the connected device from within Eclipse, you need to enable the Devices view. In Eclipse, go to Window | Show View | Other. Then, select Devices from the Android section. A new tab showing the connected device will be added to Eclipse. For Windows, we need to install the respective drivers for the connected phone to show up, but on Mac the connected device would be automatically detected. Once you see your device listed in the Devices view, right-click on the Hello World-android project, select Run as, and click on Android Application. Eclipse will prompt you to select how to launch the Android app. You can select to launch on a device or on a Android Virtual Device (AVD) if you have set one up already. Select to launch the application on the device and then the application should show up on your Android device. Eclipse will show the LogCat view, which shows the application status: In order to run the iOS project, you need to be on a Mac and should have Xcode Version 5 or above installed. Xcode can be downloaded free from the Mac App Store. It provides the necessary frameworks and the iOS Simulator tool. Right-click the Hello World-ios project, select Run As, and select iOS Simulator App. You can select either of the simulator options, iPad or iPhone. This will start the RoboVM cross compilation and would take some time for entire classes to be compiled. Eventually, you will see the app running on the iOS Simulator. Tip In some cases, you may run out of memory while RoboVM works on the compiling. You may need to increase the memory heap sizes for Eclipse in the eclipse.ini file, as follows: -Xms512m -Xmx2048m Getting the HTML project to run is the trickiest part of them all. We need to follow exactly what the LibGDX wiki tell us to do. 
Right-click on the Hello World-html project, select Run As, and click on External Tools Configuration. Create a new configuration by double-clicking the Program entry in the left sidebar. Give the configuration a name, for example, GWT SuperDev. Set the location field to the gradlew.bat (Windows) or gradlew (Linux, Mac) file. Set the working directory to the root folder of your project and specify html:superDev as the argument. Click on Apply and then click on Run. Wait until you see the message The code server is ready in the console view. After that, go to the URL. You can leave the server running. If you change the code, simply click on the SuperDev Refresh button in the browser. This will recompile your app and reload the site. Check out this screenshot to see the app running on a browser: That functions: SpriteBatch batch; Texture img; create (); render(); The SpriteBatch class is a class that facilitates the efficient drawing of images on the screen. The create function, which will be called when the application is launched, just creates a new SpriteBatch class and a Texture class named img that loads a texture from the external asset badlogic.jpg. The render function is called continuously and can be considered as our game loop. It clears the screen and draws the texture onto screen at the coordinates (0,0). This means the origin of the coordinate system in LibGDX is the bottom-left corner of the screen, as we can see that the image is placed there. We will revisit the graphics package in the next chapter. The following code clears the screen before each draw call: Gdx.gl.glClearColor(1, 0, 0, 1); Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT); The next bit of code starts the batch drawing, draws the texture at the specified coordinates, and closes the batch drawing: batch.begin(); batch.draw(img, 0, 0); batch.end(); The only thing missing is the external image file, as we won't be able to find it among the core project files. 
LibGDX follows the convention to store all asset files within the assets folder in the Android project. Hence, we can find badlogic.jpg within the assets folder in the Hello World-android project. The assets folder is shared among all the other projects. All the other projects are simple wrapper projects that have platform-specific code and data that will launch the code in the core project. You can easily explore the projects on your own, but we will revisit specific platforms in the final chapter. Let's alter the code to display our Hello World text thereby officially declaring the start of our quest of mastering LibGDX. The easiest way to do this will be to draw some text on the screen using the BitmapFont class. A bitmap font is created using an external tool like Hiero () where a font is converted into a bitmap with all the letters in a fixed size along with a .fnt file that stores the presentation data. To make things easier, let's use fonts from the LibGDX test project. You can download verdana39.fnt and verdana39.png from. As explained, we need to place these files in the assets folder within the Hello World-android project. Now, let's edit the code in HelloWorld.java in the core project to remove the texture drawing and to add our text display. 
The code is as follows:

public class HelloWorld extends ApplicationAdapter {
    SpriteBatch batch;
    BitmapFont font;

    @Override
    public void create () {
        batch = new SpriteBatch();
        font = new BitmapFont(Gdx.files.internal("verdana39.fnt"),
                Gdx.files.internal("verdana39.png"), false);
        font.setColor(Color.RED);
    }

    @Override
    public void render () {
        Gdx.gl.glClearColor(0, 0, 0, 1);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        batch.begin();
        font.draw(batch, "Hello World", 200, 200);
        batch.end();
    }

    @Override
    public void dispose() {
        batch.dispose();
        font.dispose();
    }
}

Tip Downloading the example code You can download the example code files for all Packt books you have purchased from your account at. If you purchased this book elsewhere, you can visit and register to have the files e-mailed directly to you. The BitmapFont class requires the .fnt and .png file to create the font. In the render function, we draw the Hello World text at the coordinates (200,200). I have also added an overridden dispose function that releases the memory by destroying the created instances. Congratulations! You have your shining Hello World text. This method may not always be suitable to display text, as we may need to show text in multiple sizes. In such cases, we will use the Freetype extension that enables us to use a ttf font to create bitmap fonts of different sizes dynamically at runtime. We will revisit fonts in detail in Chapter 4, Bring in the Extras! For the adventurous among you, this is the chance to dive deeper into LibGDX by checking out the fully functional game demos. You can find the links for these games at. The most interesting one is the example project similar to Flappy Bird, The plane that couldn't fly good. The link to this project is. In order to explore this project, we need to clone the repository in Git or download it on our PC. Once downloaded, we can import it to Eclipse as a Gradle project. Tip The local.properties file may be missing from the downloaded project's folder.
Just copy the file from our FirstGradleProject root folder. This file sets the Android SDK location on our PC. This is a very simple project to get you started and can be used as starter code for your Flappy Bird game. The whole code is in the single file PlaneGame.java in the core project. The LibGDX tests project is a treasure trove of goodies. There are numerous small tests exploring the different features of LibGDX. These tests are the ideal way to get started with LibGDX and they also let you learn new things in an easy way. Alternatively, they act as a source of functional sample code that we can simply copy and paste for our own use cases. The LibGDX wiki entry for running these tests advises us to use Ant to set up these tests on our PC. The link is. The tests can be found at. You can temporarily set up Ant after downloading it from. Assuming that it is extracted to /Setups/apache-ant-1.9.4, we can temporarily install Ant using the terminal on a Mac. In the terminal, enter the following code:

export ANT_HOME=/Setups/apache-ant-1.9.4
export PATH=${PATH}:${ANT_HOME}/bin

In the terminal, navigate to the folder where the LibGDX Git project is cloned. Run the Ant command to fetch dependencies:

ant -f fetch.xml

This will download all the LibGDX dependencies for all the projects. Later, we can import all these projects into Eclipse by navigating to Import | Existing Projects into Workspace. After importing the Demo and Tests projects, our Package Explorer in Eclipse will look something like this: The test examples are contained in the gdx-tests project. This project only contains the source code. To actually start the tests on the desktop, you have to run the LwjglTestStarter class contained in the gdx-tests-lwjgl project. To run the tests on Android, simply fire up the gdx-tests-android project in the emulator or on a connected device! To run the tests in your browser, fire up the gdx-tests-gwt project. For iOS, you can start the gdx-tests-robovm project.
Play with them. Previously, LibGDX had to be manually wired up and was a complicated process. Later, the French developer Aurelien Ribon created a gdx-setup-ui tool that automated this process. Aurelien Ribon is a rock star Java developer who has released lots of goodies that can be accessed on his blog at. This non-Gradle-based system still works but is not recommended or supported at present. The blog post with all the details of Version 3.0.0 can be found at. The gdx-setup-ui interface can be downloaded from. The app will need an Internet connection to download the LibGDX files; this screenshot shows it in action: This chapter explained a lot of theory that lays the solid foundation for all our development. This chapter will serve as a reference to set up your development environment if you move to a new workplace or get a new laptop. You learned to set up Eclipse-based Android and Java development environments. We used the gdx-setup tool to create a Gradle-based LibGDX project, which was successfully imported to Eclipse with the help of associated support plugins. We explored the project structure of a typical LibGDX cross-platform project. We successfully executed the Hello World project on desktop, Android, iOS, and the browser. Some minor editing of the code helped us display the text Hello World using the BitmapFont class. It is very important to successfully import the LibGDX Demo and Tests and we used Ant to get them running. In the next chapter, we will start with the graphics package in LibGDX and get started with our game.
https://www.packtpub.com/product/libgdx-game-development-essentials/9781784399290
OpenStack administration is documented in detail in the OpenStack Compute Administration Manual. In this section, we discuss key tasks you should perform that will allow you to launch instances. Refer to the official OpenStack documentation for more information. For these tasks, you must be logged in to the Dashboard as the admin user. These tasks can also be performed on the command line; some tasks require you to be logged into the controller via SSH, and some can be performed via python-novaclient on the controller or on a workstation. You should also be familiar with the material documented in "Accessing the Cloud". NOTE: Nova volumes are not supported in Rackspace Private Cloud. For block storage, refer to the instructions for configuring OpenStack Block Storage. For more information about downloading and creating additional images, refer to the OpenStack Virtual Machine Image Guide. Images can be added via the Horizon dashboard when logged in as the admin user. Currently only images accessible via http URL are supported, and the URL must be a valid and direct URL. Compressed image binaries in .zip and .tar.gz format are supported. Adding an image with the command line You can use glance image-create when logged into the controller node, or if you have Glance client installed on your local workstation and have configured your environment with administrative user access to the controller. In the following example, the user has a virtual disk image in qcow2 format stored on the local file system at /tmp/images/test-image.img. When the image is imported, it will be named "Test Image" and will be public to any Glance user with access to the controller. 
$ glance image-create --name "Test Image" --is-public true \
  --container-format=bare --disk-format qcow2 < /tmp/images/test-image.img

If the image is successfully added, Glance will return a confirmation similar to the following:

Added new image with ID: 85a0a926-d3e5-4a22-a062-f9c78ed7a2c0

More information is available via the command glance help add. AMI images can be added with the glance image-create command as described above. However, since AMI disk images do not include kernels, you will first need to upload a kernel to Glance, as in the following example.

$ glance image-create name="<kernelImageName>" is_public=true \
  --container_format=aki --disk_format=aki < <imagePath>

After the kernel has been uploaded, you can then upload the AMI image, specifying the Glance ID of the uploaded kernel with the kernel_id property.

$ glance image-create name="<imageName>" is_public=true \
  --container_format=ami --disk_format=ami \
  --property kernel_id=<kernelImageGlanceID> < <imagePath>

Rackspace has developed a Python-based tool that enables you to convert and upload single-disk Linux VMDK images. Multi-disk and Windows image conversions are not available at this time. This tool can be used on any workstation that supports Python. Before you begin, ensure that you have installed libguestfs, python-libguestfs, and python-hivex on your workstation. The conversion is performed with convert.py, which accepts the following options: <path>: path to the VMDK image that you want to convert. This option is required. <path>: the name of the qcow2 file that will be generated by the conversion process. <name>: the name that the image will be assigned in Glance, if you are using the -u/--upload option. <1-5>: debug level. Level 5 is verbose. When the conversion process is complete, you can upload the image to Glance (if you did not already enable automatic image uploading). By default, your cluster is installed with nova-network.
If you want to use OpenStack Networking, you must add a network node and enable OpenStack Networking as described in Adding OpenStack Networking. Currently, the Horizon dashboard does not permit robust network management. When you need to create a network or subnet, you should use the quantum commands. Refer to the procedures under Adding OpenStack Networking and to the OpenStack Networking Administration Guide for more information. You must create a project before you can launch an instance. A demo project is available by default, but if you want to create your own project, follow this procedure. You will be prompted to add users to the project; typically, when configuring your first project, these will be the admin user and the demo user that you created during the installation process (not to be confused with the operating system user). When prompted for a role for the user, you may wish to assign the admin role to the admin user and the member role to the demo user. Refer to the OpenStack Keystone documentation for information about customizing roles. Your project is now ready for additional configuration. Log out as the administrator and log in as the demo user before proceeding. When logged in, ensure that the project is selected in the navigation bar. Adding a project with the command line On the command line, projects are managed when logged in as root with keystone tenant-create. For example, to create a project named Marketing, you would use sudo -i to switch to root and execute the following command:

$ keystone tenant-create --name Marketing --enabled true

Keypairs provide secure authentication to an instance, and will enable you to create instances securely and to log into the instance afterward. Keypairs are generated separately for each project and assigned to instances at time of creation. You can create as many keypairs in a project as you like. When you create a keypair in the dashboard, it is downloaded to your workstation as a .pem file. Generating a keypair with the command line On the command line, keypairs are managed with nova keypair-* commands in python-novaclient.
When generating a keypair, you must have your OS_USERNAME and OS_TENANT_NAME configured in your environment to ensure that you have access to the correct project. Our user jdoe, after configuring their environment, would then issue the following command to generate a keypair:

$ nova keypair-add jdoe-keypair

The client will generate a block of RSA Private Key text, which the user copies and saves to a file called jdoe-keypair.pem. A Security Group is a named set of rules that are applied to the incoming packets for the instances. Packets that match the parameters of the rules are given access to the instance; all other packets are blocked. At minimum, you should ensure that the default security group permits ping and SSH access. You may edit the default security group or add additional security groups as your security settings require. Enter 0.0.0.0/0 if you want to enable access from all networks, or you may enter a specific network, such as 192.0.2.0/24. You will receive a confirmation message at the top of the Dashboard window that the new rule was added to the default security group. To enable ping, repeat the procedure with a protocol of ICMP, type of -1, and code of -1. Managing nova-network security groups with the command line On the command line, security groups are managed with nova secgroup-* commands in python-novaclient.
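The python-novaclient commands in this section all rely on the same environment configuration. As a sketch, the setup might look like the following; the user, project, password, and endpoint values here are illustrative assumptions, not values taken from this document:

```shell
# Hypothetical credentials -- substitute your own values:
export OS_USERNAME=jdoe
export OS_TENANT_NAME=Marketing
export OS_PASSWORD=changeme
export OS_AUTH_URL=http://controller:5000/v2.0/

# With the environment configured, the keypair is generated in the
# correct project. The client prints the RSA private key text, which
# is redirected to a .pem file and locked down:
#   nova keypair-add jdoe-keypair > jdoe-keypair.pem
#   chmod 600 jdoe-keypair.pem
echo "nova client will act as $OS_USERNAME in project $OS_TENANT_NAME"
```

The nova keypair-add lines are shown commented out because they require a reachable controller; the exports alone are what scope the subsequent commands to the right user and project.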
To add the ping and SSH rules to the default security group, issue the following commands:

$ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
$ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0

Use nova secgroup-list-rules to view the updated default security group rules:

$ nova secgroup-list-rules default
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp | -1 | -1 | 0.0.0.0/0 | |
| tcp | 22 | 22 | 0.0.0.0/0 | |
+-------------+-----------+---------+-----------+--------------+

Managing OpenStack Networking (Neutron) security groups with the command line On the command line, security groups are managed with neutron security-group-* commands in the neutron client. To add the ping and SSH rules to the default security group, issue the following commands:

$ neutron security-group-rule-create --protocol icmp --direction \
  ingress default
$ neutron security-group-rule-create --protocol tcp --port-range-min 22 \
  --port-range-max 22 --direction ingress default

Use neutron security-group-rule-list to view the updated default security group rules. Before you can create an instance, you must have already generated a keypair and updated the default security group. The project in which you want to create the instance should be in focus on the dashboard. In the launch dialog, select the precise image and the m1.small flavor. The Instances and Volumes page will open, with the new instance creation in process. The process should take less than a minute to complete, after which the instance status will be listed as Active. You may need to refresh the page. Launching an instance with the command line On the command line, instance creation is managed with the nova boot command. Before you can launch an instance, you need to determine what images and flavors are available to create it.
$ nova image-list
+--------------------------+----------------------------+--------+--------+
| ID | Name | Status | Server |
+--------------------------+----------------------------+--------+--------+
| 033c0027-[ ID truncated] | cirros-image | ACTIVE | |
| 0ccfc8c4-[ ID truncated] | My Image 2 | ACTIVE | |
| 85a0a926-[ ID truncated] | precise-image | ACTIVE | |
+--------------------------+----------------------------+--------+--------+

$ nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+
| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor |
+----+-----------+-----------+------+-----------+------+-------+-------------+
| 1 | m1.tiny | 512 | 0 | 0 | | 1 | 1.0 |
| 2 | m1.small | 2048 | 10 | 20 | | 1 | 1.0 |
| 3 | m1.medium | 4096 | 10 | 40 | | 2 | 1.0 |
| 4 | m1.large | 8192 | 10 | 80 | | 4 | 1.0 |
| 5 | m1.xlarge | 16384 | 10 | 160 | | 8 | 1.0 |
+----+-----------+-----------+------+-----------+------+-------+-------------+

In the following example, an instance is launched with an image called precise-image. It uses the m1.small flavor with an ID of 2, and is named markets-test.
$ nova boot --image precise-image --flavor="2" markets-test
+-------------------------------------+--------------------------------------+
| Property | Value |
+-------------------------------------+--------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | instance-0000000d |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| accessIPv4 | |
| accessIPv6 | |
| adminPass | ATSEfRY9fZPx |
| config_drive | |
| created | 2012-08-02T15:43:46Z |
| flavor | m1.small |
| hostId | |
| id | 5bf46a3b-084c-4ce1-b06f-e460e875075b |
| image | precise-image |
| key_name | |
| metadata | {} |
| name | markets-test |
| progress | 0 |
| status | BUILD |
| tenant_id | b4769145977045e2a9279c842b09be6a |
| updated | 2012-08-02T15:43:46Z |
| user_id | 5f2f2c28bdc844f9845251290b524e80 |
+-------------------------------------+--------------------------------------+

You can also view the newly-created instance at the command line with nova list.

$ nova list
+------------------+--------------+--------+-------------------+
| ID | Name | Status | Networks |
+------------------+--------------+--------+-------------------+
| [ ID truncated] | markets-test | ACTIVE | public=192.0.2.0 |
+------------------+--------------+--------+-------------------+

All instances exist on a nova network that is not accessible by other hosts by default. There are various ways to access an instance. In all cases, be sure that you have updated the default security group. The login for each instance is determined by the configuration of the image from which it was created. Rackspace Private Cloud installations include a CIRROS image and an Ubuntu 12.04 (Precise) image. For the CIRROS image, log in with the username cirros and the password cubswin:). For the Ubuntu image, log in with the username ubuntu and the SSH key that you specified for the instance during the instance creation process.
The key must be present on the host from which you are connecting to the instance, and you must log in with the key name and the -i flag. In the following example, the keypair file is named jdoe-keypair.pem.

$ ssh -i jdoe-keypair.pem 192.0.2.0

For instances launched from other images, log in with the credentials defined in the image. The following procedure is for instances created in an OpenStack Networking environment. Run the SSH command inside the instance's network namespace:

$ ip netns exec <namespace> ssh <username>@<instanceIPAddress>

If the login requires an SSH key, log in with the key name and the -i flag.

$ ip netns exec <namespace> ssh -i <keypairName>.pem <instanceIPAddress>

In a nova-network environment, these steps are performed on the compute node that the instance is running on, or on any node with access to the instance's network. Log in as root, and execute the following command to identify the compute node on which the instance is stored.

$ nova-manage vm list | grep <instanceName>

The output generated will include the following information, where N is the number of the compute node. Compute nodes will be numbered in the order in which you added them.

<instanceName> compute<N> m1.small active 2012-08-13 00:42:53

You can then SSH to the instance:

$ ssh <username>@<instanceIPAddress>

If the login requires an SSH key, log in with the key name and the -i flag. You may need to copy the *.pem keypair file associated with the instance to the compute node.

$ ssh -i <keypairName>.pem <username>@<instanceIPAddress>
Follow this procedure to create a pool of floating IP addresses, allocate an address to a project, and assign it to an instance. Log in to the controller node as root. Execute the following command, substituting in the CIDR for the address range in --ip_range that was provisioned by your network security team:

$ nova floating-ip-create --ip_range=<CIDR>

This creates the pool of floating IP addresses, which will be available to all projects on the host. You can now allocate a floating IP address and assign it to an instance in the dashboard. You will receive a confirmation message that a floating IP address has been allocated to the project and the IP address will appear in the Floating IPs table. This reserves the addresses for the project, but does not immediately associate that address with an instance. You will receive a confirmation message that the IP has been associated with the instance. The instance ID will now appear in the Floating IPs table, associated with the IP address. It may be a few minutes before the IP address is included on the Instances table on the Instances & Volumes page. Once the IP address assignment is completed, you can access the instance from any Internet-enabled host by using SSH to access the newly-assigned floating IP. See Logging In to the Instance for more information. Managing floating IP addresses with the command line Allocation and assignment of floating IP addresses is managed with the nova floating-ip* commands. In this example, the IP address is first allocated to the Marketing project with the nova floating-ip-create command.

$ nova floating-ip-create marketing

The floating IP address has been reserved for the Marketing project, and can now be associated with an instance with the nova add-floating-ip command. For this example, we'll associate this IP address with the instance markets-test.
$ nova add-floating-ip markets-test 203.0.113.0

After the command is complete, you can confirm that the IP address has been associated by using the nova floating-ip-list and nova list commands.

$ nova floating-ip-list
+-------------+--------------------------------------+-----------+------+
| Ip | Instance Id | Fixed Ip | Pool |
+-------------+--------------------------------------+-----------+------+
| 203.0.113.0 | 542235df-8ba4-4d08-90c9-b79f5a77c04f | 192.0.2.0 | nova |
+-------------+--------------------------------------+-----------+------+

$ nova list
+------------------+--------------+--------+---------------------------------+
| ID | Name | Status | Networks |
+------------------+--------------+--------+---------------------------------+
| [ ID truncated] | markets-test | ACTIVE | public=[ <networkIPAddresses>] |
+------------------+--------------+--------+---------------------------------+

The first table shows that 203.0.113.0 is now associated with the markets-test instance ID, and the second table shows the IP address included under markets-test's public IP addresses. Congratulations! You have created a project and launched your first instance in your Rackspace Private Cloud cluster. You can now use your OpenStack environment for any purpose you like. If you're a more advanced user and are comfortable with APIs, OpenStack API documentation is available in the OpenStack API Documentation library. The following documents are a good place to start: You may want to purchase Escalation Support or Core Support for your cloud or take advantage of our training offerings. Contact us at <opencloudinfo@rackspace.com> for more information. And please come join your fellow Rackspace Private Cloud users on our customer forums. Welcome aboard! © 2011-2013 Rackspace US, Inc. Except where otherwise noted, content on this site is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License.
http://www.rackspace.com/es/knowledge_center/article/rackspace-private-cloud-software-creating-an-instance-in-the-cloud
The idea of function parameters in Python is to allow a programmer who is using that function to define variables dynamically within that function. For example:

def simple_addition(num1, num2):
    answer = num1 + num2
    print('num1 is', num1)
    print(answer)

simple_addition(5,3)

Here, we defined our function name as simple_addition. In the function parameters (often called params for short), we specify variables named num1 and num2. Next, within the function, we say this new answer variable is equal to whatever num1 plus num2 is. We then print out what num1 is, whatever it happens to be. Finally, the last line of this function just prints out the answer variable, which is num1 plus num2. Now, to run this function and make use of these parameters, we run simple_addition(5,3). This runs the simple_addition function using the parameters of num1=5 and num2=3. Then our program sums 5 and 3 together, then we print out that num1 is 5, and finally we print out the "answer" which was defined already, which is the sum of 5 and 3, which is of course 8. There is no limit to the number of function parameters you can have. If you want to just specify the definitions of these parameters without naming the parameter, like when we just said 5 and 3 instead of putting parameter=5, then you must put them in the exact order. If you have a lot of parameters where it might be difficult to remember their order, you could do something like:

simple_addition(num2=3, num1=5)

In that case, when you call the function and define the parameters, you can see how we actually defined num2 before num1, even though in the function definition we ask for them the other way around. As long as you specify the parameter you are defining, you can jumble them up. Otherwise, you must keep them in order! Finally, not only must they be in perfect order, but you must not specify too many or too few definitions. This will not work:

simple_addition(3,5,6)

nor will this:

simple_addition(3)
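The ordering rules above are easiest to see with a function where order actually changes the result. Here is a small, self-contained sketch (the function is hypothetical, not part of the original tutorial):

```python
def simple_subtraction(num1, num2):
    # Order matters here: num1 - num2 is not the same as num2 - num1.
    return num1 - num2

# Positional arguments are matched to parameters in definition order:
print(simple_subtraction(10, 4))    # 6
print(simple_subtraction(4, 10))    # -6, because the roles swapped

# Keyword arguments can be supplied in any order, since each value
# is tied to a parameter name instead of a position:
print(simple_subtraction(num2=4, num1=10))   # 6
```

Because subtraction is not symmetric, swapping the positional arguments visibly changes the answer, while keyword arguments stay unambiguous no matter how they are ordered.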
https://pythonprogramming.net/function-parameters-python-3-basics/
Lock v2 for iOS This reference guide will show you how to implement the Lock user interface, and give you the details on configuring and customizing Lock in order to use it as the UI for your authentication needs. However, if you'd like to learn how to do more with Auth0 and Swift, such as how to save, call and refresh Access Tokens, get user profile info, and more, check out the Auth0.Swift SDK. Or, take a look at the Swift QuickStart to walk through complete examples and see options, both for using Lock as the interface, and for using a custom interface. Requirements - iOS 9 or later - Xcode 8 - Swift 3.0 Install Carthage If you are using Carthage, add the following lines to your Cartfile:

github "auth0/Lock.swift" ~> 2.0
github "auth0/Auth0.swift" ~> 1.0

Then run carthage bootstrap. Cocoapods If you are using Cocoapods, add these lines to your Podfile:

use_frameworks!
pod 'Lock', '~> 2.0'
pod 'Auth0', '~> 1.0'

Then, run pod install. Setup Integrate with your Application Lock needs to be notified when the application is asked to open a URL. You can do this in the AppDelegate file.

func application(_ app: UIApplication, open url: URL, options: [UIApplicationOpenURLOptionsKey : Any]) -> Bool {
    return Lock.resumeAuth(url, options: options)
}

Import Lock Import Lock wherever you'll need it:

import Lock

Auth0 Credentials In order to use Lock you need to provide your Auth0 Client Id and Domain, which can be found in your Auth0 Dashboard, under your Application's settings. In your application bundle you can add a plist file named Auth0.plist that will include your credentials. Implementation of Lock Classic Lock Classic handles authentication using Database, Social, and Enterprise connections. OIDC Conformant Mode In OIDC conformant mode, the oidcConformant option is set to true in withOptions, as the following snippet does. To show Lock, add the following snippet in your UIViewController.
Lock
    .classic()
    // withConnections, withOptions, withStyle, and so on
    .withOptions {
      $0.oidcConformant = true
      $0.scope = "openid profile"
    }
    .onAuth { credentials in
      // Let's save our credentials.accessToken value
    }
    .present(from: self)

Use Auth0.Swift Library to access user profile To access user profile information, you will need to use the Auth0.Swift library:

guard let accessToken = credentials.accessToken else { return }
Auth0
    .authentication()
    .userInfo(withAccessToken: accessToken)
    .start { result in
        switch result {
        case .success(let profile):
            // You've got a UserProfile object
        case .failure(let error):
            // You've got an error
        }
    }

Check out the Auth0.Swift Library Documentation for more information about its uses. Specify Connections Lock will automatically load the connections configured for your application. If you wish to override the default behavior, you can manually specify which connections it should display to users as authentication options. This can be done by calling the withConnections method and supplying a closure that can specify the connection(s). Adding a database connection:

.withConnections {
    connections.database(name: "Username-Password-Authentication", requiresUsername: true)
}

Adding multiple social connections:

.withConnections {
    connections.social(name: "facebook", style: .Facebook)
    connections.social(name: "google-oauth2", style: .Google)
}

Styling and Customization Lock provides many styling options to help you apply your own brand identity to Lock using withStyle. For example, changing the primary color and header text of your Lock widget: Customize your title, logo, and primary color

.withStyle {
    $0.title = "Company LLC"
    $0.logo = LazyImage(named: "company_logo")
    $0.primaryColor = UIColor(red: 0.6784, green: 0.5412, blue: 0.7333, alpha: 1.0)
}

Configuration Options There are numerous options to configure Lock's behavior.
Below is an example of Lock configured to allow it to be closable, to limit it to only usernames (and not emails), and to only show the Login and Reset Password screens.

Lock
    .classic()
    .withOptions {
      $0.closable = true
      $0.usernameStyle = [.Username]
      $0.allow = [.Login, .ResetPassword]
    }

Password Manager Support By default, password manager support using 1Password is enabled for database connections. 1Password support will still require the user to have the 1Password app installed for the option to be visible in the login and signup screens. You can disable 1Password support using the enabled property of the passwordManager.

.withOptions {
    $0.passwordManager.enabled = false
}

By default the appIdentifier will be set to the app's bundle identifier and the displayName will be set to the app's display name. You can customize these as follows:

.withOptions {
    $0.passwordManager.appIdentifier = ""
    $0.passwordManager.displayName = "My App"
}

You will need to add the following to your app's info.plist:

<key>LSApplicationQueriesSchemes</key>
<array>
    <string>org-appextension-feature-password-management</string>
</array>
https://auth0.com/docs/libraries/lock-ios/v2
IDE development and code completion for messages When I create a .msg file and then build my package using rosmake, it places the generated header file in package_name/msg_gen/cpp/include/... However, when I include that header file in my program, ROS allows me to use #include "package_name/messageexample.h" The issue here is that an IDE (Eclipse, in my case) uses that include line to properly perform its code completion functions. Since that path is not an actual path, it doesn't work. Setting the include directories to the FULL path (the msg_gen folder above) does not allow code completion to work properly either. The only solution (hack) I found, which is not good, is to add a #include of the FULL path right after the proper ROS-format include. Are there any solutions or better workarounds to this issue that anyone has found? Update (not enough room in a comment): I have tried @AHornung's answer using the tutorials on that page, but the command rosmake --target=eclipse-project --specified-only * does not work, producing the error:

WARNING: The following args could not be parsed as stacks or packages: ['CMakeLists.txt', 'include', 'mainpage.dox', 'Makefile', 'manifest.xml', 'src']
[ rosmake ] ERROR: No arguments could be parsed into valid package or stack names.

I have included the path to my package in ROS_PACKAGE_PATH and can correctly roscd to it. Running the same command without the --specified-only does produce the proper Eclipse project files, but the same issue I originally had with the generated messages is still there. Any thoughts? I am using Fuerte with Eclipse Juno. Thanks!
https://answers.ros.org/question/38137/ide-development-and-code-completion-for-messages/
SYNOPSIS

#include <mqueue.h>

mqd_t mq_open(const char *name, int oflag, ...);

DESCRIPTION

The mq_open() function establishes a connection between a process and a message queue, returning a message queue descriptor.

The name argument points to a string naming a message queue. The name string must begin with a slash character (/) and must conform to the rules for constructing a path name. Processes calling mq_open() with the same value of name refer to the same message queue. If name is not the name of an existing message queue and creation is not requested, mq_open() fails. POSIX_MQ_OPEN_MAX, defined in the limits.h header, specifies the maximum number of message queues that can exist system wide.

The oflag argument requests the desired receive and/or send access to the message queue. The requested access permission to receive or send messages is granted if the calling process would be granted read or write access, respectively, to an equivalently protected file. The value of oflag is the bitwise-inclusive OR of values from the following list. One (and only one) of the first three access modes listed must be included in the value of oflag:

- O_RDONLY Opens the message queue for receiving messages. The process can use the returned message queue descriptor with the mq_receive() function, but not with the mq_send() function. A message queue may be open multiple times in the same or different processes for receiving messages.
- O_WRONLY Opens the queue for sending messages. The process can use the returned message queue descriptor with the mq_send() function but not with the mq_receive() function. A message queue may be open multiple times in the same or different processes for sending messages.
- O_RDWR Opens the queue for both receiving and sending messages. The process can use any of the functions allowed for O_RDONLY and O_WRONLY. A message queue may be open multiple times in the same or different processes for sending messages.

The oflag value may contain any combination of the following flags:

- O_CREAT Creates a message queue. When this flag is specified, mq_open() also requires the mode and attr arguments.
The mode argument is of type mode_t, while the attr argument is a pointer to an mq_attr structure. If a message queue created with the name argument already exists, this flag is ignored (except as described under O_EXCL). If no such queue exists, mq_open() creates an empty message queue with a user ID and group ID set to the effective user ID and the effective group ID, respectively, of the process. The file permission bits of this queue are set to the value of mode. If attr is NULL, mq_open() creates the message queue with the default message queue attributes: 10 messages, each of size _POSIX_PIPE_BUF. If attr is non-NULL and the calling process has the appropriate privilege on name, mq_open() sets the mq_maxmsg and mq_msgsize attributes of the created queue to the values of the corresponding members in the mq_attr structure pointed to by attr. If attr is non-NULL and the calling process does not have the appropriate privilege on name, mq_open() fails without creating a message queue.

- O_EXCL If both the O_EXCL and O_CREAT flags are set in oflag, mq_open() fails if the message queue name already exists. The check for the existence of the message queue and the creation of the message queue if it does not exist are atomic with respect to other threads executing mq_open() naming the same name with O_EXCL and O_CREAT set.
- O_NONBLOCK Determines whether an mq_send() or mq_receive() function waits for resources or messages that are not currently available, or fails with errno set to EAGAIN. See the mq_receive() and mq_send() reference pages for more details.

PARAMETERS

- name Points to the name to be assigned to the newly created message queue.
- oflag Is the bitwise-inclusive OR of the flags to be used when creating a new message queue.

RETURN VALUES

On success, mq_open() returns a message queue descriptor. On failure, it returns (mqd_t)-1 and sets errno to one of the following values:

- EACCES The message queue exists and the permissions specified by oflag are denied, or the message queue does not exist and permission to create the message queue is denied.
- EEXIST Both the O_CREAT and O_EXCL flags are set and the named message queue already exists.
- EINTR mq_open() was interrupted by a signal.
- EINVAL mq_open() is not supported for the given name.
- EINVAL The O_CREAT flag is set in oflag, the value of attr is not NULL, and either mq_maxmsg or mq_msgsize was less than or equal to zero.
- EMFILE The calling process is currently using too many message queue descriptors.
- ENAMETOOLONG The name argument is longer than {PATH_MAX}.
- ENFILE The system has too many message queues currently open.
- ENOENT The O_CREAT flag is not set and the named message queue does not exist.
- ENOSPC There is not enough space to create the new message queue.

SEE ALSO

mq_close(), mq_getattr(), mq_receive(), mq_send(), mq_setattr(), mq_timedreceive(), mq_timedsend(), mq_unlink(), msgctl(), msgget(), msgrcv(), msgsnd()

PTC MKS Toolkit 10.3 Documentation Build 39.
https://www.mkssoftware.com/docs/man3/mq_open.3.asp
On 04/20/2010 04:37 PM, Joel E. Denny wrote:
> On Tue, 20 Apr 2010, Eric Blake wrote:
>
>>>>> if you use autoconf 2.64 (prior to that, if the existence and
>>>>> compilation checks disagree, autoconf went with the existence check).
>>>>
>>>> We are using autoconf 2.65.
>>
>> Then I'm confused how /opt/csw/include/getopt.h is getting included at
>> all in the first place.
>
> I guess the check only tries cc. Recall that bison is compiled with cc,
> but some test groups need CC.
>
>> What does config.log say about getopt?
>
> configure:6857: checking for getopt.h
> configure:6857: cc -c -g -I/opt/csw/include conftest.c >&5
> configure:6857: $? = 0
> configure:6857: result: yes

Okay, so the problem is that the header works for C, where getopt.h is adequate, and we don't have the granularity at the moment to repeat header checks for C++. I'm assuming that later in configure we reject -lgnugetopt's getopt_long as broken (unless libgnugetopt has somehow picked up this month's glibc fix), meaning that we still want to declare rpl_getopt and friends; the conflict in getopt() is irrelevant if we are going to be declaring a replacement anyways. Meanwhile, as you reported:

> From /opt/csw/include/getopt.h:
>
> ------------------------------
> #if defined (__STDC__) && __STDC__
> #ifdef __GNU_LIBRARY__
> /*__ */
> extern int getopt ();
> #endif /* __GNU_LIBRARY__ */
> ...
> #else /* not __STDC__ */
> extern int getopt (); <---------- line 122, where the error is reported
> ...
> #endif /* __STDC__ */

since __STDC__ is 0 under CC but 1 under g++, maybe the best fix would be to make /opt/csw/include/getopt.h drop the check for the value of __STDC__? Or have gnulib's getopt.h explicitly define __STDC__ to 1? Or do we modify the logic in getopt.m4 that sets @HAVE_GETOPT_H@ to instead check two language-dependent values, one for C and one for C++, such that the #include_next <getopt.h> only occurs for languages where it will work?
-- Eric Blake address@hidden +1-801-349-2682 Libvirt virtualization library
http://lists.gnu.org/archive/html/bug-gnulib/2010-04/msg00346.html
CONTENTS

1. The Method
2. Client ID
2.1. Session-scoped Custom Dimension
2.2. Custom JavaScript Variable
2.3. Window Loaded Trigger
2.4. Event Tag
2.5. End result
3. Session ID
3.1. Session-scoped Custom Dimension
3.2. Custom JavaScript Variable
3.3. Modified Page View Tag
3.4. End result
4. Hit timestamp
4.1. Hit-scoped Custom Dimension
4.2. Custom JavaScript Variable
4.3. Modified Tags
4.4. End result
5. User ID
5.1. Hit-scoped Custom Dimension
5.2. Data Layer Variable
5.3. Modified Tags
5.4. End result
6. Summary

In the following chapters, we'll build four Custom Dimensions and four data collection methods that will let you include this information in your data set. We will, of course, be using Google Tag Manager to make things more manageable. The most difficult of these solutions to implement, by far, is Client ID collection, so we'll start with that.

2. Client ID

3.2. Custom JavaScript Variable

The Custom JavaScript Variable is aptly named {{Random Session ID}}, and it has the following code:

So, to get something like this, some JavaScript is required. Create a new Custom JavaScript Variable, and name it {{Hit Timestamp Local Time With Offset}}. Add the following code within:

Bro – serious points here, very much appreciate the write up. Have had limitations with data warehousing of client data who are on free GA. Will take a serious look at incorporating this. Excellent post. Typo on the date… April 3rd I believe on the time stamp… but you are excused for being a day ahead considering how your ideas are also generally ahead of the rest :-) Haha, thank you for pointing that out :) I blame the timezones (I wrote the article -10 hours from where I usually write). Simo Man you are a legend! Most GA users are not even using GA to 25% of its capacity, and meaningful data collection is a concept not even known to 99% of them; the stuff you are doing here is legendary. Good stuff! Simo you rock!
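The Client ID Custom JavaScript Variable built in section 2.2 did not survive extraction. Judging from the later references to {{GA Tracking Code}} and to a trigger condition of "does not equal false", it is along these lines (the 'UA-12345-1' property ID stands in for the {{GA Tracking Code}} GTM variable):

```javascript
// Sketch of a GTM Custom JavaScript Variable that reads the Client ID from
// the Universal Analytics tracker. In GTM this would be an anonymous
// function; it is named here only so the sketch is runnable on its own.
var getClientId = function () {
  try {
    // ga.getAll() returns every tracker object created on the page.
    var trackers = ga.getAll();
    for (var i = 0; i < trackers.length; i++) {
      // 'UA-12345-1' is a placeholder for the {{GA Tracking Code}} variable.
      if (trackers[i].get('trackingId') === 'UA-12345-1') {
        return trackers[i].get('clientId');
      }
    }
  } catch (e) {
    // analytics.js not loaded yet (or no tracker created): fall through.
  }
  return false; // lets a trigger condition like "does not equal false" gate the tag
};
```

Returning false when no tracker is available is what makes the Window Loaded Trigger's "does not equal false" condition work.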
I nudged you at SMX and here it is – the info I have been looking for these last months. NICE! Talk soon. Simo, this is a cool insight into what you should collect to be more granular about users' interactions. A suggestion: I would rather get the client ID from a cookie. It allows you to have the client ID even for bouncing sessions (one hit in a session) and most often it is the same for all trackers.

```javascript
function() {
  var cookie = document.cookie.match('(?:^|;)\\s*_ga=([^;]*)');
  var value = (cookie) ? decodeURIComponent(cookie[1]) : null;
  if (value) {
    cookie = value.match(/(\d+\.\d+)$/);
  }
  var cid = (cookie) ? cookie[1] : null;
  if (cid) {
    return cid;
  }
}
```

Do you agree? Hi, As I mention in the post, this has the same limitation as the tracker method (which is the recommended way to retrieve clientId anyway), which is that the cookie is written after the 'create' call, and is not yet available when the Tag fires. This means that first-time visitors who bounce would be excluded from the count completely. I'm writing a post that will let you send the clientId with the Page View Tag by using the tracker object interface method. It's a bit of a hack, but the best thing we've got until the GTM tag template lets you run 'get' commands on the tracker before the 'send' command. New visitors' bounces will not work for either method — true. But the _ga cookie method will work for a returning visitor's bounce, while the 'get' method will fail. It matters if you are after the most complete data. The Event method works for bounces as well, just send it with non-interaction: true. In terms of data integrity, it's best, because it doesn't drop an entire visitor segment (new visitors). Also, multiple _ga cookies are not a rarity, with rollup properties being very common on larger website complexes. So while I see the benefit of tapping into the cookie to avoid a superfluous hit to GA, I do not like the fact that new visitors are disregarded with the first hit, and thus an entire segment is dropped.
You could do a combination, where the cookie is used first, and if it doesn't exist then use the Event, but then you have the problem with multiple _ga cookies. So, personally I'm still not swayed by the cookie method. But thank you for sharing it, as others might find it preferable. The workaround I have in mind to write about soon will make it so that you can use the tracker interface with single-hit sessions as well, without a glitch. Does any of this violate Google's guidelines for usage of Google Analytics? I feel like these are features that Google has intentionally withheld from us. Hey Rob, I've read through the Terms of Service, and try as I might I can't see anything there that would indicate we're breaking rules. All this information, except for UserID, is already exposed in BigQuery, so IF Google were to prevent us from exposing this data ourselves, it would be for purely commercial reasons (they want you to buy a Premium licence) and not for privacy reasons – and I don't think that's ever been the case with GA. Well, you could say that sampling and quotas are a marketing ploy to get people to buy Premium, but it's well-founded that processing power is reserved for those who pay. The ToS strongly highlights that personally identifiable information is a no-no. So exposing ClientID / UserID might be in the grey area, since even though you're not identifying a PERSON, you're identifying an individual. So by segmenting with these dimensions, you can see what an individual is doing on the site. Nevertheless, you can already do this with Transaction ID or by choosing a segment that only displays data for a single user, so the logic breaks down there as well. Also, Google's own Justin Cutroni has written about this method before. If Justin can write about it and not be concerned about the implications – so can I :) Thanks for voicing your concern, but I'm confident there's nothing to worry about here.
If there are repercussions, I would love to hear what Google's arguments are, as it would reek of double-standard to me. Simo Thanks for that reply, Simo. Sounds like you've really done your homework on this. Hi Simo, just wanted to add a (not so) small detail. If you use the analytics.js User ID AND pass that SAME number to GA, the UserID-enabled view will stop working. It did for me. I guessed GA sees UserID as too personally identifiable. Raf Hi, Wow, really? That doesn't make sense! If you're sending personally identifiable information using UserID, then not getting the UID working is the least of your worries. However, if you follow the guidelines for setting up UserID, then you're using a hashed key that has absolutely nothing in it that can be used to identify the person. Now, what you do with this hashed key when you pull data out of GA is up to you and your site's privacy policy. There's nothing in the ToS that prevents you from identifying the person in your backend (e.g. linking the hashed key to a customer profile in the CRM). Anyway, &uid is a key generated by you, not the tool. So it would be very, very strange if GA would take action against you for sending the same value in two different dimensions (&uid and custom dimension). You could, for example, be using a similar algorithm with your user IDs that also calculates transaction IDs, which means that the minute there's an overlap the view would shut down? Make sure you haven't botched the setup. I will definitely run some tests now to see if you're correct. Thanks for the heads up! Simo Well just to be sure, I will use a different property and try to reproduce the issue. I'll come back and keep you posted. Looks like I was wrong on that comment. I did the test separately and it seems to work. I will be sending the user ID through the custom dims now. Thanks Simo! Lol Raphael… It's OK to be wrong… but never twice :D Brilliant post again.
I was thinking about sending session ID and timestamp as custom dimensions and hadn't thought about when the cookie is available. Really cool alternate approach, creating your own and storing it as a session variable. You make it really easy to implement, when all the hard stuff has been thought about :) Great post! Love this stuff. Also read the schema conspiracy story. Very intriguing as well. One question though about the session ID and sessionizing theory… If I set a custom dim at session level, I'm exposed to the same session time-out set in GA. I know I can alter this, but GA also generates a new session when my campaign changes (i.e. going back and forth between website and Google search organic vs. paid). Hits will then also be reported with different sessionIDs. What are your thoughts on this? So, basically, that you end up with a new sessionID only if the default (30min) session time-out caused this. Thnx! Great post. Do you think it could work even using only GA instead of GTM? Hi Nicola, Of course! Anything you can do with GTM's Google Analytics tags, you can also do in plain analytics.js. Nice! Let me look into it and I'll try to replicate this in GA :) WOW man this is brilliant and you explained it so clearly. Pure inspiration! Great Post!!! Simo, I am a bit worried that implementing at such levels of granularity would hit the sampling thresholds earlier & would be bad if one is using this on a high-traffic site & on a free GA license. Not sure how sampling thresholds would be affected by Custom Dimensions alone, as they are piggy-backed on existing hits, but with using an Event for client ID this is a true concern and common sense is called for before someone implements something like this. I am working on a post which explains how it would be possible to add the client ID to the existing page view hit, so you wouldn't need any additional tags. That is superb!!
Thanks; awaiting your article on aligning GA to capture the client ID with the pageview. Regards, Deepak Nair Is there any good way to put these newly timestamped pageviews AND events into a single custom report so you can look at a user's entire visit in chronological order? I haven't been able to figure out a way to do that yet. I still love what we're getting so far though. Many thanks! Hey, Not really, since the timestamp is a string and not really a date object or anything that could be easily sorted or visualized. The whole point of this exercise is to clean up the data for export and visualization in another platform (e.g. Tableau, Klipfolio), so I suggest any advanced analysis take place outside the GA interface :) Makes sense. Thanks! Hi Simo, Really helpful post! Would this approach have any adverse effects if I were to use a user-scoped custom dimension? Further context – I'm looking to store Client ID in a custom dimension, which I currently have scoped to "user" Hey, Well, the question is more about privacy and how you handle user consent for tracking them across sessions. User ID policy at least used to have the initiative that after the client logs out, they should no longer be tracked for their user ID. Technically there's no issue, and it does make the tracking more robust. But it's definitely a grey area in terms of privacy and opt-in tracking. Hi, Could you explain a bit more why it is more robust? I've configured my custom dimension user-scoped. Another thing, I've defined two goals on GA and, looking at some Google CIDs, many have at least 2 sessions and at least 2 goal completions in the same time window of a session. (Some even have 120 sessions and 14 goal completions.) To my understanding, within a session just one goal of a kind is counted even though the user completes it more than once. Could it be that, since it's user-scoped, sessions are counted as separate and goals counted but attributed to a single user (GCID user)?
I've checked this out via User Explorer in Analytics (beta). User-scoped is more robust, in my opinion, since it's bound to the user cookie and not some fickle session-algorithm that GA uses. If you're observing a User-scoped dimension then yes, Sessions will include all sessions the user had in the time range, and all goal completions (with the caveat that only one goal per kind per session is recorded). Is the event tag in step 2.4 supposed to be a Universal Analytics Event Tag? In my case, it's sending a lot of undefined events into Analytics (as of course it's an empty event tag with only the custom dimension value in there). I even believe that as soon as it catches the clientId, it will fire a Universal Analytics event with every consecutive pageview. You do actually need to add values to the Event Tag. Category and Action are required values. As long as you use the Trigger from 2.3, the Event will only fire once on every page. Thanks for the quick reply. Now I know a UA event will be sent with every pageview where the cid is available. It increases the event count in GA a lot in my case. 1. I'm wondering if I exclude these events via an exclude filter, if the magic with the cid would still work. Don't know at what stage the session-scoped value will be applied to the session. I might test this out to see if it would work. 2. In my case, I did not define the event category or action for this UA event (implemented via GTM). It did show up in UA as undefined for both. Strange, as UA does say everywhere it's a required value. Hey Rene, It's not ideal, but it does the trick without adding a single new tag (you can just use your existing Page View Tag). Hi Simo, I know that you answered this question before, but it still isn't all clear to me. Regarding the event tag in step 2.4… Should I create a new Google Analytics Universal tag or should I reuse the one that's already created? Thanks in advance!
Hi, The Event Tag should be a new one, created specifically for sending the Client ID. Simo Hi, Simo! Thank you for sharing your experience! Can I use just my GA ID or a constant variable instead of {{GA Tracking Code}} for the 2.2. Custom JavaScript Variable? And should I choose the All Pages trigger for the Client ID Event Tag? Sorry for the boring questions. Thanks, your posts about this are so useful. One question, regarding scoping the client ID. You say: "Since we're sending the information (client id) to a session-scoped Custom Dimension, you only need one successful hit sent during the session." Why not make the custom dimension user-scoped, so you only need one successful hit across all sessions? If you ever miss the event from a return session, you would still retain the value from the earlier session. After all, by definition all sessions will have the same client ID, as that is how sessions are linked together. Hey Michael, Yeah, that's true and for some reason I've neglected this in my original post. I had some train of thought which I have lost since, and I've received some feedback about this from others as well. I think I'll add a note there to this effect. Thanks for pointing this out! Hi Simo – First off, I've learned so much from your posts. Thank you for sharing your knowledge with the community! I'm having issues with 2.4 and how to implement the event tag in your instructions. Can you post what you've put in your Category, Action and Label fields as well as what trigger you use to fire this tag? Thanks so much in advance! Yeah, I am a newbie and a bit confused by this also. Adam, did you ever figure it out? Hi, The Event Category, Action and Label can be _ANYTHING_. Doesn't matter. The Event Tag is simply used to carry the value to GA as a Custom Dimension. The Trigger is the one you create in 2.3. (Window Loaded). Simo – thank you for this post! Is there a way I can pass these Custom Dimensions into my CRM or via a hidden field in a Form?
That would be a big help – to correlate Lead Attributes with Campaign / Session attributes. Regards Hari Hi Simo, With regards to the Client ID as a custom dimension, I was planning to tie users (people in real-life parlance, browsers for all technical aspects) before and after they had generated a lead on my website. The way I would go about this is to capture a Client ID as a user-scoped custom dimension and, whenever a lead is generated (in the case of my site, it is submitting your phone number), reset the Client ID to a value like cid|phone number. So if the cid of the user was 1234.5678 and his phone number was 9909, his new client ID would be 1234.5678|9909. I will then capture this as a user-scoped custom dimension again. This way, I can actually tie users' behavior before and after they submitted a lead. I understand that it will alter my sessions and users data, but then again, schema conspiracy! I may have to encrypt the phone number before I set it up as a client ID, but I think it should be fine with Google's policies. Do you see any pitfalls in this method? Will appreciate your feedback. So, for a subscription-based product, you're trying to track changes between the different plans using GTM/GA. Would you set the customer's plan as a hit-, session-, or user-level dimension? My initial instinct was to use a user-level dimension, but then a plan change happens within a session. So you're in the product, you see an upsell dialogue, you click it and you upgrade. As a customer, that would be the flow you take. As an analyst, I'm trying to visualize a goal flow report starting with the plan you were on before you upgraded until you upgraded, but that becomes impossible when your previous plan has been overwritten by the plan you upgraded to, and all the actions you took are now correlated with the new plan as the dimension even though you did them on the old plan.
So now I'm thinking it should be a hit-level dimension, but I feel like there's something I'm missing, a problem that I'd encounter in the future just because it seems so counterintuitive (brain scratch). Would be great to get anyone's insight on this. Great write-up – thanks! Got everything implemented and working great for the most part. I did run into the same issue as Raphael Calgaro, where adding a User ID dimension caused my userId "view" to stop showing Cross Device reports, and I can't see the User ID dimension in reports. I'm going to test switching the actual name of the dimension to something different (e.g. Account) and see if they are actually looking for that dimension name, but it seems like a long shot. If anyone else has had success with User ID dimensions, I'd love to know! Getting that data into Tableau would be really useful for us. Hey Simo, Really enjoy reading your posts. Regarding section 2.2, specifically the Custom JavaScript Variable: I am looking to create a custom dimension that leverages the HubSpot cookie (_hstc) rather than the _ga cookie, which would pull the correct client ID (the HubSpot token (hubspotutk) assigned to each identified user) into Google Analytics. Is there a best way to approach this leveraging Google Tag Manager? Thanks in advance. -Justin Hi Simo, Thanks for your post! I have been able to do everything except I have a little issue with the Client ID. It always returns "n%25252525252fa" as the Client ID value… Is there something I did wrong? Can you help with that? Thanks again Hello Simo, congrats on the blog and useful content! I was trying to set up a custom dimension (User ID), all looks good in the Tag Manager debugger, but in Analytics it only appears as "{{". Any idea what may be happening? Thanks in advance! Regards, Nuno Hello Simo, I have implemented this post, but I am facing issues with IE 9 and previous versions. Can you suggest something?
Thanks, Ravi Simo, For the Client ID implementation, I am seeing an overwhelmingly high volume of "false" come through as my Client ID custom dimension. I do see about 1% of sessions populating the Client ID string, but the other 99% are false. Any tips on what might be going wrong? Hi, If you followed my guide to the letter, the Tag shouldn't even fire if clientId is 'false'. So I'm assuming you deviated from my guide. To get the clientId using my method, you need an Event Tag which fires on the Window Loaded Trigger, where the condition is that the "get clientId" variable does not equal false. Simo Hi Simo, thanks for your post. I have a question… how can I get the client ID into an input form field, once I have the client ID inside a custom dimension? I saw this, but it is for when you get the client ID from a cookie: ga(function(tracker) { var clientId = tracker.get('clientId'); }); Thanks Simo!! Hello Simo, I've implemented the process to pull the client ID and I see it populate on all of the pages on my main hostname, but I am having difficulty getting the Client ID to populate within a subdomain-hosted iFrame embedded on a Contact Us page. However, I do see Session ID populating within the iFrame. Any advice? Hello Simo, I have almost got the client ID implemented on my site. But there seems to be something wrong with it. When I do a GTM debug, in my get client ID tag property I see "false" for my new custom dimension, but when I go in the variable I can see my client ID value there. Since the property has false, I am not able to use the trigger, so I tried using the fire-on-all-pages trigger, and that is how I was able to see the value in the variable. Can you please let me know what I could possibly be doing wrong here? Thank you in advance.
There must be some other reason it's not returning the clientId properly. Make sure the Tag fires on the Window Loaded Trigger, and that the code is copy-pasted exactly, without any errors. You can even try copy-pasting the code into the JavaScript console on your site to see if it returns a proper string or just "false". If it returns "false", there's something wrong in the code, and you need to debug it. Hello Simo, Thank you for the reply. My Window Loaded Trigger rule is "Get Client ID for current Tracker does not equal false" The problem that I am having is that when I do tag debugging via GTM Preview mode, I have "false" for my tag property. So my trigger is not working. But when I look in the variable I see my GA Client ID. Tag Property Variable I am not sure what the issue is. Please let me know. Thanks Hi Simo, Thanks a lot for these great tips. They really add value to our business. I just wondered, maybe it is possible to reduce the volume of events sent to Google Analytics to pass the CID value to the Custom Dimension. I would suggest only firing this event when the referrer does not match the internal page. This way it would be possible to save the number of events being passed to Google Analytics. What do you think? Is this feasible or am I missing an important detail here? Thanks again for your insights! Jonas You're a saint for answering all of these questions! I've a silly one for you, looking at the example above (way above) in your post regarding the 7 rows and the various custom dimensions you set up – the only one I am trying to figure out is where you get the hit type from? e.g. Page, Event, etc. That's just my Excel data store illustration where I add the information based on the API query :) So there's no "hit type" dimension you can query (BigQuery, of course, has this). I just grab (and transform) the dimension name I used in the query.
Hi Simo, Great and very detailed article indeed. Thanks for sharing your knowledge with the community! We would like to implement the user ID tracking for our Android app, but we do not use GTM at this point. We are, however, sending the user ID to GA and have a separate user ID view which works. Is it still necessary to configure the variable and modify tags in GTM? Your reply is much appreciated. Peter Could I set the scope of the custom dimension Client ID to user scope to collect and, since it's not personal information, still comply with Google's terms and conditions about storing personal information? Sure, go right ahead :) Thanks Simo! Great blog, keep it going. Simo, I've followed your instructions and have set up all four custom dimensions in Google Tag Manager and Analytics. For the developers here, how can they extract the GA CID and pass it to our CRM on a form submission? I understand this will be done through hidden fields; however, I want to make sure they implement the code correctly. Is there a specific JavaScript code that they need to use that pulls the CID from GTM/GA? Thank you, Jonathan Hey Jonathan, The code that pulls the Client ID is: If you fire that AFTER the first GTM GA Tag has executed, it will store the Client ID in clientId, and you can then proceed to do whatever you want with it. Simo Excellent, thank you, Simo. How do I make sure the code fires AFTER the first GTM GA Tag? Is that something I can control within GTM, or would our developers know what to do from here? Thanks again for your support. Brian Hey Jonathan/Brian (not sure whom I'm addressing here :), That's complicated, or at least usually is. GTM loads asynchronously, so there's no way to actually pinpoint the exact time that a Tag has completed. One thing you could do is create a dummy Universal Analytics tracker in the same instance where you're decorating the form field: So this loads just a dummy GA tracker, and only utilizes it to pick up the Client ID.
You can have this Tag fire on the “Page View” event, i.e. as early as possible in the page load sequence. After you’ve stored clientId into a global variable (or a locally scoped one, it doesn’t really matter), you can do whatever you wish with it, for example create a hidden form field in your form with the clientId as its value. That’s just one way to do it. If you want to leverage GTM’s native features, you can look at things like tag sequencing and tag callbacks to signal the page that a tracker has been created. Hey Simo, this is a very interesting solution. Just wondering, is there an app solution for these 4 dimensions as well? Yeah, all these fields are available in an app as well. You’d just have to use native methods to create the timestamp and the session ID. User ID and Client ID can be accessed via the tracker interfaces for Android and iOS respectively. Since in GTM for apps the GA tracker is not exposed, you’ll need to create a dummy tracker to access the Client ID. Thanks, a little bit more complicated, but I am sure the dev guys will know what to do. Thank you Hi Simo, Have you written, or are you aware of, a guide that outlines how to do this entire process without using GTM? We are yet to implement GA inside of GTM, so we don’t have access to the data layer or the relevant triggers inside GTM. Thanks in advance! Jordan Hey, I haven’t written a guide, nor am I aware of one. However, the solution is just JavaScript, so all you basically need to do is run the scripts before executing the ga('send', ...) commands, applying the script results to the 'dimensionX' keys in the ga() commands. With an able web developer, it should be trivial to do! Simo Thanks Simo. So run the custom JavaScript variables you’ve outlined in code, above the ga('send', ...) commands on the page, and then capture the relevant names as custom dimensions, as you would any other custom dimensions?
We basically know how to capture the User ID function as a custom dimension, but things like the Client ID look more complex. Thanks again! Jordan Hi, Check this post for ideas: You can’t run the .get('clientId') command before the ga('send', ...) command, since most likely the tracker object hasn’t been created yet. Instead, you need to wrap the commands in ga(function() {...}) to be able to use tracker interface commands. Other than that, it’s quite simple, really. You execute the synchronous JavaScript commands before the 'send' command, and set the values either directly on the tracker with ga('set', ...) or on individual hits. Simo Hi Simo, I am new to Google Tag Manager and trying to implement this data collection on my project. Regarding 2.4. Event Tag: should this “Event Tag” be configured on my currently existing Page View Tag, or should I create a new Tag? Thanks! Hi Roy Event is its own track type for the Universal Analytics Tag, so completely different from your Page View Tag. Take a look at the GTM Fundamentals Course to learn more about GTM. Hi Simo, Thanks, this is amazing. Putting aside the issue of running multiple tags on one page for a minute… Do you see any problem with replacing your custom JavaScript with one of GTM’s predefined “1st party cookie” variables? I’m thinking combining the in-built variable with the non-interaction event & Window Load trigger will avoid the missing Client IDs experienced when sending with the page view? Many thanks Ross Hi Simo, Thanks for the post. I’m trying to get this to work with GTM, and the value ‘false’ is being passed into GTM instead of the Client ID. I set up everything exactly as it is in the article. I read through all the comments, and tried putting the JavaScript into the console. I get an error when I do that. It says “Uncaught SyntaxError: Unexpected token ((…) VM1056:2”. I looked through the script and I can’t find any parens or curly braces that don’t match. Any ideas?
What is the exact code you are using? Copy-paste it into the response. Thanks! Hello, Excellent post, again :) I would like to use the User ID with the dataLayer and a hit. What would the exact code look like here?

window.dataLayer = window.dataLayer || [];
dataLayer.push({
  'event' : 'userId',
  'userId' : '1' // Setting to 1 if logged in, 0 if not logged in
});

Maybe an ‘event’ isn’t needed here? Hi, You won’t need an ‘event’ if it’s before the container snippet. If it’s after, you’re best off using an ‘event’. Sending ‘userId’ with ‘1’ or ‘0’ isn’t the best idea, though. It will group all your logged-in users under a single GA user in that case. The User ID should be a hashed, unique identifier of the individual user. Hi Simo, Great post. These dimensions have become standard in all implementations. However, now I am facing hit limitations. Do you know a solution where you can send the sessionID, timestamp, and clientID with one hit, instead of two separate hits? I can’t figure out a workaround for the clientID. Hi, you can send an Event with Window Loaded, which sends both the sessionID and the clientID as part of it. That would combine them in the same hit. Simo Hi Simo, Thank you for your reply. Actually, I want to get rid of this event, because I send the sessionID and timestamp with the standard pageview tag. However, I don’t know if the clientID is already known before the pageview tag is fired. If this is possible, I can send the clientID with my pageview tag. Frank. Hello Simo. Very interesting and precise. Thank you! But when I trigger the event (2.4. Event Tag) I have an undefined event in the GA reports. Is it possible to send just the custom dimensions? Thanks a lot Olivier Custom Dimensions need to be piggy-backed on existing hit types, e.g. Events and Pageviews. So no, you can’t just send custom dimensions. If you see undefined fields in event hits, you can always add some values to the fields of the Event Tag to fix this issue. Thanks for this article Simo.
Just a comment for anyone browsing this post who works in spreadsheets a lot. Based on the ISO timestamp + offset, it’s possible to get Google Sheets to read this as a date (and run subsequent formulas based on that date) with a rather long, ugly formula. See someone’s SO post (you need to remove the offset part of the timestamp for that formula to work). So though the ISO formatting is convenient for reading within the GA interface, it involves some light spreadsheet acrobatics to convert it to a date and make use of date functionality in spreadsheets. I experimented with the different timestamp formats in the list above, starting with Unix time on line 1, and ended up at the last line item in that list by altering Simo’s JS to fit my needs. From what I can tell, if you drop a date into Google Sheets with the following format, it will be read as a date straight off the bat without having to edit anything: m/d/yyyy hh:mm:ss The JS below is what I ended up with, so anyone else browsing this post who wants to cut and paste a timestamp GTM variable can subsequently drop it into a spreadsheet without having to use a formula to convert it to a date:

function () {
  // Get local time
  var now = new Date();
  var pad = function(num) {
    var norm = Math.abs(Math.floor(num));
    return (norm < 10 ? '0' : '') + norm;
  };
  return pad(now.getMonth() + 1) + '-' +
    pad(now.getDate()) + '-' +
    now.getFullYear() + ' ' +
    pad(now.getHours()) + ':' +
    pad(now.getMinutes()) + ':' +
    pad(now.getSeconds());
}

Regarding the timezone offset: if you still need this, you could separate it with a bar "|" and use the =split() function in Google Sheets, then format the left side as a date. Or pass the timezone offset in a separate dimension if you have abundant GA custom dimension slots to spare. I see how you can pass the Client ID through a hidden form field, but can you do the same with the “Session ID”? We would eventually like to pass both values into our CRM database. Hi, It’s quite difficult.
The sessionId changes every time the Variable is resolved. This doesn’t matter for GA, since only the last value passed is the one that counts. But trying to match this with one of the values you send to your CRM is really difficult, if not impossible. My suggestion is to use some other identifier (something fixed) to match the form submission with GA (using some submission ID that you send to both GA and the CRM), and then cross-check the submission ID in GA against the related session ID. Hey Simo, I was thinking about this also. We have an ecommerce site where we get lots of fraud; as such, we would like to pass the Client ID and Session ID into the CRM and only confirm sales once they are actually sales – sometimes not a fact until about a week later. Is it possible to capture the Client ID and Session ID on the ‘confirmation’ page and then use the Measurement Protocol to attach a sale to that event once it has been closed out? Thanks, Dave Hi, Yeah, you can capture the Client ID, but the Session ID won’t do you much good, as it’s just a Custom Dimension. The problem with your approach is that it happens a week later, so sending the data with the Measurement Protocol will create a new session with nothing but the transaction. GA doesn’t let you add data to history, so you’re looking at a quite difficult situation. My suggestion is to send Refunds instead when the transactions turn bogus. There’s information about refunds in the Enhanced Ecommerce documentation: Simo Hi Simo, GA has launched a new feature, User Explorer, which shows the Client ID and associated navigation details. I’ve a query on this: Sure, just extract the Client ID using the same method as exemplified in the Custom JavaScript Variable, and then add it as a hidden field into a form or something. When the form is submitted to your backend, the Client ID is sent along too. It really depends on how your backend system is built. If there’s an API you can send HTTP requests to, that would be an alternative solution. For section 2.4.
Is it possible to capture the ClientID in a custom dimension but not in an event? Every time someone lands on the site it creates a new event, and I would like to prevent that from happening. Any help is appreciated. Hi, You still need to send the Custom Dimension with either a Page View or an Event hit. I think I’ve explained in the article why Page View won’t really work (the tracker object isn’t available for the first Page View). That’s why I use an Event in this example. Hi Simo, I’m trying to implement your solution to get the Client ID in a custom dimension, but it’s getting populated with “false” instead. What did I do wrong? For the record, I have my property ID stored in a custom variable called {{Analytics ID}}, so I made the proper change to your JavaScript code: instead of using {{GA Tracking Code}} I am using {{Analytics ID}}. Any ideas? Thanks in advance. Hi Simo, thank you very much for this great blog article :) I have a question regarding the privacy concerns of using a client-scoped userId custom dimension. Would this really provide me with any information I cannot already retrieve from what I have? Once a user authenticates, I could get his client_id and could therefore link the user to the client_id and vice versa. Am I missing something here? Hi, If you don’t need to query for the actual User ID value, then no, you don’t need to expose it as a Custom Dimension, as you can just go to the User ID view to query for the Client IDs and Session IDs etc. However, I find it useful to do direct queries against the hashed User ID too every now and then, which is why I want to expose it as a Custom Dimension. I don’t really see any privacy concerns if it’s hashed and you’ve informed your visitors about the types of tracking you have running on the site. Hi again Simo, I was wondering whether you would be so kind as to give me a hand with my problem. FYI, I have a WordPress-based site and I’ve implemented GTM using this plugin: Thanks in advance.
Love everything in this post! One argument against your way of generating and sending the Session ID… We are using the Google Analytics API to pull data one hour at a time throughout the day. The problem we have run into is that the Session ID changes with each new “hit”, so whenever a session spans two different API downloads, it looks to our system as if the user/client is generating two different sessions. So, we will swap in some more complex logic for the Session ID and make sure it sticks to the original/first value over the life of a session and doesn’t change. Cheers! Hi, Yeah, that sounds like a very exceptional use case. Normally sessions would reflect user interactions, which is why there shouldn’t be an issue with the default settings. But if you add measurement of programmatic stuff like periodic API calls, it’s no longer up to the user to enable those dispatches, which means you’ll need to hack around it. If this is typical in your profile, you could also increase the default session timeout from 30 minutes to 61 minutes in your GA property settings. Simo Hey Simo, I’ve been using these tips for a while now; they’re like a “step 1 after setting up the GA tag”. However, I’ve just come across info that custom dimensions are limited in the number of values they will display per day (50,000 for free GA). Since the custom session ID creates a lot of these, I can assume this becomes a problem pretty fast on a medium-sized site. Any experiences with this? Hi, Custom Dimensions aren’t part of the cardinality calculations, so even if you have 50,000+ custom dimension values in the report, you’d still see the data for them. Ah, OK, thank you! Hi Simo Just looking at your method for collecting the Client ID, and I have a couple of questions: If you set up Event tracking for every hit, won’t this significantly impact the hit count? I’m looking at a website which already has about 4M hits per month, and I’m a bit nervous about doing this.
I’ve also noticed that Event reports in particular tend to get sampled quite a lot when there are a lot of events. Would an alternative be to set a cookie after the Client ID has been tracked with an initial GA Event tag, and then block the trigger for the event if the visitor already has the cookie? This way the event only fires once per session. Otherwise, could you have an initial GA pageview tag fire that sends data to a different GA Property, and then use tag sequencing to fire your main GA pageview with the Client ID info? Wouldn’t the Client ID be the same for both GA Properties? Also, regarding getting the Client ID from the cookie vs. the variable: if you don’t have the multiple tracker issue, and you are getting the Client ID from the Event tracking rather than the pageview, is there any disadvantage to getting the Client ID from the _ga cookie in that scenario? Conversely, is it faster to get the Client ID from the custom JavaScript variable? Thanks Nick Hey, If you’re concerned about hit count then by all means, use a workaround. 1. Setting a cookie in the hitCallback of the Event Tag is definitely an option, and would probably work nicely for the scenario you’re thinking of. 2. Having a “dummy” pageview sounds like a clumsy hack. I’ve used a different approach (a Custom HTML Tag) here. 3. The only disadvantage to getting it from _ga is if the tag happens to be the first one that fires in the session, and the user doesn’t have the _ga cookie yet (having flushed cookies or being a new visitor). In that case, it won’t work. Also, if this means anything, Google discourages accessing the cookie directly, no doubt because they want to reserve the right to change its syntax on a whim (which they haven’t done). Simo Thanks Simo “Clumsy hack”. Ouch! I thought it was quite clever. However, I do much prefer the elegance of your Custom HTML Tag approach. Hey Nick, Sorry, I did not mean to come off as rude!
The reason I think it’s clumsy is because when creating a Tag, GTM goes through the entire process of checking if analytics.js exists, checking if _ga exists, creating a tracker, and sending the hit. It’s quite a bit of overhead just for getting a single value out of the tool. With a Custom HTML Tag, you’re not diverting from the main purpose of creating GA Tags, as the Custom HTML Tag itself doesn’t create a redundancy (it loads the library and creates the tracker with your cookie settings). If it works for you, it’s of course a good solution :-) Tag Sequencing can be a problem, though, as all GTM Variables are set to their initial values when an event fires. So if you try to push something into dataLayer, for example, in the setup tag, it will not be available in the main tag. There’s more information about Tag Sequencing here. Just referring to your answer down below. Wow! That’s really interesting. Thanks Simo. Hi Simo, We have recently implemented a Session ID custom dimension using the method above. This site has a very high level of traffic, >100k sessions/day. It appears that our session ID is not sufficiently random, as we are seeing multiple sessions having the same ID. See a screenshot here: Assuming that the randomness is the problem here, any ideas on ensuring a unique value for each session? Thanks, Marc Hey Marc, That looks very odd. If you want, you could use a GUID instead. But those results look weird. First of all, four hits happening on the same millisecond (even with such a large amount of traffic), where each has the same random session ID? And a Client ID which only changes by the first part? It looks like all four Client IDs were also created on the same millisecond. Very strange… Hi Simo, Thanks for the great article. We want to be able to query the API for more than 7 dimensions in order to get information based on the clientID we passed through via a form submit.
If we hit the API twice for the same clientID, I’m unsure of the best method to tie both API hits to the same record in the table where we’ll store the data. I was thinking the sessionID would be effective at doing this, but then thought of the sessionCount dimension. Isn’t the sessionCount dimension essentially a sessionID key as well as an indication of how many times the clientID has visited the site? What would be the best metric to use to have the API return 1 row for each time the clientID entered/visited the site? Thanks so much, Will I am getting the following error message before publishing my event tag for tracking the Client ID: “Unknown variable ‘GA Tracking Code’ found in another variable. Edit the variable and remove the reference to the unknown variable.” Hello Simo! Thank you very much for your great job! Your articles are full of wisdom. There is a little issue in the fourth paragraph (Hit timestamp), which caused some problems in my research. When you make a Custom JavaScript Variable, you use that script from StackOverflow, and there you add the now.getMilliseconds() value, which in general has three digits. Therefore you can’t apply the pad() function here, since it is designed for one- or two-digit numbers. You get HH:MM:SS.89 in case now.getMilliseconds() == 89, and you get HH:MM:SS.123 in case now.getMilliseconds() == 123. Obviously, the latter sorts as less than the former, which is wrong. The right value for the first case is HH:MM:SS.089. So I suggest that you apply to the milliseconds something like:

var pad00 = function(num) {
  var norm = Math.abs(Math.floor(num));
  return (norm < 10 ? '00' : (norm < 100 ? '0' : '')) + norm;
};

Thanks again and good luck! Oleg Hello, Great article, thank you! But I have a problem sending the event Tag containing the Client ID. I left “Event Tracking Parameters” (Category, Action, Label) and pass the clientID to a custom dimension, but now I see a lot of “undefined” events in GA reports.
Is it normal, or is it possible to filter them out? Thank you in advance! Alex Hi, What do you mean “I left”? Do you mean you didn’t set any value to them? Category and Action are required, so you’ll need to add something to them. Simo Hi, Sorry, yes, I meant that I didn’t set any values. Unfortunately, the screenshot under “2.4. Event Tag” shows only advanced settings. Should I set values for events like this: Event Category: “Pageview”, Event Action: “Pageview with Client ID”? Thank you Hi, You can add whatever values you want to an Event Tag, but Category and Action are required. This isn’t a GTM-specific requirement but more about what Google Analytics requires, so I felt comfortable leaving it out of the tutorial. Great article. I have implemented the solution here to start recording the ClientID in Google Analytics. That works. Using some GA debugging tools, I can see that the ClientID variable is being set, and I can also see the _ga cookie values being set. Now I am trying to retrieve that value using JavaScript in my web pages, with no luck. I am trying to use this function:

ga(function(tracker) {
  // Logs the client ID for the current user.
  console.log(tracker.get('clientId'));
});

But I get this error: Uncaught ReferenceError: ga is not defined I assume I’m getting that error because GA is being loaded through GTM and it uses a random namespace each time the page loads? Not sure how to solve this issue. Any ideas? Thanks. Hi, GTM doesn’t use a different namespace, unless you’ve set it manually. The problem is that the ga() function is defined after the analytics.js library is loaded, and GTM loads it asynchronously. Thus, if you have something like that code in your web page HTML, it will run before GTM has completed execution of the tag, creating a race condition. Also, you can’t use ga(function(tracker) {...}), since that expects a default tracker name to be used. GTM uses random, unique tracker names.
The way to make it work outside GTM would be something like: Thanks for the article. I tried implementing the Client ID solution. I did as you suggested, but I see only false as my Client ID. For the GA Tracking Code variable I created a Constant variable and put the tracking ID as the value. For the Event tag in step 2.4, I created a Google Analytics Event tag, left the Event Tracking Parameters empty, and passed the Index and Dimension Value {{Get Client ID for current Tracker}}. Please correct me – where am I going wrong? Hi Simo, Thanks for the great article. I think my problem was not addressed in this article. What I need is to retrieve the session IDs corresponding to different events/hits. Thanks in advance! Kanwer Inspiring article – we actually made a company based on this approach. – tell us what you think – it might come in handy for you :) Awesome approach. Just want to point out that the User Explorer feature in Google Analytics has encroached on this territory and pretty much offers the same output as the above, although at this point I’m not sure how easily that data can be exported elsewhere for further use.
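The “outside GTM” snippet referred to earlier in the thread did not survive extraction. A typical pattern – sketched here under the assumption that the standard analytics.js command queue is in place – queues a callback so the code runs only after the library has loaded, and uses ga.getAll() because GTM’s trackers have random names:

```javascript
// Sketch: safely read the Client ID when GA is loaded via GTM. The command
// queue stub is recreated defensively; analytics.js flushes the queue once
// it loads. ga.getAll() avoids needing a default tracker name, which GTM's
// randomly named trackers never provide.
var ga = (typeof window !== 'undefined' && window.ga) ||
  function () { (ga.q = ga.q || []).push(arguments); };

var clientId;
ga(function () {
  var trackers = ga.getAll();
  clientId = trackers.length ? trackers[0].get('clientId') : false;
});
```

Because the callback sits on the queue until analytics.js runs it, this avoids the “ga is not defined” race condition discussed above.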
http://www.simoahava.com/analytics/improve-data-collection-with-four-custom-dimensions/
I was interested to see how spammers are tracking emails, and I thought I would create something like that. This little project shows you how to track emails you send by adding a little tracking tag at the end. I also created a program that removes this kind of tracking and flags the email as SPAM. There is only one function to use: AddTracking(string sBody, string sEmail, string sDomain)

private string AddTracking(string sBody, string sEmail, string sDomain)
{
    string sRet = sBody + "<IMG height=1 src=\"." + sDomain +
        "/emailtrack.aspx?emailsent=" + sEmail + "\" width=1>";
    return (sRet);
}

This function adds an IMG link at the end of the email; when loaded, its source is actually an ASPX page that will alert you. Outlook and any mail client that loads images will call emailtrack.aspx with the parameter you added to the email. (Note: there is an error in the code above – I took the '<' character out so it would display in this article. Because I am adding HTML in the code, it is difficult to see; I don't know how to make sure the article shows that code without adding weird formatting.) The tracking page is pretty simple when it receives the request:

if (Page.IsPostBack == false)
{
    if (Request.Params["emailsent"] != null)
        StampSentEmail(Request.Params["emailsent"].ToString());
}
Response.Redirect("none.gif");

So when the email client (Outlook, Netscape) makes the request, the answer will be a transparent image, so nothing shows in the email. You need to make sure you send an HTML email. The demo project uses the Mail namespace from .NET. On that class you must set mailMsg.BodyFormat = MailFormat.Html;
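The same AddTracking idea, sketched in JavaScript for comparison. The endpoint name mirrors the article; the encodeURIComponent call and the attribute quoting are my additions, not in the original C#:

```javascript
// Sketch: append a 1x1 tracking pixel to an HTML email body. When the mail
// client loads the image, the server endpoint can record which address
// opened the message.
function addTracking(body, email, domain) {
  var pixelUrl = 'http://' + domain + '/emailtrack.aspx?emailsent=' +
    encodeURIComponent(email);
  return body + '<img height="1" width="1" src="' + pixelUrl + '">';
}
```

URL-encoding the address matters because characters like '@' are not safe in a query string.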
http://www.codeproject.com/KB/aspnet/EmailTracker.aspx
I am trying to install umap on an alwaysdata server, following this tutorial: At the migrate step (umap migrate) I get this error:

File "/home/*******/umap/lib/python3.6/site-packages/django/db/backends/base/base.py", line 171, in connect
    self.connection = self.get_new_connection(conn_params)
File "/home/*******/umap/lib/python3.6/site-packages/django/db/backends/postgresql/base.py", line 176, in get_new_connection
    connection = Database.connect(**conn_params)
File "/home/*******/umap/lib/python3.6/site-packages/psycopg2/__init__.py", line 164, in connect
    conn = _connect(dsn, connection_factory=connection_factory, async=async)
django.db.utils.OperationalError: could not connect to server: No such file or directory
    Is the server running locally and accepting
    connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?

This is my local.py (I have tried a lot of things for the host):

SECRET_KEY = '********'
INTERNAL_IPS = ('127.0.0.1', )
ALLOWED_HOSTS = ['*', 'postgresql-<accountname>.alwaysdata.net', ]
DEBUG = True
ADMINS = (
    ('You', 'your@email'),
)
MANAGERS = ADMINS
DATABASES = {
    'default': {
        'ENGINE': 'django.contrib.gis.db.backends.postgis',
        'NAME': '********_umap',
        'USER': '********',
        'PASSWORD': '********',
        'HOST': 'postgresql-<accountname>.alwaysdata.net',
        'PORT': '5432',
        'DATABASE_HOST': 'postgresql-<accountname>.alwaysdata.net',
    }
}

The alwaysdata team haven't found a solution for my problem. Do you have any idea? Thanks a lot. asked 04 Apr '17 by Suryavarman The last three lines of the error message are pretty clear, aren't they? It's expecting to connect to Postgres via a named pipe at /var/run/postgresql/.s.PGSQL.5432, but that file doesn't exist. Does anything named similarly exist in /var/run/postgresql/? Can you see the server running when you type "ps aux | grep postgres"?
It looks like umap (which I don't have a clue about, sorry) is actually supposed to use an IP socket to connect, but somehow the database adapter is forced to only try the Unix socket. Maybe psycopg2 has some kind of config file? answered 05 Apr '17 by mbethke Thanks a lot for your answer. I'm sorry for the delay. The folder /var/run/postgresql/ doesn't exist, and "ps aux | grep postgres" returns nothing. I haven't found any psycopg2 config file. From psycopg2/__init__.py:

def connect(dsn=None, database=None, user=None, password=None, host=None,
            port=None, connection_factory=None, cursor_factory=None,
            async=False, **kwargs):

the input values are:

user: None
password: None
database: ****  # not the right one
host: None

I have forced the values inside this __init__.py file. The output error is: django.db.utils.OperationalError: could not translate host name "postgresql−****.alwaysdata.net" to address: Name or service not known answered 21 Apr '17 The alwaysdata team have solved my last bug: postgresql−*.alwaysdata.net has to be postgresql-*.alwaysdata.net (the "−" character must be a plain "-"). answered 22 Apr '17
https://help.openstreetmap.org/questions/55468/umap-installation-postgresql-connection-error
This code does not work as stated in the comment. $(CONFIG_MODVERSIONS) is always empty because it is expanded before include/config/auto.conf is included. Hence, 'make modules' with CONFIG_MODVERSIONS=y cannot record the version CRCs. This has been broken since 2003, commit ("kbuild: Enable modules to be build using the "make dir/" syntax"). [1]

[1]:

Cc: linux-stable <stable@vger.kernel.org> # v2.5.71+
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
---
 Makefile | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/Makefile b/Makefile
index 2df903429d31..b856f84e28c9 100644
--- a/Makefile
+++ b/Makefile
@@ -619,12 +619,8 @@ KBUILD_MODULES :=
 KBUILD_BUILTIN := 1
 
 # If we have only "make modules", don't compile built-in objects.
-# When we're building modules with modversions, we need to consider
-# the built-in objects during the descend as well, in order to
-# make sure the checksums are up to date before we record them.
-
 ifeq ($(MAKECMDGOALS),modules)
-  KBUILD_BUILTIN := $(if $(CONFIG_MODVERSIONS),1)
+  KBUILD_BUILTIN :=
 endif
 
 # If we have "make <whatever> modules", compile modules
@@ -1337,6 +1333,13 @@ ifdef CONFIG_MODULES
 
 all: modules
 
+# When we're building modules with modversions, we need to consider
+# the built-in objects during the descend as well, in order to
+# make sure the checksums are up to date before we record them.
+ifdef CONFIG_MODVERSIONS
+  KBUILD_BUILTIN := 1
+endif
+
 # Build modules
 #
 # A module can be listed more than once in obj-m resulting in
-- 
2.25.1
https://lkml.org/lkml/2020/5/31/49
I have this line in my Web.Config:

<add key="localhost2" value="http:\\0.0.0.0\export\" />

In VB code:

<a href='" & ConfigurationSettings.AppSettings("localhost2") & TransactionNo & "\" & fi.Name & "' target='_blank' >" & fi.Name & "</a>"

The published URL: How can I encrypt the 803 value, or prevent the user from changing the value in the address bar? The scenario is as follows: after the user logs in, the application returns his/her visit numbers, for example: 123 150 I display the values in a TextView. How can I make them clickable so that they open a PDF file – e.g. when the user clicks on 150, open the PDF file located on the server as 150.pdf? My activity loads a grid view from one XML file and the controls from another XML file:

public class ResultActivity extends AppCompatActivity {
    GridView gridview;
    ArrayList<String> arrayList;
    String ip, db, un, passwords;
    Connection connect;
    PreparedStatement stmt;
    ResultSet rs;
    Button TButton;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.gridviews);
        TButton = (Button) findViewById(R.id.btnTrans);
        TButton.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                Intent intentLoadNewActivity = new Intent(ResultActivity.this, MyResultsActivity.class);
                startActivity(intentLoadNewActivity);
            }
        });
        try {
            connect = CONN(un, passwords, db, ip);
            Statement statement = connect.createStatement();
            rs = statement.executeQuery(query);
            List<Map<String, String>> data = null;
            data = new ArrayList<Map<String, String>>();
            while (rs.next()) {
                Map<String, String> datanum = new …

Hi everybody, my question is: how can I get the last row in my DataGridView? Thanks in advance :) I have created a job to back up my database automatically, but when the process runs, a failure message appears. Can anyone help me?
Thanks.

BACKUP DATABASE [MYDATABASE] TO DISK = 'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Backup'

How can I check whether a text box value exists in the database or not? Please help.

How can I make my Crystal Report show the data depending on the supplier name? Any help please? I have six suppliers, one on each report.

Hey everybody, I need to know how to create an automatic backup for my database, step by step. I am using SQL Server Enterprise 2008 R. What I need are the steps for doing that, and how to check whether the job started or not. Thanks.

I get this error message when I try to connect from client PCs: the login with the Crystal Report fails, but the forms connect correctly to the DB server. This is the error:

Crystal Report Viewer login failed
Detail: ADO error code: 0x80040e4d
Source: Microsoft OLE DB Provider for SQL Server
Description: login failed for user sa
SQL state: 42000
Native error: 18456

Any suggestion would help, please.

Hi all, when I install my VB project on a client machine and run it, it shows this error: "Can't generate SSPI context". What should I do?

When I install the VB6 project on networked PCs, this error shows: "The program can't start because ActiveX Tools.dll is missing from your computer. Try to install the program to fix this problem." I searched the internet for the required DLL but could not find it. Can someone help me please?

I have a server called OAS-VB and a network PC. What I need is for the SQL Server installed on the PC to see the OAS-VB server and reach the database file. Or can someone tell me how to configure SQL Server so the network can reach the PC? What I need is to connect an application to its database installed on the server. Thanks in advance.

Hi all, where can I install or enable ActiveX tool.dll? Can anyone provide me a link to download it?
I have a VB6 project with Crystal Reports 8 and 10 that runs correctly on one PC. What I need is to run this project on a local network. How can I do this, how can I edit the connection string, and how do I make the .exe file? Help me please.

I used VB6 with Crystal Reports 10. When I try to open a report.rpt file in Crystal Reports, this message appears: "Invalid report schema, can't open the document". Any help please?

Hey everybody, I need an answer to this question as soon as possible, please. Write a method named season that takes two integers as parameters representing a month and a day and that returns a string indicating the season for that month and day. Assume that months are specified as an integer between 1 and 12 (1 for January, and so on) and that the day of the month is a number between 1 and 31. If the date falls between 12/16 and 3/15 you should return "winter"; if the date falls between 3/16 and 6/15, "spring" …

How can I do the following: when the user clicks on some cell, e.g. cell(3) of the current DataGridView, another DataGridView appears, and when the user selects a row from the second DataGridView, the selected row's information goes into cell(4) and cell(5). Someone help please?

I have these two tables:

table1 (BOQSectionsAndParts_ID, BOQSection, BOQPart, ProjectNO) — BOQSectionsAndParts_ID is the primary key
table2 (BOQItemEntry_ID, ProjectNO, BOQSectionsAndParts_ID, BOQ_Item) — BOQItemEntry_ID is the primary key and BOQSectionsAndParts_ID is a foreign key

This is SQL Server. In my VB form I show BOQSection in a combobox. What I need is: when the user chooses the BOQSection, the BOQSectionsAndParts_ID value is saved to table2. I use a DataGridView to save the values of table2. Any help please :)

"The procedure entry point 1stractI could not be located in the dynamic link library MSDART.dll" — how can I fix this?
After I add all the references to my VB6 project and everything loads successfully, when I press the report button to view the report I face this error message:

Logon failed
Details: ADO error code 0x80004005
Source: Microsoft OLE DB Provider for SQL Server
Description: [DBNETLIB][ConnectionOpen] SQL Server doesn't exist or access denied
SQL state: 08001
Native error: 17

Any suggestion would help, please.

Can anybody help me with how to access a Windows registry key from my application?

Private Sub Cmd9_Click()
    'On Error GoTo Error_Handler
    Dim Report As New Blank_rShopDrawing
    Set Report = New Blank_rShopDrawing
    Report.RecordSelectionFormula = "false"
    Form1.CRViewer1.ReportSource = Report
    Form1.CRViewer1.ViewReport
    Form1.Show 1
Exit_Error_Handler:
    Set rs = Nothing
    Exit Sub
Error_Handler:
    MsgBox Err.Number & " : " & Err.Description
    Resume Exit_Error_Handler
End Sub

The error is on line 6. Any help please?

When I want to add a component or reference to a VB6 project, it gives me this error message: "Error accessing the system registry". How can I fix that?

Sorry, again:

Private Sub Cmd_Print_Click()
    'On Error GoTo Error_Handler
    If newflag <> 0 Then
        MsgBox "Inserting New Record Mode, Press EXIT Button To Return To Normal Mode", vbCritical, "Attention"
        Exit Sub
    End If
    If RepNo.Text = "" Then Exit Sub
    Dim Report As New rstoreinmaterial
    Report.RecordSelectionFormula = "{DetailsStore_InOut_Material.RepNo}= " & Val(RepNo.Text) & " and {DetailsStore_InOut_Material.projectno} = '" & ProjectNo1 & "'"
    Form1.CRViewer1.ReportSource = Report
    Form1.CRViewer1.ViewReport
    Form1.Show 1
Exit_Error_Handler:
    Set rs = Nothing
    Exit Sub
Error_Handler:
    MsgBox Err.Number & " : " & Err.Description
    Resume Exit_Error_Handler
End Sub

How can I fix this error?

What is run-time error 424, "Object required", and how can I solve it?
I have two columns in my DataGridView: one stores the labour working hours (a Time value) and the other stores the cost of one working hour. I need to multiply these two columns, but it gives me this error: "Operator '*' is not defined for type TimeSpan and type Double."

row.Cells(6).Value = (row.Cells(4).Value) * CDbl(row.Cells(5).Value)

Hey everybody, I need help please. I have an application developed in VB6 and its database designed in SQL Server, but I don't know how to make it run. When I run the application it gives me the following error:

Dim rs As New Recordset
Compile error: user-defined type not defined.

And when I load the application again it gives me "crviewer.dll could not be loaded". Any suggestion that can help me please?

"SqlDbType.Time overflow. Value '3.00:00:00' is out of range. Must be between 00:00:00.0000000 and 23:59:59.9999999." Does anyone know what the problem is?

Hey everybody, can anyone help me with some ideas, for design and coding, to calculate labour productivity? I have the labour type stored in a combobox, the salary as input to a DataGridView, and accommodation, social security, overtime, sick days, days off, transportation.... How do I start the coding? Any ideas please?
I have this connection string:

<connectionStrings>
    <add name="Tendering_Pro" connectionString="Data Source=HABOUSH-PC\SQLEXPRESS;Initial Catalog=TenderingProgram;Integrated Security=True" providerName="System.Data.SqlClient" />
</connectionStrings>
<system.diagnostics>

And this is my class:

Imports System.Data
Imports System.Data.SqlClient
Imports System.Configuration

Public Class BuyNewTender
    Private conn As SqlConnection
    Private _MSG As String = ""
    Private _ProjectTenderingNO As String = ""
    Private _TenderingNo As String = ""
    Private _ProjectName As String = ""
    Private _Clientname As String = ""
    Private _PurchasePrice As Integer = 0
    Private _BuyDate As String = ""
    Private _TenderClosingDate As String = ""
    Private _DocumentDetail As String = ""
    Private _Notes As String = ""

    Public Property ProjectTenderingNO() As String
        Get
…

The End.
https://www.daniweb.com/members/955478/hibapro/threads
Is there a program that takes a Microsoft Word document and produces a spreadsheet of all the words contained in the document and the number of times each word appears? e.g.

cat 23
said 15
jumped 12
dog 7

EDIT: I appreciate the answers regarding VBA macros, but I was hoping for a readily available tool (open source, shareware or commercial) that can handle this task.

Apart from VBA, one could develop such an application using the OpenOffice API to read the contents of the Word document, process it, and export the results as a CSV file to open in a spreadsheet application. However, it's actually just a few lines of code if you're familiar with any programming language. For example, in Python you can easily do it like this.

Here we define a simple function which counts words given a list:

def countWords(a_list):
    words = {}
    for i in range(len(a_list)):
        item = a_list[i]
        count = a_list.count(item)
        words[item] = count
    return sorted(words.items(), key=lambda item: item[1], reverse=True)

The rest is to manipulate the content of the document. First paste it:

content = """This is the content of the word document. Just copy paste it. It can be very very very very long and it can contain punctuation (they will be ignored) and numbers like 123 and 4567 (they will be counted)."""

Here we remove the punctuation, EOL characters, parentheses etc.,
and then generate a word list for our function:

import re
cleanContent = re.sub('[^a-zA-Z0-9]', ' ', content)
wordList = cleanContent.lower().split()

Then we run our function, store its result (word-count pairs) in another list, and print the results:

result = countWords(wordList)
for words in result:
    print(words)

So the result is:

('very', 4)
('and', 3)
('it', 3)
('be', 3)
('they', 2)
('will', 2)
('can', 2)
('the', 2)
('ignored', 1)
('just', 1)
('is', 1)
('numbers', 1)
('punctuation', 1)
('long', 1)
('content', 1)
('document', 1)
('123', 1)
('4567', 1)
('copy', 1)
('paste', 1)
('word', 1)
('like', 1)
('this', 1)
('of', 1)
('contain', 1)
('counted', 1)

You can remove the parentheses and commas using search/replace if you want. All you need to do is download Python 3, install it, open IDLE (which comes with Python), replace the content with that of your Word document, and run the commands one at a time, in the given order.

Use VBA; a script to do exactly what you request is towards the bottom of this page.
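As an aside, the count-inside-a-loop approach above rescans the whole list once per word; the standard library's collections.Counter does the same tally in a single pass. A minimal sketch of the same idea (the function name is mine, not from the answer):

```python
import re
from collections import Counter

def count_words(text):
    """Tally words in a text, ignoring punctuation and case,
    sorted by descending frequency."""
    # same cleanup as above: non-alphanumerics become spaces
    words = re.sub('[^a-zA-Z0-9]', ' ', text).lower().split()
    # Counter tallies in one pass; most_common() sorts by count
    return Counter(words).most_common()

print(count_words("The cat and the dog. The cat!"))
# → [('the', 3), ('cat', 2), ('and', 1), ('dog', 1)]
```

For large documents this is noticeably faster than calling list.count() per word, and it needs no hand-rolled sorting.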
http://superuser.com/questions/251324/program-to-get-unique-words-and-their-count-from-a-word-document/251347
symlink - make a symbolic link to a file

#include <unistd.h>

int symlink(const char *path1, const char *path2);

Upon successful completion, symlink() shall return 0; otherwise, it shall return -1 and set errno to indicate the error.

The symlink() function shall fail if, among other errors:

- [ENAMETOOLONG] - The length of the path2 argument exceeds {PATH_MAX} or a pathname component is longer than {NAME_MAX}.

The symlink() function may fail if:

- [ELOOP] - More than {SYMLOOP_MAX} symbolic links were encountered during resolution of the path2 argument.
- [ENAMETOOLONG] - As a result of encountering a symbolic link in resolution of the path2 argument, the length of the substituted pathname string exceeded {PATH_MAX} bytes (including the terminating null byte), or the length of the string pointed to by path1 exceeded {SYMLINK_MAX}.

Since IEEE Std 1003.1-2001 does not require any association of file times with symbolic links, there is no requirement that file times be updated by symlink().

None.

lchown(), link(), lstat(), open(), readlink(), unlink().
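The behaviour described above is easy to poke at from a scripting language; Python's os.symlink, for instance, is a thin wrapper over this call. A small sketch (file names are illustrative, not part of the specification):

```python
import os
import tempfile

def demo_symlink(directory):
    """Create a file and a symbolic link to it; return what
    readlink() reports and the data read through the link."""
    path1 = os.path.join(directory, "target.txt")   # the existing file
    path2 = os.path.join(directory, "link.txt")     # the new symbolic link
    with open(path1, "w") as f:
        f.write("hello")
    # symlink(path1, path2): path2 becomes a link whose contents are path1
    os.symlink(path1, path2)
    with open(path2) as f:          # open() follows the link to the target
        data = f.read()
    return os.readlink(path2), data  # readlink() returns path1 verbatim

with tempfile.TemporaryDirectory() as d:
    contents, data = demo_symlink(d)
    print(contents.endswith("target.txt"), data)  # → True hello
```

Note that, matching the specification, the link's contents are stored verbatim: path1 is not required to exist when the link is created.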
http://pubs.opengroup.org/onlinepubs/000095399/functions/symlink.html
Previously I was using a PowerShell script for exporting the PA config; now I have attempted to write one with Python. It took me a while to work out how to read data from the XML tags… I used the BeautifulSoup4 library to read the data from the tags easily. Here's the code, with comments:

import requests
from bs4 import BeautifulSoup as BS
import os.path
import datetime

r = requests.get("", verify=False)

# get the xml from the http response
soup = BS(r.text, "lxml")

# get the key data from
key = soup.find('key').text

# get the config using the PA export config API
r = requests.get("{}".format(key), verify=False)

# define pathname
path = 'd:/temp/'
current_date = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")

# define file name
filename = os.path.join(path, "config" + current_date + ".xml")

# open a new file config.xml, as defined above
config = open(filename, "w")

# write the response to the config file; the content must be a string, so use r.text
config.write(r.text)
config.close()

Use crontab to run this script; if you are using the Windows Task Scheduler, you need to have Python 3 installed.

One thought on "Export Palo Alto firewall config without Panorama with Python"

Very helpful. Please do one for upgrading multiple Palo Alto firewalls through a Python script.
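One thing the script above doesn't do is clean up after itself: run from cron, the export directory grows by one file per run. Assuming the same config<timestamp>.xml naming, a small pruning helper could look like this (this is my addition, not part of the original post):

```python
# Sketch: keep only the newest N exported configs, assuming the
# "config<YYYYmmdd-HHMMSS>.xml" naming used by the export script.
import glob
import os

def prune_backups(path, keep=10):
    """Delete all but the newest `keep` config backups in `path`;
    return the paths that were kept."""
    backups = sorted(glob.glob(os.path.join(path, "config*.xml")))
    # the timestamp format sorts chronologically, so oldest files come first
    for old in backups[:-keep]:
        os.remove(old)
    return backups[-keep:]
```

Calling prune_backups('d:/temp/', keep=30) at the end of the export script would cap the directory at a month of daily backups.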
https://cyruslab.net/2017/10/02/export-palo-alto-firewall-config-without-panorama-with-python/
© Future Publishing Limited, Quay House, The Ambury, Bath BA1 1UA. All rights reserved. England and Wales company registration number 2008885.

The last few projects have been relatively easy, but with your skills racing forward as they are it's definitely time you tackled something much bigger. How does 360 lines of code sound to you? Scary? We're going to produce a game that's simple and fun to play, and along the way you'll learn some valuable new coding skills.

The game we're going to make is called Bang! and the idea is simple: as different-coloured fireworks launch into the air, the player needs to use their mouse to select the fireworks with the same colour, then press Space to explode them. The more they select of the same colour, the more points they get. Just to make things a bit more lively, I'm going to introduce you to something called a particle system, which is a very common special effects technique for games that simulates explosions, fire, smoke and much more. Sound good? Sound worth the effort of writing almost 400 lines of code? Let's go!

Create a new command-line project in MonoDevelop and call it Bang. As with Project Four: Fish Feast and Project Six: Bubble Trouble, you'll need to copy the SdlDotNet.dll and Tao.Sdl.dll files into your project directory then add them as references in your project. While you're adding references, add one for System.Drawing. You'll also need to change the output directory for your program so that it points to the root directory of your project - again, see project four for how to do that.

In the source code for this project I've provided a "media" directory containing pictures for use in the game; copy that into your project directory so that it's in the same place as the Main.cs file that MonoDevelop made for you. That's all the media in place; now we just need to create the three class definitions that will be used to power the game.
Right-click on the Bang project (not "Solution Bang" but the "Bang" that's beneath it) and choose Add > New File, then choose Empty Class from the window that appears. Name it "Bang". Repeat the procedure to create classes called Explosion and Firework, which gives us the three classes that will power our game.

In Bang.cs, Explosion.cs and Firework.cs, set the "using" statements to these:

using SdlDotNet.Core;
using SdlDotNet.Graphics;
using SdlDotNet.Graphics.Primitives;
using SdlDotNet.Input;
using System;
using System.Collections.Generic;
using System.Drawing;

While you're in each of those files, you'll see that MonoDevelop "helpfully" created a constructor method for each class - just delete them, because we won't be using them. As a reminder, constructor methods get called when an object of a class is created, and they look something like this:

public Bang()
{
}

Like I said, just delete them. Now open up Main.cs and replace the Console.WriteLine() call with this:

Bang game = new Bang();
game.Run();

That will create a new instance of our game and start it running - or at least it will once we make the Run() method! Open up Bang.cs and put these variable declarations inside the Bang class:

Surface sfcGameWindow;
Surface sfcBackground = new Surface("media/background.jpg");
Random Rand = new Random();
const int GameWidth = 640;
const int GameHeight = 480;
const int GameHalfWidth = GameWidth / 2;

That gives us just enough data to fire up a basic game screen and show the background. But first we need to drop in a few basic methods to handle running a simple, skeleton version of our game.

First up, the Run() method. This needs to create the game window, hook up methods for the events we'll be using, then hand control over to SDL. Which events will we be using? Well, quite a few, actually: we want to read mouse movement, mouse button clicks, keyboard presses, game ticks and the quit signal, so we need to hook them all up to empty methods that we'll fill out later.
Here's how the first-draft Run() method should look - put this in Bang.cs:

public void Run()
{
    sfcGameWindow = Video.SetVideoMode(GameWidth, GameHeight);
    Events.MouseMotion += new EventHandler<MouseMotionEventArgs>(Events_MouseMotion);
    Events.MouseButtonDown += new EventHandler<MouseButtonEventArgs>(Events_MouseButtonDown);
    Events.KeyboardDown += new EventHandler<KeyboardEventArgs>(Events_KeyboardDown);
    Events.Tick += new EventHandler<TickEventArgs>(Events_Tick);
    Events.Quit += new EventHandler<QuitEventArgs>(Events_Quit);
    Events.Run();
}

You'll need to provide skeleton methods for Events_MouseMotion(), Events_MouseButtonDown(), Events_KeyboardDown(), Events_Tick() and Events_Quit(). They can be pretty much empty for now, with the exception of Events_Tick() and Events_Quit(), which will use the same calls to Update()/Draw() and Events.QuitApplication() as seen in project six. If you missed that project, go back and start there first because I'm not going to explain it all again here! So, here are the skeleton methods for you to put into Bang.cs:

void Events_MouseMotion(object sender, MouseMotionEventArgs e)
{
}

void Events_MouseButtonDown(object sender, MouseButtonEventArgs e)
{
}

void Events_KeyboardDown(object sender, KeyboardEventArgs e)
{
}

void Events_Tick(object sender, TickEventArgs e)
{
    Update(e.SecondsElapsed);
    Draw();
}

private void Draw()
{
    sfcGameWindow.Blit(sfcBackground);
    sfcGameWindow.Update();
}

private void Update(float elapsed)
{
}

void Events_Quit(object sender, QuitEventArgs e)
{
    Events.QuitApplication();
}

As you can see, the Draw() method just draws the background image then tells SDL to update the screen. Finally, copy in the PointOverRect() method that we used in previous games to detect whether a mouse click is over a rectangle:

bool PointOverRect(float x1, float y1, float x2, float y2, int width, int height)
{
    if (x1 >= x2 && x1 <= x2 + width)
    {
        if (y1 >= y2 && y1 <= y2 + height)
        {
            return true;
        }
    }
    return false;
}

Phew!
That's all the setup code out of the way: if you run the "game" now you'll see it brings up a window that shows the background. Nothing happens - after all that work! As you can imagine, it's fairly common practice to take a snapshot of your project right now, then squirrel that away to use to kickstart any future game projects.

The first thing we're going to do is define what we need to store about a firework to make it functional in the game. The combination of the angle and colour information is enough to let us know which graphic should be used to draw the firework. Here's how the Firework class looks in C#:

class Firework
{
    public int Angle;
    public Color Colour;
    public float X;
    public float Y;
    public float XSpeed;
    public float YSpeed;
    public bool IsSelected;
}

As you should recall from project six, using floats for position rather than integers allows us to store the position of things more accurately by making frame-independent movement possible. More on this later!

And now comes the first tricky bit: we need to load all the firework graphics into the game. This might sound easy, but it's actually quite hard because we need to make sure it's all carefully organised by direction and type. As far back as project two we looked at the List generic data type, which is an array that holds data of a specific type. That works well for storing data in a given order: you add items to the list, then read them back out simply by referring to their position in the list.

In this tutorial I want to introduce you to a new data type called a Dictionary, which lets you add items to an array at a specific location rather than just at the next available slot. What can the location be? Well, just about anything - if you want to store something at location "fish", that's fine. If you want to store something at location 90, that's fine - even if locations 0 through 89 don't exist.
Unlike lists, dictionaries store their items in any positions you want - you can even define what kind of data the position is. As with a List, you can store anything you want in a Dictionary, and this is where things can become a bit tricky. You see, we're going to use a Dictionary to store Dictionaries, and each dictionary inside will store SDL Surfaces for the fireworks we'll be drawing to the screen. If that doesn't make sense, it will once you see the code that actually loads the fireworks. First, add this code just before the Run() method:

List<Firework> Fireworks = new List<Firework>();
Dictionary<int, Dictionary<Color, Surface>> FireworkTypes = new Dictionary<int, Dictionary<Color, Surface>>();

Technically it's "bad style" to have a dictionary storing other dictionaries, but it's a really quick way of solving our problem! Regardless, it's that last line that's likely to trip you up. Let me show you a simpler example: a dictionary designed to hold people's names and ages:

Dictionary<string, int> People = new Dictionary<string, int>();

Like I said earlier, a dictionary can store any value at any location. In the People dictionary above, we're telling Mono that the value type we want to store will be an int, and the location - known as the "key" - will be specified as a string. Using that example, we could add items to the dictionary like this:

People.Add("Paul Hudson", 29);
People.Add("Ildiko Hudson", 30);
People.Add("Nick Veitch", 59);

We can then read values back out from the dictionary like this:

Console.WriteLine("Paul's age is " + People["Paul Hudson"]);

There is one proviso, though, which is that you shouldn't add two values to a dictionary using the same key.
That is, code like this will cause an error:

People.Add("Paul Hudson", 29);
People.Add("Ildiko Hudson", 30);
People.Add("Nick Veitch", 59);
People.Add("Paul Hudson", 17);

Let's go back to the dictionary we'll be using in this project:

Dictionary<int, Dictionary<Color, Surface>> FireworkTypes = new Dictionary<int, Dictionary<Color, Surface>>();

What that means is that our key will be an integer, and the value is a dictionary with Colours as the keys and SDL Surfaces as the values. If you look in the media directory for this game you'll see there are three types of firework: blue, green and red. And for each colour, we also have the same firework at three different angles. So what we're going to do is add each colour and firework image for a given angle to a dictionary, then add that dictionary to the parent dictionary at the specified angle. That means we can retrieve the correct picture for a firework by knowing its angle and colour.

So, let's start by loading all the firework images that point upwards. Add this to the start of your Run() method:

Dictionary<Color, Surface> fwtypes = new Dictionary<Color, Surface>();
fwtypes.Add(Color.Blue, new Surface("media/firework_blue.png"));
fwtypes.Add(Color.Green, new Surface("media/firework_green.png"));
fwtypes.Add(Color.Red, new Surface("media/firework_red.png"));
FireworkTypes.Add(90, fwtypes);

First we create a new dictionary with Color for the key and Surface for the value. We then add the three firework images for this angle, each time using the correct colour value for it. Once that's done, we add that dictionary to the FireworkTypes parent dictionary with the key 90 - an angle of 0 is pointing to the left, so 90 is pointing directly upwards. Using this method, if you wanted to read the SDL Surface for the blue firework at 90 degrees, you would use FireworkTypes[90][Color.Blue], which is nice and easy to read.
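If the nested generic syntax is hard to parse, the shape of the data may be clearer in a dynamic language. Here is the same angle-then-colour lookup sketched in Python - purely illustrative, since the game itself is C# (the names and the use of file paths in place of loaded Surfaces are mine):

```python
# Illustrative sketch of the FireworkTypes shape: an outer dict keyed
# by angle, holding inner dicts keyed by colour name.
firework_types = {}

for angle, suffix in [(90, ""), (135, "_135"), (45, "_45")]:
    firework_types[angle] = {
        colour: "media/firework_%s%s.png" % (colour, suffix)
        for colour in ("blue", "green", "red")
    }

# Same access pattern as FireworkTypes[90][Color.Blue] in the C# code:
print(firework_types[90]["blue"])   # → media/firework_blue.png
```

The two-step index - first by angle, then by colour - is exactly what the C# code does; the only difference is that C# makes you spell out the key and value types up front.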
We need to load all the other firework images as well, so add this code beneath the code from above:

fwtypes = new Dictionary<Color, Surface>();
fwtypes.Add(Color.Blue, new Surface("media/firework_blue_135.png"));
fwtypes.Add(Color.Green, new Surface("media/firework_green_135.png"));
fwtypes.Add(Color.Red, new Surface("media/firework_red_135.png"));
FireworkTypes.Add(135, fwtypes);

fwtypes = new Dictionary<Color, Surface>();
fwtypes.Add(Color.Blue, new Surface("media/firework_blue_45.png"));
fwtypes.Add(Color.Green, new Surface("media/firework_green_45.png"));
fwtypes.Add(Color.Red, new Surface("media/firework_red_45.png"));
FireworkTypes.Add(45, fwtypes);

And now for a little bit of SDL magic: if you look at the firework pictures in the media directory, you'll notice they all have a magenta background. We can tell SDL that we want magenta to be drawn as transparent on the screen, which will cause all those magenta parts of the pictures to be invisible. To do that, we just need to loop through every angle and every picture, setting its TransparentColor and Transparent values. Put this just after the previous code:

foreach (Dictionary<Color, Surface> direction in FireworkTypes.Values)
{
    foreach (Surface firework in direction.Values)
    {
        firework.TransparentColor = Color.Magenta;
        firework.Transparent = true;
    }
}

We've used the foreach loop a lot in previous projects, but this is the first time we've used it to loop over a dictionary, and also the first time we've used it to read a complex data type. So: as you might have guessed, when looping over a dictionary, you can choose to read either the keys (FireworkTypes.Keys) or the values (FireworkTypes.Values).

Each firework is available at three angles, but all of them have a magenta background. We can knock that out with SDL.
You also need to specify the exact kind of data you're getting back - this is nothing new, as we've been using things like "foreach (string str in somestrings)" for a while, but the difference here is that when you read a generic data type back in, such as a dictionary, you need to tell Mono exactly what kind of dictionary you're working with. Yes, this can be a bit of a pain, but if you remember all the way back to project one you can use the "var" keyword in place of data types, so if you wanted to you could be a bit lazy and write this:

foreach (var direction in FireworkTypes.Values)
{
    foreach (Surface firework in direction.Values)
    {
        firework.TransparentColor = Color.Magenta;
        firework.Transparent = true;
    }
}

Once we've set the TransparentColor and Transparent properties, those magenta parts will no longer appear - SDL will ensure they are automatically invisible.

Now that all the firework images are loaded into RAM, let's make the game launch some fireworks so we can make sure everything is working. This task can be broken down into four small ones that we'll tackle individually, in order of difficulty, starting with tracking the last time a firework was launched. Add this to the collection of variables in Bang.cs, just above the Run() method:

int LastLaunchTime;

Then put this line somewhere in the Run() method:

LastLaunchTime = Timer.TicksElapsed;

That starts the game off with a fresh counter, meaning that it will wait a few seconds before launching the first firework. Task number one solved!

Next, drawing all the fireworks on the screen. As discussed earlier, Firework objects use floats for their X and Y position, whereas SDL needs integers for drawing to the screen. So, inside the Draw() method we need to loop over each firework, force its position floats into integers, then draw them on the screen. I also told you that to pull out the blue firework at 90 degrees we need to use this code: FireworkTypes[90][Color.Blue].
So, to pull out any given surface for a firework, we just need to use its Angle and Colour values. Put this code into your Draw() method, after drawing the background and before updating the game window:

foreach (Firework firework in Fireworks)
{
    sfcGameWindow.Blit(FireworkTypes[firework.Angle][firework.Colour], new Point((int)firework.X, (int)firework.Y));
}

Next up, we need to move each of the active fireworks. We're also going to take the opportunity here to remove any fireworks that have moved off the screen. If you think back to the definition of the Firework class, you'll remember that each firework has an X and Y position, but also that it has X and Y speeds, so to move a firework across the screen we just need to subtract their X and Y speeds from their X and Y positions. So, put this code into your Update() method:

for (int i = Fireworks.Count - 1; i >= 0; --i)
{
    Firework firework = Fireworks[i];
    firework.X -= firework.XSpeed;
    firework.Y -= firework.YSpeed;

    if (firework.Y < -300)
    {
        Fireworks.RemoveAt(i);
    }
}

Note that we need to loop backwards over the array because we're potentially removing one or more fireworks, and it would cause problems if we removed a firework during a forwards loop - see project four if that all seems a bit hazy. Why do you think I'm using -300 rather than just looking for the height of the firework? The answer is simply a game-play one: if a firework is just off the screen and the player wants to explode it, we need to give them a bit of leeway, because it's likely the explosion will just about make it onto the screen.

The last task is the most complicated one: we need to launch a new firework every few seconds. In principle, this is as simple as adding code like this to the Update() method:

if (LastLaunchTime + 4000 < Timer.TicksElapsed)
{
    LaunchFireworks();
}

...but that's just shifting the work to a new method! So, add that code into Update(), and let's take a look at what a LaunchFireworks() method should do.
First, it needs to reset the LastLaunchTime variable to the current time, so that the game waits another four seconds before launching more fireworks. Then it needs to decide randomly whether it should launch fireworks from the left, from the right or from the bottom. And then it needs to create five fireworks, each with a random colour. Each direction (from the left, right, or bottom) has to create five fireworks at different positions on that side of the screen, so there are fifteen ways to create a firework. Hopefully that should be setting off alarm bells in your head: this is something that definitely calls for a method all of its own. Let's start there: let's create a method that launches precisely one firework at a given angle, and at a specific X/Y position:

void CreateFirework(int angle, int xpos, int ypos)
{
    Firework firework = new Firework();
    firework.X = xpos;
    firework.Y = ypos;
    firework.XSpeed = (float)(Math.Cos(angle * Math.PI / 180)) * 5;
    firework.YSpeed = (float)(Math.Sin(angle * Math.PI / 180)) * 5;

    switch (Rand.Next(0, 3))
    {
        case 0:
            firework.Colour = Color.Blue;
            break;
        case 1:
            firework.Colour = Color.Green;
            break;
        case 2:
            firework.Colour = Color.Red;
            break;
    }

    firework.Angle = angle;
    Fireworks.Add(firework);
}

There's nothing really new in there - the two long-ish lines that call Math.Cos() and Math.Sin() have both been used and explained in project six, with the exception that I've put "* 5" on the end to make them all move a bit faster! As each firework is created, we add it to the list of active fireworks (the Fireworks list) so that it can be moved and drawn correctly. That CreateFirework() method launches a firework at an angle and position, so all we need to do is write the LaunchFireworks() method that calls CreateFirework() once for each firework it wants to create.
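Those Cos/Sin lines are just the usual degrees-to-unit-vector conversion, scaled by a speed of 5. If you want to sanity-check the arithmetic outside the game, here is the same sum in Python (illustrative only - the function name is mine):

```python
import math

def velocity(angle_degrees, speed=5.0):
    """Convert a launch angle in degrees to (xspeed, yspeed) components,
    mirroring the Math.Cos/Math.Sin expressions in CreateFirework()."""
    radians = angle_degrees * math.pi / 180   # degrees -> radians
    return math.cos(radians) * speed, math.sin(radians) * speed

xs, ys = velocity(90)            # 90 degrees: no horizontal movement
print(round(xs, 6), round(ys, 6))  # → 0.0 5.0
```

At 45 and 135 degrees the two components have equal magnitude, which is why the diagonal fireworks climb at the same rate they drift sideways.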
Put this somewhere into Bang.cs: void LaunchFireworks() { // reset the timer LastLaunchTime = Timer.TicksElapsed; // pick a random direction for the fireworks switch (Rand.Next(0, 3)) { // fire your fireworks here } } You can have as many firework launch patterns as you want, but for the sake of this simple tutorial we're only going to have three: up, from the left and from the right. Put this code where I've marked "// fire your fireworks here": case 0: // fire five, straight up CreateFirework(90, GameHalfWidth - FireworkTypes[90][Color.Blue].Width, GameHeight); CreateFirework(90, GameHalfWidth - FireworkTypes[90][Color.Blue].Width - 50, GameHeight); CreateFirework(90, GameHalfWidth - FireworkTypes[90][Color.Blue].Width - 100, GameHeight); CreateFirework(90, GameHalfWidth - FireworkTypes[90][Color.Blue].Width + 50, GameHeight); CreateFirework(90, GameHalfWidth - FireworkTypes[90][Color.Blue].Width + 100, GameHeight); break; case 1: // fire five, from the left to the right CreateFirework(135, -FireworkTypes[135][Color.Blue].Width, GameHeight - 300); CreateFirework(135, -FireworkTypes[135][Color.Blue].Width, GameHeight - 250); CreateFirework(135, -FireworkTypes[135][Color.Blue].Width, GameHeight - 200); CreateFirework(135, -FireworkTypes[135][Color.Blue].Width, GameHeight - 150); CreateFirework(135, -FireworkTypes[135][Color.Blue].Width, GameHeight - 100); break; case 2: // fire five, from the right to the left CreateFirework(45, GameWidth, GameHeight - 300); CreateFirework(45, GameWidth, GameHeight - 250); CreateFirework(45, GameWidth, GameHeight - 200); CreateFirework(45, GameWidth, GameHeight - 150); CreateFirework(45, GameWidth, GameHeight - 100); break; That's quite a lot of code, but it's really dull stuff - it's just repetitive calls to CreateFirework(), varying the angle, X and Y positions of the parameters. For example, when firing fireworks up from the bottom of the screen, we fire one in the middle, two off to the left and two off to the right. 
Firing from the left or right, we keep the X position the same, and vary the height. There is one minor point I ought to clear up, and that's the use of Color.Blue for reading the width values. The reason this is used is because we need to offset the fireworks' positions by their width, so they appear off the screen. As all our fireworks are the same size, we can read any colour and it will be correct, so I just used the first one, Color.Blue. With those four tasks complete, the game is starting to come together a little bit: if you build and run the code, you'll see the same background, but now fireworks will fly across the screen. Of course, you can't actually do anything with them yet, but then again we're only half way through this project! Our game now has fireworks that fly up the screen in various directions - it's a bit dull, but at least things are starting to come together now. For the player to be able to score points, we need to let them a) choose which fireworks should be exploded, then b) explode them. Choosing which fireworks should be exploded is pretty straightforward: when the mouse button is clicked, or if the mouse is moved when the mouse button is held down, we need to see whether the cursor is over any fireworks. If it is, then we need to select it. However, here's the catch that makes the game more difficult: the player can select only one colour at a time, which means if they have a green firework selected then click a red firework, we need to deselect the green firework. On the other hand, if they have a green firework selected and then click another green firework, they both become selected. 
Put this method, CheckSelectFirework(), into Bang.cs:

void CheckSelectFirework(Point point) {
    // loop over every active firework
    foreach (Firework firework in Fireworks) {
        if (PointOverRect(point.X, point.Y, firework.X, firework.Y, FireworkTypes[firework.Angle][firework.Colour].Width, FireworkTypes[firework.Angle][firework.Colour].Height)) {
            // a firework was selected!
            foreach (Firework firework2 in Fireworks) {
                // now loop over every other firework
                if (firework2.IsSelected && firework2.Colour != firework.Colour) {
                    // deselect any other fireworks that aren't this colour
                    firework2.IsSelected = false;
                }
            }

            // finally, select the new firework
            firework.IsSelected = true;
        }
    }
}

That's all very simple in code, but it makes the game a lot harder to play! Now to run that method whenever the mouse is moved with the mouse button down or when the mouse button is pressed, we just need to modify Events_MouseButtonDown() and Events_MouseMotion(). Both of these methods receive the mouse position as part of their parameters, so we just need to pass that on to CheckSelectFirework() so that it can check whether any fireworks were selected. Modify your code to this:

void Events_MouseButtonDown(object sender, MouseButtonEventArgs e) {
    CheckSelectFirework(e.Position);
}

void Events_MouseMotion(object sender, MouseMotionEventArgs e) {
    if (Mouse.IsButtonPressed(MouseButton.PrimaryButton)) {
        CheckSelectFirework(e.Position);
    }
}

Finally, we need to make it easier for players to see which fireworks are currently selected, because selecting a different colour will unselect previously selected fireworks!
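A quick aside: CheckSelectFirework() relies on a PointOverRect() helper that isn't listed in this part of the project. It's nothing more than an axis-aligned point-in-rectangle test; here's a sketch of the equivalent logic in Python (the name and the sizes used are hypothetical):

```python
def point_over_rect(px, py, rx, ry, width, height):
    """True if (px, py) lies inside the rectangle whose top-left corner
    is (rx, ry) and which extends width across and height down."""
    return rx <= px <= rx + width and ry <= py <= ry + height

# A 64x96 firework at (100, 200): a click at (120, 250) hits it,
# while a click at (90, 250) falls just left of it.
hit = point_over_rect(120, 250, 100, 200, 64, 96)
miss = point_over_rect(90, 250, 100, 200, 64, 96)
```

The C# version does the same four comparisons against the firework's position and its sprite dimensions.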
So, go into your Draw() method and modify the Fireworks foreach loop to this:

foreach (Firework firework in Fireworks) {
    if (firework.IsSelected) {
        short box_left = (short)(firework.X - 3);
        short box_top = (short)(firework.Y - 3);
        short box_right = (short)(FireworkTypes[firework.Angle][Color.Blue].Width + firework.X + 3);
        short box_bottom = (short)(FireworkTypes[firework.Angle][Color.Blue].Height + firework.Y + 3);
        sfcGameWindow.Draw(new Box(box_left, box_top, box_right, box_bottom), Color.White);
    }

    sfcGameWindow.Blit(FireworkTypes[firework.Angle][firework.Colour], new Point((int)firework.X, (int)firework.Y));
}

With that in place, when the fireworks are drawn a box is drawn behind them if they are selected. One of the (many!) advantages of SDL is that it makes it very easy to draw shapes on the screen such as boxes, circles and lines. In this code, we define the four corners of the box: its left and top edges are the firework's X and Y positions, and the right and bottom edges are the position plus the firework width and height. In each case I've added or subtracted 3 to make the box a little bigger than the firework, which looks better.

If you run the game now, you'll be able to select fireworks by clicking on them. Having the option to hold down the mouse button and just wave the cursor over fireworks to select them makes the game a little easier, so it's good to have both. Clicking on fireworks selects them, showing a box to make it clear they are highlighted.

And now it's time for the main event. Or at least it's time to start working towards the main event: we need to make it possible for players to explode the fireworks they've selected. First, add a new variable just under the declaration of "int LastLaunchTime" so that we can track the player's score:

int Score;

Second, create a new method that will update the window's title with the score whenever we call it:

void SetWindowCaption() {
    Video.WindowCaption = "Bang! Score: " + Score;
}

At the start of the game, we should update the window title with an empty score, so add this at the start of the Run() method:

SetWindowCaption();

And now all we need to do is make pressing the space key detonate any selected fireworks. This takes just under 40 lines of code, so you might be forgiven for thinking this is quite hard to do. But the truth is that it's really easy: updating the score actually takes up almost half the code for the method, because we want to add more score for exploding multiple fireworks - ie, exploding two fireworks together should give the player more points than exploding both fireworks individually. All this is done inside the Events_KeyboardDown() method, and should only happen when the player presses space. So, modify your method so that it looks like this:

void Events_KeyboardDown(object sender, KeyboardEventArgs e) {
    if (e.Key == Key.Space) {
        int numexploded = 0;

        for (int i = Fireworks.Count - 1; i >= 0; --i) {
            Firework firework = Fireworks[i];

            if (firework.IsSelected) {
                // destroy this firework!
                Fireworks.RemoveAt(i);
                ++numexploded;
            }
        }

        // how much score should be added?
        switch (numexploded) {
            case 0:
                // nothing; rubbish!
                break;
            case 1:
                Score += 200;
                break;
            case 2:
                Score += 500;
                break;
            case 3:
                Score += 1500;
                break;
            case 4:
                Score += 2500;
                break;
            case 5:
                Score += 4000;
                break;
        }

        SetWindowCaption();
    }
}

Go ahead and run the game now: fireworks fly up, you can select them with your mouse, then press space to destroy them. Is the game finished? Well, yes. But what it really misses is some sort of interest: making fireworks disappear is very dull, hardly worthy of a game called "Bang!" And, of course, I did promise you a particle system, right? We're going to add colourful explosions to the game to make it look a little nicer. To do that, we need to make a couple of changes to Bang.cs. All those changes add up to just 12 lines of code, at which point we can crack on with making the Explosion class.
First, add this line up in the variable declaration section, just beneath the FireworkTypes dictionary:

public List<Explosion> Explosions = new List<Explosion>();

Next, we need to call a not-yet-created method, ExplodeFirework(), for each firework that is being detonated because the player pressed space. This needs to pass in the firework that is being destroyed, so add this line into your Events_KeyboardDown() method, just beneath the comment "// destroy this firework!":

ExplodeFirework(firework);

That ExplodeFirework() method isn't terribly complicated, but it's worth keeping it as a separate method in case we ever need to explode fireworks from anywhere outside the Events_KeyboardDown() method - you might add a smart bomb power up, for example. What ExplodeFirework() needs to do is create a new instance of the Explosion class, passing in the position that the explosion should be, how fast the particles should move, how long each particle should live, and also what colour should be used to draw it all. Once that's done, the new explosion should be added to the Explosions list so that we can keep track of it. Turning all that into C#, we get this:

void ExplodeFirework(Firework firework) {
    // find the horizontal centre of the firework
    float xpos = firework.X + FireworkTypes[firework.Angle][firework.Colour].Width / 2;

    // create the explosion at that centre in the firework's colour
    Explosions.Add(new Explosion(xpos, firework.Y, 200, 2000, firework.Colour));
}

The final change we need to make in Bang.cs is to draw any active explosions, and remove any that are finished. Put this code just before sfcGameWindow.Update() in your Draw() method:

for (int i = Explosions.Count - 1; i >= 0; --i) {
    if (Explosions[i].Particles.Count == 0) {
        // this explosion is done, remove it
        Explosions.RemoveAt(i);
    } else {
        // it's alive, draw it
        Explosions[i].Render(sfcGameWindow);
    }
}

As with any other loop where we might remove elements, we need to loop through this one backwards.
Otherwise all that code is pretty straightforward. What's that you say? Where did Particles and Render() come from? How can we create an Explosion and pass in all those variables without a constructor? Simple: we can't. And that's why we still have one more thing to do: we need to create the Explosion class. This really is the last thing we have to do in this project, but I've saved the best - and hardest - to last. As per usual, before we jump into the code, let's spec out how explosions should work. Each explosion needs a position, a colour, a speed and a lifespan for its particles, and a list of the particles themselves. Going a step further, we need to make a class for the particles as well, because each particle needs to store some information of its own: its position, its speed in each direction, and the time it was created. So, the first step to creating our particle system is to create the basic classes for the explosion and its particles - we'll call the classes Explosion and ExplosionDot. Go into the Explosion.cs file and put these two classes in there:

public class ExplosionDot {
    public float X;
    public float Y;
    public float XSpeed;
    public float YSpeed;
    public int TimeCreated;
}

public class Explosion {
    float X;
    float Y;
    Random Rand = new Random();
    Color Colour;
    int Speed;
    int Lifespan;

    public List<ExplosionDot> Particles = new List<ExplosionDot>();
}

The only real code we need inside the Explosion class is a constructor and a Render() method, the first of which is very straightforward: we need to copy all the parameters into the explosion for later reference, then create 100 particles. Here it is:

public Explosion(float x, float y, int speed, int lifespan, Color colour) {
    // copy all the variables into the Explosion object
    X = x;
    Y = y;
    Colour = colour;
    Speed = speed;
    Lifespan = lifespan;

    // now create 100 particles
    for (int i = 0; i < 100; ++i) {
        ExplosionDot dot = new ExplosionDot();
        dot.X = X;
        dot.Y = Y;

        // one of the parameters passed in is the speed to make particles move
        // we use that as a range from -speed to +speed, making each particle
        // move at a different speed.
        dot.XSpeed = Rand.Next(-Speed, Speed);
        dot.YSpeed = Rand.Next(-Speed, Speed);
        dot.TimeCreated = Timer.TicksElapsed;

        Particles.Add(dot);
    }
}

And that just leaves the Render() method for explosions. To make the code a bit easier to read in Bang.cs, I've made this Render() method both update the particle positions and draw the particles - it's not ideal, because it means we can't draw the explosions without making them move, which rules out things like a pause option, but it's OK for now and something you can tackle for homework. The best way to explain this code is to scatter comments through it, so here goes:

public void Render(Surface sfc) {
    // loop backwards through the list of particles because we might remove some
    for (int i = Particles.Count - 1; i >= 0; --i) {
        ExplosionDot dot = Particles[i];

        // update the particle's position
        dot.X += dot.XSpeed / Events.Fps;
        dot.Y += dot.YSpeed / Events.Fps;

        // if this dot has outlived its lifespan, remove it
        if (dot.TimeCreated + Lifespan < Timer.TicksElapsed) {
            Particles.RemoveAt(i);
            continue;
        }

        // figure out how much time has passed since this particle was created
        int timepassed = Timer.TicksElapsed - dot.TimeCreated;

        // ...then use it to calculate the alpha value for this particle
        int alpha = (int)Math.Round(255 - (255 * ((float)timepassed / Lifespan)));

        // if the particle is basically invisible, don't draw it
        if (alpha < 1) continue;

        // otherwise, create a colour for it based on the original colour
        // and the alpha for this particle
        Color thiscol = Color.FromArgb(alpha, Colour.R, Colour.G, Colour.B);

        // now draw the particle
        short left = (short)Math.Round(dot.X);
        short top = (short)Math.Round(dot.Y);
        short right = (short)(Math.Round(dot.X) + 2);
        short bottom = (short)(Math.Round(dot.Y) + 2);
        sfc.Draw(new Box(left, top, right, bottom), thiscol);
    }
}

There are two points of interest in that code: the code to create an alpha value for the particle, then the call to Color.FromArgb() to turn that alpha value into a colour that can be drawn. If particles have a lifespan, as they do in this example, then the best way to draw them is usually to fade them out slowly, so that they are 100% opaque when created and 0% opaque when they are about to be destroyed due to old age. In SDL, alpha values are expressed as values between 0 (wholly transparent) and 255 (wholly opaque), so what we need to do is work out what value a given particle should have by comparing its creation time against the current time and its lifespan. For the purpose of a really simple example, let's say a particle was created at time 0, has a lifespan of 200, and the current time is 100, so the particle is exactly half way through its life. What the code does is figure out the amount of time that has passed by subtracting the created time from the current time. In our example the current time is 100 and the created time is 0, so 100 milliseconds have passed since the particle was created. Next it takes that value and divides it by the lifespan of particles, which is 200, giving 0.5. It then takes that value and multiplies it by 255, giving 127.5.
Finally, it subtracts that from 255, giving, again, 127.5, and that number gets rounded to an integer, giving 128 - so the final alpha value for this particle is 128. If you're wondering why we need to subtract the value from 255, it will become clear if you take different input values. For example, if the particle was created at time 0, the current time was 100, but the lifespan of particles was 110, you get this: 100 - 0 = 100; 100 / 110 = 0.909090909; 0.909090909 * 255 = 231.818181818. As you can see, as the particle has gotten closer to its end of life, its alpha value has gone up rather than down - it's going from transparent to opaque! We want the exact opposite of that, so we just take the finished value and subtract it from 255, so that over time the particles get more transparent until eventually becoming invisible. Once the alpha value is calculated, we can create a new Mono Color object from it by using the Color.FromArgb() method. This takes colour values in the order Alpha, Red, Green, Blue, so we just specify the new alpha value as the first parameter, then use the colour value that was specified when the explosion was created for the other parameters. And with that, the game is finished: if you run it now you'll see pretty explosions when fireworks are detonated, and it only looks nicer when you destroy multiple at once! This is the effect our particle system gives us: a hundred or so little particles flying outwards from the centre, fading away as they get older. This has been the biggest project to date, and I think the finished product is something you can be really proud of - it's a fun game, it's easy to play, and you've learned a lot of neat stuff along the way! Up until this project, all the little classes we've defined as part of our projects have basically been glorified data stores - they haven't had any intelligence of their own.
But in this project, the Explosion class has two methods of its own: the functionality required to make it work is encapsulated inside the class, so you can copy it around to other projects fairly easily. You've also learned how to use the Dictionary data type for when a simple List isn't enough, how to draw boxes using SDL, and, most impressive of all, how a simple particle system works. And actually it's in particle systems that you can have the most fun extending this project - it's easy to add gravity (just modify each particle's YSpeed value each update) or wind, to make particle systems that keep firing new particles rather than expelling them all at once, and so on. Play around and see what you can achieve! It's been a lot of work, but the payoff is another big batch of learning, plus another finished project. To finish, there are four coding problems; the first three are required, and the last one is for coders who want a bit more of a challenge. If you're having trouble trying to figure out the third problem, you should look at the Math.Sin() and Math.Cos() code for an example of how to do it.
http://www.tuxradar.com/content/hudzilla-coding-academy-project-ten
- data Server
- serverThreadId :: Server -> ThreadId
- serverMetricStore :: Server -> Store
- forkServer :: ByteString -> Int -> IO Server
- forkServerWith :: Store -> ByteString -> Int -> IO Server
- getCounter :: Text -> Server -> IO Counter
- getGauge :: Text -> Server -> IO Gauge
- getLabel :: Text -> Server -> IO Label
- getDistribution :: Text -> Server -> IO Distribution

Required configuration

To make full use of this module you must first enable GC statistics collection in the run-time system, e.g. by running your program with +RTS -T.

The API is versioned to allow for API evolution. This document is for version 1. To ensure you're using this version, append ?v=1 to your resource URLs. Omitting the version number will give you the latest version of the API.

The following resources (i.e. URLs) are available:

- / - JSON object containing all metrics. Metrics are stored as nested objects, with one new object layer per "." in the metric name (see example below.) Content types: "text/html" (default), "application/json"
- /<namespace>/<metric> - JSON object for a single metric. The metric name is created by converting all "/" to ".". Example: "/foo/bar" corresponds to the metric "foo.bar". Content types: "application/json"

Each metric is returned as an object containing a type field. Available types are:

- "c" - Counter
- "g" - Gauge
- "l" - Label
- "d" - Distribution

In addition to the type field, there are metric-specific fields:

- Counters, gauges, and labels: the val field contains the actual value (i.e. an integer or a string).
- Distributions: the mean, variance, count, sum, min, and max fields contain their statistical equivalents.

Example of a response containing the metrics "myapp.visitors" and "myapp.args":

{
  "myapp": {
    "visitors": {
      "val": 10,
      "type": "c"
    },
    "args": {
      "val": "--a-flag",
      "type": "l"
    }
  }
}

The monitoring server

data Server

A handle that can be used to control the monitoring server. Created by forkServer.

serverThreadId :: Server -> ThreadId

The thread ID of the server. You can kill the server by killing this thread (i.e. by throwing it an asynchronous exception.)
serverMetricStore :: Server -> Store

The metric store associated with the server. If you want to add metrics to the default store created by forkServer you need to use this function to retrieve it.

forkServer :: ByteString -> Int -> IO Server

Like forkServerWith, but creates a default metric store with some predefined metrics. The predefined metrics are those given in registerGcMetrics.

forkServerWith :: Store -> ByteString -> Int -> IO Server

The client can control the content type used in responses by setting the Accept header; supported content types are "application/json" and "text/html". Registers the following counter, used by the UI:

- ekg.server_time_ms - The server time when the sample was taken, in milliseconds.

Note that this function, unlike forkServer, doesn't register any other predefined metrics. This allows other libraries to create and provide a metric store for use with this library. If the metric store isn't created by you and the creator doesn't register the metrics registered by forkServer, you might want to register them yourself.

Defining metrics

The monitoring server can store and serve integer-valued counters and gauges, string-valued labels, and statistical distributions. A counter is a monotonically increasing value (e.g. TCP connections established since program start.) A gauge is a variable value (e.g. the current number of concurrent connections.) A label is a free-form string value (e.g. exporting the command line arguments or host name.) A distribution is a statistical summary of events (e.g. processing time per request.)

Each metric is associated with a name, which is used when it is displayed in the UI or returned in a JSON object. Metrics share the same namespace, so it's not possible to create e.g. a counter and a gauge with the same name. Attempting to do so will result in an error. The same applies to the other metric types.

It's also possible to register metrics directly using the System.Metrics module in the ekg-core package. This gives you a bit more control over how metric values are retrieved.

getCounter :: Text -> Server -> IO Counter

Return a new, zero-initialized counter associated with the given name and server.
Multiple calls to getCounter with the same arguments will result in an error. getGauge and getLabel behave in the same way for gauges and labels.

getDistribution :: Text -> Server -> IO Distribution

Return a new distribution associated with the given name and server. Multiple calls to getDistribution with the same arguments will result in an error.
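As a small illustration of the JSON layout described above, here is how a client might un-nest a response back into dotted metric names. This is sketched in Python purely to show the shape of the data; it is not part of the ekg API:

```python
def flatten_metrics(obj, prefix=""):
    """Rebuild dotted metric names from the nested JSON objects.
    Leaves are recognised by their "type" field."""
    metrics = {}
    for key, value in obj.items():
        name = prefix + "." + key if prefix else key
        if isinstance(value, dict) and "type" in value:
            metrics[name] = value  # an actual metric object
        elif isinstance(value, dict):
            metrics.update(flatten_metrics(value, name))  # one layer per "."
    return metrics

# The example response from above:
sample = {"myapp": {"visitors": {"val": 10, "type": "c"},
                    "args": {"val": "--a-flag", "type": "l"}}}
flat = flatten_metrics(sample)
```

Requesting "/myapp/visitors" with Accept: application/json would return the inner object directly, so this flattening is only needed when fetching the whole tree from "/".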
http://hackage.haskell.org/package/ekg-0.4.0.2/docs/System-Remote-Monitoring.html
30 May 2012 17:01 [Source: ICIS news]

LONDON (ICIS)--Crude oil futures extended losses on Wednesday, pressured by mounting concerns over the eurozone debt crisis and reports that China had indicated it would not launch a large-scale stimulus.

By 15:00 GMT, the front-month July Brent crude oil contract had hit an intra-day low of $103.28/bbl, a loss of $3.40/bbl from the previous close. The contract then edged a little higher to trade around $103.40/bbl.

At the same time, the July NYMEX WTI contract was trading around $87.65/bbl, having touched an intra-day low of $87.52/bbl, a loss of $3.24/bbl compared with Tuesday's settlement.

The euro fell towards two-year lows against the US dollar on Wednesday amid fears of a collapse of the single currency. Italian benchmark 10-year bonds were heard to be attracting yields above 6% on Wednesday, a level which is widely agreed to be unsustainable.
http://www.icis.com/Articles/2012/05/30/9565616/crude-falls-on-eurozone-concern-china-indicating-no-large-stimulus.html
Practical Naive Bayes - Classification of Amazon Reviews

Posted by Jack Schultz, July 17, 2017

If you search around the internet looking for articles about applying Naive Bayes classification to text, you'll find a ton that talk about the intuition behind the algorithm, maybe some slides from a lecture about the math and notation behind it, and a bunch of articles I'm not going to link here that pretty much just paste some code and call it an explanation. So I'm going to try to do a little more here: hopefully, by writing and explaining enough, I can let you write a working Naive Bayes classifier yourself.

There are three sections here. First is setup, and what format I'm expecting your text to be in for the classification. Second, I'll talk about how to run Naive Bayes on your own, using slow Python data structures. Finally, we'll use Python's NLTK and its classifier so you can see how to use that, since, let's be honest, it's gonna be quicker. Note that you wouldn't want to use either of these in production, so look for a follow-up post about how you might go about doing that. As always, find me on Twitter, and check out the full code on GitHub.

Setup

Data for this is going to come from this UCSD Amazon review data set. I swear one of the biggest issues with running these algorithms on your own is finding a data set big and varied enough to get interesting results. Otherwise you'll spend most of your time scraping and cleaning data, and by the time you get to the ML part of the project you're sufficiently annoyed. So big thanks that this data already exists. You'll notice that this set has millions of reviews for products across 24 different classes.
In order to keep the complexity down here (this is a tutorial post, after all), I'm sticking with two classes, ones that are different enough from each other to show that classification works: we'll be classifying baby reviews against tools and home improvement reviews.

Preprocessing

First thing I want to do now, after unpacking the .gz file, is to get a train and test set that's smaller than the 160,792 and 134,476 of baby and tool reviews respectively. For purposes here, I'm going to use 1000 of each, with 800 used for training and 200 used for testing. The algorithms are able to support any number of training and test reviews, but for demonstration purposes we're making that number lower. Check the github repo if you want to see the code, but I wrote a script that just takes the full file, picks 1000 random numbers, segments 800 into the training set and 200 into the test set, and saves them to files with the names "train_CLASSNAME.json" and "test_CLASSNAME.json", where classname is either "baby" or "tool". Also, the files from that dataset are really nice, in that they're already python objects. So to get them into a script, all you have to do is run "eval" on each line of the file if you want the dict object.

Features

There really wasn't a good place to talk about this, so I'll mention it here before getting into either the self-implemented or NLTK running of the algorithm. The features we're going to use are simply the lowercased versions of all the words in the review. This means that, in order to get a list of these words from the block of text, we remove punctuation, lowercase every word, split on spaces, and then remove words that are in the NLTK corpus of stopwords (basically boring words that don't have any information about class).
import string

from nltk.corpus import stopwords

STOP_WORDS = set(stopwords.words('english'))
STOP_WORDS.add('')

def clean_review(review):
    exclude = set(string.punctuation)
    review = ''.join(ch for ch in review if ch not in exclude)
    split_sentence = review.lower().split(" ")
    clean = [word for word in split_sentence if word not in STOP_WORDS]
    return clean

Realize here that there are tons of different ways to do this, and ways to get more sophisticated that hopefully can get you better results! Things like stemming, which takes words down to their root word (wikipedia gives the example of "stems", "stemmer", "stemming", "stemmed" as based on "stem"). You might want to include n-grams, for an n larger than 1 in our case, as well. Basically, there's tons of processing on the text that you could do here. But since I'm just talking about how Naive Bayes works, I'm sticking with simplicity. Maybe in the future I can get fancy and see how well I can do in classifying these reviews. Ok, on to the actual algorithm.

Self Naive Bayes

Now that we've got training and testing reviews all set up, along with a scheme for tokenizing the text, it's time to run our custom Naive Bayes algorithm, slow, and in all its glory. Based on all the setup we did, the only input we need to this function is the class names, 'baby' and 'tool'.

Getting Word Counts

First step in the setup here, if you couldn't tell from that appropriately named header just above, is getting counts for all the words in the reviews for each class. The code below, in English, is as follows. For each class ('baby' and 'tool'), we get the training filename that we created above, and we want a dictionary where the key is the word and the value is the number of times we saw that word in the class corpus of reviews. That's the counters_from_file function. Luckily, Python comes with a nifty Counter class so we don't have to deal with the logic for a normal dict.
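If you haven't met Counter before, it behaves like a dict that defaults missing keys to 0 and can be added together, which is exactly how the per-class bags and the combined corpus bag below get built. A tiny illustration with made-up tokens:

```python
from collections import Counter

tokens = ["great", "baby", "monitor", "great", "value"]
counts = Counter(tokens)

# Missing keys just read as 0 rather than raising a KeyError.
unseen = counts["blender"]

# Counters add together, which is how the corpus-wide bag is combined.
combined = counts + Counter(["baby", "baby"])
```

That missing-key behaviour matters later: when we score a test review, we can look up any word in a class's bag without checking whether the class ever saw it.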
Secondly, we want to get initial probabilities for each class; in this case, that's the doc_counts variable. Going back to Bayes' rule, if you remember, the difference between that and frequentist probabilities is that Bayes takes into account the prior probability of getting a document of a certain class. In our case I specifically made sure that there are 800 training samples and 200 test samples for each class, so the probability in test that we get a review of either class is 50%. But let's say people love buying tools a lot more than baby products on Amazon, and for every five tool reviews there's only one baby review; we want to know that and take it into account. So we need to do a little work here to get that probability before running the algorithm. In real life, getting this number is kind of difficult, because you can't really know the probability of getting a review of each class since it's only based on past reviews; those numbers might change. But at least this is a good estimate. Finally, the combined_bag variable holds the counter dictionary for the entire corpus.
from collections import Counter

def counters_from_file(filename):
    reviews = read_reviews(filename)
    texts = [review["reviewText"] for review in reviews]
    tokens = [clean_review(review_text) for review_text in texts]
    flattened_tokens = [val for sublist in tokens for val in sublist]
    counter = Counter(flattened_tokens)
    return counter

# gets line count from the file, for initial probabilities
def line_count_from_file(filename):
    return sum(1 for line in open(filename))

counters = []
doc_counts = []
for label in class_labels:
    filename = "train_%s.json" % label
    doc_counts.append(line_count_from_file(filename))
    counter = counters_from_file(filename)
    counters.append(counter)

probabilities = [float(doc_count) / sum(doc_counts) for doc_count in doc_counts]

combined_bag = Counter()
for counter in counters:
    combined_bag += counter
combined_vocab_count = len(combined_bag.keys())

Record Keeping

Few things here for record keeping purposes. First are correct and incorrect counters, so at the end we can know what percentage we got correct. Second, a confusion matrix. This serves as a good way of figuring out specifically which classes are getting confused with each other. Very aptly named. I also have a nice function for printing out the confusion matrix in a readable form.

def print_confusion_matrix(matrix, class_labels):
    lines = ["" for i in range(len(class_labels) + 1)]
    for index, c in enumerate(class_labels):
        lines[0] += "\t"
        lines[0] += c
        lines[index + 1] += c
    for index, result in enumerate(matrix):
        for amount in result:
            lines[index + 1] += "\t"
            lines[index + 1] += str(amount)
    for line in lines:
        print(line)

def initialize_confusion_matrix(num_labels):
    return [[0 for i in range(num_labels)] for y in range(num_labels)]

correct = 0
incorrect = 0
confusion_matrix = initialize_confusion_matrix(len(class_labels))

Algorithm Time

Remember when I said that most ML is just getting data set up and processed for learning? Yeah, we're finally about to run the algorithm itself.
Like I mentioned above, when I went through other articles online, I found a lot of math, and then some code, but nothing that explained what was going on very well. So I'm going to try to bridge that gap here, and I'll English the algorithm in case you don't want to jump right to code. For each review text that we clean using the cleaning and tokenizing method I mentioned above, we go through each class and calculate the conditional probability of that class, given the text. In the Naive Bayes world, which assumes independence, that value is:

    P(class) * P(word1 | class) * P(word2 | class) * ... * P(wordn | class)

The key term here, and the one I didn't find explained very well elsewhere, is P(word | class), which in English is the probability you'd expect to see that word, given the corpus of words within that class. This number is the number of times you see that word in all of the documents in that class (found using the counter dictionary) divided by the total number of words seen in that class (found by summing the values in that counter dictionary). This makes sense when you think about it. You have X total words seen from that class, and Y of them are the word you're looking at, so the probability you see that word in that class is Y / X. The issue comes if you haven't seen that word before. In that case, Y will be 0, that term will be 0, and quickly the value for P(class | text) will be 0 as well. To fix this, we're going to do something called additive smoothing, where we make sure that the numerator in that statement will never be 0. I'm not going to go into the math or reasoning behind this since you can find that elsewhere, but in the end, that statement turns into (Y + 1) / (X + number_of_words_in_vocab), where number_of_words_in_vocab is the number of unique words that we've seen across all the reviews regardless of class. If you look above, we can get that information from the combined_vocab_count variable in the setup code.
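To see the additive smoothing at work, here's a tiny numeric sketch with made-up counts: a class whose reviews contain 10 words in total, a combined vocabulary of 40 unique words, and a word seen either 0 or 3 times in that class:

```python
def smoothed_cond_prob(word_count, class_total_word_count, vocab_size):
    """Laplace (add-one) smoothed estimate of P(word | class)."""
    return float(word_count + 1) / (class_total_word_count + vocab_size)

# An unseen word no longer zeroes out the whole product...
p_unseen = smoothed_cond_prob(0, 10, 40)   # 1 / 50
# ...while a word seen three times still scores noticeably higher.
p_seen = smoothed_cond_prob(3, 10, 40)     # 4 / 50
```

The unseen word contributes a small but nonzero factor instead of wiping out the class's score entirely.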
In the code, you can see the key line as:

cond_prob = float((word_count + 1)) / (class_total_word_count + combined_vocab_count)

A note about the math.log function calls you see in the code as well. The math in naive Bayes calls for multiplying the conditional probabilities together. But if you look at those numbers, they're less than 1, somewhere in the range of 0.00X. This means that when strung together and multiplied, you end up with scores very close to 0, and in some cases I noticed, Python runs out of floating-point precision and the number turns to 0 when there are many words in the text. We don't want that, obviously, because we lose all information! Thankfully logs exist, and since we're only comparing magnitudes, we can turn the multiplication above into:

log(P(class)) + log(P(word1 | class)) + log(P(word2 | class)) + ... + log(P(wordn | class))

and we end up with finite, comparable (if negative) numbers. So if you ever hear someone complaining about how learning logs is pointless, you can point to this example and shut them up.

That should be everything unique, so here's the code, complete with all loops for the classes, as well as checks for the guesses and record keeping.
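The underflow problem is easy to demonstrate: multiply a hundred small probabilities and the floating-point result collapses to zero, while the sum of logs stays a perfectly usable number.

```python
import math

probs = [1e-5] * 100  # e.g. 100 words, each with a small conditional probability

product = 1.0
for p in probs:
    product *= p
# 1e-5 multiplied 100 times is 1e-500, far below what a float can represent

log_score = sum(math.log(p) for p in probs)

print(product)    # 0.0 -- all information lost
print(log_score)  # a finite score we can still compare between classes
```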
for index, class_name in enumerate(class_labels):
    filename = "test_%s.json" % class_name
    texts = get_review_texts(filename)
    for text in texts:
        tokens = clean_review(text)
        scores = []
        for cindex, bag in enumerate(counters): #for each class
            score = math.log(probabilities[cindex])
            for word in tokens:
                #for each word, we need the probability of that word given the class / bag
                word_count = bag[word]
                class_total_word_count = sum(bag.values())
                cond_prob = float((word_count + 1)) / (class_total_word_count + combined_vocab_count)
                score += math.log(cond_prob)
            scores.append(score)
        max_index, max_value = max(enumerate(scores), key=lambda p: p[1])
        confusion_matrix[index][max_index] += 1
        if index == max_index:
            correct += 1
        else:
            incorrect += 1

When I run the full code on the 200 test samples for each class, I get the following output!

0.9625
	baby	tool
baby	192	8
tool	7	193

meaning we got about 96% of the documents correct. We thought 8 of the baby reviews were tool reviews, and 7 of the tool reviews were baby reviews. If you look at some of the mistakes, you can somewhat see what the reasoning was. For example, this review was guessed as being a tool review, even though it was a baby review:

“These tweezers are a far more useful tool than those disgusting snot suction things! You have to be quick and precise with these but they do the trick. The tips are small enough to fit into a newborn’s nose and blunt enough that if you do touch skin or nose it doesn’t hurt. The tweezer is the length of a normal tweezer so it fits nicely in my hand. It is easy to clean and you don’t need any filters or hoses. The only downside is I could imagine the tip breaking off if I dropped it enough times since it’s plastic. On the other hand, you wouldn’t want metal in the baby’s nose anyway.”

Just looking at the words, naively some might say, you can see why the model got confused. And there you go, Naive Bayes done ourselves.
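The accuracy number can be double-checked straight from the confusion matrix (rows are the true class, columns are the guess):

```python
# Confusion matrix from the run above: rows = actual class, columns = guessed class
confusion_matrix = [[192, 8],   # baby: 192 right, 8 guessed as tool
                    [7, 193]]   # tool: 7 guessed as baby, 193 right

correct = sum(confusion_matrix[i][i] for i in range(len(confusion_matrix)))
total = sum(sum(row) for row in confusion_matrix)
accuracy = correct / total
print(accuracy)  # 0.9625
```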
Nice and quick if you run it with 1,000 examples and two classes, but add more, and you'll find the classification part of the code slows down big time. And besides, we don't want to write the code ourselves!

Naive Bayes using NLTK

We've already seen use of NLTK when we used its stopword list for removal; it's a huge library. The first part of using the NLTK classifier is training the classifier — the same thing we did by hand above.

One thing I noticed when looking online about how to use NLTK here is that basically none of the tutorials talk about the form of the data you need to pass into the classifier. Check out the comment at the top of the code block to see how we need to modify the training data. For each text, we need a tuple where the first item is a dictionary of word counts for that text, and the second item is the class string.

#note, training set needs to be in the form of
#train_set = [
#  ({'I': 3, 'like': 1, 'this': 1, 'product': 2}, 'class_name_1'),
#  ({'This': 2, 'is': 1, 'really': 1, 'great': 2}, 'class_name_1'),
#  ...
#  ({'Big': 1, 'fan': 1, 'of': 1, 'this': 1}, 'class_name_X')
#]
train_set = []
for class_name in class_labels:
    filename = "train_%s.json" % class_name
    texts = get_review_texts(filename)
    for text in texts:
        tokens = clean_review(text)
        counter = Counter(tokens)
        train_set.append((dict(counter), class_name))

classifier = nltk.NaiveBayesClassifier.train(train_set)

Now, in order to classify the texts, we again create the counter dictionary mapping words to frequency in the document, and for each of them, pass it into classifier.classify, which returns the string value of the class it guesses.
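The shape of each training example is just a (word-count dict, label) tuple, which you can build without NLTK at all. A small sketch, using a stand-in tokenizer instead of the real clean_review:

```python
from collections import Counter

def to_nltk_example(text, label):
    tokens = text.lower().split()  # stand-in for the real clean_review tokenizer
    return (dict(Counter(tokens)), label)

example = to_nltk_example("great great product", "tool")
print(example)  # ({'great': 2, 'product': 1}, 'tool')
```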
for index, class_name in enumerate(class_labels):
    filename = "test_%s.json" % class_name
    reviews = read_reviews(filename)
    texts = [review["reviewText"] for review in reviews]
    for text in texts:
        tokens = clean_review(text)
        counter = dict(Counter(tokens))
        guess = classifier.classify(counter)
        lindex = class_labels.index(guess)
        confusion_matrix[index][lindex] += 1
        if guess == class_name:
            correct += 1
        else:
            incorrect += 1

Sweet! We can run that code and get the results that follow. Note a couple of things. First, NLTK uses slightly different math than the additive smoothing we used above, so the percentage correct and the confusion matrix are slightly different. They talk about the math here, but it looks like there are some text formatting issues, so it's a little confusing. You'll have to look at the code specifically if you want to know the math they use. Also, the NLTK classifier has a cool function that will return the most informative features — basically, showing which words are most indicative of a text being from a certain class. So if you see 'baby', 'wash', 'seat', etc. in the text, you're probably looking at baby.

0.9575
Most Informative Features
    baby = 1      baby : tool = 49.3 : 1.0
    wash = 1      baby : tool = 34.3 : 1.0
    seat = 1      baby : tool = 30.3 : 1.0
    child = 1     baby : tool = 29.7 : 1.0
    likes = 1     baby : tool = 28.3 : 1.0
    babies = 1    baby : tool = 28.3 : 1.0
    led = 1       tool : baby = 27.0 : 1.0
    solid = 1     tool : baby = 26.3 : 1.0
    tool = 1      tool : baby = 26.2 : 1.0
    toys = 1      baby : tool = 25.7 : 1.0

	baby	tool
baby	189	11
tool	6	194

And there you have it! Adding more classes to the NLTK classifier is really simple and quick, which is the key. Using more training samples or more classes quickly increases the run time of the custom Naive Bayes code. The NLTK implementation runs much quicker, so use that for real-world applications.

Final Thoughts

The key takeaway to remember here about how Naive Bayes works is thinking about the P(word | class) term, and the heuristic behind it.
All we’re doing here, with some math in between, is comparing the frequency of occurrence of each word in the test document against all the documents we used in the training set. If a word occurs more frequently in the training documents of one class compared to the other, the P(word | class) term will have a greater value, and make the final value for that class greater than for the other classes. And it works! With two classes here, we’re getting around 95% accuracy on classifying these reviews. That’s pretty good for a computer. Using different features and adding more training reviews can both help with accuracy, but it’s pretty cool what a little code can get you.

Originally posted at bigishdata.com/
AWS Cloud Operations & Migrations Blog Visualize Amazon EC2 based VPN metrics with Amazon CloudWatch Logs Organizations have many options for connecting to on-premises networks or third parties, including AWS Site-to-Site VPN. However, some organizations still need to use an Amazon Elastic Compute Cloud (Amazon EC2) instance running VPN software, such as strongSwan. Gaining insight into Amazon EC2-based VPN metrics can be challenging when compared to AWS native VPN services that feature Amazon CloudWatch integration. This post aims to help surface those important metrics, so that administrators can better monitor the status and performance of their EC2 based VPNs. Publishing these metrics allow administrators to keep meaningful network metrics in CloudWatch to correlate potential VPN issues with other AWS metrics and logs. To learn more about running strongSwan on an EC2 instance, take a look at this blog post. Solution overview In our scenario, we will export key metrics from an EC2 instance running strongSwan and FRRouting to CloudWatch, including latency to a VPN target and the number of BGP prefixes received. We will also export the actual BGP prefixes that are present in the routing table of the EC2 instance to Amazon CloudWatch Logs. This lets administrators troubleshoot potential VPN and routing issues without the need to log in to the EC2 instance. FRR works along with strongSwan, and it is used to manage dynamic routing. In this case, we’ll use the BGP protocol, though simple modifications could be made to accommodate other dynamic routing protocols. Walkthrough When the AWS CloudFormation template is deployed, it will create an AWS Identity and Access Management (IAM) role that is attached to an EC2 instance that you specify. The permissions include access to put metrics into CloudWatch and to put data into a specific CloudWatch Logs log stream. 
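One of the metrics the script in this walkthrough collects is latency to a VPN target, and extracting that number from ping output is the kind of plumbing involved. A hypothetical sketch — not the actual vpn-metrics.sh from this post:

```shell
#!/bin/sh
# Hypothetical sketch of one piece of a vpn-metrics.sh-style script (not the
# actual script from this post): pull the average RTT out of ping's summary line.
extract_avg_rtt() {
    # ping ends with e.g. "rtt min/avg/max/mdev = 10.1/12.3/15.0/1.2 ms";
    # splitting on '/' puts the average in field 5
    awk -F'/' '/rtt|round-trip/ { print $5 }'
}

sample="rtt min/avg/max/mdev = 10.1/12.3/15.0/1.2 ms"
avg=$(printf '%s\n' "$sample" | extract_avg_rtt)
echo "$avg"

# A real script would then publish the value with the AWS CLI, e.g.
# (namespace and metric name here are illustrative, not from the post):
# aws cloudwatch put-metric-data --namespace "EC2VPN" --metric-name PingLatency --value "$avg"
```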
Prerequisites To follow along with this blog post, you should have the following: - An AWS account - An EC2 instance running Ubuntu or Amazon Linux 2 with strongSwan and software capable of managing dynamic BGP routing, such as FRR or Quagga - An active VPN connection from your EC2 instance to another VPN device - A target IP address on the remote end of the VPN that is capable of receiving ICMP traffic Steps As an overview, we will follow these steps: - Deploy the CloudFormation template - Attach the IAM instance profile to the EC2 instance - Copy the bash file to your EC2 instance running strongSwan - Install the command-line interface (AWS CLI) on the EC2 instance - Create a cron job to schedule metric delivery to CloudWatch and CloudWatch Logs - View the CloudWatch dashboard to visualize the metrics and verify metric delivery Launch the CloudFormation stack - Download the CloudFormation template associated with this blog post - Log in to the AWS console, and navigate to the CloudFormation console - Select the Create stack button - Upload the template file that you previously saved, and select Next - Enter a name for your CloudFormation stack, and fill in the parameters required by the template Figure 1: Launching the CloudFormation stack - Select Next, and optionally specify any tags to apply - Select Next once more, review the details, select the checkbox to acknowledge that the CloudFormation template will create IAM resources, and select Create stack - You can monitor the resource creation progress on the Events screen Attach the instance profile to your EC2 instance - In the EC2 console, select the instance running strongSwan - From the Actions dropdown in the top right corner, select Security and then select Modify IAM role - From the dropdown menu, choose the role called “EC2-CloudWatch-Metrics”, as depicted in Figure 2 (this name may differ if you modified the default options when deploying the CloudFormation template) Figure 2: Modify the IAM role on 
your EC2 instance Create the bash script in your EC2 instance Figure 3: Creating the vpn-metrics.sh script on your EC2 instance - Log in to the EC2 instance running strongSwan - Create a new file, and copy the below script into it, as shown above in Figure 3 (the script is also available to download here) $ nano vpn-metrics.sh - Paste the script into the file - Modify the target_ip variable, and specify a target IP address on the other end of the VPN. Make sure that it accepts ICMP echo-request messages. Additionally, specify a source IP address on your local EC2 instance for the source_ip variable. This is where the ICMP messages will be sourced from. - Save the file - Press ctrl-X to exit the editor - Press Y to save the changes - Give the new file execute permissions $ chmod +x vpn-metrics.sh Install the AWS CLI If you’re using an operating system other than Amazon Linux 2, then you will need to install the AWS CLI. Create a cron job A cron job creates a scheduled task that runs at a specified time or interval. Follow these steps for Amazon Linux 2: $ crontab -e - Add this line to execute the script every five minutes: */5 * * * * /home/ec2-user/vpn-metrics.sh Follow these steps for Ubuntu: $ crontab -e - Add this line to execute the script every five minutes: */5 * * * * /home/ubuntu/vpn-metrics.sh When the crontab is saved, crontab: installing new crontab appears. View the CloudWatch dashboard - Navigate to the CloudWatch console, and select Dashboards. - Select the dashboard titled “EC2-VPN-Dashboard”. - If the cron job has already run, then you will see metrics populated in the three dashboard widgets (if you do not see any metrics yet, then wait a few minutes for CloudWatch to populate them). - New metrics will continue to appear after the cron job runs and executes the script. Use the refresh icon in the upper right of the CloudWatch console to see new metrics appear in the dashboard widgets. Figure 4, depicts metrics for all three widgets. 
Figure 4: Viewing the CloudWatch Dashboard The Amazon CloudWatch Contributor Insights widget in the bottom left of Figure 4 will show when a BGP prefix is no longer present in the BGP route table of the EC2 instance. This can be very useful in troubleshooting a scenario where routes from multiple BGP sources are present. The dip in the ReceivedBGPRouteCount widget in Figure 4 correlates with a temporary loss of the prefix 10.26.0.0/23, shown in the BGP Prefix Insights widget in Figure 5. Figure 5: BGP Prefix Insights The Contributor Insights rule can also be viewed in a standalone fashion by visiting the Contributor Insights link under the CloudWatch service page. Cleaning up Disable the cron job running on your EC2 instance to avoid incurring charges. To do this, execute the steps below: $ crontab -e - Remove this line: */5 * * * * /home/ec2-user/vpn-metrics.sh(replace ec2-userwith ubuntu, if the EC2 instance is running Ubuntu) To deprovision the CloudWatch dashboard, log group, and IAM role, delete the CloudFormation stack that was deployed. Conclusion This post demonstrated how to publish custom CloudWatch metrics from an EC2 instance to a CloudWatch dashboard and custom namespace. Publishing these metrics lets administrators view key performance metrics for an EC2-based VPN, and have the ability to create CloudWatch alarms if desired. To publish other EC2 metrics not covered in this post, take a look at the CloudWatch agent to publish in-guest, system-level metrics.
In the last tutorial we created a small REST API. So now that the “producing REST API” step is completed, it’s time to start consuming it in another Spring boot project. Last time we’ve already set up a module for this project, called spring-boot-rest-client. Creating a REST client To create a REST client with Spring, you need to create a RestTemplate instance. This class allows you to easily communicate with a REST API and serialize/deserialize the data to and from JSON. I created this bean in the SpringBootRestClientApplication class (main class), by writing an additional method like this: @Bean public RestTemplate restTemplate() { return new RestTemplate(); } Creating a service The next step is to create a service that uses this RestTemplate bean. I called it TaskServiceImpl: @Service public class TaskServiceImpl { @Autowired private RestTemplate restTemplate; } Now, to consume a REST API we’ll have to know where it is located. To configure the location I created a new property in application.yml (or application.properties) like this: resource: tasks: To use properties in our classes, you can use the @Value annotation. So back in our TaskServiceImpl I created two new fields: @Value("${resource.tasks}") private String resource; @Value("${resource.tasks}/{id}") private String idResource; As you can see, the ${resource.tasks} placeholder is the same name as we defined in our application properties. The reason I wrote two properties is because we have a path for our collection (finding all tasks + adding a new task) and a path for identifying a single task (updating a task + deleting a task) that requires an ID. For now, we defined the ID as {id}. Now we’re going to create a method for each REST endpoint. 
First of all, let’s start with the find-all operation:

public List<TaskDTO> findAll() {
    return Arrays.stream(restTemplate.getForObject(resource, TaskDTO[].class)).collect(Collectors.toList());
}

Quite easy: we used restTemplate.getForObject() to retrieve the collection of tasks. Now, because we can’t directly map to a List<TaskDTO>, I mapped it to TaskDTO[] and used Java 8 streams to collect it as a list. If you’re happy with an array of TaskDTO, you can use that as well though.

Obviously, we have to define TaskDTO in this module as well. If you don’t want to do that, you can create a shared module between both the consumer and the REST API, but for now I thought that would only make matters more complex. So I copied the TaskDTO over: it is a plain class with the task’s fields plus getters and setters.

Now, to create a new task we can no longer use restTemplate.getForObject() since we’re sending a POST request here. Luckily, RestTemplate has a method for that as well:

public TaskDTO create(TaskDTO task) {
    return restTemplate.postForObject(resource, task, TaskDTO.class);
}

The second parameter in this case is the request body you want to pass. For us it’s the new TaskDTO we want to save.

The next method is updating the task. This is a bit trickier than our previous two methods, because there’s no putForObject() method. However, in this case we can use the more advanced restTemplate.exchange() method:

public TaskDTO update(Long id, TaskDTO task) {
    return restTemplate.exchange(idResource, HttpMethod.PUT, new HttpEntity<>(task), TaskDTO.class, id).getBody();
}

This means we have to wrap our request body in an HttpEntity, and our response is also wrapped into a ResponseEntity class, so to retrieve the actual body of the response we have to use the getBody() method. Looking at the parameters more in depth, you can see that it does not differ much from the other calls. However, we are referring to the idResource, so we have to pass the ID somehow. Luckily, all RestTemplate calls allow you to provide the URL variables at the end of the call.
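Those trailing URL variables fill the {id}-style placeholders in order. Conceptually, the mechanics look like the plain-Java sketch below — this is not Spring's actual implementation (Spring's URI templates also handle named variables and encoding), and the URL used is a hypothetical example:

```java
public class UriTemplateSketch {

    // Naive positional placeholder expansion: each {name} is replaced by the
    // next variable in order, mimicking the behavior described in the text.
    static String expand(String template, Object... vars) {
        StringBuilder out = new StringBuilder();
        int v = 0;
        for (int i = 0; i < template.length(); i++) {
            char c = template.charAt(i);
            if (c == '{') {
                i = template.indexOf('}', i); // skip to the closing brace
                out.append(vars[v++]);
            } else {
                out.append(c);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // Hypothetical idResource value, for illustration only
        System.out.println(expand("http://localhost:8080/api/tasks/{id}", 42L));
    }
}
```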
As you may remember, we defined the ID using some special syntax ( {id}), so if we add a parameter to the end of the exchange() method, it will be used as the actual value for the ID placeholder. Lastly, we have the delete operation. This one is actually quite simple as well: public void delete(Long id) { restTemplate.delete(idResource, id); } Just like with the update method, we can add the ID to the end of the restTemplate.delete() method as a varargs property and it will be replace the {id} part in the path. Writing a controller Now that we have defined our REST client, it’s time to use it in our application. To do that, create a new controller called TaskController. The methods in here are going to be quite simple. Every method will just call a service and afterwards it will just redirect back to the findAll() method to show the updated content: @Controller @RequestMapping("/") public class TaskController { @Autowired private TaskServiceImpl service; @RequestMapping(method = RequestMethod.GET) public String findAll(Model model) { model.addAttribute("tasks", service.findAll()); model.addAttribute("newTask", new TaskDTO()); return "tasks"; } @RequestMapping(method = RequestMethod.PUT) public String update(@RequestParam Long id, TaskDTO task) { service.update(id, task); return "redirect:/"; } @RequestMapping(method = RequestMethod.DELETE) public String delete(@RequestParam Long id) { service.delete(id); return "redirect:/"; } @RequestMapping(method = RequestMethod.POST) public String create(@Valid @ModelAttribute("newTask") TaskDTO task) { service.create(task); return "redirect:/"; } } There’s not much special here. We’re using the Model from Spring MVC to add the tasks from the REST API to the model and a new TaskDTO as well to bind to a form which we can use to create a new task. For updating the task I chose to define a separate task, since Spring automatically injects any property matching the request parameters. 
I explained this a bit more into detail in my Spring MVC form tutorial. HTML template The final part of our application is the HTML template. I’m going to use Thymeleaf here, but there are multiple templating engines supported by Spring boot. So, let’s create a file called tasks.html in src/main/resources/templates. This folder is the default folder Spring boot will use to find all templates. The template I used can be found below: <"> <form th: <input type="hidden" name="id" th: <input type="hidden" name="description" th: <input type="checkbox" name="completed" th: <span th:</span> </form> </div> <div class="three columns"> <form th: <input type="hidden" name="id" th: <button class="button u-full-width" type="submit">Delete</button> </form> </div> </div> <hr /> <form method="post" th: <div class="row"> <div class="nine columns"> <input type="text" placeholder="Description of the task" class="u-full-width" th: </div> <div class="three columns"> <button type="submit" class="button-primary u-full-width">Add</button> </div> </div> </form> </div> </body> </html> A few things to notice, like I said, for adding new tasks I’m using form binding, that’s why you can see the th:object attribute on the form itself and the th:field attribute on the input field. For updating and deleting the task I chose to use a hidden form. For updating you can see that all fields are hidden except the checkbox to complete a task. Spring MVC will see that the input names ( completed, description) match the names of the properties within TaskDTO and will map them as such. Obviously we had to apply a small trick here to automatically submit the update form after changing the checkbox state. So that’s why we added the onchange="form.submit()" attribute to it. Testing it out Since we want to run both the REST service and the REST client application simultaneously while the default port of Spring boot is 8080, we’ll have to change on of them. 
So open application.yml or application.properties in the REST client application and add a new property like this: server: port: 8081 Now run both applications and go to. You’ll see that both our dummy tasks are visible: Now, if you enter a description and add a task you’ll see that the application will reload and that the new task is now visible: Similarly, we can update a task by checking its checkbox. The application will reload as well and you’ll see that the checkbox remains checked. If you’re not convinced, try closing and opening the tab again. The checkbox should stay in the same state as you left it. Deleting the task should work properly as well, which should remove the task from the list. Input validation However, what we didn’t do is to handle input validation. If you remember our previous tutorial, we used some input validation on the description field. The question is, what will happen now since we didn’t implement it? We get the default error page of Spring boot (Whitelabel error page). The reason for this is that when the REST call fails, the RestTemplate will throw a HttpClientErrorException, which we didn’t handle yet. To do that, we have to go back to the TaskController and add an exception handler by using the @ExceptionHandler annotation: @ExceptionHandler(HttpClientErrorException.class) public String handleClientError(HttpClientErrorException ex, Model model) { } However, what is the proper way to handle these errors? If you remember last tutorial, we defined a DTO called MessageDTO containing our error message. So the proper solution would be to get that message and show it to the end user. 
First of all, we have to copy the MessageDTO as well, like we did with the TaskDTO: public class MessageDTO { private String message; public MessageDTO(String message) { this.message = message; } public MessageDTO() { } public String getMessage() { return message; } public void setMessage(String message) { this.message = message; } } The next step is to implement the exception handler itself, like this: @ExceptionHandler(HttpClientErrorException.class) public String handleClientError(HttpClientErrorException ex, Model model) throws IOException { MessageDTO dto = mapper.readValue(ex.getResponseBodyAsByteArray(), MessageDTO.class); model.addAttribute("error", dto.getMessage()); return findAll(model); } So, we’re using a Jackson ObjectMapper to read the value of the exception itself and map it to MessageDTO, then we add the message to the model and use that model to call the findAll() method. The reason we can’t use redirect:/ here like the other methods is because that would clear the existing model, so our error message model would be gone into oblivion. We have to do one more thing though, since we used the ObjectMapper here, we have to define it in our controller as well: @Autowired private ObjectMapper mapper; Now all we have to do is to do something when the error model is not null. To do that, I created a <span> right below the description input field, like this: <form method="post" th: <div class="row"> <div class="nine columns"> <input type="text" placeholder="Description of the task" class="u-full-width" th: <span th:</span> <-- This is the new field --> </div> <div class="three columns"> <button type="submit" class="button-primary u-full-width">Add</button> </div> </div> </form> Now, if we run the application again and don’t enter a description, you’ll see that the error message from the backend is successfully forwarded to the end user. 
Conclusion Now that we have written a full CRUD application over a REST API it’s time to conclude why you could use this setup. The good thing about having a separate REST API is that it’s reusable. If we decide to write a mobile app, or have another application showing the amount of tasks we have then we don’t have to create a complete application again, since the REST API can be (re)used over and over again.
CDT/Archive/designs/StaticAnalysis/CheckerIdeas < CDT | Archive | designs | StaticAnalysis(Redirected from CDT/designs/StaticAnalysis/CheckerIdeas) This page is collection of ideas for checker that can be implemented for C/C++ Static Analysis in CDT (Codan). Feel free to add your own ideas or links. Checkers - Unused #include #include <stdio.h> int main() { return 1; } - Malloc called without sizeof consideration int * arr = (int *)malloc(20); // should be malloc(20*sizeof(int)) - Assigned to itself x = x; - Result of comparison is constant (x==x) (!x && x) - Redundant comparison operations (!(!x)) (x!=0 || 0!=x) - Comparison is used on "boolean" values 0<x<3 !x>5 - Consequent re-assignment without usage (sub-case of Value is never used after assignment) x=1; x=2; - Value is never used after assignment int x; x=23; return; - Unused local variable - local variable is not used in function - Undeclared variable - This is compiler error - catch early and have a quick fix so Ctrl-1 work like in java, I so like java quick fixes and code generation! { x = 5; } - Quick fix { int x = 5; } - Buffer over flow - This code is unsafe char x[10]; char y[15]; memcpy(x,y,20); - Also this code char x[10]; x[11] = 'a'; b = x[11]; - Invalid value assignment to enum enum ee { a, b }; ee dd; dd = 7; - Reduce scope - When a variable or a function has a greater scope than where it is used, that scope may be reduced. - For example: a variable with file scope that is only used in one function, can be declared static with function scope. - Or, a function that is only used in one file, may be declared with the static keyword, and its declaration removed from header files included by other files. - Variable with same name in higher scope int a; void foo( void ) { int a; } - Missing "break" in "switch" - finding missing "break" when one "case" ends and another starts, or the "switch" ends. 
Unless /* no break */ switch { case 1: // <- here (before next "case") case 2: /* no break */ // <- This is OK case 3: // <- here (end of "switch") } - Missing "default" in "switch" switch { case 1: case 2: case 3: // <- here (no default) } - Condition always TRUE / FALSE if( 1 > 2 ) // Always FALSE if( 1 < 2 ) // Always TRUE - -> or . - Spotting erroneous use of -> where . was intended. - Spotting erroneous use of . where -> was intended. - Static callback functions - Functions whose pointer is passed around must be declared static. - Names differ within the first 32 characters Flag the names which do not differ within the first 32/24/16 characters myModulePrefix_HandleReceivedDoThisMessage(...) myModulePrefix_HandleReceivedDoThatMessage(...) - Enforce use of types defined in a specific header file - types.h - specific header file typedef char Int8; typedef short Int16; typedef long Int32; typedef int Bool; - stuff.c #include "types.h" void f(void) { Int8 x; /* Ok */ char y; /* Flag: char should be substituted with Int8 */ } - if (a) ... else if (a) pattern if (count == 3) {} else if (count == 8) {} else if (count == 3) {} //second check - char array string checking for empty const char *str = "Hello"; if (str != '\0'){} //Wrong // if (*str != '\0') should be used - Wrong var incrementing/decrementing for ( i = 0; i < 18; i++) for (j = 0; j < 4; i++){} //Possible that wrong var incremented. - Double used var for cycles for (i = 0; i < 2; i++) { //some code for(i = 0; i < num; i++) {} }
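As a flavor of how one of these ideas might be prototyped, here is a toy textual detector for the "assigned to itself" checker. Real Codan checkers work on the parsed AST rather than on raw text, so this regex sketch is only an illustration:

```python
import re

# Flags lines like "x = x;" -- a toy stand-in for an AST-based checker.
SELF_ASSIGN = re.compile(r'\b(\w+)\s*=\s*\1\s*;')

def find_self_assignments(source):
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if SELF_ASSIGN.search(line):
            hits.append(lineno)
    return hits

code = "int x;\nx = x;\nx = y;\n"
print(find_self_assignments(code))  # [2]
```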
Well, I cannot say that this is a regular article. It is rather a small manual and presentation of my tool: Driver Loader, "DLoad". One may say - there are a lot of device driver loaders on the net, but when I typed in Google search engine: "Driver Loader" - I got only one provided by OSR group on the first page. One can say, search deeper and there are a lot of console utils for this, I will answer: I have finished my school long time ago and now I am an old guy who doesn't have 10 hours of free time daily to waste on searching for such a simple util - I need it in a second, right away; and I don't like messing with console tools, what age we got? I want some nice GUI, intuitive and straightforward user interface (in future I will probably make my Driver Loader voice controlled). So why did I code this one? Because OSRLoader for some reasons doesn't meet my needs? Because I simply don't want to traverse the whole of Google just to find one? Anyways, this version here is the 3rd version of DLoad, in the package you will also find version 2, which is written in C++ Gtk+, while version 3 is in C# .NET. So, let's see what we have here. While writing this introduction, let me mention 2 things: DLoad [1]. The code is organized in such a way that you can easily take some parts of it and adopt in your own app, rip it apart, mix and then put them back together. [2]. What actually 3rd version can do: ZwSetSystemInformation NtLoadDriver RtlCreateUserThread CreateRemoteThread NtCreateThreadEx Ok, here we go. I am not going to explain DLoad v.2 code because it is provided: "just like this", "as an alternative", "maybe you like Gtk?", "whatever...". Besides it is outdated. Let's proceed. DLoad I will explain things in their appearance order. Have a look at the GUI once again. There is no need for "Select Driver" button explanation I think... Ok. With these methods, you can load driver. ZwSetSystemInformation is pretty undocumented but has been used for a long long time. 
In fact, it is not the best method; it is the worst, but still it is one. The worst because your driver gets loaded into paged pool, you will not be able to unload it, nor will you be able to delete the driver file until a system reboot. Ok, how do we call this from C# code? The first thing is to have all the native elements declared. For this, the "Native" class has been created; it looks like this:

public class Native
{
    .......
    public const int NtCurrentProcess = -1;
    public const int NtCurrentThread = -2;
    public const long NT_SUCCESS = 0x00000000L;
    public const int STATUS_SUCCESS = 0;
    ........

    [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
    public struct CLIENT_ID
    {
        int UniqueProcess;
        int UniqueThread;
    }

    [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
    public struct UNICODE_STRING
    {
        public ushort Length;
        public ushort MaximumLength;
        public string Buffer;
    }
    ...............

    [DllImport("ntdll.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    unsafe public static extern int ZwSetSystemInformation(
        int Value1,
        IntPtr Value2,
        int Value3
    );
}

I have included there the elements needed by DLoad; you can extend the class for your needs in a very simple way and create something like the famous "ntdll.h" from the net. Next, we have our "DLoad" namespace and the "DriverLoader" class, which contains our methods. Like the previous class, you can use this one in your projects easily. Let's take a look at the function.
namespace DLoad
{
    public class DriverLoader
    {
        public static long ZwStyleLoader(String DriverPath)
        {
            // Load driver with 'ZwSetSystemInformation'
            bool en;
            int Status;                                           // status
            String FullDriverPath = "\\??\\" + DriverPath + '\0'; // driver path
            Native.SYSTEM_LOAD_AND_CALL_IMAGE img;                // structure initialization

            Status = Native.RtlInitUnicodeString( // initialization of unicode string
                out (img.ModuleName),             // pointer to UNICODE_STRING
                FullDriverPath                    // PCWSTR
            );
            if (Status != Native.STATUS_SUCCESS)
                return Native.STATUS_INITUNISTRING_FAILURE; // unicode string
                                                            // is not initialized

            Status = Native.RtlAdjustPrivilege( // setting privileges
                10,                                                // int
                true,                                              // BOOL
                Native.ADJUST_PRIVILEGE_TYPE.AdjustCurrentProcess, // BOOL
                out en                                             // BOOL *
            );
            if (Status != Native.STATUS_SUCCESS)
                return Native.STATUS_PRIVILEGES_NOT_SET; // privileges not set

            IntPtr buffer = Marshal.AllocCoTaskMem(Marshal.SizeOf(img)); // this will point to
                                                                         // our structure like 'PVOID'
            Marshal.StructureToPtr(img, buffer, false);                  // structure to pointer
            // ^ that is a working example of how PVOID stuff is converted to C# style:
            // we pass a pointer to the structure through an IntPtr (which is actually a handle),
            // previously allocating some memory, then we pass 'buffer' as an IN parameter

            Status = Native.ZwSetSystemInformation(
                Native.SystemLoadAndCallImage, // dword
                buffer,                        // pvoid
                Marshal.SizeOf(img)            // ulong
            );

            return Status; // just get the status code to know what to do next
        }

/***************************************************************************************/

The coolest thing in this small piece of code is that, man, it is C#! C# for me is even more high-level than interpreted languages, like Perl for example; it is almost like real human language already.
Just compare:

static inline char * x_strcpy(char *dest, char *src)
{
    int d1, d2, d3;
    __asm__ __volatile__ (
        "1:\tlodsb\n\t"
        "stosb\n\t"
        "testb %%al, %%al\n\t"
        "jne 1b\n\t"
        : "=&a" (d1), "=&S" (d2), "=&D" (d3)
        : "1" ((ulong) src), "2" ((ulong) dest));
    return dest;
}

and String.Copy. And still, here we have access to the native Windows API! Well, otherwise I would never code a thing in C#. Back to the subject. This is how it's done, in the case of NtLoadDriver and the Service Control Manager too; you have all 3 methods in one single class. Now we call it:

Status = DriverLoader.ZwStyleLoader(FileNamePath);
// or...
Status = DriverLoader.NtStyleLoader(FileNamePath, UnloadDriver, DeleteDriver, DeleteRegEntry, UnloadMode);
// or...
DriverLoader.ScmStyleLoader(FileNamePath, UnloadDriver, DeleteDriver, DeleteRegEntry, UnloadMode);

Next thing: Exit Action. This is a standard action which will be performed after the driver loading routine. When DriverEntry has finally been called, what do we do? For example, if your driver just prints "Hello World from driver!" and returns, you simply want it to be unloaded right after execution. So in such a case, you check 'Unload driver' and 'Delete driver registry entries'. If you want the driver file to be deleted, you check 'Delete driver file'. And so on. But if you are testing a more advanced driver, a server for example, or a driver working with IOCTLs, you don't want it to be unloaded right after execution; in such a case you need to uncheck every Exit Action option.

The next thing is the 'Injection' option. If you check it, DLoad will expand, presenting a new set of options. First of all, the injection function is implemented in a separate DLL which is built into DLoad itself. If you check 'use injection', DLoad will unpack this DLL into the windows/ folder and import the function from it.
Here is a prototype:

DWORD LoadDriverWithInjection(
    int ProcID,
    char *DriverPath,
    int LoadMode,
    BOOL UnloadDriverMode,
    BOOL DeleteDriverMode,
    BOOL DelDrvRegMode,
    int Mode,
    char *name,
    bool un_load
);

So if you ever need such functionality, you've got my DLL already. You may ask: what for? The injection, I mean? Well, actually I don't know for sure myself :P It is just here. Anyway, show me another driver loader which uses an injection method for loading drivers? Ha! There is no such thing! And mine is distinguished by this. Ok, about the functions:

[1]. RtlCreateUserThread - should work on any Windows against any type of process (Notepad, svchost, etc.)
[2]. CreateRemoteThread - I don't know what more to say about this one than has already been said
[3]. NtCreateThreadEx - this is a function specific to the latest Windows versions (Vista, Server 2008, etc.)

Usage from the main app:

[DllImport("DLoadDLL.dll", CharSet = CharSet.Auto, SetLastError = true)]
public static extern int LoadDriverWithInjection(
    uint TargProc,
    byte[] DriverPath,
    int LoadMode,
    Boolean UnloadDriverMode,
    Boolean DeleteDriverMode,
    Boolean DelDrvRegMode,
    int Mode,
    byte[] DriverName,
    Boolean UN_LOAD
);

And then in our main class, we call it:

String TargetProcess = targ_proc_ent.Text;
uint ProcID = Native.GetPidByName(TargetProcess);
String DriverFile = Path.GetFileName(FileNamePath);
System.Text.ASCIIEncoding encoding = new System.Text.ASCIIEncoding();
byte[] Test1 = encoding.GetBytes(DriverFile);
byte[] Test2 = encoding.GetBytes(FileNamePath);

Status = Native.LoadDriverWithInjection(
    ProcID,
    Test2,
    Native.NTLOADMODE,
    UnloadDriver,
    DeleteDriver,
    DeleteRegEntry,
    InjectionModeEx,
    Test1,
    UnloadMode
);

The next option, if you have checked 'use injection', is to select the target process; by default it is Notepad. Then we have 2 radio buttons. The default mode is LOAD, and DLoad v.2 [Gtk+] has this mode only; the C# version has UNLOAD too. UNLOAD mode is responsible for ...
unloading the driver! Simple. For example, if you have loaded a driver without quick unload (I was talking about this somewhere above) and now want to unload it, simply select your driver file again, check 'UNLOAD' and hit the 'Execute' button. The driver will be unloaded. One thing here: while selecting the method for unload, you can't select 'via ZwSetSystemInformation', and the method should be the same as when you loaded your driver. So if your driver was loaded with the NtLoadDriver method, it should be unloaded with the NtLoadDriver method (I mean method, not function xD). Hope it is clear.

And the last 'widgets' we have here: the buttons at the very bottom. From left to right: Execute, Info, Exit, Reboot, Shutdown. The Reboot and Shutdown buttons are responsible for rebooting or shutting down the machine. They use native functions:

void button_reboot_Click(object sender, EventArgs e)
{
    bool en;
    int Status;

    Status = Native.RtlAdjustPrivilege(
        19,
        true,
        Native.ADJUST_PRIVILEGE_TYPE.AdjustCurrentProcess,
        out en
    );

    Status = Native.NtShutdownSystem(
        Native.SHUTDOWN_ACTION.ShutdownReboot
    );

    if (Status != Native.STATUS_SUCCESS)
    {
        String Debug = String.Format("Failure, Status: {0}", Status);
        status_output.Text = Debug;
    }
}

What makes them more effective: no wait time, no logout window. They are here because I need them here. Personally, I test drivers on VMware and sometimes there are situations when you need to reboot the machine. Something went wrong but you cannot unload the driver because it got stuck somewhere, or some other bad thing may have happened, or simply a fast reboot was needed (I hate waiting until all this business with logging out is done). Off the subject, here is an interesting thing: there is a function available while you are coding Linux kernel modules which shuts down the computer, without any logging out, in probably less than a second! I don't remember its name right now... KernelShutdown maybe? Well, nothing more to explain.
Read the code - it is the best self-explanatory thing. Thanks for your attention.
http://www.codeproject.com/Articles/43461/Driver-Loader-DLoad-from-Scratch?msg=3314795
int - An integer number
bool - Boolean, it can be only true or false
float - Floating point number
double - Double precision floating point number
char - A single character
string - String of text

Before using a variable, it needs to be initialized. The syntax is:

type identifier;

or

type identifier = value;

In this lesson we will talk only about integers; these are the easiest to work with. Here is an example of initializing an integer without and with an initial value. I will call the variable myvariable:

int myvariable; //without initial value
int myvariable = 5; //the initial value is 5

As I said, the value of variables can be changed using operators. The basic operators in C++ are: + , - , * , / , % , = , += , -= , *= , /= , %= . To write out the value of a variable just leave the double quotes out of the cout line: cout << myvariable;, but we will talk more about cout in the next lesson. Here is an example program that uses an integer variable:

#include <iostream>
using namespace std;
int main(){
    int myvariable = 1; //the variable must be declared before use
    cout << myvariable << endl;
    system("pause");
}

The above program should write out 1. Play with the code, remove and add new lines to familiarize yourself with integers.

Increment and Decrement

Using the increment (++) or decrement (--) operators means adding or subtracting 1 from the value of an integer variable. These operators can be used in two ways:

++number; //the value is increased before the value is used in the rest of the expression

or

number++; //the value is increased after the value is used in the rest of the expression

For example, try out the following code:

#include <iostream>
using namespace std;
int main(){
    int numberone = 2;
    int numbertwo = 2;
    cout << ++numberone << endl << numbertwo++ << endl;
    system("pause");
}

As you probably noticed, the two written values are not the same. The first is 3, because the value is increased before it's written, and the second value is 2, because it's increased only after it's written. Note: The increment and decrement operators work only with integers!

float and double

Float and double work exactly like integers, but these can be floating point numbers, too (2.75 or 9.24323). The difference between float and double is that a double can have more digits after the decimal point than a float. Initializing a float or a double should look like the examples below:

float myvariable;
or
float myvariable = 4.23;

double myvariable;
or
double myvariable = 5.24;

Making the result of integer division a float

Try to assign the value of dividing two integers to a float. The result won't have a floating point part, no matter what numbers you use. To solve this, look at my example:

int numone = 1;
int numtwo = 2;
float result = (float) numone / numtwo;

Try out the above example; it should work.

Booleans

Booleans can have only two values: true (1) or false (0). Initializing a boolean should look like the examples below:

bool myvariable = true;
or
bool myvariable = false;

Note: In C++, a and A are not equal. Be careful, don't use uppercase letters to write true or false. This is true for the names of the variables, too. For example: myvariable and Myvariable are two different variables.
http://forum.codecall.net/topic/60570-c-lesson-about-variables/
I'd read somewhere that developers are lazy. It's an old joke passed around, most likely stemming from a quote often miscredited to Bill Gates: "I will always choose a lazy person to do a difficult job because a lazy person will find an easy way to do it." Now this may be true for some, but when I rediscovered my love for coding I dismissed this; I wasn't going to be lazy, I was going to be a productive dev, a 10x. I'm now just over a year into my journey, having learned Python through a bootcamp, and I'm currently teaching myself JavaScript. I have been on this journey for a whole year, rarely taking a day off, and all I can say is that it makes you lazy. And thus was the impetus for this piece. The real catalyst for the subject was when I found myself opening the four apps I use most when writing code, namely VS Code, GitHub Desktop, Chrome Canary and Hyper Terminal. Now I understand that it's just four apps, it's really not that difficult, yet I still huffed and rolled my eyes whenever I had to do it. The thought then occurred to me: "There must be an easier way to do this", and this was the seed. It was small, I could've shrugged it off and continued the way I had for the past months, but I gave it a bit of heed, and thus the result was what I've called my DevStart. The more I thought about it, the more excited I got; I could finally put what I'd learned to good use, to build something that I could use to solve a problem I was having in my daily routine. When starting, I knew I would have to access the OS using Python, and with import os the first step was done. Next, I knew that there was a way to open files using Python, but I didn't know how to open a program, and after a bit of digging I found some rather verbose and overcomplicated ways of doing it, importing a whole host of modules to get this done. I did try one or two, but they never worked, throwing error after error.
This led me to what I do whenever I run into an issue with my JS, and something that I've taken to be something like a mantra... "Just go look at the docs". After a lot of searching and punching keywords that were relevant to my goal into the search bar, I came across the function os.startfile(), into which you plug the path to your app's exe. The code ended up looking like this (paths removed):

import os

os.startfile(path to VS Code)
os.startfile(path to Hyper Terminal)
os.startfile(path to GitHub Desktop)
os.startfile(path to Chrome Canary)

I typed all this out, and upon running the file it worked smoothly. Great! Job done! Not quite. I didn't want to have to open one program only to have it open another three; my aim was to make this as easy as possible. And making it as easy as possible meant making my new little script work like an executable. To do this I created a batch file, which looks like this:

"C:\Users\Connor\AppData\Local\Programs\Python\Python37-32\python.exe" "C:\Code\dev-start\dev-start.py"
pause

Creating the batch file was relatively easy: create a new file in your editor and save it as a .bat file. Now what this does is start the Python interpreter, which then runs the script and, presto, we have four apps open. I do understand that I spent probably ten times the amount of time automating a process that takes me about 30 seconds, but I'm not going to lie and say I wasn't impressed with myself. It was shortly after I had gotten it to work, while I was showing it off to a friend, that the quote drifted into my mind; I now understood the core of it. It's less about being lazy, and more about being efficient, finding a way to remove the small tasks that aren't helping you be productive, or minimizing unproductive behavior. I've tried to implement this into the way I work: "how do I get the job done, to the best of my ability, using minimal effort". What this has done is develop a better problem solving mindset.
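Were I to grow the script a bit, it might look something like this sketch. The paths are placeholders (not my real ones), and the dry_run flag exists only so the logic can be exercised without actually launching anything:

```python
import os
import subprocess
import sys

# Hypothetical app list; replace these placeholder paths with your own.
APPS = [
    r"C:\Path\To\Code.exe",
    r"C:\Path\To\Hyper.exe",
]

def launch(paths, dry_run=False):
    """Try to launch each app, returning the list of paths attempted."""
    attempted = []
    for path in paths:
        attempted.append(path)
        if dry_run:
            continue  # skip launching; handy for testing the loop itself
        if sys.platform == "win32":
            os.startfile(path)  # Windows-only convenience function
        else:
            subprocess.Popen([path])  # rough fallback for other platforms
    return attempted

if __name__ == "__main__":
    print(launch(APPS, dry_run=True))
```

Nothing fancy, but it keeps the app list in one place and no longer hard-codes Windows behaviour into every line.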
I hope you enjoyed my delve into "laziness"; you might even find that you're able to use my script to your own advantage. Thanks for reading!
https://practicaldev-herokuapp-com.global.ssl.fastly.net/connorgladwin/python-and-the-art-of-laziness-31hi
We had a lengthy discussion recently during code review: is scala.Option.fold() idiomatic and clever, or maybe unreadable and tricky? Let's first describe what the problem is. Option.fold does two things: it maps a function f over the Option's value (if any) or returns an alternative alt if it's absent. Using simple pattern matching we can implement it as follows:

val option: Option[T] = //...
def alt: R = //...
def f(in: T): R = //...

val x: R = option match {
  case Some(v) => f(v)
  case None => alt
}

If you prefer a one-liner, fold is actually a combination of map and getOrElse:

val x: R = option map f getOrElse alt

Or, if you are a C programmer that still wants to write in C, but using the Scala compiler:

val x: R = if (option.isDefined) f(option.get) else alt

Interestingly, there is a similar function known as cata. cata at least has some theoretical background; Option.fold just sounds like a random name collision that doesn't bring anything to the table, apart from confusion. I know what you'll say: TraversableOnce has fold as well! But its contract is different, which breaks the consistency, especially when you realize that Option.foldLeft() and Option.foldRight() have the correct contract (but that doesn't mean they are more readable). The only way to understand folding over an Option is to imagine the Option as a sequence with 0 or 1 elements. Then it sort of makes sense, right? No.

def double(x: Int) = x * 2

Some(21).fold(-1)(double) //OK: 42
None.fold(-1)(double)     //OK: -1

The Foldable typeclass describes various flavours of folding in Haskell. There are the familiar foldl/foldr/foldl1/foldr1, in Scala named foldLeft/foldRight/reduceLeft/reduceRight accordingly. While the other folds are quite complex, this one barely takes a foldable container of ms (which have to be Monoids) and returns the same Monoid type.
A quick recap: a type can be a Monoid if it has an associative binary operation with a neutral element. For example, integers form a monoid under addition with neutral 0 (x + 0 == 0 + x == x for any x) and under multiplication with neutral 1 (x * 1 == 1 * x == x for any x). The String instance is the simplest: folding a list of Strings concatenates them, because concatenation is the operation defined in the Monoid String typeclass instance.

Back to options (or more precisely, Maybe). Folding over a Maybe whose type parameter is a Monoid (I can't believe I just said it) has an interesting interpretation: it either returns the value inside the Maybe or the default Monoid value:

> fold (Just "abc")
"abc"
> fold Nothing :: String
""

Just "abc" is the same as Some("abc") in Scala. You can see here that if the Maybe String is Nothing, the neutral String monoid value is returned, that is, an empty string.

Summary

Haskell shows that folding (also over Maybe) can at least be consistent. In Scala, Option.fold is unrelated to List.fold, confusing and unreadable. I advise avoiding it and staying with the slightly more verbose map/getOrElse transformations or pattern matching.

PS: Did I mention there is also Either.fold() (with yet another contract) but no Try.fold()?
http://www.javacodegeeks.com/2014/06/option-fold-considered-unreadable.html
Part 1 introduced Redis, focusing largely on the 5 data structures and showing how you might use them. In this part we'll build a simple app backed by Redis. Before we start, you might have noticed that Redis' API isn't like most. Rather than having 4 generic CRUD methods, Redis has a number of specialized methods. So far we've only looked at a small percentage of them. Our application will only make use of a handful. This is a pretty common usage pattern. Some commands you might never use; some commands make you think wow, that's exactly what I need when you happen to be browsing through the online reference. Mastering Redis isn't about memorizing all the commands (not that there's an insane amount). It's about (a) understanding the 5 data structures, (b) understanding how to model data and query it using Redis and (c) combining a and b to easily tell whether Redis is a good fit.

With that said, we'll be looking at the Redis portion of jobs.openmymind.net. All it does is collect programming jobs from various places, display them and tweet them. The full source is available on github. I admit upfront that using a relational or document database would probably be more straightforward.

A background process runs and hits various json and RSS services to get current jobs. Our approach will be to hold each job in its own String value. The key format will be job:SOURCE:SOURCE_ID. So, if we give github a source of 1, a github job will have a key like job:1:73c9e09a-09b0-11e1-9819-355783013ce0. The value for this key will be the job details. Assuming we've parsed our job into a hash, saving it into Redis will look something like:

def save(job)
  key = "job:#{job[:source]}:#{job[:source_id]}"
  redis.set(key, job.to_json)
end

Now, we simply want to display the jobs in reverse chronological order. We don't even do paging, we just display the last X jobs. We can't do that with the above code. We need keys to be sorted by date.
My first attempt was to simply use a list, like we saw in part 1:

def save(job)
  key = "job:#{job[:source]}:#{job[:source_id]}"
  redis.multi do #begins a transaction
    redis.set(key, job.to_json)
    redis.lpush('jobs', key)
  end
end

But that doesn't really work, because the time at which we insert a job into Redis doesn't necessarily map to when the job was posted. Using this approach, jobs are sorted by the date we process them, rather than the date the jobs are posted. The solution? Use a sorted set instead of a list:

def save(job)
  key = "job:#{job[:source]}:#{job[:source_id]}"
  redis.multi do
    redis.set(key, job.to_json)
    redis.zadd('jobs', job[:created_at], key) # :created_at is already an integer (seconds since ..)
  end
end

What happens if we've already stored the job? It turns out that this isn't a problem. We can simply re-set our main job String (in case any of the job details have changed) and, since we are using a set, can re-add the key to our set. We'll come back to this in a bit, but for now, this is a good start. We not only have the details of each job saved, but also a sorted list of job keys. From this, getting the jobs for display isn't difficult:

def get_latest_jobs
  keys = redis.zrevrange('jobs', 0, 150)
  jobs = redis.mget(*keys)
  jobs.map do |j|
    job = JSON.parse(j)
    job['created_at'] = Time.at(job['created_at'])
    job
  end
end

Sorted sets are ordered from lowest score to highest. That means that more recent times will have higher scores (more seconds have passed from 1970 to today than had passed from 1970 to 1980). If we grabbed the first 150 values (using zrange) we'd grab the oldest 150 jobs. We want to grab the 150 most recent jobs, which would have the highest scores. This is why we use zrevrange (rev for reverse). All we get from our set is an array of keys. We use mget to get the actual job values (which we then deserialize).
If you aren't familiar with Ruby, the splat operator (*) pretty much turns an array (which is what we have) into varargs, which is what mget takes (in the Ruby driver anyways). That's really all there is to it.

Since we only display the latest 150 jobs, there's no reason to keep a bunch of stale jobs around. Memory is money, after all. What we want to do is grab stale jobs, delete the keys and remove them from our jobs sorted set. Let's look at the whole thing in one step:

keys = redis.zrange('jobs', 0, -300)
redis.multi do
  redis.del(*keys)
  redis.zrem('jobs', *keys)
end

(The zrange read happens before the transaction; inside a multi block, command results aren't available until the block completes.) Everything here is probably pretty clear, except for this -300 when we get the jobs. Notice here that we are using zrange, which means the lowest scores (or oldest jobs, since we are sorting by date) come first. If we have 500 jobs, this will delete jobs 1 to 200 (500-300). Since we only display 150 jobs, we could use -150, but I decided to keep a buffer around for some reason. As an alternative, we could have used the EXPIRE key-command to let Redis automatically clean up old jobs (say, 10 days old). We'd still need to clean our sorted set though. You can't expire individual values; remember, things are very key-focused.

Redis has a nice publication and subscription API. And there are nice libraries like Resque which let you build robust queues on top of Redis. I went with a much more basic approach. When we have a new job, we'll add the key to a list. So, saving a job now looks like:

def save(job)
  key = "job:#{job[:source]}:#{job[:source_id]}"
  if !redis.exists(key)
    redis.rpush('jobs:new', key)
  end
  redis.multi do
    redis.set(key, job.to_json)
    redis.zadd('jobs', job[:created_at], key)
  end
end

We can then run a background task to pop jobs off and tweet them:

def get_new_job
  key = redis.lpop('jobs:new')
  key ? JSON.parse(redis.get(key)) : nil
end

Basic, not robust, but it's all I needed. Maybe you don't feel like a master just yet. In truth, there is more to learn.
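Before moving on: if the sorted-set scoring still feels abstract, here's a toy in-memory stand-in (plain Ruby, no Redis required; ToySortedSet is my own illustrative class, not part of the site's code) that mimics just enough of zadd/zrevrange to show why the newest jobs come back first:

```ruby
# A toy stand-in for a Redis sorted set: members mapped to numeric scores.
class ToySortedSet
  def initialize
    @entries = {}
  end

  # zadd: add (or re-add) a member with a score; re-adding just updates it.
  def zadd(score, member)
    @entries[member] = score
  end

  # zrevrange: members ordered from highest score down to lowest.
  def zrevrange(start, stop)
    @entries.sort_by { |_, score| -score }.map(&:first)[start..stop]
  end
end

jobs = ToySortedSet.new
jobs.zadd(100, "job:1:old")
jobs.zadd(300, "job:1:new")
jobs.zadd(200, "job:1:mid")
jobs.zadd(300, "job:1:new") # re-adding is harmless, like in Redis

puts jobs.zrevrange(0, 150).inspect
# highest score (most recent timestamp) first
```

Since our scores are posting timestamps, "highest score first" is exactly "newest job first", which is why get_latest_jobs reads cleanly off zrevrange.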
Hopefully though you now have a good enough foundation to spend a bit of time and really get comfortable with Redis. There are various ways to install Redis. It can be found in most package managers (including brew), or you can download the source. Windows users should grab this port (which I've never had any problem with during development). Once downloaded, you can start the server via redis-server, and start up a client via redis-cli. You can also download a client for your favorite programming language. Alternatively, you can try the online interactive tutorial. Finally, if you are interested in more Redis modeling discussions, you might be interested in these other two posts of mine: Practical NoSQL - Solving a Real Problem with MongoDB and Redis and Rethink your Data Model. If you are interested in a more complex real-world application, check out LamerNews.
https://www.openmymind.net/2011/11/8/Redis-Zero-To-Master-In-30-Minutes-Part-2/
December 2, 2002

These release notes contain important information and caveats for the Cisco Content Transformation Engine 1400 Series (CTE) and Design Studio Release 2.7. This document supplements information in the Cisco Content Transformation Engine 1400 Series Configuration Note and the Design Studio User Guide. These release notes contain the following topics:

Improper configuration of the CTE can result in a security risk. Before you deploy the CTE, verify that it does not have access to protected intranet sites. By default, the CTE proxies only the web pages that it has identified (transcoded in Design Studio) to prevent access to protected servers that are on the same subnet as the CTE. If you choose to override that default, do not put the CTE on the same subnet as protected servers. Also, be aware of the following security considerations: Because IP phones do not support SSL, the connection between them and the CTE is not secure. We recommend that you locate the connection between an IP phone.

These sections describe how to upgrade your CTE and Design Studio to release 2.7.

Upgrading from Release 2.2a or 2.5

To upgrade to release 2.7 from release 2.2a or 2.5, perform these steps:

Step 2 Insert the CTE Release 2.7 Restore CD-ROM into the CD-ROM drive of the CTE.
Step 3 Power down the CTE.
Step 4 Wait approximately 10 seconds.
Step 5 Power up the CTE. The serial console displays the upgrade progress and indicates when the installation has successfully completed.
Step 6 When the upgrade is complete, eject the Restore CD and reboot the CTE.
Step 7 When the CTE restarts, the CTE Serial Console appears.

Upgrading from Release 1.1.6

To upgrade from release 1.1.6, perform these steps:

If you do not have the 2.2a Restore CD, you can obtain it in these two ways. For instructions on downloading these files, see the "Downloading Upgrades from Cisco.com" section.

Step 2 Once you have upgraded to 2.2a, obtain a Release 2.5 Restore CD from Cisco TAC by calling 1-800-553-2447 in the United States.
To obtain a directory of toll-free Cisco TAC telephone numbers for all countries, go to this URL:

Step 3 Once you have the Release 2.5 CD, follow the instructions in the "Upgrading from Release 2.2a or 2.5" section.

Downloading Upgrades from Cisco.com

There are three types of patch files that you can download. To download a partial or full upgrade file, perform these steps:

Step 2 Download the appropriate patch file.
For a partial upgrade that does not overwrite your configuration files, download the following patch file: CTEServerPartialUpgrade-version.tgz
For a full upgrade that overwrites any existing configuration information, download the following patch file: CTEServerFullUpgrade-version.tgz
Step 3 Log on to the CTE Administration screens from any web browser, entering the following URL:
where:
Step 4 For a full upgrade, use your regular administrative username and password to log on to the Administration screens. For a partial upgrade, use root as the username and cteadmin as the default password.
Step 5 From the Upload Server Upgrade field on the Administration > Uploads screen, browse for the file to upload, and select it.
Step 6 Click the Submit button.
Step 7 Once the upload is complete, the CTE will automatically restart.
Step 8 If you downloaded the full patch file (CTEServerFullUpgrade-version.tgz), you will need to reconfigure the CTE from the CTE console. For more information on reconfiguring the CTE, refer to the description of the CTE Console menus in the CTE 1400 Configuration Note.

To download an ISO image file, perform these steps:

Step 2 Download the ISO Images file (CTEServerISO-version.zip).
Step 3 Burn a CD containing the ISO Images file information.
Step 4 Go to the "Upgrading from Release 2.2a or 2.5" section, and use the CD that you have created in this procedure to upgrade your CTE.

To upgrade Design Studio, install the Design Studio software from the CD. For more information on installing Design Studio, refer to the Design Studio User Guide.
When you open a configuration file created in a previous release, Design Studio displays the "Update CTE Design Studio File?" dialog box. You must upgrade the file to work on it in release 2.7. To upgrade a configuration file to release 2.7, perform these steps:

Step 2 In the "Update CTE Design Studio File?" dialog box, choose whether or not you want to use the same file name.
Step 3 Before you change or add rules to a page, we recommend that you refresh the page. In Design Studio, click a page, click the Browse tab, and click the Refresh icon. Alternatively, you can browse to the page in Design Studio and click Apply Rules.

If the original HTML was malformed (for example, it contained two body elements or a meta element inside of a body element), some transformation rules may no longer work.

These sections describe new features in the CTE and Design Studio release 2.7.

The CTE includes the following new features:

You can make ScreenTop Menu always available to IP phone users by using the soft switch to set a phone or phone group's idle URL to the CTE IP address. ScreenTop Menu displays on any device when the device connects to the CTE.

Design Studio includes the following new features:

For more information, refer to the Design Studio User Guide.

The CTE does not support the following software features:

For a list of unsupported JavaScript features, refer to "JavaScript Features Not Supported by the CTE" in Chapter 6 of the Design Studio User Guide.

These sections supplement the CTE and Design Studio documentation.

The DDF contains a new subkey, convert-inputs, under the keys geometry > display > images. Use convert-inputs for devices that support images but not input type="image" elements. This key converts input elements to type="submit".

The syntax for the XHTML extension that disables URL rewriting is incorrectly stated in the Design Studio User Guide.
The correct syntax is as follows: <

Design Studio now enables you to copy all transformation rules from one device to another. This new feature is useful when you upgrade Design Studio to support additional device types.

To copy all rules from one device to another, perform these steps:

Step 2 From the Tools menu, choose Map New Devices.

Step 3 In the Open dialog box, choose the project that you want to update, and click Open. The project that you choose should be the project to which you will copy rules.

Step 4 In the From Device area of the Device Mapping dialog box, select the device that has the transformation rules that you want to copy.

Step 5 In the To Device(s) area, select each device to which you want to copy the rules. If a device already has rules defined, those rules will be replaced with the new rule set.

Step 6 Click Map.

Step 7 Open the configuration file and verify that the rules have the intended results for each device.

These options have been removed from the CTE Administration Console:

This section contains the following topics:

This section describes the known limitations for the CTE and Design Studio release 2.7.

The CTE does not support web pages that exceed 256 KB. Workaround: None.

You cannot delete or edit multiple rules simultaneously by selecting them in the Transformation Rules or Identifier Rules lists and then clicking the Delete or Edit button. Design Studio performs the command only on the last rule selected. Workaround: Delete or edit the rules individually.

Design Studio and the CTE support only the XSL and XHTML namespaces. If you import an XSL style sheet that contains references to other namespaces, the console displays a message such as the following: "Unsupported stylesheet operation: no such prefix: rdf." Design Studio might stop responding after you load such an XSL or XML page. Workaround: Remove references to unsupported namespaces in XSL style sheets.
When you choose either File > Open or File > Save As and the appropriate dialog box appears, the list/detail icons in the upper-right corner of the dialog box do not work. Workaround: Use your Windows file browser to view file details.

The CTE does not correctly render animated GIFs. Workaround: Use Design Studio to clip animated GIFs.

The Dial Number rule does not work on an IP phone when applied to an element if the label is the same as the element. For example, if you attempt to apply the rule to the phone number 888-123-1234 and the label is also 888-123-1234, the rule does not work, and the phone number does not display on the IP phone. Workaround: Choose a different element for the label.

After you remove a page from a project, Design Studio does not allow you to undo the operation. Workaround: Before removing a page, back up the configuration file.

When you use the Paginate rule to paginate an anchor that contains a link to slash (a href="/"), the Paginate rule does not appear to work while you are in Design Studio. Workaround: None needed. The Paginate rule, when used as documented, works correctly for content displayed on devices.

Design Studio checks login passwords for validity up to the eighth character only. Workaround: Do not create passwords longer than eight characters for Design Studio users.

The CTE does not process content with a content-type of gzip.

The CTE currently does not support frames. If an IP phone or wireless device user attempts to navigate in more than one frame of a multi-frame page, the navigation attempt fails and the browser reports an internal error 500. Workaround: Use Design Studio transformation rules to change the page so that only one frame is sent at a time.
If you receive a dialog box about a merge conflict while attempting to upload a configuration file to the CTE and you click Ignore and Continue, the configuration file on the CTE is overwritten even if you originally requested a merge. Workaround: When the merge conflict dialog box appears, click Cancel, wait a few minutes, and try the upload again.

This section describes caveats resolved in the CTE and Design Studio release 2.7.

Design Studio now supports XSL style sheets that have an xsl:include element.

You can no longer save a configuration file to a blank file name. The problem was a condition of Java 1.2.2 and does not occur now that Design Studio uses Java 1.3.

You can perform multiple uploads of a DDF file from Design Studio without reconnecting to the CTE before each upload.

The following documents are available for the CTE and Design Studio:
Problem: Determine if a number is prime, with an acceptably small error rate.

Solution: (in Python)

import random

def decompose(n):
    # Write n = 2^exponentOfTwo * remainder, with remainder odd.
    exponentOfTwo = 0

    while n % 2 == 0:
        n = n // 2
        exponentOfTwo += 1

    return exponentOfTwo, n

def isWitness(possibleWitness, p, exponent, remainder):
    possibleWitness = pow(possibleWitness, remainder, p)

    if possibleWitness == 1 or possibleWitness == p - 1:
        return False

    for _ in range(exponent):
        possibleWitness = pow(possibleWitness, 2, p)

        if possibleWitness == p - 1:
            return False

    return True

def probablyPrime(p, accuracy=100):
    if p == 2 or p == 3:
        return True
    if p < 2:
        return False

    exponent, remainder = decompose(p - 1)

    for _ in range(accuracy):
        possibleWitness = random.randint(2, p - 2)
        if isWitness(possibleWitness, p, exponent, remainder):
            return False

    return True

Discussion: This algorithm is known as the Miller-Rabin primality test, and it was a very important breakthrough in the study of probabilistic algorithms. Efficiently testing whether a number is prime is a crucial problem in cryptography, because the security of many cryptosystems depends on the use of large randomly chosen primes. Indeed, we've seen one on this blog already which is in widespread use: RSA. Randomized algorithms also have quite useful applications in general, because it is often the case that a solution which is correct with overwhelming probability is good enough for practice. But from a theoretical and historical perspective, primality testing lay at the center of a huge problem in complexity theory. In particular, it is unknown whether algorithms which have access to randomness and can output probably correct answers are more powerful than those that don't. The use of randomness in algorithms comes in a number of formalizations, the most prominent of which is called BPP (Bounded-error Probabilistic Polynomial time). The Miller-Rabin algorithm shows that primality testing is in BPP.
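As a quick sanity check (my own usage sketch, not part of the original post), we can cross-check probablyPrime against naive trial division on all small inputs. The definitions from the solution are repeated here so the snippet runs standalone:

```python
import random

def decompose(n):
    # Write n = 2^exponentOfTwo * remainder, with remainder odd.
    exponentOfTwo = 0
    while n % 2 == 0:
        n = n // 2
        exponentOfTwo += 1
    return exponentOfTwo, n

def isWitness(possibleWitness, p, exponent, remainder):
    # Return True if possibleWitness proves that p is composite.
    possibleWitness = pow(possibleWitness, remainder, p)
    if possibleWitness == 1 or possibleWitness == p - 1:
        return False
    for _ in range(exponent):
        possibleWitness = pow(possibleWitness, 2, p)
        if possibleWitness == p - 1:
            return False
    return True

def probablyPrime(p, accuracy=100):
    if p == 2 or p == 3:
        return True
    if p < 2:
        return False
    exponent, remainder = decompose(p - 1)
    for _ in range(accuracy):
        possibleWitness = random.randint(2, p - 2)
        if isWitness(possibleWitness, p, exponent, remainder):
            return False
    return True

def isPrimeByTrialDivision(n):
    # The slow but certain baseline.
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# The two tests agree on every n below 2000 (with overwhelming probability).
print(all(probablyPrime(n) == isPrimeByTrialDivision(n) for n in range(2000)))
# Carmichael numbers like 561 = 3 * 11 * 17 fool the plain Fermat test,
# but Miller-Rabin catches them.
print(probablyPrime(561))
```

Note that with accuracy=100 the chance of a composite slipping through is at most 4^-100 per call, so a disagreement with trial division is astronomically unlikely.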
On the other hand, algorithms solvable in polynomial time without randomness are in a class called P. For a long time (from 1975 to 2002), it was unknown whether primality testing was in P or not. There are very few remaining important problems that have BPP algorithms but are not known to be in P. Polynomial identity testing is the main example, and until 2002 primality testing shared its title. Now primality has a known polynomial-time algorithm (the AKS algorithm). One might argue that (in theory) the Miller-Rabin test is now useless, but it's still a nice example of a nontrivial BPP algorithm.

The algorithm relies on the following theorem:

Theorem: if p is a prime, let 2^s be the maximal power of 2 dividing p - 1, so that p - 1 = 2^s d and d is odd. Then for any 1 <= n <= p - 1, one of two things happens: n^d = 1 mod p, or n^(2^j d) = -1 mod p for some 0 <= j < s.

The algorithm then simply operates as follows: pick a nonzero n at random until both of the above conditions fail. Such an n is called a witness for the fact that p is composite. If p is not a prime, then there is at least a 3/4 chance that a randomly chosen n will be a witness.

We leave the proof of the theorem as an exercise. Start with the fact that n^(p-1) = 1 mod p (this is Fermat's Little Theorem). Then use induction to take square roots (the result has to be +/-1 mod p), and continue until you get to n^d.

The Python code above uses Python's built-in modular exponentiation function pow to do fast modular exponents. The isWitness function first checks n^d and then all powers n^(2^j d). The probablyPrime function then simply generates random potential witnesses and checks them via the previous function. The output of the function is True if and only if all of the needed modular equivalences hold for all witnesses inspected. The choice of endpoints being 2 and p - 2 is because 1 and p - 1 always satisfy one of the two conditions (1^d = 1, and (p-1)^d = -1 mod p since d is odd), so they are never witnesses.

Very interesting, will you be covering AKS in a future post?

Hi, are you sure that j in the second condition cannot be equal to s? I tried following the statements using p = 7, and if I didn't make mistakes it is needed.
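To make the theorem and the 3/4 claim concrete, here is a small experiment of my own (not from the original post): it checks the two conditions exhaustively for the prime 13, and counts witnesses for the composite 221 = 13 * 17.

```python
def decompose(n):
    # Write n = 2^s * d with d odd.
    s = 0
    while n % 2 == 0:
        n = n // 2
        s += 1
    return s, n

def satisfiesTheorem(n, p, s, d):
    # True if n^d = 1 mod p, or n^(2^j d) = -1 mod p for some 0 <= j < s.
    if pow(n, d, p) == 1:
        return True
    return any(pow(n, 2**j * d, p) == p - 1 for j in range(s))

# For a prime, every nonzero residue satisfies one of the two conditions.
p = 13
s, d = decompose(p - 1)   # p - 1 = 12 = 2^2 * 3, so s = 2 and d = 3
print(all(satisfiesTheorem(n, p, s, d) for n in range(1, p)))  # True

# For a composite, at least 3/4 of the candidates in [2, p - 2] are witnesses,
# i.e. they fail both conditions and thereby prove compositeness.
c = 221
s, d = decompose(c - 1)
witnessCount = sum(1 for n in range(2, c - 1) if not satisfiesTheorem(n, c, s, d))
print(witnessCount / (c - 3) >= 0.75)  # True
```

The exhaustive count for 221 is of course only feasible because the number is tiny; the whole point of the theorem is that sampling a handful of random candidates suffices for large p.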
Wikipedia also has that j can be s. Thanks for an interesting read.

Fermat's little theorem says that if j = s then the quantity in the second condition will be 1. (It's so hard to write about math on a mobile device!)
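The reply above can be checked numerically (a check of my own, using p = 7 as the commenter did): at j = s the exponent 2^s * d equals p - 1, so Fermat's little theorem forces the result to be 1 mod p, never -1, and allowing j = s adds nothing.

```python
# For p = 7 we have p - 1 = 6 = 2^1 * 3, so s = 1 and d = 3.
p, s, d = 7, 1, 3

# At j = s the exponent is 2^s * d = p - 1, and Fermat's little theorem
# gives n^(p - 1) = 1 mod p for every nonzero n -- never -1.
print(all(pow(n, 2**s * d, p) == 1 for n in range(1, p)))  # True

# Meanwhile j < s already suffices: every nonzero n mod 7 satisfies
# n^d = 1 or n^(2^j * d) = -1 for some 0 <= j < s.
def satisfies(n):
    return pow(n, d, p) == 1 or any(pow(n, 2**j * d, p) == p - 1 for j in range(s))

print(all(satisfies(n) for n in range(1, p)))  # True
```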